Similar documents: 20 results found
1.
This article addresses the intersection between perceptual estimates of head motion based on purely vestibular and purely visual sensation, by considering how nonvisual (e.g. vestibular and proprioceptive) sensory signals for head and eye motion can be combined with visual signals available from a single landmark to generate a complete perception of self-motion. To this end, mathematical dimensions of sensory signals and perceptual parameterizations of self-motion are evaluated, and equations for the sensory-to-perceptual transition are derived. With constant-velocity translation and vision of a single point, it is shown that visual sensation allows only for the externalization, to the frame of reference given by the landmark, of an inertial self-motion estimate from nonvisual signals. However, it is also shown that, with nonzero translational acceleration, use of simple visual signals provides a biologically plausible strategy for integration of inertial acceleration sensation, to recover translational velocity. A dimension argument proves similar results for horizontal flow of any number of discrete visible points. The results provide insight into the convergence of visual and vestibular sensory signals for self-motion and indicate perceptual algorithms by which primitive visual and vestibular signals may be integrated for self-motion perception.
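The velocity-recovery strategy mentioned above (integrating sensed inertial acceleration) can be illustrated with a minimal numerical sketch. This is a generic illustration, not the article's derivation; the sampling rate and acceleration profile are made up.

```python
# Illustrative only: recovering translational velocity by integrating a
# sampled inertial acceleration signal (Euler integration). The article's
# point is that visual signals make this integration biologically usable;
# here we only show the integration step itself.
def integrate_acceleration(a_samples, dt, v0=0.0):
    """Euler-integrate sampled acceleration into a velocity estimate trace."""
    v = v0
    trace = []
    for a in a_samples:
        v += a * dt
        trace.append(v)
    return trace

dt = 0.01
a = [0.5] * 100                 # constant 0.5 m/s^2 for 1 s (made-up profile)
v = integrate_acceleration(a, dt)
print(round(v[-1], 3))          # ~0.5 m/s after 1 s of constant acceleration
```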

2.
Simultaneous object motion and self-motion give rise to complex patterns of retinal image motion. In order to estimate object motion accurately, the brain must parse this complex retinal motion into self-motion and object motion components. Although this computational problem can be solved, in principle, through purely visual mechanisms, extra-retinal information that arises from the vestibular system during self-motion may also play an important role. Here we investigate whether combining vestibular and visual self-motion information improves the precision of object motion estimates. Subjects were asked to discriminate the direction of object motion in the presence of simultaneous self-motion, depicted either by visual cues alone (i.e. optic flow) or by combined visual/vestibular stimuli. We report a small but significant improvement in object motion discrimination thresholds with the addition of vestibular cues. This improvement was greatest for eccentric heading directions and negligible for forward movement, a finding that could reflect increased relative reliability of vestibular versus visual cues for eccentric heading directions. Overall, these results are consistent with the hypothesis that vestibular inputs can help parse retinal image motion into self-motion and object motion components.

3.
The object of this study is to mathematically specify important characteristics of visual flow during translation of the eye for the perception of depth and self-motion. We address various strategies by which the central nervous system may estimate self-motion and depth from motion parallax, using equations for the visual velocity field generated by translation of the eye through space. Our results focus on information provided by the movement and deformation of three-dimensional objects and on local flow behavior around a fixated point. All of these issues are addressed mathematically in terms of definite equations for the optic flow. This formal characterization of the visual information presented to the observer is then considered in parallel with other sensory cues to self-motion in order to see how these contribute to the effective use of visual motion parallax, and how parallactic flow can, conversely, contribute to the sense of self-motion. This article focuses on a central case for understanding motion parallax in spacious real-world environments: monocular visual cues observable during pure horizontal translation of the eye through a stationary environment. We suggest that the global optokinetic stimulus associated with visual motion parallax must converge in significant fashion with vestibular and proprioceptive pathways that carry signals related to self-motion. Suggestions of experiments to test some of the predictions of this study are made.
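The velocity field discussed above can be sketched with the standard pinhole-projection optic-flow equation; this parameterization is a common textbook form, not necessarily the article's exact notation.

```python
# Hedged sketch of the pinhole optic-flow equation for translation of the
# eye: a point at depth Z projecting to image position x = f*X/Z, with eye
# translation (Tx, Tz), has image velocity xdot = (-f*Tx + x*Tz) / Z.
# Flow speed therefore falls off as 1/Z -- the essence of motion parallax.
def image_velocity(x, Z, Tx, Tz, f=1.0):
    """Horizontal image velocity of a static point during eye translation."""
    return (-f * Tx + x * Tz) / Z

# Pure sideways (horizontal) translation, Tz = 0: near points stream faster.
near = image_velocity(x=0.0, Z=1.0, Tx=1.0, Tz=0.0)
far = image_velocity(x=0.0, Z=10.0, Tx=1.0, Tz=0.0)
print(near, far)   # -1.0 -0.1
```

The 1/Z dependence is what makes parallactic flow informative about depth once self-motion is known, and vice versa.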

4.
Our inner ear is equipped with a set of linear accelerometers, the otolith organs, that sense the inertial accelerations experienced during self-motion. However, as Einstein pointed out nearly a century ago, this signal would by itself be insufficient to detect our real movement, because gravity, another form of linear acceleration, and self-motion are sensed identically by otolith afferents. To deal with this ambiguity, it was proposed that neural populations in the pons and midline cerebellum compute an independent, internal estimate of gravity using signals arising from the vestibular rotation sensors, the semicircular canals. This hypothesis, regarding a causal relationship between firing rates and postulated sensory contributions to inertial motion estimation, has been directly tested here by recording neural activities before and after inactivation of the semicircular canals. We show that, unlike cells in normal animals, the gravity component of neural responses was nearly absent in canal-inactivated animals. We conclude that, through integration of temporally matched, multimodal information, neurons derive the mathematical signals predicted by the equations describing the physics of the outside world.
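The internal-model computation described above can be sketched numerically. In head-fixed coordinates the gravity vector obeys g' = -ω × g, where ω is the canal-sensed angular velocity; integrating that equation tracks gravity, and linear acceleration then follows from the otolith signal. The sign conventions and parameters below are my assumptions, not the paper's formalism.

```python
import math

G = 9.81

def cross(a, b):
    """Cross product of two 3-vectors (tuples)."""
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

# Integrate the canal signal through the internal model g_dot = -omega x g.
# With the otolith convention f = a - g, linear acceleration is a_hat = f + g_hat.
dt = 1e-4
omega = (0.0, 0.5, 0.0)          # steady 0.5 rad/s pitch about the head y-axis
g_hat = (0.0, 0.0, -G)           # initial estimate: head upright
for _ in range(int(1.0 / dt)):   # 1 s of rotation
    d = cross(omega, g_hat)
    g_hat = (g_hat[0] - d[0]*dt, g_hat[1] - d[1]*dt, g_hat[2] - d[2]*dt)

# Closed-form head-frame gravity after rotating 0.5 rad about y:
theta = 0.5
g_true = (G * math.sin(theta), 0.0, -G * math.cos(theta))
err = max(abs(e - t) for e, t in zip(g_hat, g_true))
print(err < 0.01)    # the internal model tracks gravity to within ~0.01 m/s^2
```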

5.
The ability to orient and navigate through the terrestrial environment represents a computational challenge common to all vertebrates. It arises because motion sensors in the inner ear, the otolith organs, and the semicircular canals transduce self-motion in an egocentric reference frame. As a result, vestibular afferent information reaching the brain is inappropriate for coding our own motion and orientation relative to the outside world. Here we show that cerebellar cortical neuron activity in vermal lobules 9 and 10 reflects the critical computations of transforming head-centered vestibular afferent information into earth-referenced self-motion and spatial orientation signals. Unlike vestibular and deep cerebellar nuclei neurons, where a mixture of responses was observed, Purkinje cells represent a homogeneous population that encodes inertial motion. They carry the earth-horizontal component of a spatially transformed and temporally integrated rotation signal from the semicircular canals, which is critical for computing head attitude, thus isolating inertial linear accelerations during navigation.

6.
When small flying insects go off their intended course, they use the resulting pattern of motion on their eye, or optic flow, to guide corrective steering. A change in heading generates a unique, rotational motion pattern and a change in position generates a translational motion pattern, and each produces corrective responses in the wingbeats. Any image in the flow field can signal rotation, but owing to parallax, only the images of nearby objects can signal translation. Insects that fly near the ground might therefore respond more strongly to translational optic flow that occurs beneath them, as the nearby ground will produce strong optic flow. In these experiments, rigidly tethered fruitflies steered in response to computer-generated flow fields. When correcting for unintended rotations, flies weight the motion in their upper and lower visual fields equally. However, when correcting for unintended translations, flies weight the motion in the lower visual field more strongly. These results are consistent with the interpretation that fruitflies stabilize by attending to visual areas likely to contain the strongest signals during natural flight conditions.

7.
For optimal visual control of compensatory eye movements during locomotion it is necessary to distinguish the rotational and translational components of the optic flow field. Optokinetic eye movements can reduce the rotational component only, making the information contained in the translational flow readily available to the animal. We investigated optokinetic eye rotation in the marble rock crab, Pachygrapsus marmoratus, during translational movement, produced either by displacing the animal or its visual surroundings. Any eye movement in response to such stimuli is taken as an indication that the system is unable to separate the translational and rotational components of the optic flow in a mathematically perfect way. When the crabs are translated within a pseudo-natural environment, eye movements are negligible, especially during sideways translation. When, however, crabs were placed in a gangway between two elongated rectangular sidewalls carrying dotted patterns which were translated back and forth, marked eye movements were elicited, depending on the translational velocity. To resolve this discrepancy, we tested several hypotheses about the underlying mechanisms, based either on detailed analysis of the optic flow or on whole-field integration. We found that the latter is sufficient to explain the efficient separation of translation and rotation by crabs in quasi-natural situations. Accepted: 6 May 1997

8.
A moving visual field can induce the feeling of self-motion, or vection. Illusory motion from static repeated asymmetric patterns creates a compelling visual motion stimulus, but it is unclear if such illusory motion can induce a feeling of self-motion or alter self-motion perception. In these experiments, human subjects reported the perceived direction of self-motion for sway translation and yaw rotation at the end of a period of viewing fixed visual stimuli coordinated with varying inertial stimuli. This tested the hypothesis that illusory visual motion would influence self-motion perception in the horizontal plane. Trials were arranged into 5 blocks based on stimulus type: moving star field with yaw rotation, moving star field with sway translation, illusory motion with yaw, illusory motion with sway, and static arrows with sway. Static arrows were used to evaluate the effect of cognitive suggestion on self-motion perception. Each trial had a control condition; the illusory-motion controls were altered versions of the experimental image with the illusory motion effect removed. For the moving visual stimulus, controls were carried out in a dark room. With the arrow visual stimulus, controls were a gray screen. In blocks containing a visual stimulus there was an 8-s viewing interval, with the inertial stimulus occurring over the final 1 s. This allowed measurement of the perception of the visual illusion using objective methods. When no visual stimulus was present, only the 1-s motion stimulus was presented. Eight women and five men (mean age 37) participated. To assess for a shift in self-motion perception, the effect of each visual stimulus on the self-motion stimulus velocity (cm/s) at which subjects were equally likely to report motion in either direction was measured. Significant effects were seen for moving star fields for both translation (p = 0.001) and rotation (p < 0.001), and for arrows (p = 0.02). For the visual motion stimuli, inertial motion perception was shifted in the direction consistent with the visual stimulus. Arrows had a small effect on self-motion perception, driven by a minority of subjects. There was no significant effect of illusory motion on self-motion perception for either translation or rotation (p > 0.1 for both). Thus, although a true moving visual field can induce self-motion, the results of this study show that illusory motion does not.
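The "velocity at which subjects are equally likely to report motion in either direction" is a point of subjective equality (PSE). A minimal way to estimate it, sketched below with made-up numbers (not the study's data), is linear interpolation of the 50% crossing of the psychometric data; a visual condition that shifts the PSE away from zero has biased self-motion perception.

```python
# Hypothetical illustration of a PSE estimate from response proportions.
def pse(velocities, p_rightward):
    """Linearly interpolate the velocity where p(rightward) crosses 0.5."""
    for i in range(len(velocities) - 1):
        p0, p1 = p_rightward[i], p_rightward[i + 1]
        if p0 <= 0.5 <= p1:
            v0, v1 = velocities[i], velocities[i + 1]
            return v0 + (0.5 - p0) * (v1 - v0) / (p1 - p0)
    raise ValueError("no 0.5 crossing in the data")

vel = [-2, -1, 0, 1, 2]                      # cm/s, negative = leftward
dark = [0.05, 0.20, 0.50, 0.80, 0.95]        # control: unbiased, PSE at 0
with_flow = [0.02, 0.10, 0.30, 0.60, 0.90]   # visual motion biases reports
print(pse(vel, dark), round(pse(vel, with_flow), 3))   # 0.0 0.667
```

A rightward-shifted PSE under visual flow means a real rightward inertial motion is needed to cancel the visually induced leftward percept.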

9.
This article describes a computational model for the sensory perception of self-motion, considered as a compromise between sensory information and physical coherence constraints. This compromise is realized by a dynamic optimization process minimizing a set of cost functions. Measurement constraints are expressed as quadratic errors between motion estimates and the corresponding sensory signals, using internal models of sensor transfer functions. Coherence constraints are expressed as quadratic errors between motion estimates and their predictions based on internal models of the physical laws governing the corresponding physical stimuli. This general scheme leads to a straightforward representation of fundamental sensory interactions (fusion of visual and canal rotational inputs, identification of the gravity component from the otolithic input, otolithic contribution to the perception of rotations, and influence of vection on the subjective vertical). The model is tuned and assessed using a range of well-known psychophysical results, including off-vertical axis rotations and centrifuge experiments. The ability of the model to predict and help analyze new situations is illustrated by a study of the vestibular contributions to self-motion perception during automobile driving and during acceleration cueing in driving simulators. The extendable structure of the model allows for further developments and applications, using other cost functions representing additional sensory interactions. Received: 10 October 2000 / Accepted in revised form: 12 August 2002
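In the static scalar case, the quadratic-cost compromise above has a simple closed form: minimizing a weighted sum of squared errors against several sensory estimates yields their weighted mean. This is a toy reduction of the paper's dynamic optimization, with made-up numbers.

```python
# Minimizing sum_i w_i * (x - m_i)^2 over x gives the weighted mean of the
# sensory estimates m_i -- the simplest instance of the cost-function
# compromise described in the model.
def fuse(estimates, weights):
    """Closed-form minimizer of a weighted sum of quadratic error terms."""
    return sum(w * m for w, m in zip(weights, estimates)) / sum(weights)

# e.g. canal and visual rotation estimates (deg/s), vision weighted 3x
print(fuse([10.0, 14.0], [1.0, 3.0]))   # 13.0
```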

10.
Although it is well established that the neural code representing the world changes at each stage of a sensory pathway, the transformations that mediate these changes are not well understood. Here we show that self-motion (i.e. vestibular) sensory information encoded by VIIIth nerve afferents is integrated nonlinearly by post-synaptic central vestibular neurons. This response nonlinearity was characterized by a strong (~50%) attenuation in neuronal sensitivity to low-frequency stimuli when presented concurrently with high-frequency stimuli. Using computational methods, we further demonstrate that a static boosting nonlinearity in the input-output relationship of central vestibular neurons accounts for this unexpected result. Specifically, when low- and high-frequency stimuli are presented concurrently, this boosting nonlinearity causes an intensity-dependent bias in the output firing rate, thereby attenuating neuronal sensitivities. We suggest that nonlinear integration of afferent input extends the coding range of central vestibular neurons and enables them to better extract the high-frequency features of self-motion when embedded in low-frequency motion during natural movements. These findings challenge the traditional notion that the vestibular system uses a linear rate code to transmit information and have important consequences for understanding how the representation of sensory information changes across sensory pathways.
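The masking effect described above can be demonstrated with any static output nonlinearity that clips or reshapes the firing rate. Below, a half-wave rectifier stands in for the paper's boosting nonlinearity (an assumption, chosen for simplicity); low-frequency gain is read out by correlating the output with the low-frequency input.

```python
import math

# Sketch: a concurrent high-frequency stimulus drives a rectified "firing
# rate" below threshold part of the time, attenuating the measured
# sensitivity (Fourier gain) to the low-frequency component.
def low_freq_gain(low_amp, high_amp, baseline=10.0, n=100000):
    num = 0.0
    for k in range(n):
        t = k / n
        lo = low_amp * math.sin(2 * math.pi * 1 * t)     # 1 "Hz" probe
        hi = high_amp * math.sin(2 * math.pi * 37 * t)   # 37 "Hz" masker
        rate = max(0.0, baseline + lo + hi)              # rectified rate
        num += rate * math.sin(2 * math.pi * 1 * t)
    return (2.0 * num / n) / low_amp                     # gain at 1 Hz

alone = low_freq_gain(5.0, 0.0)      # never clips -> gain ~ 1
masked = low_freq_gain(5.0, 30.0)    # clipping -> attenuated gain (~0.6 here)
print(round(alone, 2), masked < alone)
```

The toy attenuation (~40%) is in the ballpark of the ~50% reported, but the numbers here are illustrative, not a fit to the data.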

11.
The sensory weighting model is a general model of sensory integration that consists of three processing layers. First, each sensor provides the central nervous system (CNS) with information regarding a specific physical variable. Due to sensor dynamics, this measure is only reliable for the frequency range over which the sensor is accurate. Therefore, we hypothesize that the CNS improves on the reliability of the individual sensor outside this frequency range by using information from other sensors, a process referred to as “frequency completion.” Frequency completion uses internal models of sensory dynamics. This “improved” sensory signal is designated as the “sensory estimate” of the physical variable. Second, before being combined, information with different physical meanings is first transformed into a common representation; sensory estimates are converted to intermediate estimates. This conversion uses internal models of body dynamics and physical relationships. Third, several sensory systems may provide information about the same physical variable (e.g., semicircular canals and vision both measure self-rotation). Therefore, we hypothesize that the “central estimate” of a physical variable is computed as a weighted sum of all available intermediate estimates of this physical variable, a process referred to as “multicue weighted averaging.” The resulting central estimate is fed back to the first two layers. The sensory weighting model is applied to three-dimensional (3D) visual–vestibular interactions and their associated eye movements and perceptual responses. The model inputs are 3D angular and translational stimuli. The sensory inputs are the 3D sensory signals coming from the semicircular canals, otolith organs, and the visual system. The angular and translational components of visual movement are assumed to be available as separate stimuli measured by the visual system using retinal slip and image deformation. In addition, both tonic (“regular”) and phasic (“irregular”) otolithic afferents are implemented. Whereas neither tonic nor phasic otolithic afferents distinguish gravity from linear acceleration, the model uses tonic afferents to estimate gravity and phasic afferents to estimate linear acceleration. The model outputs are the internal estimates of physical motion variables and 3D slow-phase eye movements. The model also includes a smooth pursuit module. The model matches eye responses and perceptual effects measured during various motion paradigms in darkness (e.g., centered and eccentric yaw rotation about an earth-vertical axis, yaw rotation about an earth-horizontal axis) and with visual cues (e.g., stabilized visual stimulation or optokinetic stimulation). Received: 20 September 2000 / Accepted in revised form: 28 September 2001
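"Frequency completion" as defined above is closely related to complementary filtering: a sensor reliable at high frequency (e.g. the canals) is effectively high-pass filtered, one reliable at low frequency (e.g. vision or the otoliths) is low-pass filtered, and the two sum to an all-pass estimate. The one-pole sketch below is a minimal illustration with made-up sensor imperfections, not the model's actual filters.

```python
# Complementary filter: a rate sensor with a constant bias (drifts when
# integrated alone) is blended with an absolute but high-frequency-noisy
# sensor. The blend stays near the true value (1.0) while pure integration
# drifts away.
alpha = 0.95
est, integ = 0.0, 0.0
for k in range(2000):
    gyro_delta = 0.0 + 0.001                  # true increment 0 + sensor bias
    accel = 1.0 + (0.5 if k % 2 else -0.5)    # unbiased but noisy absolute cue
    integ += gyro_delta                       # drift accumulates unchecked
    est = alpha * (est + gyro_delta) + (1 - alpha) * accel
print(abs(est - 1.0) < 0.05, round(integ, 2))   # True 2.0
```

The high-pass/low-pass split means each sensor contributes only in the band where it is trustworthy, which is exactly the "improvement outside the reliable frequency range" the model hypothesizes.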

12.
Fast-moving animals depend on cues derived from the optic flow on their retina. Optic flow from translational locomotion includes information about the three-dimensional composition of the environment, while optic flow experienced during rotational self-motion does not. Thus, a saccadic gaze strategy that segregates rotations from translational movements during locomotion will facilitate extraction of spatial information from the visual input. We analysed whether birds use such a strategy by recording zebra finches with high-speed video from two directions during an obstacle avoidance task. Each frame of the recording was examined to derive the position and orientation of the beak in three-dimensional space. The data show that in all flights the head orientation was shifted in a saccadic fashion and was kept straight between saccades. Therefore, birds use a gaze strategy that actively stabilizes their gaze during translation to simplify optic flow based navigation. This is the first evidence of birds actively optimizing optic flow during flight.

13.
How is binocular motion information integrated in the bilateral network of wide-field motion-sensitive neurons, called lobula plate tangential cells (LPTCs), in the visual system of flies? It is possible to construct an accurate model of this network because a complete picture of its synaptic interactions has been experimentally identified. We investigated the cooperative behavior of the network of horizontal LPTCs underlying the integration of binocular motion information, and the information representation in the bilateral LPTC network, through numerical simulations on the network model. First, we qualitatively reproduced the rotational motion-sensitive response of the H2 cell reported in previous in vivo experiments and ascertained that it could be accounted for by the cooperative behavior of the bilateral network, mainly via interhemispheric electrical coupling. We demonstrated that the response properties of single H1 and Hu cells, unlike H2 cells, are not influenced by motion stimuli in the contralateral visual hemi-field, but that the correlations between these cell activities are enhanced by the rotational motion stimulus. We next examined the whole-population activity by performing principal component analysis (PCA) on the population activities of simulated LPTCs. We showed that the two orthogonal patterns of correlated population activity given by the first two principal components represent rotational and translational motion, respectively, and that, as with the H2 cell, rotational motion produces a stronger response in the network than does translational motion. Furthermore, we found that these population-coding properties are strongly influenced by the interhemispheric electrical coupling. Finally, to test the generality of our conclusions, we used a more simplified model and verified that the numerical results are not specific to the particular network model we constructed.
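The PCA result above can be illustrated with a two-cell toy "population" (left/right H2-like units): yaw rotation drives the pair anti-symmetrically, translation symmetrically, so the leading principal components come out along (1, -1) and (1, 1). This is a schematic stand-in for the paper's LPTC simulation, with made-up response statistics.

```python
import math, random

random.seed(0)
data = []
for _ in range(500):
    s = random.gauss(0, 2.0)     # rotation signal (larger variance)
    data.append((s, -s))         # anti-symmetric drive of left/right cells
    u = random.gauss(0, 1.0)     # translation signal
    data.append((u, u))          # symmetric drive

# Covariance of the 2-cell population activity
n = len(data)
mx = sum(x for x, _ in data) / n
my = sum(y for _, y in data) / n
cxx = sum((x - mx) ** 2 for x, _ in data) / n
cyy = sum((y - my) ** 2 for _, y in data) / n
cxy = sum((x - mx) * (y - my) for x, y in data) / n

# Closed-form eigen-decomposition of the 2x2 covariance matrix
half_trace = (cxx + cyy) / 2
gap = math.sqrt(((cxx - cyy) / 2) ** 2 + cxy ** 2)
lam1, lam2 = half_trace + gap, half_trace - gap          # PC variances
angle = 0.5 * math.atan2(2 * cxy, cxx - cyy)             # first-PC direction
print(round(math.degrees(angle)), lam1 > lam2)           # -45 True
```

The first PC lies along the anti-symmetric (rotation) axis at -45 degrees because the rotation signal has the larger variance, mirroring the paper's finding that rotation dominates the leading component.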

14.
Responses of multisensory neurons to combinations of sensory cues are generally enhanced or depressed relative to single cues presented alone, but the rules that govern these interactions have remained unclear. We examined integration of visual and vestibular self-motion cues in macaque area MSTd in response to unimodal as well as congruent and conflicting bimodal stimuli in order to evaluate hypothetical combination rules employed by multisensory neurons. Bimodal responses were well fit by weighted linear sums of unimodal responses, with weights typically less than one (subadditive). Surprisingly, our results indicate that weights change with the relative reliabilities of the two cues: visual weights decrease and vestibular weights increase when visual stimuli are degraded. Moreover, both modulation depth and neuronal discrimination thresholds improve for matched bimodal compared to unimodal stimuli, which might allow for increased neural sensitivity during multisensory stimulation. These findings establish important new constraints for neural models of cue integration.
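Reliability-dependent weights and improved bimodal thresholds are often interpreted against the textbook optimal-cue-combination arithmetic, sketched below. This is the standard Gaussian model, not the paper's neuronal fit; the sigma values are made up.

```python
# With independent Gaussian cues, optimal weights follow relative
# reliability (inverse variance), and the combined standard deviation is
# lower than either single-cue value.
def ml_combine(sigma_vis, sigma_vest):
    """Return the visual weight and the combined-cue standard deviation."""
    rel_v, rel_s = 1 / sigma_vis**2, 1 / sigma_vest**2
    w_vis = rel_v / (rel_v + rel_s)
    sigma_comb = (1 / (rel_v + rel_s)) ** 0.5
    return w_vis, sigma_comb

w, s = ml_combine(1.0, 2.0)    # sharp visual cue dominates: w_vis = 0.8
w2, _ = ml_combine(3.0, 2.0)   # degraded visual cue: its weight falls
print(w, round(s, 3), w2 < 0.5)
```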

15.
Avoiding collisions is one of the most basic needs of any mobile agent, both biological and technical, when searching around or aiming toward a goal. We propose a model of collision avoidance inspired by behavioral experiments on insects and by properties of optic flow on a spherical eye experienced during translation, and test the interaction of this model with goal-driven behavior. Insects, such as flies and bees, actively separate the rotational and translational optic flow components via behavior, i.e. by employing a saccadic strategy of flight and gaze control. Optic flow experienced during translation, i.e. during intersaccadic phases, contains information on the depth structure of the environment, but this information is entangled with that on self-motion. Here, we propose a simple model to extract the depth structure from translational optic flow by using local properties of a spherical eye. On this basis, a motion direction of the agent is computed that ensures collision avoidance. Flying insects are thought to measure optic flow by correlation-type elementary motion detectors. Their responses depend, in addition to velocity, on the texture and contrast of objects and thus do not measure the velocity of objects veridically. Therefore, we initially used geometrically determined optic flow as input to a collision avoidance algorithm to show that depth information inferred from optic flow is sufficient to account for collision avoidance under closed-loop conditions. Then, the collision avoidance algorithm was tested with bio-inspired correlation-type elementary motion detectors at its input. Even then, the algorithm successfully led to collision avoidance and, in addition, replicated the characteristics of collision avoidance behavior of insects. Finally, the collision avoidance algorithm was combined with a goal direction and tested in cluttered environments. The simulated agent then showed goal-directed behavior reminiscent of components of the navigation behavior of insects.
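The correlation-type elementary motion detector mentioned above is classically modeled as a Hassenstein-Reichardt correlator: two neighboring inputs, one delayed, multiplied and subtracted in mirror symmetry. The one-sample-delay version below is a minimal sketch; delay, spacing, and sign conventions are illustrative choices, not the paper's parameters.

```python
import math

def emd(left, right):
    """Mean opponent output of a one-sample-delay Reichardt correlator."""
    out = 0.0
    for k in range(1, len(left)):
        # delayed-left * right  minus  left * delayed-right
        out += left[k - 1] * right[k] - left[k] * right[k - 1]
    return out / (len(left) - 1)

# A sinusoidal pattern drifting across the two inputs: the input that sees
# the pattern first determines the sign of the response.
sig = [math.sin(2 * math.pi * k / 20) for k in range(201)]
rightward = emd(sig[1:], sig[:-1])   # right input lags left: left-to-right
leftward = emd(sig[:-1], sig[1:])    # right input leads left: right-to-left
print(rightward > 0, leftward < 0)   # True True
```

Because the output is a product of input signals, it scales with stimulus contrast squared, which is exactly the non-veridical velocity dependence the abstract points out.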

16.
Self-localization requires that information from several sensory modalities and knowledge domains be integrated in order to identify an environment and determine current location and heading. This integration occurs by the convergence of highly processed sensory information onto neural systems in entorhinal cortex and hippocampus. Entorhinal neurons combine angular and linear self-motion information to generate an oriented metric signal that is then 'attached' to each environment using information about landmarks and context. Neurons in hippocampus use this signal to determine the animal's unique position within a particular environment. Elucidating this process illuminates not only spatial processing but also, more generally, how the brain builds knowledge representations from inputs carrying heterogeneous sensory and semantic content.

17.
Results have been obtained on the quasi-elastic spectra of neutrons scattered from pure water, a 20% agarose gel (hydration four grams H2O per gram of dry solid), and cysts of the brine shrimp Artemia at hydrations between 0.10 and 1.2 grams H2O per gram of dry solids. The spectra were interpreted using a two-component model that included contributions from the covalently bonded protons and the hydration water, and a mobile water fraction. The mobile fraction was described by a jump-diffusion correlation function for the translational motion and a simple diffusive orientational correlation function. The results for the line widths Γ(Q²) for pure water were in good agreement with previous measurements. The agarose results were consistent with NMR measurements that show a slightly reduced translational diffusion for the mobile water fraction. The Artemia results show that the translational diffusion coefficient of the mobile water fraction was greatly reduced from that of pure water. The line width was determined mainly by the rotational motion, which was also substantially reduced from the pure-water value as determined from dielectric relaxation studies. The translational and rotational diffusion parameters were consistent with the NMR measurements of diffusion and relaxation. Values for the hydration fraction and the mean-square thermal displacement ⟨u²⟩, as determined from the Q-dependence of the line areas, were also obtained.
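A jump-diffusion line width of the Singwi-Sjolander form is commonly used for fits like the one described; assuming that form (the abstract does not specify it), the quasi-elastic width interpolates between simple diffusion at small Q and a residence-time plateau at large Q. The numbers below are illustrative, not the paper's fitted values.

```python
# Singwi-Sjolander jump-diffusion line width (assumed functional form):
#   Gamma(Q) = D*Q^2 / (1 + D*Q^2*tau)
# Small Q: Gamma ~ D*Q^2 (continuous diffusion).
# Large Q: Gamma -> 1/tau (inverse residence time between jumps).
def gamma(Q, D, tau):
    return D * Q * Q / (1.0 + D * Q * Q * tau)

small = gamma(0.1, 1.0, 2.0)    # ~ D*Q^2 = 0.01 in these toy units
large = gamma(100.0, 1.0, 2.0)  # ~ 1/tau = 0.5
print(round(small, 4), round(large, 4))   # 0.0098 0.5
```

A reduced fitted D (as reported for Artemia) lowers the small-Q slope without changing the large-Q plateau, which is set by the jump residence time alone.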

18.
19.
The neural representation of motion aftereffects induced by various visual flows (translational, rotational, motion-in-depth, and translational transparent flows) was studied under the hypothesis that imbalances in discharge activities would occur in favor of the direction opposite to the adapting stimulation in monkey MST cells (cells in the medial superior temporal area), which can discriminate the mode (i.e., translational, rotational, or motion-in-depth) of the given flow. In single-unit recording experiments conducted on anaesthetized monkeys, we found that the rate of spontaneous discharge and the sensitivity to a test stimulus moving in the preferred direction decreased after an adapting stimulation moving in the preferred direction, whereas they increased after an adapting stimulation moving in the null direction. To consistently explain the bidirectional perception of a transparent visual flow and its unidirectional motion aftereffect by the same hypothesis, we need to assume the existence of two subtypes of MST D cells which show directionally selective responses to a translational flow: component cells and integration cells. Our physiological investigation revealed that the MST D cells could be divided into two types: one responded to a transparent flow with two peaks, at the instants when the direction of one of the component flows matched the preferred direction of the cell, and the other responded with a single peak, at the instant when the direction of the integrated motion matched the preferred direction. In psychophysical experiments on human subjects, we found evidence for the existence of component and integration representations in the human brain. To explain the different motion perceptions, i.e., two transparent flows during presentation of the flows and a single flow in the opposite direction to the integrated flows after stopping the flow stimuli, we suggest that the pattern-discrimination system can select the motion representation that is consistent with the perception of the pattern from the two motion representations. We discuss the computational aspects related to the integration of component motion fields.

20.
Mechanisms and implications of animal flight maneuverability
Accelerations and directional changes of flying animals derive from interactions between aerodynamic force production and the inertial resistance of the body to translation and rotation. Anatomical and allometric features of body design thus mediate the rapidity of aerial maneuvers. Both translational and rotational responsiveness of the body to applied force decrease with increased total mass. For flying vertebrates, contributions of the relatively heavy wings to whole-body rotational inertia are substantial, whereas the relatively light wings of many insect taxa suggest that rotational inertia is dominated by the contributions of body segments. In some circumstances, inertial features of wing design may be as significant as are their aerodynamic properties in influencing the rapidity of body rotations. Stability in flight requires force and moment balances that are usually attained via bilateral symmetry in wingbeat kinematics, whereas body roll and yaw derive from bilaterally asymmetric movements of both axial and appendicular structures. In many flying vertebrates, use of the tail facilitates the generation of aerodynamic torques and substantially enhances quickness of body rotation. Geometrical constraints on wingbeat kinematics may limit total force production and thus accelerational capacity in certain behavioral circumstances. Unitary limits to animal flight performance and maneuverability are unlikely, however, given varied and context-specific interactions among anatomical, biomechanical, and energetic features of design.
