Similar Articles
20 similar articles found (search time: 46 ms)
1.
A moving visual field can induce the feeling of self-motion, or vection. Illusory motion from static repeated asymmetric patterns creates a compelling visual motion stimulus, but it is unclear whether such illusory motion can induce a feeling of self-motion or alter self-motion perception. In these experiments, human subjects reported the perceived direction of self-motion for sway translation and yaw rotation at the end of a period of viewing preset visual stimuli coordinated with varying inertial stimuli. This tested the hypothesis that illusory visual motion would influence self-motion perception in the horizontal plane. Trials were arranged into 5 blocks based on stimulus type: moving star field with yaw rotation, moving star field with sway translation, illusory motion with yaw, illusory motion with sway, and static arrows with sway. Static arrows were used to evaluate the effect of cognitive suggestion on self-motion perception. Each trial had a control condition; the illusory motion controls were altered versions of the experimental image in which the illusory motion effect was removed. For the moving visual stimulus, controls were carried out in a dark room. With the arrow visual stimulus, controls were a gray screen. In blocks containing a visual stimulus there was an 8s viewing interval with the inertial stimulus occurring over the final 1s. This allowed measurement of the visual illusion perception using objective methods. When no visual stimulus was present, only the 1s motion stimulus was presented. Eight women and five men (mean age 37) participated. To assess for a shift in self-motion perception, the effect of each visual stimulus on the self-motion stimulus (cm/s) at which subjects were equally likely to report motion in either direction was measured. Significant effects were seen for moving star fields for both translation (p = 0.001) and rotation (p < 0.001), and for arrows (p = 0.02).
For the visual motion stimuli, inertial motion perception was shifted in the direction consistent with the visual stimulus. Arrows had a small effect on self-motion perception, driven by a minority of subjects. There was no significant effect of illusory motion on self-motion perception for either translation or rotation (p > 0.1 for both). Thus, although a true moving visual field can induce self-motion, the results of this study show that illusory motion does not.
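The equal-likelihood point measured above is a point of subjective equality (PSE). As an illustration of how such a point can be extracted from direction-of-motion reports (the study's exact fitting procedure is not given here), the sketch below fits a cumulative-Gaussian psychometric function to synthetic data by grid-search maximum likelihood; all stimulus values and parameters are illustrative:

```python
# Hypothetical sketch: estimating the PSE from binary direction reports,
# assuming a cumulative-Gaussian psychometric function. Synthetic data only.
import math

def p_rightward(v, pse, sigma):
    """Cumulative Gaussian: probability of reporting 'rightward' at velocity v."""
    return 0.5 * (1.0 + math.erf((v - pse) / (sigma * math.sqrt(2.0))))

def fit_pse(velocities, n_right, n_total, sigmas, pses):
    """Grid-search maximum-likelihood fit of (pse, sigma)."""
    best, best_ll = None, -float("inf")
    for cand_pse in pses:
        for cand_sigma in sigmas:
            ll = 0.0
            for v, k, n in zip(velocities, n_right, n_total):
                p = min(max(p_rightward(v, cand_pse, cand_sigma), 1e-9), 1 - 1e-9)
                ll += k * math.log(p) + (n - k) * math.log(1 - p)
            if ll > best_ll:
                best_ll, best = ll, (cand_pse, cand_sigma)
    return best

# Synthetic data: a visual stimulus shifting the PSE to +0.5 cm/s.
velocities = [-2.0, -1.0, -0.5, 0.0, 0.5, 1.0, 2.0]
n_total = [20] * len(velocities)
true_pse, true_sigma = 0.5, 0.8
n_right = [round(20 * p_rightward(v, true_pse, true_sigma)) for v in velocities]

pse, sigma = fit_pse(velocities, n_right, n_total,
                     [s / 10.0 for s in range(2, 21)],      # sigma grid 0.2..2.0
                     [x / 20.0 for x in range(-40, 41)])    # pse grid -2.0..2.0
```

A visually induced shift in self-motion perception would appear as a displacement of the fitted PSE away from zero.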

2.
The central nervous system must resolve the ambiguity of inertial motion sensory cues in order to derive an accurate representation of spatial orientation. Adaptive changes during spaceflight in how the brain integrates vestibular cues with other sensory information can lead to impaired movement coordination, vertigo, spatial disorientation, and perceptual illusions after return to Earth. The purpose of this study was to compare tilt and translation motion perception in astronauts before and after returning from spaceflight. We hypothesized that these stimuli would be the most ambiguous in the low-frequency range (i.e., at about 0.3 Hz), where linear acceleration can be interpreted either as a translation or as a tilt relative to gravity. Verbal reports were obtained from eleven astronauts tested using a motion-based tilt-translation device and a variable radius centrifuge before and after flying for two weeks on board the Space Shuttle. Consistent with previous studies, roll tilt perception was overestimated shortly after spaceflight and then recovered within 1–2 days. During dynamic linear acceleration (0.15–0.6 Hz, ±1.7 m/s2), perception of translation was also overestimated immediately after flight. Recovery to baseline was observed after 2 days for lateral translation and 8 days for fore–aft translation. These results suggest that there was a shift in the frequency dynamics of tilt-translation motion perception after adaptation to weightlessness. These results have implications for manual control during landing of a space vehicle after exposure to microgravity, as will be the case for human asteroid and Mars missions.
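One textbook account of the ~0.3 Hz ambiguity crossover (an illustration of the frequency-segregation idea, not the authors' model) is to low-pass filter the net gravito-inertial acceleration to estimate tilt and high-pass filter it to estimate translation. A minimal first-order sketch, with the crossover frequency as an assumed parameter:

```python
# Illustrative frequency segregation of the otolith signal:
# low-pass channel -> tilt, high-pass channel -> translation.
# The 0.3 Hz crossover is an assumption taken from the abstract.
import math

def filter_gains(f_hz, f_c=0.3):
    """Return (low-pass, high-pass) first-order gain magnitudes at f_hz."""
    ratio = f_hz / f_c
    lp = 1.0 / math.sqrt(1.0 + ratio * ratio)    # tilt channel
    hp = ratio / math.sqrt(1.0 + ratio * ratio)  # translation channel
    return lp, hp

lp_low, hp_low = filter_gains(0.05)   # slow stimulation: read mostly as tilt
lp_high, hp_high = filter_gains(1.0)  # fast stimulation: read mostly as translation
```

At the crossover frequency both channels respond equally, which is where tilt and translation are maximally confusable.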

3.
Vection is an illusory perception of self-motion that can occur when visual motion fills the majority of the visual field. This study examines the effect of the duration of visual field movement (VFM) on the perceived strength of self-motion using an inertial nulling (IN) technique and a magnitude estimation technique based on the certainty that motion occurred (certainty estimation, CE). These techniques were then used to investigate the association between migraine diagnosis and the strength of perceived vection. Visual star-field stimuli consistent with either looming or receding motion were presented for 1, 4, 8 or 16s. Subjects reported the perceived direction of self-motion during the final 1s of the stimulus. For the IN method, an inertial nulling motion was delivered during this final 1s, and subjects reported the direction of perceived self-motion during that second. The magnitude of inertial motion was varied adaptively to determine the point of subjective equality (PSE) at which forward and backward responses were equally likely. For the CE trials the same range of VFM durations was used but without inertial motion, and subjects rated their certainty of motion on a scale of 0–100. The PSE determined with the IN technique depended on the direction and duration of visual motion, and the CE technique showed greater certainty of perceived vection with longer VFM duration. A strong correlation between the CE and IN techniques was present for the 8s stimulus. There was appreciable between-subject variation in both techniques, and migraine was associated with significantly increased perception of self-motion by CE and IN at 8 and 16s. Together, these results suggest that vection may be measured by both CE and IN techniques with good correlation. The results also suggest that susceptibility to vection may be higher in subjects with a history of migraine.
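The adaptive variation of the nulling magnitude can be illustrated with a simple 1-up/1-down staircase, which converges on the 50% response point. The simulated observer and every parameter below are assumptions for illustration; the study's actual adaptive rule may differ:

```python
# Hypothetical sketch of an adaptive (staircase) search for the PSE:
# the inertial nulling magnitude is adjusted trial by trial until
# 'forward' and 'backward' reports are equally likely.
import math
import random

def simulated_report(nulling, bias=2.0, sigma=0.5, rng=random.Random(1)):
    """Simulated observer: reports 'forward' with cumulative-Gaussian
    probability. bias is the vection-induced shift the nulling must cancel."""
    p = 0.5 * (1 + math.erf((bias - nulling) / (sigma * math.sqrt(2))))
    return rng.random() < p

def staircase(n_trials=2000, step=0.1):
    """1-up/1-down staircase; its equilibrium is the 50% point (PSE)."""
    x, track = 0.0, []
    for _ in range(n_trials):
        if simulated_report(x):
            x += step   # 'forward' report -> increase opposing nulling
        else:
            x -= step
        track.append(x)
    # Average the second half of the track as the PSE estimate.
    half = n_trials // 2
    return sum(track[half:]) / (n_trials - half)

pse_estimate = staircase()  # should settle near the simulated bias of 2.0
```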

4.
5.
This article addresses the intersection between perceptual estimates of head motion based on purely vestibular and purely visual sensation, by considering how nonvisual (e.g. vestibular and proprioceptive) sensory signals for head and eye motion can be combined with visual signals available from a single landmark to generate a complete perception of self-motion. In order to do this, mathematical dimensions of sensory signals and perceptual parameterizations of self-motion are evaluated, and equations for the sensory-to-perceptual transition are derived. With constant velocity translation and vision of a single point, it is shown that visual sensation allows only for the externalization, to the frame of reference given by the landmark, of an inertial self-motion estimate from nonvisual signals. However, it is also shown that, with nonzero translational acceleration, use of simple visual signals provides a biologically plausible strategy for integration of inertial acceleration sensation, to recover translational velocity. A dimension argument proves similar results for horizontal flow of any number of discrete visible points. The results provide insight into the convergence of visual and vestibular sensory signals for self-motion and indicate perceptual algorithms by which primitive visual and vestibular signals may be integrated for self-motion perception.

6.
It is still an enigma how human subjects combine visual and vestibular inputs for their self-motion perception. Visual cues have the benefit of high spatial resolution but entail the danger of self-motion illusions. We performed psychophysical experiments (verbal estimates as well as pointer indications of perceived self-motion in space) in normal subjects (Ns) and patients with loss of vestibular function (Ps). Subjects were presented with horizontal sinusoidal rotations of an optokinetic pattern (OKP) alone (visual stimulus; 0.025-3.2 Hz; displacement amplitude, 8 degrees) or in combination with rotations of a Bárány chair (vestibular stimulus; 0.025-0.4 Hz; +/- 8 degrees). We found that specific instructions to the subjects created different perceptual states in which their self-motion perception essentially reflected three processing steps during pure visual stimulation: i) When Ns were primed by a procedure based on induced motion and then estimated perceived self-rotation upon pure optokinetic stimulation (circular vection, CV), CV had a gain close to unity up to frequencies of almost 0.8 Hz, followed by a sharp decrease at higher frequencies (i.e., characteristics resembling those of the optokinetic reflex, OKR, and of smooth pursuit, SP). ii) When Ns were instructed to "stare through" the optokinetic pattern, CV was absent at high frequencies, but increasingly developed as frequency was decreased below 0.1 Hz. iii) When Ns "looked at" the optokinetic pattern (accurately tracked it with their eyes), CV was usually absent, even at low frequencies. CV in Ps showed similar dynamics to that in Ns in condition i), independently of the instruction. During vestibular stimulation, self-motion perception in Ns fell from a maximum at 0.4 Hz to zero at 0.025 Hz. When vestibular stimulation was combined with visual stimulation while Ns "stared through" the OKP, perception at low frequencies became modulated in magnitude.
When Ns "looked" at OKP, this modulation was reduced, apart from the synergistic stimulus combination (OKP stationary) where magnitude was similar as during "staring". The obtained gain and phase curves of the perception were incompatible with linear systems prediction. We therefore describe the present findings by a non-linear dynamic model in which the visual input is processed in three steps: i) It shows dynamics similar to those of OKR and SP; ii) it is shaped to complement the vestibular dynamics and is fused with a vestibular signal by linear summation; and iii) it can be suppressed by a visual-vestibular conflict mechanism when the visual scene is moving in space. Finally, an important element of the model is a velocity threshold of about 1.2 degrees/s which is instrumental in maintaining perceptual stability and in explaining the observed dynamics of perception. We conclude from the experimental and theoretical evidence that self-motion perception normally is related to the visual scene as a reference, while the vestibular input is used to check the kinematic state of the scene; if the scene appears to move, the visual signal becomes suppressed and perception is based on the vestibular cue.  相似文献   

7.
The object of this study is to mathematically specify important characteristics of visual flow during translation of the eye for the perception of depth and self-motion. We address various strategies by which the central nervous system may estimate self-motion and depth from motion parallax, using equations for the visual velocity field generated by translation of the eye through space. Our results focus on information provided by the movement and deformation of three-dimensional objects and on local flow behavior around a fixated point. All of these issues are addressed mathematically in terms of definite equations for the optic flow. This formal characterization of the visual information presented to the observer is then considered in parallel with other sensory cues to self-motion, in order to see how these contribute to the effective use of visual motion parallax and how parallactic flow can, conversely, contribute to the sense of self-motion. This article focuses on a central case for understanding motion parallax in spacious real-world environments: monocular visual cues observable during pure horizontal translation of the eye through a stationary environment. We suggest that the global optokinetic stimulus associated with visual motion parallax must converge in significant fashion with vestibular and proprioceptive pathways that carry signals related to self-motion. Experiments to test some of the predictions of this study are suggested.
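As a concrete instance of the kind of flow equation involved (a standard pinhole-camera approximation, not necessarily the authors' exact parameterization), the image velocity of a stationary point during pure horizontal eye translation is inversely proportional to its depth, which is the essence of motion parallax:

```python
# Pinhole-model sketch: horizontal image velocity of a stationary point
# at depth Z when the eye translates sideways at speed Tx (no rotation).
# Focal length f and all numbers are illustrative.

def image_velocity(Z, Tx, f=1.0):
    """u = -f * Tx / Z: nearer points (smaller Z) slip faster on the retina."""
    return -f * Tx / Z

near = image_velocity(Z=1.0, Tx=1.0)   # fast image slip
far = image_velocity(Z=10.0, Tx=1.0)   # slow image slip
```

This depth-dependence is what allows parallactic flow, once self-motion is known from other cues, to encode relative distance.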

8.
The receptive field organization of a class of visual interneurons in the fly brain (vertical system, or VS neurons) shows a striking similarity to certain self-motion-induced optic flow fields. The present study compares the measured motion sensitivities of the VS neurons (Krapp et al. 1998) to a matched filter model for optic flow fields generated by rotation or translation. The model minimizes the variance of the filter output caused by noise and distance variability between different scenes. To that end, prior knowledge about distance and self-motion statistics is incorporated in the form of a “world model”. We show that a special case of the matched filter model is able to predict the local motion sensitivities observed in some VS neurons. This suggests that their receptive field organization enables the VS neurons to maintain a consistent output when the same type of self-motion occurs in different situations. Received: 14 June 1999 / Accepted in revised form: 20 March 2000
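The matched-filter principle can be caricatured in a few lines: a unit's output is the inner product of the measured flow field with a stored template of the flow expected for one type of self-motion. The inverse-variance weighting and distance statistics of the actual model are omitted; the geometry and numbers below are illustrative only:

```python
# Toy matched filter for optic flow: a yaw-rotation template responds
# strongly to rotational flow and not at all to uniform translational flow.

def rotation_template(points):
    """Template flow for yaw rotation: at image point (x, y) the flow is
    tangential, here simplified to (-y, x)."""
    return [(-y, x) for (x, y) in points]

def matched_filter_output(flow, template):
    """Unnormalized inner product of measured flow with the template."""
    return sum(u * tu + v * tv for (u, v), (tu, tv) in zip(flow, template))

points = [(1.0, 0.0), (0.0, 1.0), (-1.0, 0.0), (0.0, -1.0)]
template = rotation_template(points)
rotation_flow = [(-y, x) for (x, y) in points]   # matches the template
translation_flow = [(1.0, 0.0)] * len(points)    # uniform horizontal flow

r_rot = matched_filter_output(rotation_flow, template)
r_trans = matched_filter_output(translation_flow, template)
```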

9.
This article describes a computational model for the sensory perception of self-motion, considered as a compromise between sensory information and physical coherence constraints. This compromise is realized by a dynamic optimization process minimizing a set of cost functions. Measurement constraints are expressed as quadratic errors between motion estimates and corresponding sensory signals, using internal models of sensor transfer functions. Coherence constraints are expressed as quadratic errors between motion estimates and their predictions based on internal models of the physical laws governing the corresponding physical stimuli. This general scheme leads to a straightforward representation of fundamental sensory interactions (fusion of visual and canal rotational inputs, identification of the gravity component from the otolithic input, otolithic contribution to the perception of rotations, and influence of vection on the subjective vertical). The model is tuned and assessed using a range of well-known psychophysical results, including off-vertical axis rotations and centrifuge experiments. The ability of the model to predict and help analyze new situations is illustrated by a study of the vestibular contributions to self-motion perception during automobile driving and during acceleration cueing in driving simulators. The extendable structure of the model allows for further developments and applications, using other cost functions representing additional sensory interactions. Received: 10 October 2000 / Accepted in revised form: 12 August 2002

10.

Background

The observation of conspecifics influences our bodily perceptions and actions: contagious yawning, contagious itching, and empathy for pain are all examples of mechanisms based on resonance between our own body and those of others. While there is evidence for the involvement of the mirror neuron system in the processing of motor, auditory and tactile information, it has not yet been associated with the perception of self-motion.

Methodology/Principal Findings

We investigated whether viewing our own body, the body of another, and an object in motion influences self-motion perception. We found a visual-vestibular congruency effect for self-motion perception when observing self and object motion, and a reduction in this effect when observing someone else's body motion. The congruency effect was correlated with empathy scores, revealing the importance of empathy in mirroring mechanisms.

Conclusions/Significance

The data show that vestibular perception is modulated by agent-specific mirroring mechanisms. The observation of conspecifics in motion is an essential component of social life, and self-motion perception is crucial for the distinction between the self and the other. Finally, our results hint at the presence of a “vestibular mirror neuron system”.

11.
In humans, as well as most animal species, perception of object motion is critical to successful interaction with the surrounding environment. Yet, as the observer also moves, the retinal projections of the various motion components add to each other and extracting accurate object motion becomes computationally challenging. Recent psychophysical studies have demonstrated that observers use a flow-parsing mechanism to estimate and subtract self-motion from the optic flow field. We investigated whether concurrent acoustic cues for motion can facilitate visual flow parsing, thereby enhancing the detection of moving objects during simulated self-motion. Participants identified an object (the target) that moved either forward or backward within a visual scene containing nine identical textured objects simulating forward observer translation. We found that spatially co-localized, directionally congruent, moving auditory stimuli enhanced object motion detection. Interestingly, subjects who performed poorly on the visual-only task benefited more from the addition of moving auditory stimuli. When auditory stimuli were not co-localized to the visual target, improvements in detection rates were weak. Taken together, these results suggest that parsing object motion from self-motion-induced optic flow can operate on multisensory object representations.

12.
Angular and linear accelerations of the head occur throughout everyday life, whether from external forces such as in a vehicle or from volitional head movements. The relative timing of the angular and linear components of motion differs depending on the movement. The inner ear detects the angular and linear components with its semicircular canals and otolith organs, respectively, and secondary neurons in the vestibular nuclei receive input from these vestibular organs. Many secondary neurons receive both angular and linear input. Linear information alone does not distinguish between translational linear acceleration and angular tilt, with its gravity-induced change in the linear acceleration vector. Instead, motions are thought to be distinguished by use of both angular and linear information. However, for combined motions, composed of angular tilt and linear translation, the infinite range of possible relative timing of the angular and linear components gives an infinite set of motions among which to distinguish the various types of movement. The present research focuses on motions consisting of angular tilt and horizontal translation, both sinusoidal, where the relative timing, i.e. phase, of the tilt and translation can take any value in the range −180° to 180°. The results show how hypothetical neurons receiving convergent input can distinguish tilt from translation, and that each of these neurons has a preferred combined motion, to which the neuron responds maximally. Also shown are the values of angular and linear response amplitudes and phases that can cause a neuron to be tilt-only or translation-only. Such neurons turn out to be sufficient for distinguishing between combined motions, with all of the possible relative angular–linear phases. Combinations of other neurons, as well, are shown to distinguish motions. 
Relative response phases and in-phase firing-rate modulation are the key to identifying specific motions from within this infinite set of combined motions.
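The disambiguation idea can be sketched numerically (our construction for illustration, not the paper's neuron model): a unit that subtracts the canal-predicted gravity component from the otolith signal responds to translation at any relative tilt-translation phase, but stays silent for pure tilt:

```python
# Illustrative tilt/translation disambiguation from convergent input.
# The otolith signal mixes gravity (from tilt) with translational
# acceleration; a canal-informed tilt estimate lets a unit recover
# translation alone. All amplitudes, frequencies and phases are made up.
import math

G = 9.81

def combined_motion(t, tilt_amp, trans_amp, freq, phase):
    """Sinusoidal tilt (rad) and translational acceleration (m/s^2) with a
    relative phase; returns (otolith signal, canal-derived tilt angle)."""
    w = 2 * math.pi * freq
    tilt = tilt_amp * math.sin(w * t)
    trans = trans_amp * math.sin(w * t + phase)
    otolith = G * math.sin(tilt) + trans   # ambiguous net linear signal
    return otolith, tilt

def translation_neuron(otolith, tilt):
    """'Translation-only' unit: removes the gravity component."""
    return otolith - G * math.sin(tilt)

# Pure tilt: the translation unit stays silent.
ot, tilt = combined_motion(0.4, tilt_amp=0.1, trans_amp=0.0, freq=0.5, phase=0.0)
silent = translation_neuron(ot, tilt)

# Tilt + translation at -90 degrees relative phase: translation survives.
ot2, tilt2 = combined_motion(0.4, 0.1, 0.7, 0.5, -math.pi / 2)
resp = translation_neuron(ot2, tilt2)
```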

13.
14.
The optic flow generated when a person moves through the environment can be locally decomposed into several basic components, including radial, circular, translational and spiral motion. Since their analysis plays an important part in the visual perception and control of locomotion and posture, it is likely that some brain regions in the primate dorsal visual pathway are specialized to distinguish among them. The aim of this study is to explore the sensitivity to different types of egomotion-compatible visual stimulation in the human motion-sensitive regions of the brain. Event-related fMRI experiments, 3D motion and wide-field stimulation, functional localizers and brain mapping methods were used to study the sensitivity of six distinct motion areas (V6, MT, MST+, V3A, CSv and an Intra-Parietal Sulcus motion [IPSmot] region) to different types of optic flow stimuli. Results show that only areas V6, MST+ and IPSmot are specialized in distinguishing among the various types of flow patterns, with a high response to translational flow that was maximal in V6 and IPSmot and less marked in MST+. Given that during egomotion the translational optic flow conveys differential information about near and far external objects, areas V6 and IPSmot likely process visual egomotion signals to extract information about the relative distance of objects with respect to the observer. Since area V6 is also involved in distinguishing object motion from self-motion, it could provide information about the location in space of moving and static objects during self-motion, particularly in a dynamically unstable environment.

15.
Insects can estimate the distance or time-to-contact of surrounding objects from locomotion-induced changes in their retinal position and/or size. Freely walking fruit flies (Drosophila melanogaster) use the received mixture of different distance cues to select the nearest objects for subsequent visits. Conventional methods of behavioral analysis fail to elucidate the underlying data extraction. Here we demonstrate the first comprehensive solutions to this problem by substituting virtual for real objects; a tracker-controlled 360-degree panorama converts a fruit fly's changing coordinates into object illusions that require the perception of specific cues to appear at preselected distances up to infinity. An application reveals the following: (1) en-route sampling of retinal-image changes accounts for distance discrimination within a surprising range of at least 8-80 body lengths (20-200 mm). Stereopsis and peering are not involved. (2) Distance from image translation in the expected direction (motion parallax) outweighs distance from image expansion, which accounts for impact-avoiding flight reactions to looming objects. (3) The ability to discriminate distances is robust to artificially delayed updating of image translation. Fruit flies appear to interrelate self-motion and its visual feedback within a surprisingly long time window of about 2 s. The comparative distance inspection practiced by the small fruit fly deserves utilization in self-moving robots.
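To first order, the motion-parallax cue in (2) reduces to distance ≈ self-motion speed divided by retinal angular velocity. A toy calculation with illustrative numbers spanning the 20-200 mm range reported above (the numbers are not measured fly data):

```python
# Small-angle motion-parallax distance estimate: d ~ v_self / omega_retina.
# Speeds and angular velocities below are illustrative only.

def distance_from_parallax(self_speed_mm_s, retinal_vel_rad_s):
    """Distance (mm) from self-motion speed and retinal image slip."""
    return self_speed_mm_s / retinal_vel_rad_s

near_d = distance_from_parallax(10.0, 0.5)    # fast image slip -> near object
far_d = distance_from_parallax(10.0, 0.05)    # slow image slip -> far object
```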

16.
Complex self-motion stimulation in the dark can be powerfully disorienting and can create illusory motion percepts. In the absence of visual cues, the brain has to use angular and linear acceleration information provided by the vestibular canals and the otoliths, respectively. However, these sensors are inaccurate and ambiguous. We propose that the brain processes these signals in a statistically optimal fashion, reproducing the rules of Bayesian inference. We also suggest that this processing is related to the statistics of natural head movements, which would create a perceptual bias in favour of low velocities and accelerations. We have constructed a Bayesian model of self-motion perception based on these assumptions. Using this model, we have simulated perceptual responses to centrifugation and off-vertical axis rotation and obtained close agreement with experimental findings. This demonstrates how Bayesian inference makes a quantitative link between sensor noise and ambiguity, the statistics of head movement, and the perception of self-motion.
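For Gaussian sensor noise and a zero-mean Gaussian prior over velocity, the Bayesian estimate is a precision-weighted average that shrinks the measurement toward zero: this is the low-velocity bias the model proposes. A minimal sketch with illustrative numbers:

```python
# Bayesian shrinkage for self-motion velocity: combine a noisy measurement
# with a zero-mean prior favouring low velocities. Gaussian assumptions
# and all numbers are illustrative, not the paper's fitted parameters.

def posterior_velocity(measured, sigma_sensor, sigma_prior):
    """Posterior mean for likelihood N(measured, sigma_sensor^2)
    and prior N(0, sigma_prior^2): a precision-weighted average."""
    w = sigma_prior**2 / (sigma_prior**2 + sigma_sensor**2)
    return w * measured

v_hat = posterior_velocity(measured=10.0, sigma_sensor=2.0, sigma_prior=4.0)
```

The noisier the sensor relative to the prior, the stronger the bias toward zero velocity.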

17.
The relative roles of visual and vestibular cues in determining the perceived distance of passive, linear self-motion were assessed. Seventeen subjects were given cues to constant-acceleration motion: either optic flow, physical motion in the dark, or combinations of visual and physical motion. Subjects indicated when they perceived they had traversed a distance that had been previously indicated either visually or physically. The perceived distance of motion evoked by optic flow was accurate relative to a visual target but was perceptually equivalent to a shorter physical motion. The perceived distance of physical motion in the dark was accurate relative to a previously presented physical motion but was perceptually equivalent to a much longer visually presented distance. The perceived distance of self-motion when both visual and physical cues were present was perceptually equivalent to the physical motion experienced and not the simultaneous visual motion, even when the target was presented visually. We describe this dominance of the physical cues in determining the perceived distance of self-motion as "vestibular capture".

18.

Background

It is known that subjective contours are perceived even when a figure involves motion. However, whether this includes the perception of rigidity or deformation of an illusory surface remains unknown. In particular, since most visual stimuli used in previous studies were generated in order to induce illusory rigid objects, the potential perception of material properties such as rigidity or elasticity in these illusory surfaces has not been examined. Here, we elucidate whether the magnitude of phase difference in oscillation influences the visual impression of an object's elasticity (Experiment 1) and identify whether such elasticity perceptions are accompanied by the shape of the subjective contours, which can be assumed to be strongly correlated with the perception of rigidity (Experiment 2).

Methodology/Principal Findings

In Experiment 1, the phase differences in the oscillating motion of inducers were controlled to investigate whether they influenced the visual impression of an illusory object's elasticity. The results demonstrated that the impression of the elasticity of an illusory surface with subjective contours flipped systematically with the degree of phase difference. In Experiment 2, we examined whether the subjective contours of a perceived object appeared linear or curved using multi-dimensional scaling analysis. The results indicated that the contours of a moving illusory object were perceived as more curved than linear in all phase-difference conditions.

Conclusions/Significance

These findings suggest that the phase difference in an object's motion is a significant factor in the material perception of motion-related elasticity.

19.
Our inner ear is equipped with a set of linear accelerometers, the otolith organs, that sense the inertial accelerations experienced during self-motion. However, as Einstein pointed out nearly a century ago, this signal would by itself be insufficient to detect our real movement, because gravity, another form of linear acceleration, and self-motion are sensed identically by otolith afferents. To deal with this ambiguity, it was proposed that neural populations in the pons and midline cerebellum compute an independent, internal estimate of gravity using signals arising from the vestibular rotation sensors, the semicircular canals. This hypothesis, regarding a causal relationship between firing rates and postulated sensory contributions to inertial motion estimation, has been directly tested here by recording neural activities before and after inactivation of the semicircular canals. We show that, unlike cells in normal animals, the gravity component of neural responses was nearly absent in canal-inactivated animals. We conclude that, through integration of temporally matched, multimodal information, neurons derive the mathematical signals predicted by the equations describing the physics of the outside world.
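The internal-model computation tested here can be sketched as forward integration of a head-frame gravity estimate from the canal rotation signal, dg/dt = -omega x g. The Euler scheme and all numbers below are illustrative:

```python
# Sketch of a canal-driven internal gravity estimate: integrate
# dg/dt = -omega x g in head coordinates. A 90 deg/s roll for 1 s should
# rotate the estimated gravity vector from -z onto -y (head frame).
import math

def cross(a, b):
    """3D cross product."""
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def update_gravity(g_est, omega, dt):
    """One Euler step of dg/dt = -omega x g."""
    dg = cross(omega, g_est)
    return tuple(gi - dgi * dt for gi, dgi in zip(g_est, dg))

g = (0.0, 0.0, -9.81)              # start upright: gravity along -z
omega = (math.pi / 2, 0.0, 0.0)    # 90 deg/s roll about the x axis
dt, steps = 0.001, 1000            # integrate for 1 s -> 90 deg of roll
for _ in range(steps):
    g = update_gravity(g, omega, dt)
```

After canal inactivation the omega input vanishes, so this estimate can no longer track gravity during rotation, consistent with the missing gravity component reported above.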

20.