Similar Articles
20 similar articles found.
1.
A moving visual field can induce the feeling of self-motion, or vection. Illusory motion from static repeated asymmetric patterns creates a compelling visual motion stimulus, but it is unclear if such illusory motion can induce a feeling of self-motion or alter self-motion perception. In these experiments, human subjects reported the perceived direction of self-motion for sway translation and yaw rotation at the end of a period of viewing set visual stimuli coordinated with varying inertial stimuli. This tested the hypothesis that illusory visual motion would influence self-motion perception in the horizontal plane. Trials were arranged into 5 blocks based on stimulus type: moving star field with yaw rotation, moving star field with sway translation, illusory motion with yaw, illusory motion with sway, and static arrows with sway. Static arrows were used to evaluate the effect of cognitive suggestion on self-motion perception. Each trial had a control condition; the illusory motion controls were altered versions of the experimental image in which the illusory motion effect was removed. For the moving visual stimulus, controls were carried out in a dark room. With the arrow visual stimulus, controls were a gray screen. In blocks containing a visual stimulus there was an 8-s viewing interval, with the inertial stimulus occurring over the final 1 s. This allowed measurement of the visual illusion perception using objective methods. When no visual stimulus was present, only the 1-s motion stimulus was presented. Eight women and five men (mean age 37) participated. To assess for a shift in self-motion perception, the effect of each visual stimulus on the self-motion stimulus velocity (cm/s) at which subjects were equally likely to report motion in either direction was measured. Significant effects were seen for moving star fields for both translation (p = 0.001) and rotation (p < 0.001), and for arrows (p = 0.02).
For the visual motion stimuli, inertial motion perception was shifted in the direction consistent with the visual stimulus. Arrows had a small effect on self-motion perception, driven by a minority of subjects. There was no significant effect of illusory motion on self-motion perception for either translation or rotation (p > 0.1 for both). Thus, although a true moving visual field can induce self-motion, the results of this study show that illusory motion does not.
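The bias measure in the first abstract is a point of subjective equality (PSE): the stimulus velocity at which subjects report motion in either direction equally often, whose shift under a visual stimulus quantifies the perceptual effect. A minimal sketch of one way to read off such a crossing point is shown below; the data and the linear-interpolation method are illustrative assumptions, not the study's analysis code.

```python
# Hypothetical PSE estimation: find the velocity at which the proportion
# of 'positive-direction' reports crosses 0.5. Data are invented.

def pse_by_interpolation(velocities, p_positive):
    """Velocity (cm/s) at which P(report positive direction) crosses 0.5,
    found by linear interpolation between adjacent sample points."""
    points = list(zip(velocities, p_positive))
    for (v0, p0), (v1, p1) in zip(points, points[1:]):
        if p0 <= 0.5 <= p1:
            return v0 + (0.5 - p0) * (v1 - v0) / (p1 - p0)
    raise ValueError("no 0.5 crossing in the sampled range")

# Invented psychometric data: proportion of 'rightward' reports per velocity.
vels = [-4.0, -2.0, 0.0, 2.0, 4.0]
props = [0.05, 0.20, 0.45, 0.80, 0.95]
pse = pse_by_interpolation(vels, props)  # the fitted crossing point
```

A shift of this crossing point between the control and visual-stimulus conditions is the effect the study tests for significance.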

2.
Auditory cues can create the illusion of self-motion (vection) in the absence of visual or physical stimulation. The present study aimed to determine whether auditory cues alone can also elicit motion sickness, and how auditory cues contribute to motion sickness when added to visual motion stimuli. Twenty participants were seated in front of a curved projection display and were exposed to a virtual scene that constantly rotated around the participant's vertical axis. The virtual scene contained either visual-only, auditory-only, or a combination of corresponding visual and auditory cues. All participants performed all three conditions in a counterbalanced order. Participants tilted their heads alternately towards the right or left shoulder in all conditions during stimulus exposure in order to create pseudo-Coriolis effects and to maximize the likelihood of motion sickness. Measurements of motion sickness (onset, severity), vection (latency, strength, duration), and postural steadiness (center of pressure) were recorded. Results showed that adding auditory cues to the visual stimuli did not, on average, affect motion sickness and postural steadiness, but it did reduce vection onset times and increase vection strength compared to pure visual or pure auditory stimulation. Eighteen of the 20 participants reported at least slight motion sickness in the two conditions including visual stimuli. More interestingly, six participants also reported slight motion sickness during pure auditory stimulation, and two of the six stopped the pure auditory test session due to motion sickness. The present study is the first to demonstrate that motion sickness may be caused by pure auditory stimulation, which we refer to as "auditorily induced motion sickness".

3.
It is still an enigma how human subjects combine visual and vestibular inputs for their self-motion perception. Visual cues have the benefit of high spatial resolution but entail the danger of self-motion illusions. We performed psychophysical experiments (verbal estimates as well as pointer indications of perceived self-motion in space) in normal subjects (Ns) and patients with loss of vestibular function (Ps). Subjects were presented with horizontal sinusoidal rotations of an optokinetic pattern (OKP) alone (visual stimulus; 0.025-3.2 Hz; displacement amplitude, 8 degrees) or in combination with rotations of a Bárány chair (vestibular stimulus; 0.025-0.4 Hz; ±8 degrees). We found that specific instructions to the subjects created different perceptual states in which their self-motion perception essentially reflected three processing steps during pure visual stimulation: i) When Ns were primed by a procedure based on induced motion and then estimated perceived self-rotation upon pure optokinetic stimulation (circular vection, CV), CV had a gain close to unity up to frequencies of almost 0.8 Hz, followed by a sharp decrease at higher frequencies (i.e., characteristics resembling those of the optokinetic reflex, OKR, and of smooth pursuit, SP). ii) When Ns were instructed to "stare through" the optokinetic pattern, CV was absent at high frequencies, but increasingly developed as frequency was decreased below 0.1 Hz. iii) When Ns "looked at" the optokinetic pattern (accurately tracked it with their eyes), CV was usually absent, even at low frequencies. CV in Ps showed similar dynamics to Ns in condition i), independently of the instruction. During vestibular stimulation, self-motion perception in Ns fell from a maximum at 0.4 Hz to zero at 0.025 Hz. When vestibular stimulation was combined with visual stimulation while Ns "stared through" the OKP, perception at low frequencies became modulated in magnitude.
When Ns "looked at" the OKP, this modulation was reduced, apart from the synergistic stimulus combination (OKP stationary), where magnitude was similar to that during "staring". The obtained gain and phase curves of the perception were incompatible with linear-systems predictions. We therefore describe the present findings by a non-linear dynamic model in which the visual input is processed in three steps: i) it shows dynamics similar to those of the OKR and SP; ii) it is shaped to complement the vestibular dynamics and is fused with the vestibular signal by linear summation; and iii) it can be suppressed by a visual-vestibular conflict mechanism when the visual scene is moving in space. Finally, an important element of the model is a velocity threshold of about 1.2 degrees/s, which is instrumental in maintaining perceptual stability and in explaining the observed dynamics of perception. We conclude from the experimental and theoretical evidence that self-motion perception normally uses the visual scene as a reference, while the vestibular input is used to check the kinematic state of the scene; if the scene appears to move, the visual signal is suppressed and perception is based on the vestibular cue.
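The velocity threshold and the linear visual-vestibular summation in this model can be sketched in a few lines. The dead-zone form of the threshold and the toy fusion below are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of the velocity-threshold step: visual scene velocity
# contributes to perceived self-motion only once it exceeds a dead-zone
# of about 1.2 degrees/s.

THRESHOLD_DEG_S = 1.2  # perceptual velocity threshold from the model

def thresholded(v):
    """Symmetric dead-zone: scene velocities below threshold are ignored."""
    if abs(v) <= THRESHOLD_DEG_S:
        return 0.0
    return v - THRESHOLD_DEG_S if v > 0 else v + THRESHOLD_DEG_S

def perceived_self_motion(visual_v, vestibular_v):
    """Fuse the thresholded visual signal with the vestibular signal by
    linear summation (step ii of the model)."""
    return thresholded(visual_v) + vestibular_v
```

With this dead-zone, a slow scene drift of 1.0 deg/s evokes no vection, which is one way such a threshold keeps perception stable when the visual scene is nearly stationary.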

4.
Heading estimation involves both inertial and visual cues. Inertial motion is sensed by the labyrinth, somatic sensation by the body, and optic flow by the retina. Because the eye and head are mobile, these stimuli are sensed relative to different reference frames, and it remains unclear whether perception occurs in a common reference frame. Recent neurophysiologic evidence has suggested that the reference frames remain separate even at higher levels of processing, but has not addressed the resulting perception. Seven human subjects experienced a 2-s, 16 cm/s translation and/or a visual stimulus corresponding with this translation. For each condition, 72 stimuli (360° in 5° increments) were delivered in random order. After each stimulus the subject identified the perceived heading using a mechanical dial. Some trial blocks included interleaved conditions in which the influence of ±28° of gaze and/or head position was examined. The observations were fit using a two-degree-of-freedom population vector decoder (PVD) model, which considered the relative sensitivity to lateral motion and the coordinate-system offset. For visual stimuli, gaze shifts caused shifts in perceived heading estimates in the direction opposite the gaze shift in all subjects. These perceptual shifts averaged 13 ± 2° for eye-only gaze shifts and 17 ± 2° for eye-head gaze shifts. This finding indicates that visual headings are biased towards retinal coordinates. Similar gaze and head direction shifts prior to inertial headings had no significant influence on heading direction. Thus inertial headings are perceived in body-centered coordinates. Combined visual and inertial stimuli yielded intermediate results.
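The reference-frame result can be summarized numerically: visual headings shift partway toward retinal (gaze-centered) coordinates, while inertial headings stay body-centered. In the sketch below, the weight of roughly 15/28 is only implied by the reported 13-17° shifts for a 28° gaze offset; it is not a parameter taken from the authors' PVD fit.

```python
# Hedged sketch: predicted heading report given a gaze offset, by modality.

def perceived_heading(true_heading_deg, gaze_offset_deg, modality):
    """Shift visual headings partway toward retinal coordinates; leave
    inertial headings in body-centered coordinates."""
    weights = {"visual": 15.0 / 28.0,  # partial shift toward retinal coordinates
               "inertial": 0.0}        # no significant shift was found
    # The reported shifts are opposite the gaze shift, hence the subtraction.
    return true_heading_deg - weights[modality] * gaze_offset_deg
```

For a straight-ahead heading and a 28° rightward gaze shift, this toy prediction gives a roughly 15° bias opposite the gaze shift for visual headings and none for inertial ones.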

5.

Objective

Vection, a feeling of self-motion while being physically stationary, and postural sway can be modulated by various visual factors. Moreover, vection and postural sway are often found to be closely related when modulated by such visual factors, suggesting a common neural mechanism. One well-known visual factor is the depth order of the stimulus. The density, i.e. number of objects per unit area, is proposed to interact with the depth order in the modulation of vection and postural sway, which has only been studied to a limited degree.

Methods

We therefore exposed 17 participants to 18 different stimuli containing a stationary pattern and a pattern rotating around the naso-occipital axis. The density of both patterns was varied between 10 and 90%; the densities combined always added up to 100%. The rotating pattern occluded or was occluded by the stationary pattern, suggesting foreground or background motion, respectively. During pattern rotation participants reported vection by pressing a button, and postural sway was recorded using a force plate.

Results

Participants always reported more vection and swayed significantly more when rotation was perceived in the background and when the rotating pattern increased in density. As hypothesized, we found that the perceived depth order interacted with pattern density. A pattern rotating in the background with a density between 60 and 80% caused significantly more vection and postural sway than when it was perceived to rotate in the foreground.

Conclusions

The findings suggest that the ratio between fore- and background pattern densities, and not the density of the rotating pattern per se, is the important factor in the interaction with depth order. Moreover, the observation that vection and postural sway were modulated in a similar way points towards a common neural origin regulating both variables.

6.
The relative roles of visual and vestibular cues in determining the perceived distance of passive, linear self-motion were assessed. Seventeen subjects were given cues to constant-acceleration motion: either optic flow, physical motion in the dark, or combinations of visual and physical motion. Subjects indicated when they perceived they had traversed a distance that had been previously indicated either visually or physically. The perceived distance of motion evoked by optic flow was accurate relative to a visual target but was perceptually equivalent to a shorter physical motion. The perceived distance of physical motion in the dark was accurate relative to a previously presented physical motion but was perceptually equivalent to a much longer visually presented distance. The perceived distance of self-motion when both visual and physical cues were present was perceptually equivalent to the physical motion experienced, and not to the simultaneous visual motion, even when the target was presented visually. We describe this dominance of the physical cues in determining the perceived distance of self-motion as "vestibular capture".

7.
The mechanism of positional localization has recently been debated due to interest in the flash-lag effect, which occurs when a briefly flashed stationary stimulus is perceived to lag behind a spatially aligned moving stimulus. Here we report positional localization observed at motion offsets as well as at onsets. In the 'flash-lead' effect, a moving object is perceived to be behind a spatially concurrent stationary flash before the two disappear. With 'reverse-repmo', subjects mis-localize the final position of a moving bar in the direction opposite to the trajectory of motion. Finally, we demonstrate that simultaneous onset and offset effects lead to a perceived compression of visual space. By characterizing illusory effects observed at motion offsets as well as at onsets, we provide evidence that the perceived position of a moving object is the result of an averaging process over a short time period, weighted towards the most recent positions. Our account explains a variety of motion illusions, including the compression of moving shapes when viewed through apertures.

8.
This study mathematically characterizes the results of DiZio and Lackner (Percept Psychophys 39(1): 39–46) on the perception of self-orientation during circular vection induced by an optokinetic stimulus. Using the hypothesis of perceptual centering, it is shown that five basic centering transformations can logically account for the full range of illusions reported by the subjects. All five of these transformations center the perceived orientations of body components, the rotating disk, and gravity: two align the perceived visual and inertial rotation axes, one centers the perceived axis of visual rotation in front of the head, and two straighten the perceived neck angle. These transformations generate a mathematical semigroup. Application of the semigroup to an actual stimulus condition generates an orbit of predicted illusions. The semigroup analysis of perceptual centering predicts all of the illusions observed in the experiments of DiZio and Lackner. Moreover, the structure of perceptual centering (1) provides a logical explanation for the occurrence of those misperceptions, and (2) predicts the complete set of perceptions that are expected to occur in a larger sample. In addition, our analysis predicts illusions in experimental conditions not yet investigated.

9.
The effect of overlapping dynamic visual noise on visually induced self-motion perception (vection) by upward or downward optical flow was tested. The dynamic visual noise consisted of rapidly refreshed sparse random dots. The binocular disparity of the overlapping noise plane was varied. The results showed that when the noise was presented on the flow plane, or on a plane farther than the flow plane, vection was totally impaired. This demonstrates that dynamic visual noise is functionally equivalent to static patterns in the vection-suppression effect. The potential of dynamic visual noise as a vection suppressor in 3-D display applications is discussed in relation to simulator sickness.

10.
This article addresses the intersection between perceptual estimates of head motion based on purely vestibular and purely visual sensation, by considering how nonvisual (e.g. vestibular and proprioceptive) sensory signals for head and eye motion can be combined with visual signals available from a single landmark to generate a complete perception of self-motion. In order to do this, mathematical dimensions of sensory signals and perceptual parameterizations of self-motion are evaluated, and equations for the sensory-to-perceptual transition are derived. With constant velocity translation and vision of a single point, it is shown that visual sensation allows only for the externalization, to the frame of reference given by the landmark, of an inertial self-motion estimate from nonvisual signals. However, it is also shown that, with nonzero translational acceleration, use of simple visual signals provides a biologically plausible strategy for integration of inertial acceleration sensation, to recover translational velocity. A dimension argument proves similar results for horizontal flow of any number of discrete visible points. The results provide insight into the convergence of visual and vestibular sensory signals for self-motion and indicate perceptual algorithms by which primitive visual and vestibular signals may be integrated for self-motion perception.

11.
The motion aftereffect may be considered a consequence of visual illusions of self-motion (vection) and of the persistence of sensory information processing. There is ample experimental evidence indicating a uniformity of mechanisms underlying motion aftereffects in different modalities, based on the principle of motion detectors. Currently, there is firm ground to believe that the motion aftereffect is intrinsic to all sensory systems involved in spatial orientation, that motion adaptation in one sensory system elicits changes in another, and that such adaptation is of great adaptive importance for spatial orientation and motion of an organism. This review seeks to substantiate these ideas.

12.
Simultaneous object motion and self-motion give rise to complex patterns of retinal image motion. In order to estimate object motion accurately, the brain must parse this complex retinal motion into self-motion and object motion components. Although this computational problem can be solved, in principle, through purely visual mechanisms, extra-retinal information that arises from the vestibular system during self-motion may also play an important role. Here we investigate whether combining vestibular and visual self-motion information improves the precision of object motion estimates. Subjects were asked to discriminate the direction of object motion in the presence of simultaneous self-motion, depicted either by visual cues alone (i.e. optic flow) or by combined visual/vestibular stimuli. We report a small but significant improvement in object motion discrimination thresholds with the addition of vestibular cues. This improvement was greatest for eccentric heading directions and negligible for forward movement, a finding that could reflect increased relative reliability of vestibular versus visual cues for eccentric heading directions. Overall, these results are consistent with the hypothesis that vestibular inputs can help parse retinal image motion into self-motion and object motion components.
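The reliability argument in this abstract is the signature of reliability-weighted cue integration. The standard formula below (from maximum-likelihood cue combination, not taken from this paper's model) shows why the benefit is largest when the two cues are comparably reliable and negligible when one cue dominates.

```python
# Predicted discrimination threshold when two independent cues are
# combined with inverse-variance weights (standard optimal-integration
# formula, used here for illustration only).

def combined_threshold(sigma_visual, sigma_vestibular):
    """Threshold (sigma) of the optimally combined estimate."""
    combined_variance = 1.0 / (1.0 / sigma_visual ** 2 + 1.0 / sigma_vestibular ** 2)
    return combined_variance ** 0.5

# Comparable cues give a clear benefit; a much noisier vestibular cue
# leaves the visual threshold almost unchanged.
benefit_equal = combined_threshold(2.0, 2.0)    # clearly below 2.0
benefit_uneven = combined_threshold(2.0, 20.0)  # only just below 2.0
```

This matches the pattern reported: a negligible improvement for forward headings, where the vestibular cue is relatively unreliable, and a larger one for eccentric headings.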

13.
Pei YC, Hsiao SS, Craig JC, Bensmaia SJ. Neuron 2011, 69(3): 536–547
How are local motion signals integrated to form a global motion percept? We investigate the neural mechanisms of tactile motion integration by presenting tactile gratings and plaids to the fingertips of monkeys, using the tactile analogue of a visual monitor and recording the responses evoked in somatosensory cortical neurons. The perceived directions of the gratings and plaids are measured in parallel psychophysical experiments. We identify a population of somatosensory neurons that exhibit integration properties comparable to those induced by analogous visual stimuli in area MT and find that these neural responses account for the perceived direction of the stimuli across all stimulus conditions tested. The preferred direction of the neurons and the perceived direction of the stimuli can be predicted from the weighted average of the directions of the individual stimulus features, highlighting that the somatosensory system implements a vector average mechanism to compute tactile motion direction that bears striking similarities to its visual counterpart.

14.
To interpret visual scenes, visual systems need to segment or integrate multiple moving features into distinct objects or surfaces. Previous studies have found that the perceived direction separation between two transparently moving random-dot stimuli is wider than the actual direction separation. This perceptual “direction repulsion” is useful for segmenting overlapping motion vectors. Here we investigate the effects of motion noise on the directional interaction between overlapping moving stimuli. Human subjects viewed two overlapping random-dot patches moving in different directions and judged the direction separation between the two motion vectors. We found that the perceived direction separation progressively changed from wide to narrow as the level of motion noise in the stimuli was increased, showing a switch from direction repulsion to attraction (i.e. smaller than the veridical direction separation). We also found that direction attraction occurred at a wider range of direction separations than direction repulsion. The normalized effects of both direction repulsion and attraction were the strongest near the direction separation of ∼25° and declined as the direction separation further increased. These results support the idea that motion noise prompts motion integration to overcome stimulus ambiguity. Our findings provide new constraints on neural models of motion transparency and segmentation.

15.
The object of this study is to mathematically specify important characteristics of visual flow during translation of the eye for the perception of depth and self-motion. We address various strategies by which the central nervous system may estimate self-motion and depth from motion parallax, using equations for the visual velocity field generated by translation of the eye through space. Our results focus on information provided by the movement and deformation of three-dimensional objects and on local flow behavior around a fixated point. All of these issues are addressed mathematically in terms of definite equations for the optic flow. This formal characterization of the visual information presented to the observer is then considered in parallel with other sensory cues to self-motion, in order to see how these contribute to the effective use of visual motion parallax, and how parallactic flow can, conversely, contribute to the sense of self-motion. This article focuses on a central case for understanding motion parallax in spacious real-world environments: monocular visual cues observable during pure horizontal translation of the eye through a stationary environment. We suggest that the global optokinetic stimulus associated with visual motion parallax must converge in significant fashion with vestibular and proprioceptive pathways that carry signals related to self-motion. Experiments to test some of the predictions of this study are suggested.

16.
In Li and Atick's [1, 2] theory of efficient stereo coding, the two eyes' signals are transformed into uncorrelated binocular summation and difference signals, and gain control is applied to the summation and differencing channels to optimize their sensitivities. In natural vision, the optimal channel sensitivities vary from moment to moment, depending on the strengths of the summation and difference signals; these channels should therefore be separately adaptable, whereby a channel's sensitivity is reduced following overexposure to adaptation stimuli that selectively stimulate that channel. This predicts a remarkable effect of binocular adaptation on perceived direction of a dichoptic motion stimulus [3]. For this stimulus, the summation and difference signals move in opposite directions, so perceived motion direction (upward or downward) should depend on which of the two binocular channels is most strongly adapted, even if the adaptation stimuli are completely static. We confirmed this prediction: a single static dichoptic adaptation stimulus presented for less than 1 s can control perceived direction of a subsequently presented dichoptic motion stimulus. This is not predicted by any current model of motion perception and suggests that the visual cortex quickly adapts to the prevailing binocular image statistics to maximize information-coding efficiency.
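The summation/difference recoding at the heart of this account is easy to sketch: left- and right-eye signals are decorrelated into a sum channel (L + R) and a difference channel (L − R), each with its own adaptable gain. The variance-normalizing gain rule below is an illustrative stand-in for the theory's optimal sensitivities, not Li and Atick's exact formula.

```python
# Recode left/right eye signals into sum and difference channels, each
# with a gain that adapts to the channel's recent signal power.

def binocular_channels(left, right):
    """Decorrelate left/right eye signals into sum and difference channels."""
    s = [l + r for l, r in zip(left, right)]
    d = [l - r for l, r in zip(left, right)]
    return s, d

def channel_gain(channel):
    """Adaptable gain: a weaker (lower-power) channel gets higher sensitivity."""
    power = sum(v * v for v in channel) / len(channel)
    return 1.0 / (power ** 0.5 + 1e-9)

# Ordinary scenes are highly correlated between the eyes, so the
# difference channel is weak and its gain is boosted; selectively
# adapting one channel lowers its gain, which is what the static
# dichoptic adaptor exploits to flip perceived motion direction.
s, d = binocular_channels([1.0, 2.0, 3.0], [1.0, 2.0, 2.0])
```

With correlated inputs the difference channel carries little power, so under this toy gain rule its sensitivity exceeds the sum channel's, mirroring the asymmetry the adaptation experiment manipulates.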

17.
Perceptual decision making has been widely studied using tasks in which subjects are asked to discriminate a visual stimulus and instructed to report their decision with a movement. In these studies, performance is measured by assessing the accuracy of the participants’ choices as a function of the ambiguity of the visual stimulus. Typically, the reporting movement is considered a mere means of reporting the decision, with no influence on the decision-making process. However, recent studies have shown that even subtle differences in biomechanical cost between movements may influence how we select between them. Here we investigated whether this purely motor cost could also influence decisions in a perceptual discrimination task, to the detriment of accuracy. In other words, are perceptual decisions only dependent on the visual stimulus and entirely orthogonal to motor costs? We report the results of a psychophysical experiment in which human subjects were presented with a random-dot motion discrimination task and asked to report the perceived motion direction using movements of different biomechanical cost. We found that the pattern of decisions exhibited a significant bias towards the movement of lower cost, even when this bias reduced performance accuracy. This strongly suggests that motor costs influence decision making in visual discrimination tasks for which their contribution is neither instructed nor beneficial.

18.
In contradistinction to conventional wisdom, we propose that retinal image slip of a visual scene (optokinetic pattern, OP) does not constitute the only crucial input for visually induced percepts of self-motion (vection). Instead, we investigate the hypothesis that there are three input factors: 1) OP retinal image slip, 2) motion of the ocular orbital shadows across the retinae, and 3) smooth pursuit eye movements (efference copy). To test this hypothesis, we visually induced percepts of sinusoidal rotatory self-motion (circular vection, CV) in the absence of vestibular stimulation. Subjects were presented with three concurrent stimuli: a large visual OP, a fixation point to be pursued with the eyes (both projected in superposition on a semi-circular screen), and a dark window frame placed close to the eyes to create artificial visual field boundaries that simulate ocular orbital rim boundary shadows, but which could be moved across the retinae independently of eye movements. These stimuli were independently moved or kept stationary in different combinations. When moved together (horizontally and sinusoidally around the subject's head), they did so in precise temporal synchrony at 0.05 Hz. The results show that the occurrence of CV requires retinal slip of the OP and/or relative motion between the orbital boundary shadows and the OP. On the other hand, CV does not develop when the two retinal slip signals equal each other (no relative motion) and concur with pursuit eye movements (as is the case, e.g., when we follow the motion of a target across a stationary visual scene with our eyes). The findings were formalized in a simulation model. In the model, two signals coding relative motion between the OP and the head are fused and fed into the mechanism for CV: a visuo-oculomotor signal derived from OP retinal slip and the eye-movement efference copy, and a purely visual signal of relative motion between the orbital rims (head) and the OP.
The latter signal is also used, together with a version of the oculomotor efference copy, in a mechanism that suppresses CV at a later stage of processing under conditions in which the retinal slip signals are self-generated by smooth pursuit eye movements.

19.
Motion of a visual scene (an optokinetic stimulus) projected on a wide screen frequently induces motion sickness. Rotational movements of 3D visual images were analyzed to examine which factors are effective in visually-induced motion sickness and how gravity contributes to its inducement. The severity of visually-induced motion sickness was measured while two angles were varied: the angle of the rotational axis of the 3D visual image from the gravitational direction, and its angle from the subjective vertical perceived by viewers through the 3D visual image.

20.
Externally generated visual motion signals can cause the illusion of self-motion in space (vection) and corresponding visually evoked postural responses (VEPRs). These VEPRs are not simple responses to optokinetic stimulation, but are modulated by the configuration of the environment. The aim of this paper is to explore what factors modulate VEPRs in a high-quality virtual reality (VR) environment where real and virtual foreground objects served as static visual, auditory and haptic reference points. Data from four experiments on visually evoked postural responses show that: 1) visually evoked postural sway in the lateral direction is modulated by the presence of static anchor points, which can be haptic, visual or auditory reference signals; 2) real objects and their matching virtual reality representations have different effects on postural sway as visual anchors; 3) visual motion in the anterior-posterior plane induces robust postural responses that are not modulated by the presence of reference signals or by the reality of objects that can serve as visual anchors in the scene. We conclude that automatic postural responses to laterally moving visual stimuli are strongly influenced by the configuration and interpretation of the environment and draw on multisensory representations. Different postural responses were observed for real and virtual visual reference objects. On the basis that automatic visually evoked postural responses in high-fidelity virtual environments should mimic those seen in real situations, we propose to use the observed effect as a robust objective test for presence and fidelity in VR.
