Similar Documents
20 similar documents found (search time: 31 ms)
1.
In contradistinction to conventional wisdom, we propose that retinal image slip of a visual scene (optokinetic pattern, OP) does not constitute the only crucial input for visually induced percepts of self-motion (vection). Instead, we investigate the hypothesis that there are three input factors: 1) OP retinal image slip, 2) motion of the ocular orbital shadows across the retinae, and 3) smooth pursuit eye movements (efference copy). To test this hypothesis, we visually induced percepts of sinusoidal rotatory self-motion (circular vection, CV) in the absence of vestibular stimulation. Subjects were presented with three concurrent stimuli: a large visual OP, a fixation point to be pursued with the eyes (both projected in superposition on a semi-circular screen), and a dark window frame placed close to the eyes to create artificial visual field boundaries that simulate ocular orbital rim boundary shadows, but which could be moved across the retinae independently of eye movements. In different combinations these stimuli were independently moved or kept stationary. When moved together (horizontally and sinusoidally around the subject's head), they did so in precise temporal synchrony at 0.05 Hz. The results show that the occurrence of CV requires retinal slip of the OP and/or relative motion between the orbital boundary shadows and the OP. On the other hand, CV does not develop when the two retinal slip signals equal each other (no relative motion) and concur with pursuit eye movements (as is the case, e.g., when we follow with the eyes the motion of a target across a stationary visual scene). The findings were formalized in terms of a simulation model. In the model, two signals coding relative motion between OP and head are fused and fed into the mechanism for CV: a visuo-oculomotor signal, derived from OP retinal slip and eye-movement efference copy, and a purely visual signal of relative motion between the orbital rims (head) and the OP.
The latter signal is also used, together with a version of the oculomotor efference copy, for a mechanism that suppresses CV at a later stage of processing in conditions in which the retinal slip signals are self-generated by smooth pursuit eye movements.
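The two-pathway fusion described in this model can be sketched as follows. All signal names, weights, and the pursuit scenario are illustrative assumptions for the sketch, not the authors' actual model parameters:

```python
import numpy as np

def cv_drive(op_retinal_slip, efference_copy, rim_vs_op_motion,
             w_vo=0.5, w_vis=0.5):
    """Fuse two estimates of OP-relative-to-head motion into a CV drive.

    1) Visuo-oculomotor estimate: OP retinal slip plus eye-movement
       efference copy reconstructs OP motion relative to the head.
    2) Purely visual estimate: relative motion between the orbital rim
       shadows (head-fixed) and the OP.
    """
    visuo_oculomotor = op_retinal_slip + efference_copy
    return w_vo * visuo_oculomotor + w_vis * rim_vs_op_motion

# Scenario: pursuing a target moving across a stationary scene at 0.05 Hz.
t = np.linspace(0.0, 20.0, 1000)
eye_velocity = np.sin(2 * np.pi * 0.05 * t)   # smooth pursuit velocity
op_slip = -eye_velocity                        # stationary scene: slip = -eye motion
drive = cv_drive(op_slip, eye_velocity, rim_vs_op_motion=np.zeros_like(t))
# The efference copy cancels the self-generated retinal slip and the
# rim-vs-OP term is zero, so the vection drive stays at zero.
```

In this toy case no CV drive arises, matching the behavioral result above that CV does not develop when retinal slip is fully accounted for by pursuit.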

2.
Little is known about mechanisms mediating a stable perception of the world during pursuit eye movements. Here, we used fMRI to determine to what extent human motion-responsive areas integrate planar retinal motion with nonretinal eye movement signals in order to discard self-induced planar retinal motion and to respond to objective ("real") motion. In contrast to other areas, V3A lacked responses to self-induced planar retinal motion but responded strongly to head-centered motion, even when retinally canceled by pursuit. This indicates a near-complete multimodal integration of visual with nonvisual planar motion signals in V3A. V3A could be mapped selectively and robustly in every single subject on this basis. V6 also reported head-centered planar motion, even when 3D flow was added to it, but was suppressed by retinal planar motion. These findings suggest a dominant contribution of human areas V3A and V6 to head-centered motion perception and to perceptual stability during eye movements.

3.
Because of the limited processing capacity of the eye, retinal networks must adapt constantly to best present the ever-changing visual world to the brain. However, we still know little about how adaptation in retinal networks shapes the neural encoding of changing information. To study this question, we recorded voltage responses from photoreceptors (R1–R6) and their output neurons (LMCs) in the Drosophila eye to repeated patterns of contrast values collected from natural scenes. By analyzing the continuous photoreceptor-to-LMC transformations of these graded-potential neurons, we show that the efficiency of coding is dynamically improved by adaptation. In particular, adaptation enhances both the frequency and amplitude distribution of LMC output by improving sensitivity to under-represented signals within seconds. Moreover, the signal-to-noise ratio of LMC output increases on the same time scale. We suggest that these coding properties can be used to study network adaptation with the genetic tools available in Drosophila, as shown in a companion paper (Part II).

4.
It is well known that the canal-driven vestibulo-ocular reflex (VOR) is controlled and modulated by the central nervous system using external sensory information (e.g., visual, otolithic and somatosensory inputs) and by mental state. Because retinal image motion originates both from the subject (eye, head and body motion) and from the external world (object motion), head motion must be canceled and/or the object followed by smooth eye movements. Humans have developed several central nervous mechanisms for smooth eye movements (e.g., the VOR, the optokinetic reflex and smooth pursuit). These mechanisms are thought to serve the goal of seeing better, with distinct mechanisms engaged for particular combinations of self-motion and object motion. As a result, the mechanisms as a whole are controlled in a purpose-directed manner, which can be achieved by a self-organizing holistic system. Such a holistic-system view is very useful for understanding human oculomotor behavior.

5.
T Haarmeier  F Bunjes  A Lindner  E Berret  P Thier 《Neuron》2001,32(3):527-535
We usually perceive a stationary, stable world and we are able to correctly estimate the direction of heading from optic flow despite coherent visual motion induced by eye movements. This astonishing example of perceptual invariance results from a comparison of visual information with internal reference signals predicting the visual consequences of an eye movement. Here we demonstrate that the reference signal predicting the consequences of smooth-pursuit eye movements is continuously calibrated on the basis of direction-selective interactions between the pursuit motor command and the rotational flow induced by the eye movement, thereby minimizing imperfections of the reference signal and guaranteeing an ecologically optimal interpretation of visual motion.

6.
To maintain optimal clarity of objects moving slowly in three-dimensional space, frontal-eyed primates use both smooth-pursuit and vergence (depth) eye movements to track those objects precisely and maintain their images on the foveae of the left and right eyes. The caudal parts of the frontal eye fields contain neurons that discharge during smooth pursuit. Recent results have provided a new understanding of the roles of the frontal eye field pursuit area and suggest that it may control the gain of pursuit eye movements, code predictive visual signals that drive pursuit, and code commands for smooth eye movements in a three-dimensional coordinate frame.

7.
Smooth pursuit eye movements change the retinal image velocity of objects in the visual field. In order to change from a retinocentric frame of reference into a head-centric one, the visual system has to take the eye movements into account. Studies on motion perception during smooth pursuit eye movements have measured either perceived speed or perceived direction during pursuit to investigate this frame-of-reference transformation, but never both at the same time. We devised a new velocity matching task, in which participants matched both perceived speed and direction during fixation to that during pursuit. In Experiment 1, the velocity matches were determined for a range of stimulus directions, with the head-centric stimulus speed kept constant. In Experiment 2, the retinal stimulus speed was kept approximately constant, with the same range of stimulus directions. In both experiments, the velocity matches for all directions were shifted against the pursuit direction, suggesting an incomplete transformation of the frame of reference. The degree of compensation was approximately constant across stimulus directions. We fitted the classical linear model, the model of Turano and Massof (2001), and that of Freeman (2001) to the velocity matches. The model of Turano and Massof fitted the velocity matches best, but the differences between the model fits were quite small. Evaluation of the models and comparison with a few alternatives suggests that further specification of the potential effect of retinal image characteristics on the eye movement signal is needed.
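The classical linear model referred to above can be written as a one-line compensation rule. The gain value below is an illustrative assumption, chosen only to reproduce the kind of incomplete compensation the experiments report:

```python
import numpy as np

def perceived_velocity(retinal_v, eye_v, eye_signal_gain=0.8):
    """Classical linear model: perceived head-centric velocity equals
    retinal velocity plus a gain-scaled eye-velocity signal.
    A gain below 1 yields incomplete compensation for pursuit.
    Velocities are 2D (x, y) arrays in deg/s; pursuit is along +x.
    """
    return np.asarray(retinal_v) + eye_signal_gain * np.asarray(eye_v)

# A stimulus moving straight up (head-centric) during rightward pursuit:
eye_v = np.array([10.0, 0.0])       # pursuit velocity
head_v = np.array([0.0, 5.0])       # true stimulus velocity
retinal_v = head_v - eye_v          # retinal motion = head motion minus eye motion
p = perceived_velocity(retinal_v, eye_v)
# p = [-2.0, 5.0]: the percept is shifted against the pursuit direction,
# as in the velocity matches described above.
```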

8.
Smooth pursuit eye movements provide a good model system for cerebellar studies of complex motor control in monkeys. First, the pursuit system exhibits predictive control along complex trajectories and this control improves with training. Second, the flocculus/paraflocculus region of the cerebellum appears to generate this control. Lesions impair pursuit and neural activity patterns are closely related to eye motion during complex pursuit. Importantly, neural responses lead eye motion during predictive pursuit and lag eye motion during non-predictable target motions that require visual control. The idea that flocculus/paraflocculus predictive control is non-visual is also supported by a lack of correlation between neural activity and retinal image motion during pursuit. Third, biologically accurate neural network models of the flocculus/paraflocculus allow the exploration and testing of pursuit mechanisms. Our current model can generate predictive control without visual input in a manner that is compatible with the extensive experimental data available for this cerebellar system. Similar types of non-visual cerebellar control are likely to facilitate the wide range of other skilled movements that are observed.

9.
The posterior parietal cortex has long been considered an "association" area that combines information from different sensory modalities to form a cognitive representation of space. However, until recently little has been known about the neural mechanisms responsible for this important cognitive process. Recent experiments from the author's laboratory indicate that visual, somatosensory, auditory and vestibular signals are combined in areas LIP and 7a of the posterior parietal cortex. The integration of these signals can represent the locations of stimuli with respect to the observer and within the environment. Area MSTd combines visual motion signals, similar to those generated during an observer's movement through the environment, with eye-movement and vestibular signals. This integration appears to play a role in specifying the path on which the observer is moving. All three cortical areas combine different modalities into common spatial frames by using a gain-field mechanism. The spatial representations in areas LIP and 7a appear to be important for specifying the locations of targets for actions such as eye movements or reaching; the spatial representation within area MSTd appears to be important for navigation and the perceptual stability of motion signals.

10.
Lesions of the posterior parietal cortex in monkeys and humans produce spatial deficits in movement and perception. In recording experiments from area 7a, a cortical subdivision of the posterior parietal cortex in monkeys, we have found neurons whose responses are a function of both the retinal location of visual stimuli and the position of the eyes in the orbits. By combining these signals, area 7a neurons code the location of visual stimuli with respect to the head. However, these cells respond over only limited ranges of eye positions (eye-position-dependent coding). To code location in craniotopic space at all eye positions (eye-position-independent coding), an additional step in neural processing is required that uses information distributed across populations of area 7a neurons. We describe here a neural network model, based on back-propagation learning, that both demonstrates how spatial location could be derived from the population response of area 7a neurons and accurately accounts for the observed response properties of these neurons.
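A single gain-field unit of the kind such a model combines can be sketched as below. The receptive-field width, weights, and rectified planar gain are illustrative assumptions, not the fitted network:

```python
import numpy as np

rng = np.random.default_rng(0)

def gain_field_response(retinal_pos, eye_pos, rf_center, eye_weights):
    """Visual response (Gaussian retinal receptive field) scaled by a
    rectified planar function of eye position. Head-centered location
    (retinal position + eye position) can then be read out from a
    population of such units."""
    visual = np.exp(-np.sum((retinal_pos - rf_center) ** 2) / (2 * 5.0 ** 2))
    gain = max(0.0, 1.0 + float(eye_weights @ eye_pos))  # planar eye-position gain
    return visual * gain

# Population response to one stimulus at a fixed head-centered location:
rf_centers = rng.uniform(-20.0, 20.0, size=(50, 2))     # deg, retinal RF centers
eye_weights = rng.uniform(-0.02, 0.02, size=(50, 2))    # per-unit eye-position weights
eye = np.array([5.0, -3.0])                             # eye position in orbit (deg)
head_target = np.array([2.0, 4.0])                      # head-centered target (deg)
retinal = head_target - eye                             # retinal stimulus location
pop = np.array([gain_field_response(retinal, eye, c, w)
                for c, w in zip(rf_centers, eye_weights)])
```

Each unit alone is eye-position-dependent; only the distributed population response carries the eye-position-independent location, which is the point of the back-propagation readout described above.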

11.
Zhang T  Heuer HW  Britten KH 《Neuron》2004,42(6):993-1001
The ventral intraparietal area (VIP) is a multimodal parietal area, where visual responses are brisk, directional, and typically selective for complex optic flow patterns. VIP thus could provide signals useful for visual estimation of heading (self-motion direction). A central problem in heading estimation is how observers compensate for eye velocity, which distorts the retinal motion cues upon which perception depends. To find out if VIP could be useful for heading, we measured its responses to simulated trajectories, both with and without eye movements. Our results showed that most VIP neurons very strongly signal heading direction. Furthermore, the tuning of most VIP neurons was remarkably stable in the presence of eye movements. This stability was such that the population of VIP neurons represented heading very nearly in head-centered coordinates. This makes VIP the most robust source of such signals yet described, with properties ideal for supporting perception.

12.
We report a model that reproduces many of the behavioral properties of smooth pursuit eye movements. The model is a negative-feedback system that uses three parallel visual motion pathways to drive pursuit. The three visual pathways process image motion, defined as target motion with respect to the moving eye, and provide signals related to image velocity, image acceleration, and a transient that occurs at the onset of target motion. The three visual motion signals are summed and integrated to produce the eye velocity output of the model. The model reproduces the average eye velocity evoked by steps of target velocity in monkeys and humans and accounts for the variation among individual responses and subjects. When its motor pathways are expanded to include positive feedback of eye velocity and a switch, the model reproduces the exponential decay in eye velocity observed when a moving target stops. Manipulation of this expanded model can mimic the effects of stimulation and lesions in the arcuate pursuit area, the middle temporal visual area (MT), and the medial superior temporal visual area (MST).
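A toy discrete-time version of this negative-feedback architecture might look as follows. The gains, the 100 ms visual delay, and the integrator time constant are illustrative assumptions, and the acceleration and onset-transient pathways are collapsed into a single delayed derivative term rather than the fitted model:

```python
import numpy as np

def simulate_pursuit(target_v, dt=0.001, k_v=0.9, k_a=0.01,
                     delay_s=0.1, tau=0.1):
    """Negative-feedback pursuit sketch: delayed image velocity and image
    acceleration (image motion = target motion with respect to the moving
    eye) are summed and integrated into eye velocity, closing the loop."""
    n = len(target_v)
    delay = int(delay_s / dt)
    eye_v = np.zeros(n)
    image_v = np.zeros(n)
    for t in range(1, n):
        image_v[t] = target_v[t] - eye_v[t - 1]            # retinal image velocity
        td = max(t - delay, 0)                             # delayed visual sample
        image_a = (image_v[td] - image_v[max(td - 1, 0)]) / dt
        drive = k_v * image_v[td] + k_a * image_a          # summed visual pathways
        eye_v[t] = eye_v[t - 1] + drive * dt / tau         # integrate to eye velocity
    return eye_v

# 10 deg/s step of target velocity at t = 0.2 s:
target = np.concatenate([np.zeros(200), 10.0 * np.ones(1300)])
eye_velocity = simulate_pursuit(target)
# Eye velocity stays at zero during the visual delay, then rises toward
# the target speed, overshooting slightly before settling.
```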

13.
Eye movements constitute one of the most basic means of interacting with our environment, allowing us to orient to, localize and scrutinize the variety of potentially interesting objects that surround us. In this review we discuss the role of the parietal cortex in the control of saccadic and smooth pursuit eye movements, whose purposes are to rapidly displace the line of gaze and to maintain a moving object on the central retina, respectively. From single-cell recording studies in the monkey we know that distinct sub-regions of the parietal lobe are implicated in these two kinds of movement. The middle temporal (MT) and medial superior temporal (MST) areas show neuronal activities related to moving visual stimuli and to ocular pursuit. The lateral intraparietal (LIP) area exhibits visual and saccadic neuronal responses. Electrophysiology, which in essence is a correlational method, cannot entirely resolve the question of the functional role of these areas: are they primarily involved in sensory processing, in motor processing, or in some intermediate function? Lesion approaches (reversible or permanent) in the monkey can provide important information in this respect. Lesions of MT or MST produce deficits in the perception of visual motion, which argues for a role in the sensory guidance of ocular pursuit rather than in directing motor commands to the eye muscles. Lesions of LIP do not produce specific visual impairments and cause only subtle saccadic deficits. However, recent results have shown the presence of severe deficits in spatial attention tasks. LIP could thus be implicated in the selection of relevant objects in the visual scene and provide a signal for directing the eyes toward these objects. Functional imaging studies in humans confirm the role of the parietal cortex in pursuit, saccadic, and attentional networks, and show a high degree of overlap with monkey data. Parietal lobe lesions in humans also result in behavioral deficits very similar to those observed in the monkey. Altogether, these different sources of data consistently point to the involvement of the parietal cortex in the representation of space, at an intermediate stage between vision and action.

14.
A translating eye receives a radial pattern of motion that is centered on the direction of heading. If the eye is rotating and translating, visual and extraretinal signals help to cancel the rotation and to perceive heading correctly. This involves (1) an interaction between visual and eye movement signals and (2) a motion template stage that analyzes the pattern of visual motion. Early interaction leads to motion templates that integrate head-centered motion signals in the visual field; integration of retinal motion signals leads to late interaction. Here, we show that retinal flow limits the precision of heading judgments. This result argues against an early, vector-subtraction type of interaction, but is consistent with a late, gain-field type of interaction with eye velocity signals and with neurophysiological findings in area MST of the monkey.
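The early vector-subtraction account can be illustrated with a toy flow field; the geometry, noise level, and least-squares readout below are all illustrative assumptions. The point is that even perfect subtraction of eye velocity leaves heading precision limited by noise in the retinal flow:

```python
import numpy as np

rng = np.random.default_rng(1)

def retinal_flow(points, heading, pursuit_v, noise_sd=0.0):
    """Translation gives radial flow centered on the heading point;
    smooth pursuit adds a uniform rotational component; sensory noise
    corrupts each retinal motion vector."""
    radial = points - heading                       # expansion centered on heading
    noise = rng.normal(0.0, noise_sd, points.shape)
    return radial + pursuit_v + noise               # pursuit adds uniform flow

points = rng.uniform(-10.0, 10.0, size=(200, 2))    # sampled scene points (deg)
heading = np.array([2.0, 0.0])                      # true heading direction
pursuit = np.array([0.0, 3.0])                      # eye velocity during pursuit

flow = retinal_flow(points, heading, pursuit, noise_sd=0.5)
head_centered = flow - pursuit                      # "early" vector subtraction
# Least-squares focus-of-expansion estimate from the corrected flow:
est_heading = np.mean(points - head_centered, axis=0)
# est_heading recovers the heading only up to the retinal noise, so
# precision is flow-limited even with a perfect extraretinal signal.
```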

15.
Spatial updating in human parietal cortex
Merriam EP  Genovese CR  Colby CL 《Neuron》2003,39(2):361-373
Single neurons in monkey parietal cortex update visual information in conjunction with eye movements. This remapping of stimulus representations is thought to contribute to spatial constancy. We hypothesized that a similar process occurs in human parietal cortex and that we could visualize it with functional MRI. We scanned subjects during a task that involved remapping of visual signals across hemifields. We observed an initial response in the hemisphere contralateral to the visual stimulus, followed by a remapped response in the hemisphere ipsilateral to the stimulus. We ruled out the possibility that this remapped response resulted from either eye movements or visual stimuli alone. Our results demonstrate that updating of visual information occurs in human parietal cortex.

16.
Visual perception is burdened with a highly discontinuous input stream arising from saccadic eye movements. For successful integration into a coherent representation, the visuomotor system needs to deal with these self-induced perceptual changes and distinguish them from external motion. Forward models are one way to solve this problem, where the brain uses internal monitoring signals associated with oculomotor commands to predict the visual consequences of corresponding eye movements during active exploration. Visual scenes typically contain a rich structure of spatial relational information, providing additional cues that may help disambiguate self-induced from external changes of perceptual input. We reasoned that a weighted integration of these two inherently noisy sources of information should lead to better perceptual estimates. Volunteer subjects performed a simple perceptual decision on the apparent displacement of a visual target, jumping unpredictably in sync with a saccadic eye movement. In a critical test condition, the target was presented together with a flanker object, where perceptual decisions could take into account the spatial distance between target and flanker object. Here, precision was better compared to control conditions in which target displacements could only be estimated from either extraretinal or visual relational information alone. Our findings suggest that under natural conditions, integration of visual space across eye movements is based upon close to optimal integration of both retinal and extraretinal pieces of information.

17.
How our vision remains stable in spite of the interruptions produced by saccadic eye movements has been a repeatedly revisited perceptual puzzle. The major hypothesis is that a corollary discharge (CD) or efference copy signal provides information that the eye has moved, and this information is used to compensate for the motion. There has been progress in the search for neuronal correlates of such a CD in the monkey brain, the best animal model of the human visual system. In this article, we briefly summarize the evidence for a CD pathway to frontal cortex, and then consider four questions on the relation of neuronal mechanisms in the monkey brain to stable visual perception. First, how can we determine whether the neuronal activity is related to stable visual perception? Second, is the activity a possible neuronal correlate of the proposed transsaccadic memory hypothesis of visual stability? Third, are the neuronal mechanisms modified by visual attention, and does our perceived visual stability actually result from neuronal mechanisms related primarily to the central visual field? Fourth, does the pathway from superior colliculus through the pulvinar nucleus to visual cortex contribute to visual stability through suppression of the visual blur produced by saccades?

18.
Biber U  Ilg UJ 《PloS one》2011,6(1):e16265
Eye movements create an ever-changing image of the world on the retina. In particular, frequent saccades call for a compensatory mechanism to transform the changing visual information into a stable percept. To this end, the brain presumably uses internal copies of motor commands. Electrophysiological recordings of visual neurons in the primate lateral intraparietal cortex, the frontal eye fields, and the superior colliculus suggest that the receptive fields (RFs) of special neurons shift towards their post-saccadic positions before the onset of a saccade. However, the perceptual consequences of these shifts remain controversial. We wanted to test in humans whether a remapping of motion adaptation occurs in visual perception. The motion aftereffect (MAE) occurs after viewing of a moving stimulus as an apparent movement in the opposite direction. We designed a saccade paradigm suitable for revealing pre-saccadic remapping of the MAE. Indeed, a transfer of motion adaptation from the pre-saccadic to the post-saccadic position could be observed when subjects prepared saccades. In the remapping condition, the strength of the MAE was comparable to the effect measured in a control condition (33±7% vs. 27±4%). In contrast, after a saccade or without saccade planning, the MAE was weak or absent when adaptation and test stimulus were located at different retinal locations, i.e. the effect was clearly retinotopic. Regarding visual cognition, our study reveals for the first time predictive remapping of the MAE but no spatiotopic transfer across saccades. Since the cortical sites involved in motion adaptation in primates are most likely the primary visual cortex and the middle temporal area (MT/V5), corresponding to human MT, our results suggest that pre-saccadic remapping extends to these areas, which have been associated with strict retinotopy and therefore with classical RF organization. The pre-saccadic transfer of visual features demonstrated here may be a crucial determinant of a stable percept despite saccades.

19.
Visual illusions are valuable tools for the scientific examination of the mechanisms underlying perception. In the peripheral drift illusion special drift patterns appear to move although they are static. During fixation small involuntary eye movements generate retinal image slips which need to be suppressed for stable perception. Here we show that the peripheral drift illusion reveals the mechanisms of perceptual stabilization associated with these micromovements. In a series of experiments we found that illusory motion was only observed in the peripheral visual field. The strength of illusory motion varied with the degree of micromovements. However, drift patterns presented in the central (but not the peripheral) visual field modulated the strength of illusory peripheral motion. Moreover, although central drift patterns were not perceived as moving, they elicited illusory motion of neutral peripheral patterns. Central drift patterns modulated illusory peripheral motion even when micromovements remained constant. Interestingly, perceptual stabilization was only affected by static drift patterns, but not by real motion signals. Our findings suggest that perceptual instabilities caused by fixational eye movements are corrected by a mechanism that relies on visual rather than extraretinal (proprioceptive or motor) signals, and that drift patterns systematically bias this compensatory mechanism. These mechanisms may be revealed by utilizing static visual patterns that give rise to the peripheral drift illusion, but remain undetected with other patterns. Accordingly, the peripheral drift illusion is of unique value for examining processes of perceptual stabilization.

20.
