Similar Documents
20 similar documents retrieved.
1.
How our vision remains stable in spite of the interruptions produced by saccadic eye movements has been a repeatedly revisited perceptual puzzle. The major hypothesis is that a corollary discharge (CD) or efference copy signal provides information that the eye has moved, and this information is used to compensate for the motion. There has been progress in the search for neuronal correlates of such a CD in the monkey brain, the best animal model of the human visual system. In this article, we briefly summarize the evidence for a CD pathway to frontal cortex, and then consider four questions on the relation of neuronal mechanisms in the monkey brain to stable visual perception. First, how can we determine whether the neuronal activity is related to stable visual perception? Second, is the activity a possible neuronal correlate of the proposed transsaccadic memory hypothesis of visual stability? Third, are the neuronal mechanisms modified by visual attention and does our perceived visual stability actually result from neuronal mechanisms related primarily to the central visual field? Fourth, does the pathway from superior colliculus through the pulvinar nucleus to visual cortex contribute to visual stability through suppression of the visual blur produced by saccades?  相似文献   

2.
D Cheong, JK Zubieta, J Liu. PLoS ONE 2012, 7(6): e39854
Predicting the trajectories of moving objects in our surroundings is important for many life scenarios, such as driving, walking, reaching, hunting and combat. We determined human subjects' performance and task-related brain activity in a motion trajectory prediction task. The task required spatial and motion working memory as well as the ability to extrapolate motion information in time to predict future object locations. We showed that the neural circuits associated with motion prediction included frontal, parietal and insular cortex, as well as the thalamus and the visual cortex. Interestingly, deactivation of many of these regions seemed to be more closely related to task performance. The differential activity during motion prediction vs. direct observation was also correlated with task performance. The neural networks involved in our visual motion prediction task are significantly different from those that underlie visual motion memory and imagery. Our results set the stage for the examination of the effects of deficiencies in these networks, such as those caused by aging and mental disorders, on visual motion prediction and its consequences on mobility related daily activities.  相似文献   

3.
In the present review, we address the relationship between attention and visual stability. Even though with each eye, head and body movement the retinal image changes dramatically, we perceive the world as stable and are able to perform visually guided actions. However, visual stability is not as complete as introspection would lead us to believe. We attend to only a few items at a time and stability is maintained only for those items. There appear to be two distinct mechanisms underlying visual stability. The first is a passive mechanism: the visual system assumes the world to be stable, unless there is a clear discrepancy between the pre- and post-saccadic image of the region surrounding the saccade target. This is related to the pre-saccadic shift of attention, which allows for an accurate preview of the saccade target. The second is an active mechanism: information about attended objects is remapped within retinotopic maps to compensate for eye movements. The locus of attention itself, which is also characterized by localized retinotopic activity, is remapped as well. We conclude that visual attention is crucial in our perception of a stable world.  相似文献   

4.
How our perceptual experience of the world remains stable and continuous in the face of continuous rapid eye movements still remains a mystery. This review discusses some recent progress towards understanding the neural and psychophysical processes that accompany these eye movements. We firstly report recent evidence from imaging studies in humans showing that many brain regions are tuned in spatiotopic coordinates, but only for items that are actively attended. We then describe a series of experiments measuring the spatial and temporal phenomena that occur around the time of saccades, and discuss how these could be related to visual stability. Finally, we introduce the concept of the spatio-temporal receptive field to describe the local spatiotopicity exhibited by many neurons when the eyes move.  相似文献   

5.
Burr D. Current Biology 2004, 14(5): R195-R197
A long-standing problem for vision researchers is how our perception of the world remains stable despite the continual motion of our eyes. Three recent studies begin to shed light on how the visual system suppresses the motion generated by these eye movements.  相似文献   

6.
Image motion is a primary source of visual information about the world. However, before this information can be used the visual system must determine the spatio-temporal displacements of the features in the dynamic retinal image, which originate from objects moving in space. This is known as the motion correspondence problem. We investigated whether cross-cue matching constraints contribute to the solution of this problem, which would be consistent with physiological reports that many directionally selective cells in the visual cortex also respond to additional visual cues. We measured the maximum displacement limit (Dmax) for two-frame apparent motion sequences. Dmax increases as the number of elements in such sequences decreases. However, in our displays the total number of elements was kept constant while the number of a subset of elements, defined by a difference in contrast polarity, binocular disparity or colour, was varied. Dmax increased as the number of elements distinguished by a particular cue was decreased. Dmax was affected by contrast polarity for all observers, but only some observers were influenced by binocular disparity and others by colour information. These results demonstrate that the human visual system exploits local, cross-cue matching constraints in the solution of the motion correspondence problem.  相似文献   
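To illustrate the kind of constraint this abstract describes, here is a small, hypothetical simulation (not the authors' stimuli or analysis; the element count, field size and nearest-neighbour matcher are assumptions): nearest-neighbour matching across a coherently displaced two-frame dot field stays correct out to larger displacements when candidate matches are restricted to elements of the same contrast polarity, because the density of potential false matches is halved.

```python
import numpy as np

rng = np.random.default_rng(0)

def match_accuracy(displacement, n_dots=200, field=500.0, use_polarity=False):
    """Fraction of frame-2 elements whose nearest frame-1 neighbour is their
    true predecessor, with or without a same-polarity matching constraint."""
    frame1 = rng.uniform(0.0, field, size=(n_dots, 2))
    polarity = rng.integers(0, 2, size=n_dots)            # 0 = dark, 1 = light element
    frame2 = frame1 + np.array([displacement, 0.0])       # coherent two-frame displacement
    correct = 0
    for i, p in enumerate(frame2):
        candidates = np.arange(n_dots)
        if use_polarity:                                   # cross-cue matching constraint
            candidates = candidates[polarity == polarity[i]]
        dist = np.linalg.norm(frame1[candidates] - p, axis=1)
        if candidates[np.argmin(dist)] == i:
            correct += 1
    return correct / n_dots

for d in (5, 20, 40, 80):                                  # displacement in arbitrary units
    print(d, match_accuracy(d), match_accuracy(d, use_polarity=True))
```

With the polarity constraint switched on, correct matching survives to roughly twice the displacement, which is the qualitative pattern the Dmax measurements probe.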

7.
How our perceptual experience of the world remains stable and continuous despite the eyes' frequent repositioning movements is very much a mystery. One possibility is that our brain actively constructs a spatiotopic representation of the world, anchored in external (or at least head-centred) coordinates. In this study, we show that the positional motion aftereffect (the change in apparent position after adaptation to motion) is spatially selective in external rather than retinal coordinates, whereas the classic motion aftereffect (the illusion of motion after prolonged inspection of a moving source) is selective in retinotopic coordinates. The results provide clear evidence for a spatiotopic map in humans: one which can be influenced by image motion.

8.
The primate brain intelligently processes visual information from the world as the eyes move constantly. The brain must take into account visual motion induced by eye movements, so that visual information about the outside world can be recovered. Certain neurons in the dorsal part of monkey medial superior temporal area (MSTd) play an important role in integrating information about eye movements and visual motion. When a monkey tracks a moving target with its eyes, these neurons respond to visual motion as well as to smooth pursuit eye movements. Furthermore, the responses of some MSTd neurons to the motion of objects in the world are very similar during pursuit and during fixation, even though the visual information on the retina is altered by the pursuit eye movement. We call these neurons compensatory pursuit neurons. In this study we develop a computational model of MSTd compensatory pursuit neurons based on physiological data from single unit studies. Our model MSTd neurons can simulate the velocity tuning of monkey MSTd neurons. The model MSTd neurons also show the pursuit compensation property. We find that pursuit compensation can be achieved by divisive interaction between signals coding eye movements and signals coding visual motion. The model generates two implications that can be tested in future experiments: (1) compensatory pursuit neurons in MSTd should have the same direction preference for pursuit and retinal visual motion; (2) there should be non-compensatory pursuit neurons that show opposite preferred directions of pursuit and retinal visual motion.  相似文献   
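The divisive interaction referred to here can be sketched with a toy unit (an assumed illustration, not the published MSTd model; the exponential velocity code and gain terms are chosen only to make the compensation explicit): dividing a retinal-velocity drive by a pursuit-dependent gain yields an output that depends on world (head-centred) velocity, so the response to a moving object is the same during fixation and during pursuit.

```python
import numpy as np

def model_mstd_response(world_velocity, pursuit_velocity, scale=20.0):
    """Toy MSTd-like unit: the visual drive encodes retinal velocity, and a
    pursuit-dependent gain divides it. The division restores a dependence on
    world (head-centred) velocity, i.e. pursuit compensation."""
    retinal_velocity = world_velocity - pursuit_velocity   # pursuit subtracts eye motion
    visual_drive = np.exp(retinal_velocity / scale)        # monotonic retinal-velocity code
    pursuit_gain = np.exp(-pursuit_velocity / scale)       # extraretinal (efference-like) signal
    return visual_drive / pursuit_gain                     # divisive interaction

# Same object motion in the world under three pursuit conditions:
# a compensatory unit should give (nearly) identical responses.
for pursuit in (0.0, 5.0, 10.0):                           # deg/s
    print(pursuit, model_mstd_response(world_velocity=8.0, pursuit_velocity=pursuit))
```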

9.
Our ability to interact with the environment hinges on creating a stable visual world despite the continuous changes in retinal input. To achieve visual stability, the brain must distinguish retinal image shifts caused by eye movements from shifts due to movements of the visual scene. This process appears not to be flawless: during saccades, we often fail to detect whether visual objects remain stable or move, which is called saccadic suppression of displacement (SSD). How does the brain evaluate the memorized information of the presaccadic scene and the actual visual feedback of the postsaccadic visual scene in the computations for visual stability? Using an SSD task, we test how participants localize the presaccadic position of the fixation target, the saccade target, or a peripheral non-foveated target that was displaced parallel or orthogonal to a horizontal saccade, and subsequently viewed for three different durations. Results showed different localization errors for the three targets, depending on the viewing time of the postsaccadic stimulus and its spatial separation from the presaccadic location. We modeled the data through a Bayesian causal inference mechanism, in which at the trial level an optimal mixing of two possible strategies, integration vs. separation of the presaccadic memory and the postsaccadic sensory signals, is applied. Fits of this model generally outperformed other plausible decision strategies for producing SSD. Our findings suggest that humans exploit a Bayesian inference process with two causal structures to mediate visual stability.
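A minimal sketch of a Bayesian causal-inference localizer of the kind described (the Gaussian likelihoods, the uniform "separate causes" likelihood, and all parameter values are illustrative assumptions, not the fitted model): the presaccadic memory and the postsaccadic visual sample are fused when a common cause is likely and kept separate otherwise, with the two estimates mixed by the posterior probability of a common cause.

```python
import numpy as np

def localize_presaccadic_target(memory, vision, sigma_mem=2.0, sigma_vis=1.0,
                                p_common=0.7, separate_halfwidth=5.0):
    """Model-averaged estimate (deg) of the presaccadic target position from a
    noisy presaccadic memory sample and a noisy postsaccadic visual sample."""
    # Estimate if both signals share one cause: reliability-weighted fusion.
    w_mem = (1 / sigma_mem**2) / (1 / sigma_mem**2 + 1 / sigma_vis**2)
    fused = w_mem * memory + (1 - w_mem) * vision
    # Estimate if the signals arise from separate causes: keep the memory alone.
    segregated = memory
    # Posterior probability of a common cause, from the observed discrepancy.
    discrepancy = memory - vision
    var_common = sigma_mem**2 + sigma_vis**2
    like_common = np.exp(-discrepancy**2 / (2 * var_common)) / np.sqrt(2 * np.pi * var_common)
    like_separate = 1.0 / (2 * separate_halfwidth)          # broad, discrepancy-insensitive
    post_common = like_common * p_common / (
        like_common * p_common + like_separate * (1 - p_common))
    # Mix the two causal structures by their posterior probabilities.
    return post_common * fused + (1 - post_common) * segregated

print(localize_presaccadic_target(memory=0.0, vision=0.5))  # small step: mostly integrated
print(localize_presaccadic_target(memory=0.0, vision=6.0))  # large step: mostly separated
```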

10.
The neural substrate of the phenomenological experience of a stable visual world remains obscure. One possible mechanism would be to construct spatiotopic neural maps where the response is selective to the position of the stimulus in external space, rather than to retinal eccentricities, but evidence for these maps has been inconsistent. Here we show, with fMRI, that when human subjects perform concomitantly a demanding attentive task on stimuli displayed at the fovea, BOLD responses evoked by moving stimuli irrelevant to the task were mostly tuned in retinotopic coordinates. However, under more unconstrained conditions, where subjects could attend easily to the motion stimuli, BOLD responses were tuned not in retinal but in external coordinates (spatiotopic selectivity) in many visual areas, including MT, MST, LO and V6, agreeing with our previous fMRI study. These results indicate that spatial attention may play an important role in mediating spatiotopic selectivity.  相似文献   

11.
Zanker JM. Spatial Vision 2004, 17(1-2): 75-94
Art history tells an exciting story about repeated attempts to represent features that are crucial for the understanding of our environment and which, at the same time, go beyond the inherently two-dimensional nature of a flat painting surface: depth and motion. In the twentieth century, Op artists such as Bridget Riley began to experiment with simple black and white patterns that do not represent motion in an artistic way but actually create vivid dynamic illusions in static pictures. The cause of motion illusions in such paintings is still a matter of debate. The role of involuntary eye movements in this phenomenon is studied here with a computational approach. The possible consequences of shifting the retinal image of synthetic wave gratings, dubbed 'riloids', were analysed by a two-dimensional array of motion detectors (2DMD model), which generates response maps representing the spatial distribution of motion signals generated by such a stimulus. For a two-frame sequence reflecting a saccadic displacement, these motion signal maps contain extended patches in which local directions change only slightly. These directions, however, do not usually correspond precisely to the direction of pattern displacement expected from the geometry of the curved gratings, an instance of the so-called 'aperture problem'. The patchy structure of the simulated motion detector response to the displacement of riloids resembles the motion illusion, which is not perceived as a coherent shift of the whole pattern but as a wobbling and jazzing of ill-defined regions. Although other explanations are not excluded, this might support the view that the puzzle of Op Art motion illusions could have an almost trivial solution in terms of small involuntary eye movements leading to image shifts that are picked up by well-known motion detectors in the early visual system. This view can have further consequences for our understanding of how the human visual system usually compensates for eye movements, in order to let us perceive a stable world despite continuous image shifts generated by gaze instability.
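A minimal two-frame correlator sketch in the spirit of the 2DMD idea (an assumed toy re-implementation, not Zanker's published model; the grating parameters and detector layout are invented for the demo): a curved, riloid-like grating is displaced by a small saccade-like shift, and an elementary correlation-type detector at every pixel returns local horizontal and vertical motion signals, from which a patchy, aperture-limited direction map can be read out.

```python
import numpy as np

def riloid(size=128, freq=0.15, curvature=8.0):
    """Synthetic curved grating loosely resembling Riley-style wave patterns."""
    y, x = np.mgrid[0:size, 0:size]
    phase = freq * (x + curvature * np.sin(2 * np.pi * y / size))
    return np.sin(2 * np.pi * phase)

def motion_signal_map(frame1, frame2):
    """Correlation-type detector at every pixel for a two-frame sequence:
    the frame-1 signal from a neighbouring point is multiplied with the
    frame-2 signal at this point, and the mirror-symmetric subunit is
    subtracted, giving a signed local motion signal per axis."""
    def emd(axis):
        neighbour1 = np.roll(frame1, 1, axis=axis)
        neighbour2 = np.roll(frame2, 1, axis=axis)
        return neighbour1 * frame2 - frame1 * neighbour2
    return emd(axis=1), emd(axis=0)                # (horizontal, vertical) signal maps

frame1 = riloid()
frame2 = np.roll(frame1, shift=2, axis=1)          # small saccade-like image displacement
h, v = motion_signal_map(frame1, frame2)
direction = np.degrees(np.arctan2(v, h))           # patchy, aperture-limited direction map
print(direction.shape, float(np.mean(np.abs(h))))
```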

12.
Biber U, Ilg UJ. PLoS ONE 2011, 6(1): e16265
Eye movements create an ever-changing image of the world on the retina. In particular, frequent saccades call for a compensatory mechanism to transform the changing visual information into a stable percept. To this end, the brain presumably uses internal copies of motor commands. Electrophysiological recordings of visual neurons in the primate lateral intraparietal cortex, the frontal eye fields, and the superior colliculus suggest that the receptive fields (RFs) of special neurons shift towards their post-saccadic positions before the onset of a saccade. However, the perceptual consequences of these shifts remain controversial. We wanted to test in humans whether a remapping of motion adaptation occurs in visual perception. The motion aftereffect (MAE) occurs after viewing of a moving stimulus as apparent movement in the opposite direction. We designed a saccade paradigm suitable for revealing pre-saccadic remapping of the MAE. Indeed, a transfer of motion adaptation from the pre-saccadic to the post-saccadic position could be observed when subjects prepared saccades. In the remapping condition, the strength of the MAE was comparable to the effect measured in a control condition (33±7% vs. 27±4%). In contrast, after a saccade or without saccade planning, the MAE was weak or absent when the adaptation and test stimuli were located at different retinal locations, i.e. the effect was clearly retinotopic. Regarding visual cognition, our study reveals for the first time predictive remapping of the MAE but no spatiotopic transfer across saccades. Since the cortical sites involved in motion adaptation in primates are most likely the primary visual cortex and the middle temporal area (MT/V5), corresponding to human MT, our results suggest that pre-saccadic remapping extends to these areas, which have been associated with strict retinotopy and therefore with classical RF organization. The pre-saccadic transfer of visual features demonstrated here may be a crucial determinant of a stable percept despite saccades.

13.
Anatomical and physiological evidence shows that the primate visual brain consists of many distributed processing systems, acting in parallel. Psychophysical studies show that the activity in each of the parallel systems reaches its perceptual end-point at a different time, thus leading to a perceptual asynchrony in vision. This, together with clinical and human imaging evidence, suggests strongly that the processing systems are also perceptual systems and that the different processing-perceptual systems can act more or less autonomously. Moreover, activity in each can have a conscious correlate without necessarily involving activity in other visual systems. This leads us to conclude not only that visual consciousness is itself modular, reflecting the basic modular organization of the visual brain, but that the binding of cellular activity in the processing-perceptual systems is more properly thought of as a binding of the consciousnesses generated by each of them. It is this binding that gives us our integrated image of the visual world.  相似文献   

14.
Little is known about mechanisms mediating a stable perception of the world during pursuit eye movements. Here, we used fMRI to determine to what extent human motion-responsive areas integrate planar retinal motion with nonretinal eye movement signals in order to discard self-induced planar retinal motion and to respond to objective ("real") motion. In contrast to other areas, V3A lacked responses to self-induced planar retinal motion but responded strongly to head-centered motion, even when retinally canceled by pursuit. This indicates a near-complete multimodal integration of visual with nonvisual planar motion signals in V3A. V3A could be mapped selectively and robustly in every single subject on this basis. V6 also reported head-centered planar motion, even when 3D flow was added to it, but was suppressed by retinal planar motion. These findings suggest a dominant contribution of human areas V3A and V6 to head-centered motion perception and to perceptual stability during eye movements.

15.
The visual system has the remarkable ability to extract several types of meaningful global-motion signals, such as radial motion, translation motion, and rotation, for different visual functions and actions. In the monkey brain, different groups of cells in MST respond best to different types of global motion [1, 2] whereas in lower cortical areas including MT, no such differential responses have been found. Here, we show that an area (or areas) lower than MST in the human brain [3] responds to different types of global motion. A series of human functional magnetic resonance imaging (fMRI) experiments, in which attention was controlled for, indicated that the center of radial motion activates the corresponding location in the V3A representation, whereas translation motion activates mainly in a more peripheral representation of V3A. These results suggest that in the human brain, V3A is an area that differentially responds according to the type of global motion.  相似文献   

16.
T Haarmeier, F Bunjes, A Lindner, E Berret, P Thier. Neuron 2001, 32(3): 527-535
We usually perceive a stationary, stable world and we are able to correctly estimate the direction of heading from optic flow despite coherent visual motion induced by eye movements. This astonishing example of perceptual invariance results from a comparison of visual information with internal reference signals predicting the visual consequences of an eye movement. Here we demonstrate that the reference signal predicting the consequences of smooth-pursuit eye movements is continuously calibrated on the basis of direction-selective interactions between the pursuit motor command and the rotational flow induced by the eye movement, thereby minimizing imperfections of the reference signal and guaranteeing an ecologically optimal interpretation of visual motion.  相似文献   

17.
The evolution of visual processing and the construction of seeing systems
This paper is concerned with the evolution of visual mechanisms and the possibility of copying their principles at different levels of sophistication. It is an old question how the complex interaction between eye and brain evolved when each needs the other as a test-bed for successive improvements. I propose that the primitive mechanism for the separation of stationary objects relies on their relative movement against a background, normally caused by the animal's own movement. Apparently insects and many lower animals use little more than this for negotiating through a three-dimensional world, making adequate responses to individual objects which they 'see' without a cortical system or even without a large brain. In the development of higher animals such as birds or man, additional circuits store memories of the forms of objects that have been frequently inspected from all angles or handled. Simple visual systems, however, are tuned to a feature of the world by which objects separate themselves by movement relative to the eye. In making simple artificial visual systems which 'see', as distinct from merely projecting the image, it is more hopeful to copy the 'ambient' vision of lower animals than the cortical systems of birds or mammals.  相似文献   

18.
The extraction of accurate self-motion information from the visual world is a difficult problem that has been solved very efficiently by biological organisms utilizing non-linear processing. Previous bio-inspired models for motion detection based on a correlation mechanism have been dogged by issues that arise from their sensitivity to undesired properties of the image, such as contrast, which vary widely between images. Here we present a model with multiple levels of non-linear dynamic adaptive components based directly on the known or suspected responses of neurons within the visual motion pathway of the fly brain. By testing the model under realistic high-dynamic range conditions we show that the addition of these elements makes the motion detection model robust across a large variety of images, velocities and accelerations. Furthermore the performance of the entire system is more than the incremental improvements offered by the individual components, indicating beneficial non-linear interactions between processing stages. The algorithms underlying the model can be implemented in either digital or analog hardware, including neuromorphic analog VLSI, but defy an analytical solution due to their dynamic non-linear operation. The successful application of this algorithm has applications in the development of miniature autonomous systems in defense and civilian roles, including robotics, miniature unmanned aerial vehicles and collision avoidance sensors.  相似文献   
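The general idea can be sketched as follows (an illustrative toy, not the authors' multi-stage model; the adaptation time constant and test stimulus are assumptions): a Hassenstein-Reichardt-style correlator preceded by a dynamic divisive-normalization stage responds to the same motion with far less dependence on overall image contrast than the raw correlator.

```python
import numpy as np

def adaptive_normalize(signal, tau=20.0, eps=0.05):
    """Dynamic divisive adaptation: each sample is divided by a running
    estimate of local signal magnitude, reducing contrast dependence."""
    gain = eps
    out = np.empty_like(signal)
    for t, s in enumerate(signal):
        gain += (abs(s) - gain) / tau                # slow magnitude estimate
        out[t] = s / (gain + eps)
    return out

def correlator_output(left, right, delay=3):
    """Hassenstein-Reichardt-style correlator on two adjacent inputs:
    delayed left times right, minus delayed right times left."""
    delayed_left = np.roll(left, delay)
    delayed_right = np.roll(right, delay)
    valid = slice(delay, None)                       # drop wrap-around samples
    return np.mean(delayed_left[valid] * right[valid] - delayed_right[valid] * left[valid])

t = np.arange(500)
for contrast in (0.1, 1.0):                          # same motion, very different contrast
    left = contrast * np.sin(2 * np.pi * t / 50)
    right = contrast * np.sin(2 * np.pi * (t - 5) / 50)   # phase lag = motion to the right
    raw = correlator_output(left, right)
    adapted = correlator_output(adaptive_normalize(left), adaptive_normalize(right))
    print(contrast, round(raw, 4), round(adapted, 4))
```

The raw output scales roughly with the square of contrast, while the adapted output stays within a much narrower range, which is the kind of robustness the model aims for.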

19.
Filling-in is a perceptual phenomenon in which a visual attribute such as colour, brightness, texture or motion is perceived in a region of the visual field even though such an attribute exists only in the surround. Filling-in dramatically reveals the dissociation between the retinal input and the percept, and raises fundamental questions about how these two relate to each other. Filling-in is observed in various situations, and is an essential part of our normal surface perception. Here, I review recent experiments examining brain activities associated with filling-in, and discuss possible neural mechanisms underlying this remarkable perceptual phenomenon. The evidence shows that neuronal activities in early visual cortical areas are involved in filling-in, providing new insights into visual cortical functions.  相似文献   

20.
Fly 2013, 7(3): 209-211
A central goal of systems neuroscience is to understand how neural circuits represent quantitative aspects of the outside world and transform these signals into the motor code for behavior. In contrast to olfactory perception, in which odors are encoded by a population of ligand-binding receptors at the input stage, the visual system extracts complex information about color, form and movement from just a few types of photoreceptor inputs. The algorithms for many of these transformations are poorly understood. We designed a high throughput real-time quantitative testing system, the "fly-stampede", to evaluate behavioral responses to light and motion cues in Drosophila. With this system, we identified a neural circuit that does not participate in sensing light but is crucial for computing visual motion. When neurons of this circuit are genetically inactivated, the flies show normal walking phototaxis but are completely motion blind. Using neurogenetics to study the circuits mediating sophisticated animal behaviors is currently a field of intense study. This Extra View attempts to summarize our work within the historical background of fly biocybernetics and other recent advances.
