Similar Articles
 20 similar articles found (search time: 15 ms)
1.
The primate brain intelligently processes visual information from the world as the eyes move constantly. The brain must take into account visual motion induced by eye movements, so that visual information about the outside world can be recovered. Certain neurons in the dorsal part of monkey medial superior temporal area (MSTd) play an important role in integrating information about eye movements and visual motion. When a monkey tracks a moving target with its eyes, these neurons respond to visual motion as well as to smooth pursuit eye movements. Furthermore, the responses of some MSTd neurons to the motion of objects in the world are very similar during pursuit and during fixation, even though the visual information on the retina is altered by the pursuit eye movement. We call these neurons compensatory pursuit neurons. In this study we develop a computational model of MSTd compensatory pursuit neurons based on physiological data from single unit studies. Our model MSTd neurons can simulate the velocity tuning of monkey MSTd neurons. The model MSTd neurons also show the pursuit compensation property. We find that pursuit compensation can be achieved by divisive interaction between signals coding eye movements and signals coding visual motion. The model generates two implications that can be tested in future experiments: (1) compensatory pursuit neurons in MSTd should have the same direction preference for pursuit and retinal visual motion; (2) there should be non-compensatory pursuit neurons that show opposite preferred directions of pursuit and retinal visual motion.
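The divisive interaction the abstract describes can be sketched in a few lines. The toy unit below is illustrative only (the function, the parameter `k`, and the velocity values are assumptions, not the authors' implementation): a rectified visual drive tuned to retinal velocity is divided by a pursuit-velocity term, which makes the response to a world-fixed object velocity (here 1/k) the same during fixation and during pursuit.

```python
def compensatory_response(retinal_vel, eye_vel, k=0.1):
    """Hypothetical compensatory MSTd unit (an illustration, not the
    published implementation): a rectified visual drive along the preferred
    direction is divided by a term driven by the pursuit signal.  Pursuit
    and retinal motion share one preferred direction, as in prediction (1)."""
    visual = max(0.0, retinal_vel)        # visual motion signal (deg/s)
    gain = 1.0 - k * eye_vel              # divisive pursuit modulation
    return visual / gain if gain > 0 else 0.0

# A world object moves at 10 deg/s (= 1/k, where compensation is exact).
world = 10.0
r_fixation = compensatory_response(world, eye_vel=0.0)       # retinal = world
r_pursuit = compensatory_response(world - 6.0, eye_vel=6.0)  # retinal = world - eye
# Both responses match (about 10): the unit reports the object's motion in
# the world, not on the retina, despite the pursuit-altered retinal input.
```

In this sketch compensation is exact only at the unit's compensated velocity 1/k and approximate nearby, which is consistent with the abstract's "very similar" (rather than identical) responses.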

2.
Humans exhibit an anisotropy in direction perception: discrimination is superior when motion is around horizontal or vertical rather than diagonal axes. In contrast to the consistent directional anisotropy in perception, we found only small, idiosyncratic anisotropies in smooth pursuit eye movements, a motor action requiring accurate discrimination of visual motion direction. Both pursuit and perceptual direction discrimination rely on signals from the middle temporal visual area (MT), yet analysis of multiple measures of MT neuronal responses in the macaque failed to provide evidence of a directional anisotropy. We conclude that MT represents different motion directions uniformly, and that subsequent processing creates a directional anisotropy in pathways unique to perception. Our data support the hypothesis that, at least for visual motion, perception and action are guided by inputs from separate sensory streams. The directional anisotropy of perception appears to originate after the two streams have segregated, downstream from area MT.

3.
Human heading perception based on optic flow is not only accurate, it is also remarkably robust and stable. These qualities are especially apparent when observers move through environments containing other moving objects, which introduce optic flow that is inconsistent with observer self-motion and therefore uninformative about heading direction. Moving objects may also occupy large portions of the visual field and occlude regions of the background optic flow that are most informative about heading perception. The fact that heading perception is biased by no more than a few degrees under such conditions attests to the robustness of the visual system and warrants further investigation. The aim of the present study was to investigate whether recurrent, competitive dynamics among MSTd neurons that serve to reduce uncertainty about heading over time offer a plausible mechanism for capturing the robustness of human heading perception. Simulations of existing heading models that do not contain competitive dynamics yield heading estimates that are far more erratic and unstable than human judgments. We present a dynamical model of primate visual areas V1, MT, and MSTd based on that of Layton, Mingolla, and Browning; it is similar to the other models except that it includes recurrent interactions among model MSTd neurons. Competitive dynamics stabilize the model's heading estimate over time, even when a moving object crosses the future path. Soft winner-take-all dynamics enhance units that code a heading direction consistent with the time history and suppress responses to transient changes in the optic flow field. Our findings support recurrent competitive temporal dynamics as a crucial mechanism underlying the robustness and stability of heading perception.
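The stabilizing effect of recurrent competition can be illustrated with a minimal leaky-integrator population carrying soft winner-take-all feedback. This is a sketch under assumed parameters (tuning widths, rates, and the squaring-plus-normalization recurrence are all illustrative choices), not the Layton et al. model:

```python
import numpy as np

def soft_wta_step(r, feedforward, dt=0.1, tau=1.0, sigma=0.1):
    """One Euler step of the population dynamics: leaky integration of the
    feedforward heading drive plus a soft winner-take-all recurrent term
    (each unit's squared rate, divisively normalized by the population)."""
    recurrent = r ** 2 / (sigma + (r ** 2).sum())
    return np.maximum(r + dt / tau * (-r + feedforward + recurrent), 0.0)

headings = np.linspace(-90, 90, 19)           # candidate headings (deg)

def tuning(center):
    return np.exp(-(headings - center) ** 2 / (2 * 15.0 ** 2))

r = np.zeros_like(headings)
for t in range(200):
    drive = tuning(0.0)                       # true heading: straight ahead
    if 80 <= t < 90:                          # a moving object briefly adds
        drive = drive + 1.2 * tuning(40.0)    # discrepant flow at 40 deg
    r = soft_wta_step(r, drive)
estimate = headings[np.argmax(r)]
# The integrated, competing population keeps the estimate at 0 deg despite
# the transient perturbation; a memoryless argmax of the drive would not.
```

During the perturbation the instantaneous drive actually peaks at 40 deg, so the stability comes from the temporal integration and competition, mirroring the abstract's claim that transient flow changes are suppressed.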

4.
Heading estimation is vital to everyday navigation and locomotion. Despite extensive behavioral and physiological research on both visual and vestibular heading estimation over more than two decades, the accuracy of heading estimation has not yet been systematically evaluated. Therefore, human visual and vestibular heading estimation was assessed in the horizontal plane using a motion platform and stereo visual display. Heading angle was overestimated during forward movements and underestimated during backward movements in response to both visual and vestibular stimuli, indicating an overall multimodal bias toward lateral directions. Lateral biases are consistent with the overrepresentation of lateral preferred directions observed in neural populations that carry visual and vestibular heading information, including MSTd and otolith afferent populations. Due to this overrepresentation, population vector decoding yields patterns of bias remarkably similar to those observed behaviorally. Lateral biases are inconsistent with standard Bayesian accounts, which predict that estimates should be biased toward the most common, straight-ahead heading direction. Nevertheless, lateral biases may be functionally relevant. They effectively constitute a perceptual scale expansion around straight ahead, which could allow for more precise estimation and provide a high-gain feedback signal to facilitate maintenance of straight-forward heading during everyday navigation and locomotion.
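A minimal population-vector decoder shows how overrepresenting lateral preferred directions produces the reported bias pattern. All specifics below (von Mises tuning, the extra units near ±90°) are illustrative assumptions, not the fitted model of the study:

```python
import numpy as np

def population_vector_estimate(heading_deg, preferred_deg, kappa=2.0):
    """Decode heading as the direction of the rate-weighted vector sum of
    preferred directions, with von Mises tuning curves."""
    theta = np.radians(heading_deg)
    pref = np.radians(np.asarray(preferred_deg, dtype=float))
    rates = np.exp(kappa * np.cos(theta - pref))
    return np.degrees(np.angle((rates * np.exp(1j * pref)).sum()))

# Uniform grid of preferred headings plus extra units near +/-90 deg,
# mimicking the lateral overrepresentation reported for MSTd and otolith
# afferent populations.
preferred = np.concatenate([
    np.linspace(-180, 180, 24, endpoint=False),
    [80, 85, 90, 95, 100, -100, -95, -90, -85, -80],
])

est_forward = population_vector_estimate(30.0, preferred)
est_ahead = population_vector_estimate(0.0, preferred)
# est_forward overshoots 30 deg (pulled toward the lateral axis), while the
# symmetric overrepresentation leaves the straight-ahead estimate unbiased.
```

A uniform grid alone decodes every heading without bias; only the extra lateral units create the overestimation of oblique forward headings, matching the "scale expansion around straight ahead" interpretation.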

5.
A translating eye receives a radial pattern of motion that is centered on the direction of heading. If the eye is rotating and translating, visual and extraretinal signals help to cancel the rotation and to perceive heading correctly. This involves (1) an interaction between visual and eye movement signals and (2) a motion template stage that analyzes the pattern of visual motion. Early interaction leads to motion templates that integrate head-centered motion signals in the visual field. Integration of retinal motion signals leads to late interaction. Here, we show that retinal flow limits the precision of heading judgments. This result argues against an early, vector-subtraction type of interaction, but is consistent with a late, gain-field type of interaction with eye velocity signals, and with neurophysiological findings in area MST of the monkey.

6.
Experimental studies have shown that neurons of the ventral intraparietal area (VIP) specialize in head movements and in the environment near the head. VIP neurons respond to visual, auditory, and tactile stimuli, to smooth pursuit eye movements, and to passive and active movements of the head. This study demonstrates mathematical structure at a higher organizational level, created within VIP by the integration of a complete set of variables covering face-infringement. Rather than positing dynamics in an a priori defined coordinate system such as that of physical space, we assemble neuronal receptive fields to find out what space of variables VIP neurons together cover. Section 1 presents a view of neurons as multidimensional mathematical objects. Each VIP neuron occupies, or is responsive to, a region in a sensorimotor phase space, thus unifying variables relevant to the disparate sensory modalities and movements. Convergence on one neuron joins variables functionally, as space and time are joined in relativistic physics to form a unified spacetime. The space of position and motion together forms a neuronal phase space, bridging neurophysiology and the physics of face-infringement. After a brief review of the experimental literature, the neuronal phase space natural to VIP is sequentially characterized, based on experimental data. Responses of neurons indicate variables that may serve as axes of neural reference frames, and neuronal responses have been so used in this study. The space of sensory and movement variables covered by VIP receptive fields joins visual and auditory space to body-bound sensory modalities: somatosensation and the inertial senses. This joining of allocentric and egocentric modalities is in keeping with the known relationship of the parietal lobe to the sense of self in space and to hemineglect, in both humans and monkeys.
Following this inductive step, variables are formalized in terms of the mathematics of graph theory to deduce which combinations are complete as a multidimensional neural structure that provides the organism with a complete set of options regarding objects impacting the face, such as acceptance, pursuit, and avoidance. We consider four basic variable types: position and motion of the face and of an external object. Formalizing the four types of variables allows us to generalize to any sensory system and to determine the necessary and sufficient conditions for a neural center (for example, a cortical region) to provide a face-infringement space. We demonstrate that VIP includes at least one such face-infringement space.

7.
T. Haarmeier, F. Bunjes, A. Lindner, E. Berret, P. Thier. Neuron, 2001, 32(3): 527-535
We usually perceive a stationary, stable world and we are able to correctly estimate the direction of heading from optic flow despite coherent visual motion induced by eye movements. This astonishing example of perceptual invariance results from a comparison of visual information with internal reference signals predicting the visual consequences of an eye movement. Here we demonstrate that the reference signal predicting the consequences of smooth-pursuit eye movements is continuously calibrated on the basis of direction-selective interactions between the pursuit motor command and the rotational flow induced by the eye movement, thereby minimizing imperfections of the reference signal and guaranteeing an ecologically optimal interpretation of visual motion.

8.
Every day we shift our gaze about 150,000 times, mostly without noticing it. The directions of these gaze shifts are not random but are guided by sensory information and internal factors. After each movement the eyes hold still for a brief moment so that visual information at the center of our gaze can be processed in detail. This means that visual information at the saccade target location is sufficient to guide the gaze shift accurately, yet is not processed deeply enough to be fully perceived. In this paper I discuss the possible role of activity in the primary visual cortex (V1), in particular figure-ground activity, in oculomotor behavior. Figure-ground activity occurs during the late response period of V1 neurons and correlates with perception. The strength of figure-ground responses predicts the direction and moment of saccadic eye movements. The superior colliculus, a gaze control center that integrates visual and motor signals, receives direct anatomical connections from V1. These projections may convey the perceptual information that is required for appropriate gaze shifts. In conclusion, figure-ground activity in V1 may act as an intermediate component linking visual and motor signals.

9.
Visual perception is based on both incoming sensory signals and information about ongoing actions. Recordings from single neurons have shown that corollary discharge signals can influence visual representations in parietal, frontal and extrastriate visual cortex, as well as the superior colliculus (SC). In each of these areas, visual representations are remapped in conjunction with eye movements. Remapping provides a mechanism for creating a stable, eye-centred map of salient locations. Temporal and spatial aspects of remapping are highly variable from cell to cell and area to area. Most neurons in the lateral intraparietal area remap stimulus traces, as do many neurons in closely allied areas such as the frontal eye fields, the SC, and extrastriate area V3A. Remapping is not purely a cortical phenomenon. Stimulus traces are remapped from one hemifield to the other even when direct cortico-cortical connections are removed. The neural circuitry that produces remapping is distinguished by significant plasticity, suggesting that updating of salient stimuli is fundamental for spatial stability and visuospatial behaviour. These findings provide new evidence that a unified and stable representation of visual space is constructed by redundant circuitry, comprising cortical and subcortical pathways, with a remarkable capacity for reorganization.

10.
Lesions to the posterior parietal cortex in monkeys and humans produce spatial deficits in movement and perception. In recording experiments from area 7a, a cortical subdivision of the posterior parietal cortex in monkeys, we have found neurons whose responses are a function of both the retinal location of visual stimuli and the position of the eyes in the orbits. By combining these signals, area 7a neurons code the location of visual stimuli with respect to the head. However, these cells respond over only limited ranges of eye positions (eye-position-dependent coding). To code location in craniotopic space at all eye positions (eye-position-independent coding), an additional step in neural processing is required that uses information distributed across populations of area 7a neurons. We describe here a neural network model, based on back-propagation learning, that both demonstrates how spatial location could be derived from the population response of area 7a neurons and accurately accounts for the observed response properties of these neurons.
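The back-propagation account can be sketched with a tiny network trained to add a retinal position and an eye position, a 1-D stand-in for the Zipser-Andersen-style task. The architecture, learning rate, and training budget are assumptions made for the sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

# Training set: head-centered location = retinal location + eye position
# (a 1-D version of the task; all units are arbitrary).
X = rng.uniform(-1, 1, size=(512, 2))          # columns: retinal pos, eye pos
y = X.sum(axis=1, keepdims=True)               # head-centered position

W1 = rng.normal(0.0, 0.5, (2, 16)); b1 = np.zeros(16)
W2 = rng.normal(0.0, 0.5, (16, 1)); b2 = np.zeros(1)
lr = 0.1

for _ in range(3000):
    h = np.tanh(X @ W1 + b1)                   # hidden layer: after training,
    out = h @ W2 + b2                          # units show eye-position "gain
    err = out - y                              # fields", as in area 7a
    d_out = err / len(X)                       # backprop of mean squared error
    d_h = (d_out @ W2.T) * (1.0 - h ** 2)
    W2 -= lr * (h.T @ d_out); b2 -= lr * d_out.sum(0)
    W1 -= lr * (X.T @ d_h);   b1 -= lr * d_h.sum(0)

# The trained network reads out head-centered location across eye positions,
# pooling over eye-position-dependent hidden units.
pred = np.tanh(np.array([[0.3, -0.5]]) @ W1 + b1) @ W2 + b2   # close to -0.2
```

As in the abstract, no single hidden unit codes craniotopic location over all eye positions; the eye-position-independent readout only exists at the level of the population.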

11.
It has been well established that extra-retinal information is used in the perception of visual direction and distance. Furthermore, a number of studies have established that both efference copy and afferent discharge contribute to the extra-retinal signal. Despite this, no model currently exists to explain how the signals that arise through oculomotor control contribute to perception. This paper attempts to provide such a framework. The first part of the paper outlines the framework [the cyclopean equilibrium point (EP) model] and considers the binoculus, or cyclopean eye, from the perspective of a current account of motor control (the EP hypothesis). An existing model is used to describe how the nervous system could utilise available efference copy and afferent extra-retinal signals when determining the direction and distance of cyclopean fixation. Although the cyclopean EP model is speculative, it allows for a parsimonious framework when considering the oculomotor contribution to perception. The model has the additional advantage of being consistent with current theories regarding the control and perception of limb movement. The second part of the paper shows that the model is biologically plausible, demonstrates the use of the proposed model in describing the central control of eye movements with regard to non-conjugate peripheral adaptation, and reconciles seemingly disparate empirical findings.

12.
Neurons in posterior parietal cortex of the awake, trained monkey respond to passive visual and/or somatosensory stimuli. In general, the receptive fields of these cells are large and nonspecific. When these neurons are studied during visually guided hand movements and eye movements, most of their activity can be accounted for by passive sensory stimulation. However, for some visual cells, the response to a stimulus is enhanced when it is to be the target for a saccadic eye movement. This enhancement is selective for eye movements into the visual receptive field since it does not occur with eye movements to other parts of the visual field. Cells that discharge in association with a visual fixation task have foveal receptive fields and respond to the spots of light used as fixation targets. Cells discharging selectively in association with different directions of tracking eye movements have directionally selective responses to moving visual stimuli. Every cell in our sample discharging in association with movement could be driven by passive sensory stimuli. We conclude that the activity of neurons in posterior parietal cortex is dependent on and indicative of external stimuli but not predictive of movement.

13.
Little is known about mechanisms mediating a stable perception of the world during pursuit eye movements. Here, we used fMRI to determine to what extent human motion-responsive areas integrate planar retinal motion with nonretinal eye movement signals in order to discard self-induced planar retinal motion and to respond to objective ("real") motion. In contrast to other areas, V3A lacked responses to self-induced planar retinal motion but responded strongly to head-centered motion, even when retinally canceled by pursuit. This indicates a near-complete multimodal integration of visual with nonvisual planar motion signals in V3A. V3A could be mapped selectively and robustly in every single subject on this basis. V6 also reported head-centered planar motion, even when 3D flow was added to it, but was suppressed by retinal planar motion. These findings suggest a dominant contribution of human areas V3A and V6 to head-centered motion perception and to perceptual stability during eye movements.

14.
Extracellular recordings were carried out in the visual cortex of behaving monkeys trained on a fixation/detection task, during which a target light was displayed stationary or suddenly moving on a tangent translucent screen. We studied the responses of visual cortical cells to fast-moving stimuli during steady fixation and those obtained during rapid eye movements (saccades), which moved the cells' receptive fields across a stationary stimulus. Areas V1 and V2 were explored. When tested with rapidly moving stimuli (500 deg/sec) during steady fixation, neurons in each area behaved in almost the same way. About one fourth of them were activated, the remainder showing either no response (slightly more than half of them) or a reduction of the spontaneous firing rate. In both areas, some of the neurons activated during steady fixation did not respond, or responded very weakly, during eye motion at saccadic velocity (500 ± 50 deg/sec). Neurons of this type, which we refer to as 'real motion' cells, could somehow contribute to the maintenance of visual stability during the execution of large eye movements.

15.
Reaching movements towards an object are continuously guided by visual information about the target and the arm. Such guidance increases precision and allows one to adjust the movement if the target unexpectedly moves. Ongoing arm movements are also influenced by motion in the surroundings. Fast responses to such motion could help cope with moving obstacles and with the consequences of changes in one's eye orientation and vantage point. To further evaluate how motion in the surroundings influences interceptive movements, we asked subjects to tap a moving target when it reached a second, static target. We varied the direction and location of motion in the surroundings, as well as details of the stimuli that are known to influence eye movements. Subjects were most sensitive to motion in the background when such motion was near the targets. Whether or not the eyes were moving, and the direction of the background motion in relation to the direction in which the eyes were moving, had very little influence on the response to the background motion. We conclude that the responses to background motion are driven by motion near the target rather than by a global analysis of the optic flow and its relation to other information about self-motion.

16.
Heading direction is determined from visual and vestibular cues. Both sensory modalities have been shown to support better direction discrimination for headings near straight ahead. Previous studies of visual heading estimation have not used the full range of stimuli, and vestibular heading estimation has not previously been reported. The current experiments measured human heading estimation in the horizontal plane in response to vestibular, visual, and spoken stimuli. The vestibular and visual tasks involved 16 cm of platform or visual motion. The spoken stimulus was a voice command speaking a heading angle. All conditions demonstrated direction-dependent biases in perceived headings such that biases increased with headings further from the fore-aft axis. The bias was larger with the visual stimulus than with the vestibular stimulus in all 10 subjects. For the visual and vestibular tasks, precision was best for headings near fore-aft. The spoken headings had the least bias, and the variation in precision was less dependent on direction. In a separate experiment, when headings were limited to ±45°, the biases were much smaller, demonstrating that the range of headings influences perception. There was a strong and highly significant correlation between the bias curves for visual and spoken stimuli in every subject. The correlations between visual and vestibular biases and between vestibular and spoken biases were weaker but remained significant. The observed biases in both visual and vestibular heading perception qualitatively resembled predictions of a recent population vector decoder model (Gu et al., 2010) based on the known distribution of neuronal sensitivities.

17.
Most neurons in cortical area MT (V5) are strongly direction selective, and their activity is closely associated with the perception of visual motion. These neurons have large receptive fields built by combining inputs with smaller receptive fields that respond to local motion. Humans integrate motion over large areas and can perceive what has been referred to as global motion. The large size and direction selectivity of MT receptive fields suggests that MT neurons may represent global motion. We have explored this possibility by measuring responses to a stimulus in which the directions of simultaneously presented local and global motion are independently controlled. Surprisingly, MT responses depended only on the local motion and were unaffected by the global motion. Yet, under similar conditions, human observers perceive global motion and are impaired in discriminating local motion. Although local motion perception might depend on MT signals, global motion perception depends on mechanisms qualitatively different from those in MT. Motion perception therefore does not depend on a single cortical area but reflects the action and interaction of multiple brain systems.

18.
Self-motion, steering, and obstacle avoidance during navigation in the real world require humans to travel along curved paths. Many perceptual models have been proposed that focus on heading, which specifies the direction of travel along straight paths, but not on path curvature, which humans accurately perceive and which is critical to everyday locomotion. In primates, including humans, the dorsal medial superior temporal area (MSTd) has been implicated in heading perception. However, the majority of MSTd neurons respond optimally to spiral patterns rather than to the radial expansion patterns associated with heading. No existing theory of curved path perception explains the neural mechanisms by which humans accurately assess path, and no functional role for spiral-tuned cells has yet been proposed. Here we present a computational model that demonstrates how the continuum of observed cells (radial to circular) in MSTd can simultaneously code curvature and heading across the neural population. Curvature is encoded through the spirality of the most active cell, and heading is encoded through the visuotopic location of the center of the most active cell's receptive field. Model curvature and heading errors fit those made by humans. Our model challenges the view that the function of MSTd is heading estimation; based on our analysis, we claim that it is primarily concerned with trajectory estimation and the simultaneous representation of both curvature and heading. In our model, temporal dynamics afford time-history in the neural representation of optic flow, which may modulate its structure. This has far-reaching implications for the interpretation of studies that assume that optic flow is, and should be, represented as an instantaneous vector field.
Our results suggest that spiral motion patterns that emerge in spatio-temporal optic flow are essential for guiding self-motion along complex trajectories, and that cells in MSTd are specifically tuned to extract complex trajectory estimates from flow.
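The proposed population code can be illustrated with a small spiral-template bank: the spirality of the most active template stands in for path curvature and its center for heading. The template form, matching rule, and all parameter values below are illustrative assumptions, not the published model:

```python
import numpy as np

# Image-plane sample points for the flow fields
xs, ys = np.meshgrid(np.linspace(-1, 1, 15), np.linspace(-1, 1, 15))

def spiral_flow(center, angle_deg):
    """Flow field of a spiral template: 0 deg gives pure expansion (radial),
    90 deg pure rotation (circular), and intermediate angles spirals."""
    dx, dy = xs - center[0], ys - center[1]
    a = np.radians(angle_deg)
    return (np.cos(a) * dx - np.sin(a) * dy,    # rotate each radial vector
            np.sin(a) * dx + np.cos(a) * dy)

def most_active_cell(u_obs, v_obs, centers, angles):
    """Return the (center, angle) of the template best matching the observed
    flow under a normalized inner product -- the winning model MSTd cell."""
    def score(c, a):
        u, v = spiral_flow(c, a)
        return (u * u_obs + v * v_obs).sum() / np.sqrt((u ** 2 + v ** 2).sum())
    return max(((c, a) for c in centers for a in angles),
               key=lambda ca: score(*ca))

centers = [(-0.4, 0.0), (-0.2, 0.0), (0.0, 0.0), (0.2, 0.0), (0.4, 0.0)]
angles = [0, 15, 30, 45, 60, 75, 90]

# Observed flow: spirality (30 deg) stands in for path curvature, and the
# flow center (0.2, 0.0) for heading.
u_obs, v_obs = spiral_flow((0.2, 0.0), 30)
center, angle = most_active_cell(u_obs, v_obs, centers, angles)
# center -> (0.2, 0.0) reads out heading; angle -> 30 reads out curvature.
```

The point of the sketch is that one winner simultaneously carries both quantities, one in its tuning (spirality) and one in its visuotopic receptive-field location, as the abstract proposes.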

19.
Interacting in the peripersonal space requires coordinated arm and eye movements to visual targets in depth. In primates, the medial posterior parietal cortex (PPC) represents a crucial node in the process of visual-to-motor signal transformations. The medial PPC area V6A is a key region engaged in the control of these processes because it jointly processes visual information, eye position and arm movement related signals. However, to date, there is no evidence in the medial PPC of spatial encoding in three dimensions. Here, using single neuron recordings in behaving macaques, we studied the neural signals related to binocular eye position in a task that required the monkeys to perform saccades and fixate targets at different locations in peripersonal and extrapersonal space. A significant proportion of neurons were modulated by both gaze direction and depth, i.e., by the location of the foveated target in 3D space. The population activity of these neurons displayed a strong preference for peripersonal space in a time interval around the saccade that preceded fixation and during fixation as well. This preference for targets within reaching distance during both target capturing and fixation suggests that binocular eye position signals are implemented functionally in V6A to support its role in reaching and grasping.

20.
BACKGROUND: It is known that the visibility of patterns presented through stationary multiple slits is significantly improved by pattern movements. This study investigated whether this spatiotemporal pattern interpolation is supported by motion mechanisms, as opposed to the general belief that the human visual cortex initially analyses spatial patterns independent of their movements. RESULTS: Psychophysical experiments showed that multislit viewing could not be ascribed to such motion-irrelevant factors as retinal painting by tracking eye movements or an increase in the number of views by pattern movements. Pattern perception was more strongly impaired by the masking noise moving in the same direction than by the noise moving in the opposite direction, which indicates the direction selectivity of the pattern interpolation mechanism. A direction-selective impairment of pattern perception by motion adaptation also indicates the direction selectivity of the interpolation mechanism. Finally, the map of effective spatial frequencies, estimated by a reverse-correlation technique, indicates observers' perception of higher spatial frequencies, the recovery of which is theoretically impossible without the aid of motion information. CONCLUSIONS: These results provide clear evidence against the notion of separate analysis of pattern and motion. The visual system uses motion mechanisms to integrate spatial pattern information along the trajectory of pattern movement in order to obtain clear perception of moving patterns. The pattern integration mechanism is likely to be direction-selective filtering by V1 simple cells, but the integration of the local pattern information into a global figure should be guided by a higher-order motion mechanism such as MT pattern cells.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号