Similar Articles
1.
Zhang T, Heuer HW, Britten KH. Neuron. 2004;42(6):993-1001
The ventral intraparietal area (VIP) is a multimodal parietal area, where visual responses are brisk, directional, and typically selective for complex optic flow patterns. VIP thus could provide signals useful for visual estimation of heading (self-motion direction). A central problem in heading estimation is how observers compensate for eye velocity, which distorts the retinal motion cues upon which perception depends. To find out if VIP could be useful for heading, we measured its responses to simulated trajectories, both with and without eye movements. Our results showed that most VIP neurons very strongly signal heading direction. Furthermore, the tuning of most VIP neurons was remarkably stable in the presence of eye movements. This stability was such that the population of VIP neurons represented heading very nearly in head-centered coordinates. This makes VIP the most robust source of such signals yet described, with properties ideal for supporting perception.

2.
Haarmeier T, Bunjes F, Lindner A, Berret E, Thier P. Neuron. 2001;32(3):527-535
We usually perceive a stationary, stable world and we are able to correctly estimate the direction of heading from optic flow despite coherent visual motion induced by eye movements. This astonishing example of perceptual invariance results from a comparison of visual information with internal reference signals predicting the visual consequences of an eye movement. Here we demonstrate that the reference signal predicting the consequences of smooth-pursuit eye movements is continuously calibrated on the basis of direction-selective interactions between the pursuit motor command and the rotational flow induced by the eye movement, thereby minimizing imperfections of the reference signal and guaranteeing an ecologically optimal interpretation of visual motion.

3.
When small flying insects go off their intended course, they use the resulting pattern of motion on their eye, or optic flow, to guide corrective steering. A change in heading generates a unique, rotational motion pattern and a change in position generates a translational motion pattern, and each produces corrective responses in the wingbeats. Any image in the flow field can signal rotation, but owing to parallax, only the images of nearby objects can signal translation. Insects that fly near the ground might therefore respond more strongly to translational optic flow that occurs beneath them, as the nearby ground will produce strong optic flow. In these experiments, rigidly tethered fruitflies steered in response to computer-generated flow fields. When correcting for unintended rotations, flies weight the motion in their upper and lower visual fields equally. However, when correcting for unintended translations, flies weight the motion in the lower visual field more strongly. These results are consistent with the interpretation that fruitflies stabilize by attending to visual areas likely to contain the strongest signals during natural flight conditions.

4.
Smooth pursuit eye movements are important for vision because they maintain the line of sight on targets that move smoothly within the visual field. Smooth pursuit is driven by neural representations of motion, including a surprisingly strong influence of high-level signals representing expected motion. We studied anticipatory smooth eye movements (defined as smooth eye movements in the direction of expected future motion) produced by salient visual cues in a group of high-functioning observers with Autism Spectrum Disorder (ASD), a condition that has been associated with difficulties in either generating predictions, or translating predictions into effective motor commands. Eye movements were recorded while participants pursued the motion of a disc that moved within an outline drawing of an inverted Y-shaped tube. The cue to the motion path was a visual barrier that blocked the untraveled branch (right or left) of the tube. ASD participants showed strong anticipatory smooth eye movements whose velocity was the same as that of a group of neurotypical participants. Anticipatory smooth eye movements appeared on the very first cued trial, indicating that trial-by-trial learning was not responsible for the responses. These results are significant because they show that anticipatory capacities are intact in high-functioning ASD in cases where the cue to the motion path is highly salient and unambiguous. Once the ability to generate anticipatory pursuit is demonstrated, the study of the anticipatory responses with a variety of types of cues provides a window into the perceptual or cognitive processes that underlie the interpretation of events in natural environments or social situations.

5.
The primate brain intelligently processes visual information from the world as the eyes move constantly. The brain must take into account visual motion induced by eye movements, so that visual information about the outside world can be recovered. Certain neurons in the dorsal part of monkey medial superior temporal area (MSTd) play an important role in integrating information about eye movements and visual motion. When a monkey tracks a moving target with its eyes, these neurons respond to visual motion as well as to smooth pursuit eye movements. Furthermore, the responses of some MSTd neurons to the motion of objects in the world are very similar during pursuit and during fixation, even though the visual information on the retina is altered by the pursuit eye movement. We call these neurons compensatory pursuit neurons. In this study we develop a computational model of MSTd compensatory pursuit neurons based on physiological data from single unit studies. Our model MSTd neurons can simulate the velocity tuning of monkey MSTd neurons. The model MSTd neurons also show the pursuit compensation property. We find that pursuit compensation can be achieved by divisive interaction between signals coding eye movements and signals coding visual motion. The model makes two predictions that can be tested in future experiments: (1) compensatory pursuit neurons in MSTd should have the same direction preference for pursuit and retinal visual motion; (2) there should be non-compensatory pursuit neurons that show opposite preferred directions of pursuit and retinal visual motion.
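The "compensation property" being tested above can be sketched in a few lines. This is a toy illustration with assumed names and parameters, not the authors' model: the paper combines the signals divisively across a population, while here the simplest additive recombination of retinal slip with an eye-velocity (efference copy) signal stands in for it, so that the model cell's tuning stays fixed in world coordinates during pursuit.

```python
import numpy as np

def tuning(v, pref=8.0, sigma=4.0):
    """Gaussian velocity tuning curve (velocities in deg/s)."""
    return np.exp(-0.5 * ((v - pref) / sigma) ** 2)

def compensatory_neuron(retinal_v, pursuit_v):
    # Recombine retinal slip with an eye-velocity signal so the cell is
    # effectively tuned to world velocity (retinal + pursuit). The paper's
    # model achieves the equivalent shift through divisive interactions.
    return tuning(retinal_v + pursuit_v)

# A world-fixed object moving at 8 deg/s, seen during fixation and during
# 5 deg/s pursuit (which subtracts 5 deg/s from the retinal velocity):
world_v = 8.0
r_fix = compensatory_neuron(world_v, 0.0)
r_pur = compensatory_neuron(world_v - 5.0, 5.0)
print(abs(r_fix - r_pur) < 1e-12)   # responses match despite altered retinal input
```

A non-compensatory unit, in this sketch, would be one whose pursuit and retinal-motion inputs enter with opposite signs, so its response would differ between the two conditions.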

6.
Visual evoked potentials (VEPs) to the onset of motion of visual patterns and brain responses associated with saccadic eye movements (SRPs) were compared in human subjects and in rhesus monkeys. Three different velocities of pattern motion were employed. In humans, brain responses were recorded from six scalp areas. In monkeys, transcortical recordings were obtained from chronically implanted electrodes in the occipital, temporo-parietal, and frontal areas. In humans there was a clear difference in VEPs to the pattern motion between the anterior (Fz, Cz) and posterior (Pz, Oz) scalp regions. The earliest component was a positive peak at 85 ms at Oz followed by a negativity around 110 ms. In the fronto-central leads the VEP was characterized by a negativity at 145 ms and a subsequent broad positive component around 250 ms. SRP responses differed in the early components from the VEPs to pattern motion, but a good correspondence was found in the morphology of the late components of the two types of brain potentials. Furthermore, flashed-on VEPs and SRPs elicited a late positivity of more pronounced amplitude than VEPs to pattern displacement. Similar findings were obtained in monkeys: an early negative component of the pattern-displacement VEP could not be observed in the SRP responses over the visual cortex, while the late portion of the SRP waveform was greater than the late positivity of the VEP to motion-onset.

7.
Observers moving through a three-dimensional environment can use optic flow to determine their direction of heading. Existing heading algorithms use Cartesian flow fields in which image flow is the displacement of image features over time. I explore a heading algorithm that uses affine flow instead. The affine flow at an image feature is its displacement modulo an affine transformation defined by its neighborhood. Modeling the observer's instantaneous motion by a translation and a rotation about an axis through its eye, affine flow is tangent to the translational field lines on the observer's viewing sphere. These field lines form a radial flow field whose center is the direction of heading. The affine flow heading algorithm has characteristics that can be used to determine whether the human visual system relies on it. The algorithm is immune to observer rotation and arbitrary affine transformations of its input images; its accuracy improves with increasing variation in environmental depth; and it cannot recover heading in an environment consisting of a single plane because affine flow vanishes in this case. Translational field lines can also be approximated through differential Cartesian motion. I compare the performance of heading algorithms based on affine flow, differential Cartesian flow, and least-squares search.
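The radial structure of the translational field lines can be illustrated with a short sketch. This is a hypothetical, noise-free construction for the purely translational case, not the affine-flow algorithm itself: every translational flow vector lies on a radial field line through the focus of expansion (FOE), so the heading direction can be recovered by a linear least-squares fit.

```python
import numpy as np

# Scattered scene points at varied depths, seen in a pinhole projection.
rng = np.random.default_rng(0)
n = 200
x = rng.uniform(-1.0, 1.0, n)
y = rng.uniform(-1.0, 1.0, n)
Z = rng.uniform(2.0, 10.0, n)        # variation in environmental depth
T = np.array([0.2, 0.1, 1.0])        # observer translation; heading = T

# Image velocity for pure translation: the FOE sits at (Tx/Tz, Ty/Tz)
# and all flow vectors point radially away from it.
u = (x * T[2] - T[0]) / Z
v = (y * T[2] - T[1]) / Z

# Each flow vector lies on the field line through the FOE:
#   u*(y - ey) - v*(x - ex) = 0   =>   v*ex - u*ey = v*x - u*y
A = np.stack([v, -u], axis=1)
b = v * x - u * y
foe, *_ = np.linalg.lstsq(A, b, rcond=None)
print(foe)                           # ≈ [0.2, 0.1]
```

Note that if all points shared one depth plane, the affine flow discussed in the abstract would vanish; the Cartesian construction above does not show that limitation because it uses raw displacements.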

8.
Within biologically constrained models of heading and complex motion processing, localization of the center-of-motion (COM) is typically an implicit property arising from the precise computation of radial motion direction associated with an observer's forward self-motion. In the work presented here we report psychophysical data from a motion-impaired stroke patient, GZ, whose pattern of visual motion deficits is inconsistent with this view. We show that while GZ is able to discriminate direction in circular motions she is unable to discriminate direction in radial motion patterns. GZ's inability to discriminate radial motion is in stark contrast with her ability to localize the COM in such stimuli and suggests that recovery of the COM does not necessarily require an explicit representation of radial motion direction. We propose that this dichotomy can be explained by a circular template mechanism that minimizes a global motion error relative to the visual motion input, and we demonstrate that a sparse population of such templates is computationally sufficient to account for human psychophysical performance in general and, in particular, for GZ's performance. Recent re-analysis of the predicted receptive field structures in several existing heading models provides additional support for this type of circular template mechanism and suggests the human visual system may have available circular motion mechanisms for heading estimation.
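One way a circular template could localize the COM without any explicit radial-direction signal is sketched below. This is a toy illustration under assumed stimulus parameters, not the authors' implementation: a circular (tangential) template centred on the true COM is orthogonal to a radial flow field everywhere, so the template whose absolute projection onto the input flow is smallest marks the COM.

```python
import numpy as np

def circ_template_com(points, flow, candidates):
    """Pick the candidate centre whose circular template has the smallest
    global motion error (summed |dot product|) against the observed flow."""
    best, best_err = None, np.inf
    for c in candidates:
        r = points - c
        r /= np.linalg.norm(r, axis=1, keepdims=True)
        tang = np.stack([-r[:, 1], r[:, 0]], axis=1)   # circular template
        err = np.abs(np.sum(tang * flow, axis=1)).sum()
        if err < best_err:
            best, best_err = c, err
    return best

# Radial (expansion) flow around a hidden centre-of-motion.
rng = np.random.default_rng(2)
pts = rng.uniform(-1.0, 1.0, (300, 2))
com = np.array([0.3, -0.2])
f = pts - com
f /= np.linalg.norm(f, axis=1, keepdims=True)
grid = [np.array([gx, gy]) for gx in np.linspace(-1, 1, 21)
                            for gy in np.linspace(-1, 1, 21)]
est = circ_template_com(pts, f, grid)
print(est)   # ≈ [0.3, -0.2]
```

The error is minimized by orthogonality alone, so the same sparse template bank would localize the COM whether the radial flow expands or contracts, which is consistent with localization surviving a loss of radial direction discrimination.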

9.
In contradistinction to conventional wisdom, we propose that retinal image slip of a visual scene (optokinetic pattern, OP) does not constitute the only crucial input for visually induced percepts of self-motion (vection). Instead, we investigate the hypothesis that there are three input factors: 1) OP retinal image slip, 2) motion of the ocular orbital shadows across the retinae, and 3) smooth pursuit eye movements (efference copy). To test this hypothesis, we visually induced percepts of sinusoidal rotatory self-motion (circular vection, CV) in the absence of vestibular stimulation. Subjects were presented with three concurrent stimuli: a large visual OP, a fixation point to be pursued with the eyes (both projected in superposition on a semi-circular screen), and a dark window frame placed close to the eyes to create artificial visual field boundaries that simulate ocular orbital rim boundary shadows, but which could be moved across the retinae independently of eye movements. In different combinations these stimuli were independently moved or kept stationary. When moved together (horizontally and sinusoidally around the subject's head), they did so in precise temporal synchrony at 0.05 Hz. The results show that the occurrence of CV requires retinal slip of the OP and/or relative motion between the orbital boundary shadows and the OP. On the other hand, CV does not develop when the two retinal slip signals equal each other (no relative motion) and concur with pursuit eye movements (as is the case, e.g., when our eyes follow a target moving across a stationary visual scene). The findings were formalized in terms of a simulation model. In the model, two signals coding relative motion between OP and head are fused and fed into the mechanism for CV: a visuo-oculomotor signal derived from OP retinal slip and the eye-movement efference copy, and a purely visual signal of relative motion between the orbital rims (head) and the OP. The latter signal is also used, together with a version of the oculomotor efference copy, by a mechanism that suppresses CV at a later stage of processing in conditions in which the retinal slip signals are self-generated by smooth pursuit eye movements.

10.
11.
Investigation of perceptual rivalry between conflicting stimuli presented one to each eye can further our understanding of the neural underpinnings of conscious visual perception. During rivalry, visual awareness fluctuates between perceptions of the two stimuli. Here, we demonstrate that high-level perceptual grouping can promote rivalry between stimulus pairs that would otherwise be perceived as nonrivalrous. Perceptual grouping was generated with point-light walker stimuli that simulate human motion, visible only as lights placed on the joints. Although such walking figures are unrecognizable when stationary, recognition judgments as complex as gender and identity can accurately be made from animated displays, demonstrating the efficiency with which our visual system can group dynamic local signals into a globally coherent walking figure. We find that point-light walker stimuli presented one to each eye and in different colors and configurations result in strong rivalry. However, rivalry is minimal when the two walkers are split between the eyes or both presented to one eye. This pattern of results suggests that processing animated walker figures promotes rivalry between signals from the two eyes rather than between higher-level representations of the walkers. This leads us to hypothesize that awareness during binocular rivalry involves the integrated activity of high-level perceptual mechanisms in conjunction with lower-level ocular suppression modulated via cortical feedback.

12.
Even if a stimulus pattern moves at a constant velocity across the receptive field of motion-sensitive neurons, such as lobula plate tangential cells (LPTCs) of flies, the response amplitude modulates over time. The amplitude of these response modulations is related to local pattern properties of the moving retinal image. On the one hand, pattern-dependent response modulations have previously been interpreted as 'pattern noise', because they deteriorate the neuron's ability to provide unambiguous velocity information. On the other hand, these modulations might also provide the system with valuable information about the textural properties of the environment. We analyzed the influence of the size and shape of receptive fields by simulations of four versions of LPTC models consisting of arrays of elementary motion detectors of the correlation type (EMDs). These models have previously been suggested to account for many aspects of LPTC response properties. Pattern-dependent response modulations decrease with an increasing number of EMDs included in the receptive field of the LPTC models, since spatial changes within the visual field are smoothed out by the summation of spatially displaced EMD responses. This effect depends on the shape of the receptive field, and is more pronounced, for a given total size, the more elongated the receptive field is along the direction of motion. Large elongated receptive fields improve the quality of velocity signals. However, if motion signals need to be localized, velocity coding is poor, but the signal provides potentially useful local pattern information. These modelling results suggest that motion vision by correlation-type movement detectors is subject to an uncertainty: you cannot obtain both an unambiguous and a localized velocity signal from the output of a single cell. Hence, the size and shape of receptive fields of motion-sensitive neurons should be matched to their potential computational task.
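The core effect, that summing more spatially displaced EMD outputs smooths out pattern-dependent modulations, can be reproduced with a minimal correlation-type detector array. The sketch below uses assumed parameters and a 1D stimulus, not the authors' four LPTC model versions: each EMD multiplies a low-pass-delayed photoreceptor signal with its undelayed neighbour and subtracts the mirror term.

```python
import numpy as np

def emd_array(frames, tau=3.0):
    """Array of correlation-type EMDs over a (time, space) luminance array:
    delayed (first-order low-pass) signal times undelayed neighbour,
    minus the mirror-symmetric product."""
    a = 1.0 / tau
    lp = np.empty_like(frames)
    lp[0] = frames[0]
    for t in range(1, len(frames)):
        lp[t] = lp[t - 1] + a * (frames[t] - lp[t - 1])
    return lp[:, :-1] * frames[:, 1:] - frames[:, :-1] * lp[:, 1:]

# Random texture drifting rightward at a constant 1 pixel/frame.
rng = np.random.default_rng(1)
tex = rng.uniform(0.0, 1.0, 256)
frames = np.stack([np.roll(tex, t) for t in range(200)])

resp = emd_array(frames)[50:]           # discard the filter transient
cv = lambda r: r.std() / abs(r.mean())  # pattern-dependent modulation index
small = resp[:, :8].sum(axis=1)         # few EMDs: strong modulation
large = resp.sum(axis=1)                # many EMDs: modulations smoothed out
print(cv(small) > cv(large))            # True
```

Despite the constant stimulus velocity, the 8-EMD sum fluctuates strongly with the local texture passing through it, while the full-field sum is nearly constant, the trade-off between localized and unambiguous velocity signals described above.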

13.
Visual illusions are valuable tools for the scientific examination of the mechanisms underlying perception. In the peripheral drift illusion, special drift patterns appear to move although they are static. During fixation, small involuntary eye movements generate retinal image slips, which need to be suppressed for stable perception. Here we show that the peripheral drift illusion reveals the mechanisms of perceptual stabilization associated with these micromovements. In a series of experiments we found that illusory motion was only observed in the peripheral visual field. The strength of illusory motion varied with the degree of micromovements. However, drift patterns presented in the central (but not the peripheral) visual field modulated the strength of illusory peripheral motion. Moreover, although central drift patterns were not perceived as moving, they elicited illusory motion of neutral peripheral patterns. Central drift patterns modulated illusory peripheral motion even when micromovements remained constant. Interestingly, perceptual stabilization was only affected by static drift patterns, but not by real motion signals. Our findings suggest that perceptual instabilities caused by fixational eye movements are corrected by a mechanism that relies on visual rather than extraretinal (proprioceptive or motor) signals, and that drift patterns systematically bias this compensatory mechanism. These mechanisms may be revealed by utilizing static visual patterns that give rise to the peripheral drift illusion, but remain undetected with other patterns. Accordingly, the peripheral drift illusion is of unique value for examining processes of perceptual stabilization.

14.
On the basis of Hanada and Ejima's (2000) model, an algorithmic model is presented to explain psychophysical data of van den Berg and Beintema (2000) that are inconsistent with vector-subtractive compensation for the rotational flow. The earlier model was modified so that it no longer relies on vector-subtractive compensation for the rotational flow. The proposed model computes the center of flow first and then estimates self-rotation; finally, heading is recovered from the center of flow and the estimate of self-rotation. The model explains the data of van den Berg and Beintema (2000). A fusion model of rotation estimates from different sources (efferent signals, proprioceptive feedback, vestibular signals about eye and head rotation, and visual motion) is also presented.

15.
Whether fundamental visual attributes, such as color, motion, and shape, are analyzed separately in specialized pathways has been one of the central questions of visual neuroscience. Although recent studies have revealed various forms of cross-attribute interactions, including significant contributions of color signals to motion processing, it is still widely believed that color perception is relatively independent of motion processing. Here, we report a new color illusion, motion-induced color mixing, in which moving bars, the color of each of which alternates between two colors (e.g., red and green), are perceived as the mixed color (e.g., yellow) even though the two colors are never superimposed on the retina. The magnitude of color mixture is significantly stronger than that expected from direction-insensitive spatial integration of color signals. This illusion cannot be ascribed to optical image blurs, including those induced by chromatic aberration, or to involuntary eye movements of the observer. Our findings indicate that color signals are integrated not only at the same retinal location, but also along a motion trajectory. It is possible that this neural mechanism helps us to see veridical colors for moving objects by reducing motion blur, as in the case of luminance-based pattern perception.

16.
The object of this study is to mathematically specify important characteristics of visual flow during translation of the eye for the perception of depth and self-motion. We address various strategies by which the central nervous system may estimate self-motion and depth from motion parallax, using equations for the visual velocity field generated by translation of the eye through space. Our results focus on information provided by the movement and deformation of three-dimensional objects and on local flow behavior around a fixated point. All of these issues are addressed mathematically in terms of definite equations for the optic flow. This formal characterization of the visual information presented to the observer is then considered in parallel with other sensory cues to self-motion in order to see how these contribute to the effective use of visual motion parallax, and how parallactic flow can, conversely, contribute to the sense of self-motion. This article will focus on a central case, for understanding of motion parallax in spacious real-world environments, of monocular visual cues observable during pure horizontal translation of the eye through a stationary environment. We suggest that the global optokinetic stimulus associated with visual motion parallax must converge in significant fashion with vestibular and proprioceptive pathways that carry signals related to self-motion. Suggestions of experiments to test some of the predictions of this study are made.

17.
In motion-processing areas of the visual cortex in cats and monkeys, an anisotropic distribution of direction selectivities displays a preference for movements away from the fovea. This 'centrifugal bias' has been hypothetically linked to the processing of optic flow fields generated during forward locomotion. In this paper, we show that flow fields induced on the retina in many natural situations of locomotion of higher mammals are indeed qualitatively centrifugal in structure, even when biologically plausible eye movements to stabilize gaze on environmental targets are performed. We propose a network model of heading detection that carries an anisotropy similar to the one found in cat and monkey. In simulations, this model reproduces a number of psychophysical results of human heading detection. It suggests that a recently reported inability of human observers to correctly identify the direction of heading from optic flow when a certain type of eye movement is simulated might be linked to the noncentrifugal structure of the resulting retinal flow field and to the neurophysiological anisotropies. Received: 1 April 1994 / Accepted in revised form: 4 August 1994

18.
Zanker JM. Spatial Vision. 2004;17(1-2):75-94
Art history tells an exciting story about repeated attempts to represent features that are crucial for the understanding of our environment and which, at the same time, go beyond the inherently two-dimensional nature of a flat painting surface: depth and motion. In the twentieth century, Op artists such as Bridget Riley began to experiment with simple black and white patterns that do not represent motion in an artistic way but actually create vivid dynamic illusions in static pictures. The cause of motion illusions in such paintings is still a matter of debate. The role of involuntary eye movements in this phenomenon is studied here with a computational approach. The possible consequences of shifting the retinal image of synthetic wave gratings, dubbed 'riloids', were analysed by a two-dimensional array of motion detectors (2DMD model), which generates response maps representing the spatial distribution of motion signals generated by such a stimulus. For a two-frame sequence reflecting a saccadic displacement, these motion signal maps contain extended patches in which local directions change only little. These directions, however, do not usually correspond precisely to the direction of pattern displacement expected from the geometry of the curved gratings, an instance of the so-called 'aperture problem'. The patchy structure of the simulated motion detector response to the displacement of riloids resembles the motion illusion, which is not perceived as a coherent shift of the whole pattern but as a wobbling and jazzing of ill-defined regions. Although other explanations are not excluded, this might support the view that the puzzle of Op Art motion illusions could have an almost trivial solution: small involuntary eye movements lead to image shifts that are picked up by well-known motion detectors in the early visual system. This view has further consequences for our understanding of how the human visual system usually compensates for eye movements, in order to let us perceive a stable world despite continuous image shifts generated by gaze instability.

19.
Visual systems are typically selective in their response to movement. This attribute facilitates the identification of functionally important motion events. Here we show that the complex push-up display produced by male Jacky dragons (Amphibolurus muricatus) is likely to have been shaped by an interaction between typical signalling conditions and the sensory properties of receivers. We use novel techniques to define the structure of the signal and of a range of typical moving backgrounds in terms of direction, speed, acceleration and sweep area. Results allow us to estimate the relative conspicuousness of each motor pattern in the stereotyped sequence of which displays are composed. The introductory tail-flick sweeps a large region of the visual field, is sustained for much longer than other components, and has velocity characteristics that ensure it will not be filtered in the same way as wind-blown vegetation. These findings are consistent with the idea that the tail-flick has an alerting function. Quantitative analyses of movement-based signals can hence provide insights into sensory processes, which should facilitate identification of the selective forces responsible for structure. Results will complement the detailed models now available to account for the design of static visual signals.

20.
Our understanding of how the visual system processes motion transparency, the phenomenon by which multiple directions of motion are perceived to coexist in the same spatial region, has grown considerably in the past decade. There is compelling evidence that the process is driven by global-motion mechanisms. Consequently, although transparently moving surfaces are readily segmented over an extended space, the visual system cannot separate two motion signals that coexist in the same local region. A related issue is whether the visual system can detect transparently moving surfaces simultaneously or whether the component signals encounter a serial 'bottleneck' during their processing. Our initial results show that, at sufficiently short stimulus durations, observers cannot accurately detect two superimposed directions; yet they have no difficulty in detecting one pattern direction in noise, supporting the serial-bottleneck scenario. However, in a second experiment, the difference in performance between the two tasks disappears when the component patterns are segregated. This discrepancy between the processing of transparent and non-overlapping patterns may be a consequence of suppressed activity of global-motion mechanisms when the transparent surfaces are presented in the same depth plane. To test this explanation, we repeated our initial experiment while separating the motion components in depth. The marked improvement in performance leads us to conclude that transparent motion signals are represented simultaneously.
