Similar Articles
20 similar articles found (search time: 31 ms)
1.
We investigated the neural mechanisms underlying visual localization in 3-D space in area V1 of behaving monkeys. Three different sources of information that participate in these neural mechanisms, retinal disparity, viewing distance and gaze direction, are reviewed here. The way they interact with each other is studied by combining retinal and extraretinal signals. Interactions between retinal disparity and viewing distance have been shown in foveal V1; we have observed a strong modulation of the spontaneous activity and of the visual response of most V1 cells that was highly correlated with the vergence angle. As a consequence of these gain effects, neural horizontal disparity coding is favoured or refined for particular fixation distances. Changing the gaze direction in the fronto-parallel plane also produces strong gains in the visual response of half of the cells in foveal V1. Cells tested for horizontal disparity and orientation selectivities show gain effects that occur coherently for the same spatial coordinates of the eyes. Shifts in preferred disparity also occurred in several neurons. Cells tested in calcarine V1 at retinal eccentricities larger than 10 degrees show that horizontal disparity is encoded at least up to 20 degrees around both the horizontal and vertical meridians. At these large retinal eccentricities we found that vertical disparity is also encoded, with tuning profiles similar to those of horizontal disparity coding. Combinations of horizontal and vertical disparity signals show that most cells encode both properties. In fact, the expression of horizontal disparity coding depends on the vertical disparity signals, which produce strong gain effects and frequent changes in peak selectivities. We conclude that the vertical disparity signal and the eye position signal serve to disambiguate the horizontal disparity signal, providing information on 3-D spatial coordinates in terms of distance, gaze direction and retinal eccentricity.
We suggest that the relative weighting of these different signals is the determining factor in the neural processing that yields information for 3-D spatial localization.
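The multiplicative gain-field idea described in this abstract can be sketched numerically. In the sketch below, the Gaussian tuning shapes, widths and preferred values are illustrative assumptions, not measurements from the study:

```python
import numpy as np

# Hypothetical sketch: a V1 cell's horizontal-disparity tuning curve is
# multiplicatively scaled by a gain that depends on the vergence angle
# (viewing distance). All parameter values below are illustrative.

def disparity_response(disparity_deg, vergence_deg,
                       preferred_disparity=0.2, tuning_width=0.3,
                       preferred_vergence=4.0, gain_width=2.0):
    """Gaussian disparity tuning multiplied by a vergence-dependent gain."""
    tuning = np.exp(-((disparity_deg - preferred_disparity) / tuning_width) ** 2)
    gain = np.exp(-((vergence_deg - preferred_vergence) / gain_width) ** 2)
    return gain * tuning

# The same retinal disparity evokes a stronger response at the cell's
# preferred viewing distance than at a farther one:
near = disparity_response(0.2, vergence_deg=4.0)
far = disparity_response(0.2, vergence_deg=8.0)
```

Under this scheme, disparity coding is "favoured" at particular fixation distances simply because the gain term boosts the whole tuning curve there.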

2.
Although it is generally accepted that visual information guides steering, it is still unclear whether a curvature matching strategy or a ‘look where you are going’ strategy is used while steering through a curved road. The current experiment investigated to what extent the existing models for curve driving also apply to cycling around a curve, and tested the influence of cycling speed on steering and gaze behavior. Twenty-five participants were asked to cycle through a semicircular lane three consecutive times at three different speeds while staying in the center of the lane. The observed steering behavior suggests that an anticipatory steering strategy was used at curve entrance and a compensatory strategy was used to steer through the actual bend of the curve. A shift of gaze from the center to the inside edge of the lane indicates that at low cycling speed the ‘look where you are going’ strategy was preferred, while at higher cycling speeds participants seemed to prefer the curvature matching strategy. The authors suggest that visual information from both steering strategies contributes to the steering system and can be used in a flexible way. Based on a familiarization effect, it can be assumed that steering is not only guided by vision but that a short-term learning component should also be taken into account.

3.
We investigated coordinated movements between the eyes and head (“eye-head coordination”) in relation to vision for action. Several studies have measured eye and head movements during a single gaze shift, focusing on the mechanisms of motor control during eye-head coordination. However, in everyday life, gaze shifts occur sequentially and are accompanied by movements of the head and body. Under such conditions, visual cognitive processing influences eye movements and might also influence eye-head coordination because sequential gaze shifts include cycles of visual processing (fixation) and data acquisition (gaze shifts). In the present study, we examined how the eyes and head move in coordination during visual search in a large visual field. Subjects moved their eyes, head, and body without restriction inside a 360° visual display system. We found patterns of eye-head coordination that differed from those observed in single gaze-shift studies. First, we frequently observed multiple saccades during one continuous head movement, and the contribution of head movement to gaze shifts increased as the number of saccades increased. This relationship between head movements and sequential gaze shifts suggests eye-head coordination over several saccade-fixation sequences; this could be related to cognitive processing because saccade-fixation cycles are the result of visual cognitive processing. Second, the distribution bias of eye position during gaze fixation was highly correlated with head orientation. The distribution peak of eye position was biased in the same direction as head orientation. This influence of head orientation suggests that eye-head coordination is involved in gaze fixation, when the visual system processes retinal information. This further supports the role of eye-head coordination in visual cognitive processing.

4.
We examine the structure of the visual motion projected on the retina during natural locomotion in real world environments. Bipedal gait generates a complex, rhythmic pattern of head translation and rotation in space, so without gaze stabilization mechanisms such as the vestibulo-ocular reflex (VOR) a walker’s visually specified heading would vary dramatically throughout the gait cycle. The act of fixation on stable points in the environment nulls image motion at the fovea, resulting in stable patterns of outflow on the retinae centered on the point of fixation. These outflowing patterns retain a higher order structure that is informative about the stabilized trajectory of the eye through space. We measure this structure by applying the curl and divergence operations to the retinal flow velocity vector fields, and we find features that may be valuable for the control of locomotion. In particular, the sign and magnitude of foveal curl in retinal flow specify the body’s trajectory relative to the gaze point, while the point of maximum divergence in the retinal flow field specifies the walker’s instantaneous overground velocity/momentum vector in retinotopic coordinates. Assuming that walkers can determine the body position relative to gaze direction, these time-varying retinotopic cues for the body’s momentum could provide a visual control signal for locomotion over complex terrain. In contrast, the temporal variation of the eye-movement-free, head-centered flow fields is large enough to be problematic for use in steering towards a goal. Consideration of optic flow in the context of real-world locomotion therefore suggests a re-evaluation of the role of optic flow in the control of action during natural behavior.
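The curl and divergence measurements described above can be computed numerically from a sampled flow field. A minimal finite-difference sketch (not the authors' code) using NumPy:

```python
import numpy as np

def flow_curl_divergence(vx, vy, spacing=1.0):
    """Curl (z-component) and divergence of a 2D velocity field.

    vx, vy: 2D arrays of horizontal/vertical flow components sampled
    on a regular grid with the given spacing.
    """
    # np.gradient returns derivatives along axis 0 (rows, y) then axis 1 (cols, x)
    dvx_dy, dvx_dx = np.gradient(vx, spacing)
    dvy_dy, dvy_dx = np.gradient(vy, spacing)
    curl = dvy_dx - dvx_dy   # rotational component of the flow
    div = dvx_dx + dvy_dy    # expansion/contraction of the flow
    return curl, div

# Synthetic check: pure rotation about the origin, v = (-y, x).
n = 21
y, x = np.mgrid[-1:1:n*1j, -1:1:n*1j]
curl, div = flow_curl_divergence(-y, x, spacing=2.0 / (n - 1))
# For this field, curl = 2 everywhere and divergence = 0.
```

A purely rotational field has nonzero curl and zero divergence; a radial outflow (as around the point of maximum divergence in retinal flow) would give the opposite pattern.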

5.
Land MF, Tatler BW. Current Biology 2001;11(15):1215-1220
We studied the eye movements of a racing driver during high-speed practice to see whether he took in visual information in a different way from a normal driver on a winding road [1, 2]. We found that, when cornering, he spent most of the time looking close to, but not exactly at, the tangent points on the inside edges of the bends. Each bend was treated slightly differently, and there was a highly repeatable pattern to the way the track edge was viewed throughout each bend. We also found a very close relationship between the driver's head direction and the rate of rotation of the car 1 s later. We interpret these observations as indicating that the driver's gaze is not driven directly by tangent point location, as it is in ordinary driving. Instead, we propose that his head direction is driven by the same information that he uses to control steering and speed, namely his knowledge of the track and his racing line round it. If he directs his head at an angle proportional to his estimate of car rotation speed, this will automatically bring his head roughly into line with the tangent points of the bends. From this standardized position, he can use the expected movements of the tangent points in his field of view to verify, and if necessary modify, his racing line during the following second.

6.
For sensory signals to control an animal's behavior, they must first be transformed into a format appropriate for use by its motor systems. This fundamental problem is faced by all animals, including humans. Beyond simple reflexes, little is known about how such sensorimotor transformations take place. Here we describe how the outputs of a well-characterized population of fly visual interneurons, lobula plate tangential cells (LPTCs), are used by the animal's gaze-stabilizing neck motor system. The LPTCs respond to visual input arising from both self-rotations and translations of the fly. The neck motor system however is involved in gaze stabilization and thus mainly controls compensatory head rotations. We investigated how the neck motor system is able to selectively extract rotation information from the mixed responses of the LPTCs. We recorded extracellularly from fly neck motor neurons (NMNs) and mapped the directional preferences across their extended visual receptive fields. Our results suggest that, like the tangential cells, NMNs are tuned to panoramic retinal image shifts, or optic flow fields, which occur when the fly rotates about particular body axes. In many cases, tangential cells and motor neurons appear to be tuned to similar axes of rotation, resulting in a correlation between the coordinate systems the two neural populations employ. However, in contrast to the primarily monocular receptive fields of the tangential cells, most NMNs are sensitive to visual motion presented to either eye. This results in the NMNs being more selective for rotation than the LPTCs. Thus, the neck motor system increases its rotation selectivity by a comparatively simple mechanism: the integration of binocular visual motion information.

7.
Authié CN, Mestre DR. PLoS ONE 2012;7(2):e31479
Many experimental approaches to the control of steering rely on the tangent point (TP) as a major source of information. The TP is a good candidate for the control of self-motion. It corresponds to a singular and salient point in the subject's visual field, and its location depends on the road geometry, the direction of self-motion relative to the road and the position of the driver on the road. However, the particular status of the TP in the optical flow, as a local minimum of flow speed, has often been left aside. We therefore assume that the TP is actually an optimal location in the dynamic optical array to perceive a change in the trajectory curvature. In this study, we evaluated the ability of human observers to detect variations in their path curvature from optical flow patterns, as a function of their gaze direction in a virtual environment. We simulated curvilinear self-motion parallel to a ground plane. Using random-dot optic flow stimuli of brief duration and a two-alternative forced-choice adaptive procedure, we determined path curvature discrimination thresholds as a function of gaze direction. The discrimination thresholds are minimal for a gaze directed toward a local minimum of optical flow speed. A model based on the Weber fraction of the foveal velocities (ΔV/V) correctly predicts the relationship between experimental thresholds and local flow velocities. This model was also tested for an optical flow computation integrating larger circular areas in central vision. Averaging the flow over five degrees leads to an even better fit of the model to experimental thresholds. We also found that the minimal optical flow speed direction corresponds to a maximal sensitivity of the visual system, as predicted by our model. The spontaneous gazing strategies observed during driving might thus correspond to an optimal selection of relevant information in the optical flow field.
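A Weber-fraction model of the kind described above predicts that the discrimination threshold scales with the local flow speed, so sensitivity is best where flow speed is lowest. A minimal sketch in which the Weber fraction and the sampled flow speeds are illustrative assumptions, not the paper's fitted values:

```python
import numpy as np

# Hypothetical sketch of a constant-Weber-fraction model: the curvature
# discrimination threshold at a given gaze direction is proportional to
# the local foveal flow speed, threshold = k * V (i.e. dV/V = k).
# The value of k and the flow speeds are illustrative only.

def weber_threshold(local_flow_speed, weber_fraction=0.08):
    """Predicted discrimination threshold for a given local flow speed."""
    return weber_fraction * np.asarray(local_flow_speed, dtype=float)

# Flow speed sampled at several gaze directions (deg/s, illustrative);
# the local flow-speed minimum plays the role of the tangent point.
flow_speeds = np.array([2.0, 0.5, 0.1, 0.8, 3.0])
thresholds = weber_threshold(flow_speeds)
best_gaze = np.argmin(thresholds)  # lowest threshold at the flow-speed minimum
```

Because the threshold is proportional to V, the gaze direction with the lowest local flow speed automatically yields the best curvature sensitivity, matching the study's conclusion about the TP.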

8.
Every day we shift our gaze about 150,000 times, mostly without noticing it. The directions of these gaze shifts are not random but are guided by sensory information and internal factors. After each movement the eyes hold still for a brief moment so that visual information at the center of our gaze can be processed in detail. This means that visual information at the saccade target location is sufficient to accurately guide the gaze shift, yet is not sufficiently processed to be fully perceived. In this paper I will discuss the possible role of activity in the primary visual cortex (V1), in particular figure-ground activity, in oculomotor behavior. Figure-ground activity occurs during the late response period of V1 neurons and correlates with perception. The strength of figure-ground responses predicts the direction and moment of saccadic eye movements. The superior colliculus, a gaze control center that integrates visual and motor signals, receives direct anatomical connections from V1. These projections may convey the perceptual information that is required for appropriate gaze shifts. In conclusion, figure-ground activity in V1 may act as an intermediate component linking visual and motor signals.

9.
The ability to follow gaze (i.e. head and eye direction) has recently been shown for social mammals, particularly primates. In most studies, individuals could use gaze direction as a behavioural cue without understanding that the view of others may be different from their own. Here, we show that hand-raised ravens not only visually co-orient with the look-ups of a human experimenter but also reposition themselves to follow the experimenter's gaze around a visual barrier. The birds were capable of visual co-orientation already as fledglings, but did not consistently track gaze direction behind obstacles until six months of age. These results raise the possibility that sub-adult and adult ravens can project another person's line of sight into the distance. To what extent ravens may attribute mental significance to the visual behaviour of others is discussed.

10.
Optomotor flight control in houseflies shows bandwidth fractionation such that steering responses to an oscillating large-field rotating panorama peak at low frequency, whereas responses to small-field objects peak at high frequency. In fruit flies, steady-state large-field translation generates steering responses that are three times larger than large-field rotation. Here, we examine the optomotor steering reactions to dynamically oscillating visual stimuli consisting of large-field rotation, large-field expansion, and small-field motion. The results show that, as in larger flies, large-field optomotor steering responses peak at low frequency, whereas small-field responses persist under high frequency conditions. However, in fruit flies large-field expansion elicits higher magnitude and more tightly phase-locked optomotor responses than rotation throughout the frequency spectrum, which may suggest a further segregation within the large-field pathway. An analysis of wing beat frequency and amplitude reveals that mechanical power output during flight varies according to the spatial organization and motion dynamics of the visual scene. These results suggest that, as in larger flies, the optomotor control system is organized into parallel large-field and small-field pathways; they extend previous analyses by quantifying expansion sensitivity for steering reflexes and flight power output across the frequency spectrum.

11.
Sensing is often implicitly assumed to be the passive acquisition of information. However, part of the sensory information is generated actively when animals move. For instance, humans shift their gaze actively in a sequence of saccades towards interesting locations in a scene. Likewise, many insects shift their gaze by saccadic turns of body and head, keeping their gaze fixed between saccades. Here we employ a novel panoramic virtual reality stimulator and show that motion computation in a blowfly visual interneuron is tuned to make efficient use of the characteristic dynamics of retinal image flow. The neuron is able to extract information about the spatial layout of the environment by utilizing intervals of stable vision resulting from the saccadic viewing strategy. The extraction is possible because the retinal image flow evoked by translation, containing information about object distances, is confined to low frequencies. This flow component can be derived from the total optic flow between saccades because the residual intersaccadic head rotations are small and encoded at higher frequencies. Information about the spatial layout of the environment can thus be extracted by the neuron in a computationally parsimonious way. These results on neuronal function, based on naturalistic, behaviourally generated optic flow, are in stark contrast to the conclusion drawn from conventional visual stimuli that the neuron primarily represents a detector for yaw rotations of the animal.
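The frequency-based separation described above can be illustrated with a toy signal: a slow "translational" component plus fast "rotational" jitter, with the slow component recovered by a first-order low-pass filter. The frequencies, amplitudes and filter constant below are illustrative assumptions, not values from the recordings:

```python
import numpy as np

def lowpass(signal, alpha=0.05):
    """First-order (exponential) low-pass filter, sample by sample."""
    out = np.empty_like(signal)
    state = signal[0]
    for i, s in enumerate(signal):
        state += alpha * (s - state)
        out[i] = state
    return out

# Toy intersaccadic flow: slow translational component (1 Hz) plus
# fast residual-rotation jitter (40 Hz), sampled at 1 kHz.
t = np.arange(1000) / 1000.0
translation = np.sin(2 * np.pi * 1.0 * t)
rotation_jitter = 0.3 * np.sin(2 * np.pi * 40.0 * t)
recovered = lowpass(translation + rotation_jitter)
# The recovered signal tracks the slow, distance-informative component.
```

Because the two components occupy separate frequency bands, a very simple filter suffices, which is the sense in which the extraction is "computationally parsimonious".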

12.
Mars F, Navarro J. PLoS ONE 2012;7(8):e43858
Current theories on the role of visuomotor coordination in driving agree that active sampling of the road by the driver informs the arm-motor system in charge of performing actions on the steering wheel. Still under debate, however, is the nature of the visual cues and gaze strategies used by drivers. In particular, the tangent point hypothesis, which states that drivers look at a specific point on the inside edge line, has recently become the object of controversy. An alternative hypothesis proposes that drivers orient gaze toward the desired future path, which happens to be often situated in the vicinity of the tangent point. The present study contributed to this debate through analyses of the distribution of gaze orientation with respect to the tangent point. The results revealed that drivers sampled the roadway in the close vicinity of the tangent point rather than the tangent point proper. This supports the idea that drivers look at the boundary of a safe trajectory envelope near the inside edge line. Furthermore, the study investigated for the first time the reciprocal influence of manual control on gaze control in the context of driving. This was achieved through the comparison of gaze behavior when drivers actively steered the vehicle or when steering was performed by an automatic controller. The results showed an increase in look-ahead fixations in the direction of the bend exit and a small but consistent reduction in the time spent looking in the area of the tangent point when steering was passive. This may be the consequence of a change in the balance between cognitive and sensorimotor anticipatory gaze strategies. It might also reflect bidirectional coordination control between the eye and arm-motor systems, which goes beyond the common assumption that the eyes lead the hands when driving.

13.
Changes in flight direction in flying insects are largely due to roll, yaw and pitch rotations of their body. Head orientation is stabilized for most of the time by counter rotation. Here, we use high-speed video to analyse head- and body-movements of the bumblebee Bombus terrestris while approaching and departing from a food source located between three landmarks in an indoor flight-arena. The flight paths consist of almost straight flight segments that are interspersed with rapid turns. These short and fast yaw turns (“saccades”) are usually accompanied by even faster head yaw turns that change gaze direction. Since a large part of image rotation is thereby reduced to brief instants of time, this behavioural pattern facilitates depth perception from visual motion parallax during the intersaccadic intervals. The detailed analysis of the fine structure of the bees’ head turning movements shows that the time course of single head saccades is very stereotypical. We find a consistent relationship between the duration, peak velocity and amplitude of saccadic head movements, which in its main characteristics resembles the so-called "saccadic main sequence" in humans. The fact that bumblebee head saccades are highly stereotyped, as in humans, may hint at a common principle, where fast and precise motor control is used to reliably reduce the time during which the retinal image moves.
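A "main sequence" relationship of the kind referred to above is often modelled as peak velocity rising with saccade amplitude and then saturating. The functional form and parameter values in this sketch are hypothetical, not fitted to the bumblebee data:

```python
import math

# Illustrative main-sequence model: V_peak = V_max * (1 - exp(-A / C)),
# i.e. peak velocity grows with amplitude A and saturates at V_max.
# v_max and c below are hypothetical placeholder values.

def main_sequence_peak_velocity(amplitude_deg, v_max=600.0, c=12.0):
    """Peak velocity (deg/s) predicted for a saccade of given amplitude (deg)."""
    return v_max * (1.0 - math.exp(-amplitude_deg / c))

velocities = [main_sequence_peak_velocity(a) for a in (2, 5, 10, 20, 40)]
# Velocity increases monotonically with amplitude and approaches v_max.
```

A stereotyped duration-amplitude-velocity relationship like this is what makes the saccades "precise": a single motor program, scaled by amplitude, covers the whole repertoire.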

14.
Human observers perceive illusory rotations after the disappearance of circularly repeating patches containing dark-to-light luminance. This afterimage rotation is a very powerful phenomenon, but little is known about the mechanisms underlying it. Here, we use a computational model to show that the afterimage rotation can be explained by a combination of fast light adaptation and the physiological architecture of the early visual system, consisting of ON- and OFF-type visual pathways. In this retinal ON/OFF model, the afterimage rotation appeared as a rotation of focus lines of retinal ON/OFF responses. Focus lines rotated clockwise on a light background, but counterclockwise on a dark background. These findings were consistent with the results of psychophysical experiments that we also performed. Additionally, the velocity of the afterimage rotation was comparable with that observed in our psychophysical experiments. These results suggest that the early visual system (including the retina) is responsible for the generation of the afterimage rotation, and that this illusory rotation may be systematically misinterpreted by our high-level visual system.
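The two ingredients named in the abstract, fast light adaptation and ON/OFF pathway splitting, can be sketched minimally as a leaky adaptation state followed by half-wave rectification. This is a schematic in the spirit of the model, not the authors' implementation; all parameters are illustrative:

```python
import numpy as np

# Minimal ON/OFF sketch: the adapted signal is the deviation of the
# stimulus from a leaky running average (fast light adaptation); the
# ON and OFF channels half-wave rectify its positive and negative parts.
# tau (adaptation time constant, in samples) is an illustrative value.

def on_off_responses(luminance, tau=5.0):
    luminance = np.asarray(luminance, dtype=float)
    alpha = 1.0 / tau
    state = luminance[0]
    out_on, out_off = [], []
    for L in luminance:
        state += alpha * (L - state)       # fast light adaptation (leaky integrator)
        signal = L - state                 # deviation from adapted level
        out_on.append(max(signal, 0.0))    # ON pathway: luminance increments
        out_off.append(max(-signal, 0.0))  # OFF pathway: luminance decrements
    return np.array(out_on), np.array(out_off)

# A step from dark to light drives the ON channel transiently,
# and the response decays as adaptation catches up:
stim = np.concatenate([np.zeros(10), np.ones(20)])
on, off = on_off_responses(stim)
```

In the full model, the spatial pattern of such transient ON/OFF responses across a circular stimulus is what produces the rotating focus lines.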

15.
Meliza CD, Dan Y. Neuron 2006;49(2):183-189
Experience-dependent plasticity of visual cortical receptive fields (RFs) involves synaptic modifications in the underlying neural circuits, but the site and mechanism of these modifications remain to be elucidated. Using in vivo whole-cell recordings, we show that pairing visual stimulation at a given retinal location with spiking of a single neuron in developing rat visual cortex induces rapid RF modifications. The time course of the response to the visual stimulus at the paired RF location is altered, with an enhancement of the response preceding the spike time and a reduction following the spike. Such bidirectional modification is consistent with spike timing-dependent plasticity. Response modification also occurs at nearby locations, the direction and magnitude of which are correlated with the change at the paired location. In addition, changes at unpaired locations show a negative correlation with the initial strength of the response, which may facilitate rapid modification of the spatial RF profile.

16.
Close behavioural coupling of visual orientation may provide a range of adaptive benefits to social species. In order to investigate the natural properties of gaze-following between pedestrians, we displayed an attractive stimulus in a frequently trafficked corridor within which a hidden camera was placed to detect directed gaze from passers-by. The presence of visual cues towards the stimulus by nearby pedestrians increased the probability of passers-by looking as well. In contrast to cueing paradigms used for laboratory research, however, we found that individuals were more responsive to changes in the visual orientation of those walking in the same direction in front of them (i.e. viewing head direction from behind). In fact, visual attention towards the stimulus diminished when oncoming pedestrians had previously looked. Information was therefore transferred more effectively behind, rather than in front of, gaze cues. Further analyses show that neither crowding nor group interactions were driving these effects, suggesting that, within natural settings, gaze-following is strongly mediated by social interaction and facilitates acquisition of environmentally relevant information.

17.
Eye contact is a crucial social cue constituting a frequent preliminary to interaction. Thus, the perception of others' gaze may be associated with specific processes beginning with asymmetries in the detection of direct versus averted gaze. We tested this hypothesis in two behavioural experiments using realistic eye stimuli in a visual search task. We manipulated the head orientation (frontal or deviated) and the visual field (right or left) in which the target appeared at display onset. We found that direct gaze targets presented among averted gaze distractors were detected faster and better than averted gaze targets among direct gaze distractors, but only when the head was deviated. Moreover, direct gaze targets were detected very quickly and efficiently regardless of head orientation and visual field, whereas the detection of averted gaze was strongly modulated by these factors. These results suggest that gaze contact has precedence over contextual information such as head orientation and visual field.

18.
Diurnal flying animals such as birds depend primarily on vision to coordinate their flight path during goal-directed flight tasks. To extract the spatial structure of the surrounding environment, birds are thought to use retinal image motion (optical flow) that is primarily induced by motion of their head. It is unclear what gaze behaviors birds perform to support visuomotor control during rapid maneuvering flight in which they continuously switch between flight modes. To analyze this, we measured the gaze behavior of rapidly turning lovebirds in a goal-directed task: take off and fly away from a perch, turn on a dime, and fly back and land on the same perch. High-speed flight recordings revealed that rapidly turning lovebirds perform a remarkable stereotypical gaze behavior, with peak saccadic head turns of up to 2700 degrees per second, as fast as insects, enabled by fast neck muscles. In between saccades, gaze orientation is held constant. By comparing saccade and wingbeat phase, we find that these super-fast saccades are coordinated with the downstroke, when the lateral visual field is occluded by the wings. Lovebirds thus maximize visual perception by overlapping behaviors that impair vision, which helps coordinate maneuvers. Before the turn, lovebirds keep a high contrast edge in their visual midline. Similarly, before landing, the lovebirds stabilize the center of the perch in their visual midline. The perch on which the birds land swings, like a branch in the wind, and we find that the retinal size of the perch is the most parsimonious visual cue to initiate landing. Our observations show that rapidly maneuvering birds use precisely timed stereotypic gaze behaviors consisting of rapid head turns and frontal feature stabilization, which facilitates optical-flow-based flight control. Similar gaze behaviors have been reported for visually navigating humans. This finding can inspire more effective vision-based autopilots for drones.

19.
Felsen G, Mainen ZF. Neuron 2008;60(1):137-148
Deciding in which direction to move is a ubiquitous feature of animal behavior, but the neural substrates of locomotor choices are not well understood. The superior colliculus (SC) is a midbrain structure known to be important for controlling the direction of gaze, particularly when guided by visual or auditory cues, but which may play a more general role in behavior involving spatial orienting. To test this idea, we recorded and manipulated activity in the SC of freely moving rats performing an odor-guided spatial choice task. In this context, not only did a substantial majority of SC neurons encode choice direction during goal-directed locomotion, but many also predicted the upcoming choice and maintained selectivity for it after movement completion. Unilateral inactivation of SC activity profoundly altered spatial choices. These results indicate that the SC processes information necessary for spatial locomotion, suggesting a broad role for this structure in sensory-guided orienting and navigation.

20.
Blake R, Sobel KV, Gilroy LA. Neuron 2003;39(5):869-878
When the visual system is faced with conflicting or ambiguous stimulus information, visual perception fluctuates over time. We found that perceptual alternations are slowed when inducing stimuli move within the visual field, constantly engaging fresh, unadapted neural tissue. During binocular rivalry, dominance durations were longer when rival figures moved compared to when they were stationary, yielding lower alternation rates. Rate was not reduced, however, when observers tracked the moving targets, keeping the images on approximately the same retinal area. Alternations were reliably triggered when rival targets passed through a local region of the visual field preadapted to one of the rival targets. During viewing of a kinetic globe whose direction of rotation was ambiguous, observers experienced fewer alternations in perceived direction when the globe moved around the visual field or when the globe's axis of rotation changed continuously. Evidently, local neural adaptation is a key ingredient in the instability of perception.


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号