Similar Articles
20 similar articles found
1.
In dynamic environments, it is crucial to accurately consider the timing of information. For instance, during saccades the eyes rotate so fast that even small temporal errors in relating retinal stimulation by flashed stimuli to extra-retinal information about the eyes’ orientations will give rise to substantial errors in where the stimuli are judged to be. If spatial localization involves judging the eyes’ orientations at the estimated time of the flash, we should be able to manipulate the pattern of mislocalization by altering the estimated time of the flash. We reasoned that if we presented a relevant flash within a short rapid sequence of irrelevant flashes, participants’ estimates of when the relevant flash was presented might be shifted towards the centre of the sequence. In a first experiment, we presented five bars at different positions around the time of a saccade. Four of the bars were black. Either the second or the fourth bar in the sequence was red. The task was to localize the red bar. We found that when the red bar was presented second in the sequence, it was judged to be further in the direction of the saccade than when it was presented fourth in the sequence. Could this be because the red bar was processed faster when more black bars preceded it? In a second experiment, a red bar was either presented alone or followed by two black bars. When two black bars followed it, it was judged to be further in the direction of the saccade. We conclude that the spatial localization of flashed stimuli involves judging the eye orientation at the estimated time of the flash.
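The reasoning in this abstract can be made explicit with a simple localization equation (a hedged sketch of my own, not the authors' stated model): the judged position combines the retinal input with the eye orientation sampled at the estimated time of the flash, so biasing that time estimate during the rapid eye rotation biases the judged location.

```latex
% Hedged sketch of the localization scheme implied by the abstract (notation is mine).
% x_{retina} : retinal position of the flash
% E(t)       : eye orientation as a function of time (changes rapidly during the saccade)
% \hat{t}    : the observer's estimate of when the flash occurred
\hat{x}_{\mathrm{world}} \;=\; x_{\mathrm{retina}} \;+\; E\!\left(\hat{t}\,\right)
% If embedding the relevant flash in a sequence shifts \hat{t} toward the centre of
% the sequence, the same retinal stimulation is combined with a different eye
% orientation, shifting the judged location in the direction of the saccade.
```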

2.
A stimulus that is flashed around the time of a saccade tends to be mislocalized in the direction of the saccade target. Our question is whether the mislocalization is related to the position of the saccade target within the image or to the gaze position at the end of the saccade. We separated the two with a visual illusion that influences the perceived distance to the target of the saccade and thus saccade endpoint without affecting the perceived position of the saccade target within the image. We asked participants to make horizontal saccades from the left to the right end of the shaft of a Müller-Lyer figure. Around the time of the saccade, we flashed a bar at one of five possible positions and asked participants to indicate its location by touching the screen. As expected, participants made shorter saccades along the fins-in (<–>) configuration than along the fins-out (>–<) configuration of the figure. The illusion also influenced the mislocalization pattern during saccades, with flashes presented with the fins-out configuration being perceived beyond flashes presented with the fins-in configuration. The difference between the patterns of mislocalization for bars flashed during the saccade for the two configurations corresponded quantitatively with a prediction based on compression towards the saccade endpoint considering the magnitude of the effect of the illusion on saccade amplitude. We conclude that mislocalization is related to the eye position at the end of the saccade, rather than to the position of the saccade target within the image.
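The quantitative prediction mentioned here can be illustrated with a standard compression-to-endpoint formulation (a hedged sketch; the symbols and the exact functional form are my assumptions, not necessarily the authors'):

```latex
% x_{flash} : true flash position;  x_{end} : eye position at the end of the saccade
% c(t) \in [0,1] : time-dependent compression factor, peaking around saccade onset
\hat{x}(t) \;=\; x_{\mathrm{flash}} + c(t)\,\bigl(x_{\mathrm{end}} - x_{\mathrm{flash}}\bigr)
% If the two Müller-Lyer configurations change only the saccade endpoint, by an
% amount \Delta x_{end} (the illusion's effect on saccade amplitude), the predicted
% difference in mislocalization between the configurations is
\Delta\hat{x}(t) \;=\; c(t)\,\Delta x_{\mathrm{end}}
```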

3.
Previous work has demonstrated that upcoming saccades influence visual and auditory performance even for stimuli presented before the saccade is executed. These studies suggest a close relationship between saccade generation and visual/auditory attention. Furthermore, they provide support for Rizzolatti et al.'s premotor model of attention, which suggests that the same circuits involved in motor programming are also responsible for shifts in covert orienting (shifting attention without moving the eyes or changing posture). In a series of experiments, we demonstrate that saccade programming also affects tactile perception. Participants made speeded saccades to the left and right side as well as tactile discriminations of up versus down. The first experiment demonstrates that participants were reliably faster at responding to tactile stimuli near the location of upcoming saccades. In our second experiment, we had the subjects cross their hands and demonstrated that the effect occurs in visual space (rather than in early representations of touch). In our third experiment, the tactile events usually occurred on the side opposite to the upcoming eye movement. We found that the benefit at the saccade target location vanished, suggesting that this shift is not obligatory but that it may be vetoed on the basis of expectation.

4.
Interacting in the peripersonal space requires coordinated arm and eye movements to visual targets in depth. In primates, the medial posterior parietal cortex (PPC) represents a crucial node in the process of visual-to-motor signal transformations. The medial PPC area V6A is a key region engaged in the control of these processes because it jointly processes visual information, eye position, and arm movement-related signals. However, to date, there is no evidence in the medial PPC of spatial encoding in three dimensions. Here, using single neuron recordings in behaving macaques, we studied the neural signals related to binocular eye position in a task that required the monkeys to perform saccades and fixate targets at different locations in peripersonal and extrapersonal space. A significant proportion of neurons were modulated by both gaze direction and depth, i.e., by the location of the foveated target in 3D space. The population activity of these neurons displayed a strong preference for peripersonal space in a time interval around the saccade that preceded fixation, and during fixation as well. This preference for targets within reaching distance during both target capturing and fixation suggests that binocular eye position signals are implemented functionally in V6A to support its role in reaching and grasping.

5.
Observers made a saccade between two fixation markers while a probe was flashed sequentially at two locations on a side screen. The first probe was presented in the far periphery just within the observer's visual field. This target was extinguished and the observers made a large saccade away from the probe, which would have left it far outside the visual field if it had still been present. The second probe was then presented, displaced from the first in the same direction as the eye movement and by about the same distance as the saccade step. Because both eyes and probes shifted by similar amounts, there was little or no shift between the first and second probe positions on the retina. Nevertheless, subjects reported seeing motion corresponding to the spatial displacement, not the retinal displacement. When the second probe was presented, the effective location of the first probe lay outside the visual field, demonstrating that apparent motion can be seen from a location outside the visual field to a second location inside the visual field. Recent physiological results suggest that target locations are “remapped” on retinotopic representations to correct for the effects of eye movements. Our results suggest that the representations on which this remapping occurs include locations that fall beyond the limits of the retina.

6.
The neural selection and control of saccades by the frontal eye field (cited by 9: 0 self-citations, 9 by others)
Recent research has provided new insights into the neural processes that select the target for, and control the production of, a shift of gaze. As a key node in the network that subserves visual processing and saccade production, the frontal eye field (FEF) has been an effective area in which to monitor these processes. Certain neurons in the FEF signal the location of conspicuous or meaningful stimuli that may be the targets for saccades. Other neurons control whether and when the gaze shifts. The existence of distinct neural processes for visual selection and saccade production is necessary to explain the flexibility of visually guided behaviour.

7.
Learning visuomotor transformations for gaze-control and grasping (cited by 1: 0 self-citations, 1 by others)
To reach for and grasp an object, visual information about the object must be transformed into motor or postural commands for the arm and hand. In this paper, we present a robot model for visually guided reaching and grasping. The model mimics two alternative processing pathways for grasping, which are also likely to coexist in the human brain. The first pathway directly uses the retinal activation to encode the target position. In the second pathway, a saccade controller makes the eyes (cameras) focus on the target, and the gaze direction is used instead as positional input. For both pathways, an arm controller transforms information on the target’s position and orientation into an arm posture suitable for grasping. For the training of the saccade controller, we suggest a novel staged learning method which does not require a teacher that provides the necessary motor commands. The arm controller uses unsupervised learning: it is based on a density model of the sensor and the motor data. Using this density, a mapping is achieved by completing a partially given sensorimotor pattern. The controller can cope with the ambiguity of having a set of redundant arm postures for a given target. The combined model of saccade and arm controller was able to fixate and grasp an elongated object with arbitrary orientation and at an arbitrary position on a table in 94% of trials.
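The arm controller described above is a density model over joint sensor-motor data that is queried by completing a partially specified pattern. One common way to realize that idea (a hedged sketch, not necessarily the authors' implementation; all names and dimensions below are illustrative) is to fit a Gaussian mixture over concatenated (sensor, motor) vectors and then condition on the sensor part:

```python
# Hedged sketch: sensorimotor pattern completion with a Gaussian mixture density model.
import numpy as np
from scipy.stats import multivariate_normal
from sklearn.mixture import GaussianMixture

def fit_density(sensor, motor, n_components=15, seed=0):
    """Fit a joint density p(sensor, motor) from paired training data."""
    data = np.hstack([sensor, motor])
    return GaussianMixture(n_components=n_components, covariance_type="full",
                           random_state=seed).fit(data)

def complete_pattern(gmm, s, sensor_dim):
    """Given a sensor vector s, return the expected motor completion E[motor | sensor = s]."""
    means, covs, weights = gmm.means_, gmm.covariances_, gmm.weights_
    cond_means, resp = [], []
    for k in range(gmm.n_components):
        mu_s, mu_m = means[k, :sensor_dim], means[k, sensor_dim:]
        S_ss = covs[k][:sensor_dim, :sensor_dim]
        S_ms = covs[k][sensor_dim:, :sensor_dim]
        # Conditional mean of the motor block given the observed sensor block.
        cond_means.append(mu_m + S_ms @ np.linalg.solve(S_ss, s - mu_s))
        # Responsibility of component k for the observed sensor part.
        resp.append(weights[k] * multivariate_normal.pdf(s, mu_s, S_ss))
    resp = np.array(resp) / np.sum(resp)
    return np.sum(resp[:, None] * np.array(cond_means), axis=0)

# Usage with synthetic data: 3-D target description -> 4-D arm posture (stand-in for real data).
rng = np.random.default_rng(0)
sensor = rng.uniform(-1, 1, size=(2000, 3))
motor = np.tanh(sensor @ rng.normal(size=(3, 4)))
gmm = fit_density(sensor, motor)
print(complete_pattern(gmm, sensor[0], sensor_dim=3))   # close to motor[0]
```

A conditional mean is the simplest completion; handling the redundant-posture ambiguity mentioned in the abstract would require, for example, selecting the most probable mixture mode rather than averaging across modes.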

8.
Every day we shift our gaze about 150,000 times, mostly without noticing it. The directions of these gaze shifts are not random but are guided by sensory information and internal factors. After each movement the eyes hold still for a brief moment so that visual information at the center of our gaze can be processed in detail. This means that visual information at the saccade target location is sufficient to accurately guide the gaze shift, yet it is not sufficiently processed to be fully perceived. In this paper I will discuss the possible role of activity in the primary visual cortex (V1), in particular figure-ground activity, in oculomotor behavior. Figure-ground activity occurs during the late response period of V1 neurons and correlates with perception. The strength of figure-ground responses predicts the direction and moment of saccadic eye movements. The superior colliculus, a gaze control center that integrates visual and motor signals, receives direct anatomical connections from V1. These projections may convey the perceptual information that is required for appropriate gaze shifts. In conclusion, figure-ground activity in V1 may act as an intermediate component linking visual and motor signals.

9.
We investigated coordinated movements between the eyes and head (“eye-head coordination”) in relation to vision for action. Several studies have measured eye and head movements during a single gaze shift, focusing on the mechanisms of motor control during eye-head coordination. However, in everyday life, gaze shifts occur sequentially and are accompanied by movements of the head and body. Under such conditions, visual cognitive processing influences eye movements and might also influence eye-head coordination, because sequential gaze shifts include cycles of visual processing (fixation) and data acquisition (gaze shifts). In the present study, we examined how the eyes and head move in coordination during visual search in a large visual field. Subjects moved their eyes, head, and body without restriction inside a 360° visual display system. We found patterns of eye-head coordination that differed from those observed in single gaze-shift studies. First, we frequently observed multiple saccades during one continuous head movement, and the contribution of head movement to gaze shifts increased as the number of saccades increased. This relationship between head movements and sequential gaze shifts suggests eye-head coordination over several saccade-fixation sequences; this could be related to cognitive processing because saccade-fixation cycles are the result of visual cognitive processing. Second, the distribution bias of eye position during gaze fixation was highly correlated with head orientation. The distribution peak of eye position was biased in the same direction as head orientation. This influence of head orientation suggests that eye-head coordination is involved in gaze fixation, when the visual system processes retinal information. This further supports the role of eye-head coordination in visual cognitive processing.

10.
The goal of this study was to understand how neural networks solve the 3-D aspects of updating in the double-saccade task, where subjects make sequential saccades to the remembered locations of two targets. We trained a 3-layer, feed-forward neural network, using back-propagation, to calculate the 3-D motor error of the second saccade. Network inputs were a 2-D topographic map of the direction of the second target in retinal coordinates, and 3-D vector representations of initial eye orientation and motor error of the first saccade in head-fixed coordinates. The network learned to account for all 3-D aspects of updating. Hidden-layer units (HLUs) showed retinal-coordinate visual receptive fields that were remapped across the first saccade. Two classes of HLUs emerged from the training: one class primarily implementing the linear aspects of updating using vector subtraction, the second class implementing the eye-orientation-dependent, non-linear aspects of updating. These mechanisms interacted at the unit level through gain-field-like input summations, and through the parallel "tweaking" of optimally-tuned HLU contributions to the output that shifted the overall population output vector to the correct second-saccade motor error. These observations may provide clues for the biological implementation of updating.
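A minimal back-propagation sketch of the linear (vector-subtraction) component described above is given below. It is a toy 2-D version under simplifying assumptions of my own: the paper's network additionally uses a topographic input map and 3-D eye-orientation signals, which this sketch omits.

```python
# Hedged sketch: a tiny 3-layer feed-forward network trained with back-propagation on
# a simplified, linear version of the updating problem:
#   updated motor error of saccade 2 = retinal position of target 2 - motor error of saccade 1.
import numpy as np

rng = np.random.default_rng(1)
n_in, n_hidden, n_out, lr = 4, 16, 2, 0.05

W1 = rng.normal(0, 0.5, (n_in, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(0, 0.5, (n_hidden, n_out)); b2 = np.zeros(n_out)

def forward(x):
    h = np.tanh(x @ W1 + b1)       # hidden-layer units (HLUs)
    return h, h @ W2 + b2          # linear output: updated motor error

for step in range(20000):
    target2 = rng.uniform(-1, 1, (32, 2))    # retinal position of the second target
    sacc1 = rng.uniform(-1, 1, (32, 2))      # motor error (vector) of the first saccade
    x = np.hstack([target2, sacc1])
    y = target2 - sacc1                      # desired updated motor error (vector subtraction)
    h, y_hat = forward(x)
    err = y_hat - y
    # Back-propagation of the squared-error loss.
    dW2 = h.T @ err / len(x); db2 = err.mean(0)
    dh = (err @ W2.T) * (1 - h**2)
    dW1 = x.T @ dh / len(x); db1 = dh.mean(0)
    W2 -= lr * dW2; b2 -= lr * db2; W1 -= lr * dW1; b1 -= lr * db1

_, pred = forward(np.array([[0.4, -0.2, 0.1, 0.3]]))
print(pred)   # should be close to [0.3, -0.5]
```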

11.
A functional model of target selection in the saccadic system is presented, incorporating elements of visual processing, motor planning, and motor control. We address the integration of visual information with pre-information, which is provided by manipulating the probability that a target appears at a certain location. This integration is achieved within a dynamic representation of planned eye movement which is modeled through distributions of activation on a topographic field. Visual input evokes activation, which is also constrained by lateral interaction within the field and by preshaping input representing pre-information. The model describes target selection observable in paradigms in which visual goals are presented at more than one location. Specifically, we model the transition from averaging, where endpoints of first saccades fall between two visual target locations, to decision making, where endpoints of first saccades fall accurately onto one of two simultaneously presented visual targets. We make predictions about how metrical biases of first saccades are induced by pre-information about target locations acquired by learning. When coupled to a motor control stage, activation dynamics on the planning level contribute to stabilizing gaze under fixation conditions. The neurophysiological relevance of our functional model is discussed.
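The core ingredients of such a model (topographic activation field, lateral interaction, visual input plus a preshape) can be sketched as a one-dimensional Amari-style field simulation. This is a generic illustration with parameter values that are my own assumptions, not the paper's model or its parameters; with suitable parameters such a field averages nearby targets and selects between distant ones.

```python
# Hedged sketch: 1-D dynamic-field model of saccade-target selection.
# Visual input and a "preshape" (pre-information) drive a field with local
# excitation and broad inhibition; the most active location is read out as
# the planned saccade endpoint. All parameter values are illustrative.
import numpy as np

x = np.linspace(-30, 30, 241)                 # retinotopic axis (deg)
dx = x[1] - x[0]
tau, h, dt = 20.0, -3.0, 1.0                  # time constant (ms), resting level, step (ms)

def gauss(center, width=3.0, amp=1.0):
    return amp * np.exp(-(x - center) ** 2 / (2 * width ** 2))

def kernel(d, w_exc=1.2, sig=4.0, w_inh=0.4): # local excitation, global inhibition
    return w_exc * np.exp(-d ** 2 / (2 * sig ** 2)) - w_inh

W = kernel(x[:, None] - x[None, :]) * dx
f = lambda u: 1.0 / (1.0 + np.exp(-u))        # sigmoidal output nonlinearity

def simulate(targets, preshape_amp=0.0, steps=300):
    u = np.full_like(x, h)
    stim = sum(gauss(c, amp=6.0) for c in targets)
    pre = gauss(targets[0], width=5.0, amp=preshape_amp)  # pre-information about one location
    for _ in range(steps):
        u += dt / tau * (-u + h + stim + pre + W @ f(u))
    return x[np.argmax(u)]                    # location of maximal planned activation

print(simulate([-4, 4]))                      # two nearby targets
print(simulate([-15, 15]))                    # two distant targets
print(simulate([-15, 15], preshape_amp=3.0))  # pre-information biasing one location
```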

12.
The modular visual system of jumping spiders (Salticidae) divides characteristics such as high spatial acuity and wide-field motion detection between different pairs of eyes. A large pair of telescope-like anterior-median (AM) eyes is supported by 2-3 pairs of 'secondary' eyes, which provide almost 360 degrees of visual coverage at lower resolution. The AM retinae are moveable and can be pointed at stimuli within their range of motion, but salticids have to turn to bring targets into this frontal zone in the first place. We describe how the front-facing pair of secondary eyes (anterior lateral, AL) mediates this through a series of whole-body 'tracking saccades' in response to computer-generated stimuli. We investigated the 'response area' of the AL eyes and show a clear correspondence between the physical margins of the retina and stimulus position at the onset of the first saccade. Saccade frequency is maximal at the margin of AL and AM fields of view. Furthermore, spiders markedly increase the velocity with which higher magnitude tracking saccades are carried out. This has the effect that the time during which vision is impaired due to motion blur is kept at an almost constant low level, even during saccades of large magnitude.

13.

Background

The superior colliculus (SC) has been shown to play a crucial role in the initiation and coordination of eye and head movements. Knowledge about the function of this structure is mainly based on single-unit recordings in animals, with relatively few neuroimaging studies investigating eye movement-related brain activity in humans.

Methodology/Principal Findings

The present study employed high-field (7 Tesla) functional magnetic resonance imaging (fMRI) to investigate SC responses during endogenously cued saccades in humans. In response to centrally presented instructional cues, subjects either performed saccades away from (centrifugal) or towards (centripetal) the center of straight gaze or maintained fixation at the center position. Compared to central fixation, the execution of saccades elicited hemodynamic activity within a network of cortical and subcortical areas that included the SC, lateral geniculate nucleus (LGN), occipital cortex, striatum, and the pulvinar.

Conclusions/Significance

Activity in the SC was enhanced contralateral to the direction of the saccade (i.e., greater activity in the right as compared to left SC during leftward saccades and vice versa) during both centrifugal and centripetal saccades, thereby demonstrating that the contralateral predominance for saccade execution that has been shown to exist in animals is also present in the human SC. In addition, centrifugal saccades elicited greater activity in the SC than did centripetal saccades, while also being accompanied by an enhanced deactivation within the prefrontal default-mode network. This pattern of brain activity might reflect the reduced processing effort required to move the eyes toward as compared to away from the center of straight gaze, a position that might serve as a spatial baseline in which the retinotopic and craniotopic reference frames are aligned.

14.
The goal of this study was to explore how a neural network could solve the updating task associated with the double-saccade paradigm, where two targets are flashed in succession and the subject must make saccades to the remembered locations of both targets. Because of the eye rotation of the saccade to the first target, the remembered retinal position of the second target must be updated if an accurate saccade to that target is to be made. We trained a three-layer, feed-forward neural network to solve this updating task using back-propagation. The network's inputs were the initial retinal position of the second target, represented by a hill of activation in a 2D topographic array of units, as well as the initial eye orientation and the motor error of the saccade to the first target, each represented as 3D vectors in brainstem coordinates. The output of the network was the updated retinal position of the second target, also represented in a 2D topographic array of units. The network was trained to perform this updating using the full 3D geometry of eye rotations, and was able to produce the updated second-target position to within 1 degree RMS accuracy for a set of test points that included saccades of up to 70 degrees. Emergent properties in the network's hidden layer included sigmoidal receptive fields whose orientations formed distinct clusters, and predictive remapping similar to that seen in brain areas associated with saccade generation. Networks with larger numbers of hidden-layer units developed two distinct types of units with different transformation properties: units that preferentially performed the linear remapping of vector subtraction, and units that performed the nonlinear elements of remapping that arise from initial eye orientation.
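The geometric ground truth that such a network must approximate can be written down directly: after the first saccade rotates the eye, the remembered target direction has to be re-expressed in the new eye-fixed frame. A hedged sketch follows; the axis convention and example angles are my own illustrative assumptions, not the paper's training set.

```python
# Hedged sketch of the 3-D updating geometry: re-express the remembered direction of the
# second target in the eye-fixed frame that holds after the first saccade.
import numpy as np
from scipy.spatial.transform import Rotation as R

def updated_retinal_direction(d_retina_initial, eye_initial, eye_after_first_saccade):
    """d_retina_initial: unit vector to target 2 in the initial eye-fixed frame.
    eye_*: rotations mapping eye-fixed coordinates to head-fixed coordinates."""
    d_head = eye_initial.apply(d_retina_initial)        # target direction in head coordinates
    return eye_after_first_saccade.inv().apply(d_head)  # same direction, new eye-fixed frame

# Example: eye initially straight ahead (forward = +x, up = +z),
# first saccade is a 30-degree horizontal rotation about the z axis.
eye0 = R.identity()
eye1 = R.from_euler("z", 30, degrees=True)
target2 = np.array([1.0, 0.2, 0.0]) / np.linalg.norm([1.0, 0.2, 0.0])
print(updated_retinal_direction(target2, eye0, eye1))
# For rotations about a single axis this approximately reduces to vector subtraction of
# visual angles; for oblique or torsional eye orientations it does not, which is the
# nonlinear component the abstract attributes to a subset of hidden units.
```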

15.
Blinks and saccades cause transient interruptions of visual input. To investigate how such effects influence our perceptual state, we analyzed the time courses of blink and saccade rates in relation to perceptual switching in the Necker cube. Both time courses showed peaks at different moments along the switching process. A peak in blink rate appeared 1,000 ms prior to the switching responses. Blinks occurring around this peak were associated with subsequent switching to the preferred interpretation of the Necker cube. Saccade rates showed a peak 150 ms prior to the switching response. The direction of saccades around this peak was predictive of the perceived orientation of the Necker cube afterwards. Peak blinks were followed, and peak saccades were preceded, by transient parietal theta-band activity indicating a change of the perceptual interpretation. Precisely timed blinks, therefore, can initiate perceptual switching, and precisely timed saccades can facilitate an ongoing change of interpretation.

16.
On average our eyes make 3–5 saccadic movements per second when we read, although their neural mechanism is still unclear. It is generally thought that saccades help redirect the retinal fovea to specific characters and words but that actual discrimination of information only occurs during periods of fixation. Indeed, it has been proposed that there is active and selective suppression of information processing during saccades to avoid the experience of blurring due to the high-speed movement. Here, using a paradigm where a string of either lexical (Chinese) or non-lexical (alphabetic) characters is triggered by saccadic eye movements, we show that subjects can discriminate both while making saccadic eye movements. Moreover, discrimination accuracy is significantly better for characters scanned during the saccadic movement to a fixation point than for those beyond it that were not scanned. Our results show that character information can be processed during a saccade; therefore, saccades during reading not only function to redirect the fovea to fixate the next character or word but also allow pre-processing of information from the characters adjacent to the fixation locations to help target the next most salient one. In this way saccades can not only promote continuity in reading words but also actively facilitate reading comprehension.

17.
The latencies of saccadic eye movements in response to peripheral visual stimuli were measured in 8 right-handed healthy subjects using Posner's "cost-benefit" paradigm. In 6 subjects, the saccade latency in response to a visual target presented at the expected location in the valid condition was shorter than in the neutral condition ("benefit"). An increase in saccade latency in response to a visual target presented at an unexpected location in the valid condition, relative to the neutral condition, occurred in only 4 subjects ("cost"). In the valid condition, left-directed saccades to expected targets in the left hemifield had shorter latencies, and left-directed saccades to unexpected left targets had longer latencies, than the analogous right-directed saccades. This phenomenon can be explained by the dominance of the right hemisphere in spatial orientation and in the "disengagement" of attention.

18.
The neural mechanisms underlying the craniotopic updating of visual space across saccadic eye movements are poorly understood. Previous single-unit recording studies in primates and clinical studies in brain-damaged patients have shown that the posterior parietal cortex (PPC) has a key role in this process. In the present study, we used single-pulse transcranial magnetic stimulation (TMS) to disrupt the processing within the PPC during a task that requires craniotopic updating: double saccades. In this task, two targets are presented in quick succession and the subject is required to make a saccade to each location as accurately as possible. We show here that TMS delivered to the PPC just prior to the second saccade effectively disrupts the craniotopic coding normally observed in this task. This causes subjects to revert to saccades more consistent with a representation of the targets based on their positions relative to one another. By contrast, stimulation at earlier times between the two saccades did not disrupt performance. These results suggest that extraretinal information generated during the first perisaccadic period is not put into functional use until just prior to the second saccade.

19.
Current eye-tracking research suggests that our eyes make anticipatory movements to a location that is relevant for a forthcoming task. Moreover, there is evidence to suggest that with more practice anticipatory gaze control can improve. However, these findings are largely limited to situations where participants are actively engaged in a task. We ask: does experience modulate anticipatory gaze control while passively observing a visual scene? To tackle this we tested people with varying degrees of experience of tennis, in order to uncover potential associations between experience and eye movement behaviour while they watched tennis videos. The number, size, and accuracy of saccades (rapid eye movements) made around ‘events’ critical to the scene context (i.e., hits and bounces) were analysed. Overall, we found that experience improved anticipatory eye movements while watching tennis clips. In general, those with extensive experience showed greater accuracy of saccades to upcoming event locations; this was particularly prevalent for events in the scene that carried high uncertainty (i.e., ball bounces). The results indicate that, even when passively observing, our gaze control system utilizes prior relevant knowledge in order to anticipate upcoming uncertain event locations.

20.
Single-unit recordings suggest that the midbrain superior colliculus (SC) acts as an optimal controller for saccadic gaze shifts. The SC is proposed to be the site within the visuomotor system where the nonlinear spatial-to-temporal transformation is carried out: the population encodes the intended saccade vector by its location in the motor map (spatial), and its trajectory and velocity by the distribution of firing rates (temporal). The neurons’ burst profiles vary systematically with their anatomical positions and intended saccade vectors, to account for the nonlinear main-sequence kinematics of saccades. Yet, the underlying collicular mechanisms that could result in these firing patterns are inaccessible to current neurobiological techniques. Here, we propose a simple spiking neural network model that reproduces the spike trains of saccade-related cells in the intermediate and deep SC layers during saccades. The model assumes that SC neurons have distinct biophysical properties for spike generation that depend on their anatomical position in combination with a center–surround lateral connectivity. Both factors are needed to account for the observed firing patterns. Our model offers a basis for neuronal algorithms for spatiotemporal transformations and bio-inspired optimal controllers.
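A minimal leaky integrate-and-fire sketch of the kind of center–surround motor-map dynamics this abstract describes is shown below. It is an illustration only: all parameters, the 1-D map, and the input profile are my assumptions, not the published model.

```python
# Hedged sketch: leaky integrate-and-fire neurons on a 1-D motor map with
# center-surround (local excitation / broader inhibition) lateral connectivity.
# A localized input drives a population burst whose spatial profile is shaped
# by the lateral weights.
import numpy as np

rng = np.random.default_rng(0)
N, dt, T = 100, 0.1, 400                     # neurons, time step (ms), number of steps
tau_m, v_th, v_reset = 10.0, 1.0, 0.0        # membrane time constant (ms), threshold, reset

pos = np.arange(N)
d = np.abs(pos[:, None] - pos[None, :])
W = 0.08 * np.exp(-d**2 / (2 * 3.0**2)) - 0.02 * np.exp(-d**2 / (2 * 12.0**2))
np.fill_diagonal(W, 0.0)

v = np.zeros(N)
spikes = np.zeros((T, N), dtype=bool)
drive = 1.3 * np.exp(-(pos - 40)**2 / (2 * 5.0**2))   # input centered on the desired vector

for t in range(T):
    syn = W @ spikes[t - 1].astype(float) if t > 0 else np.zeros(N)  # lateral input
    noise = 0.05 * rng.standard_normal(N)
    v += dt / tau_m * (-v + drive + noise) + syn
    fired = v >= v_th
    spikes[t] = fired
    v[fired] = v_reset

rates = spikes.sum(0) / (T * dt / 1000.0)    # firing rate (spikes/s) per map position
print("peak of population burst at map position:", int(np.argmax(rates)))
```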
