Similar Articles
20 similar articles retrieved.
1.
We investigated coordinated movements between the eyes and head ("eye-head coordination") in relation to vision for action. Several studies have measured eye and head movements during a single gaze shift, focusing on the mechanisms of motor control during eye-head coordination. However, in everyday life, gaze shifts occur sequentially and are accompanied by movements of the head and body. Under such conditions, visual cognitive processing influences eye movements and might also influence eye-head coordination, because sequential gaze shifts include cycles of visual processing (fixation) and data acquisition (gaze shifts). In the present study, we examined how the eyes and head move in coordination during visual search in a large visual field. Subjects moved their eyes, head, and body without restriction inside a 360° visual display system. We found patterns of eye-head coordination that differed from those observed in single gaze-shift studies. First, we frequently observed multiple saccades during one continuous head movement, and the contribution of head movement to gaze shifts increased as the number of saccades increased. This relationship between head movements and sequential gaze shifts suggests eye-head coordination over several saccade-fixation sequences; this could be related to cognitive processing because saccade-fixation cycles are the result of visual cognitive processing. Second, the distribution bias of eye position during gaze fixation was highly correlated with head orientation. The distribution peak of eye position was biased in the same direction as head orientation. This influence of head orientation suggests that eye-head coordination is involved in gaze fixation, when the visual system processes retinal information. This further supports the role of eye-head coordination in visual cognitive processing.

2.
The supplementary eye field (SEF) is a region within medial frontal cortex that integrates complex visuospatial information and controls eye-head gaze shifts. Here, we test whether the SEF encodes desired gaze directions in a simple retinal (eye-centered) frame, as does the superior colliculus, or in some other, more complex frame. We electrically stimulated 55 SEF sites in two head-unrestrained monkeys to evoke 3D eye-head gaze shifts and then mathematically rotated these trajectories into various reference frames. Each stimulation site specified a specific spatial goal when plotted in its intrinsic frame. These intrinsic frames varied site by site, in a continuum from eye-, to head-, to space/body-centered coding schemes. This variety of coding schemes provides the SEF with a unique potential for implementing arbitrary reference frame transformations.
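As a concrete illustration of the frame-rotation analysis described above, here is a minimal Python sketch under stated assumptions: evoked gaze endpoints are rotated into a candidate frame, and the intrinsic frame of a site is taken to be the one in which the endpoints converge on a single spatial goal (minimal scatter). The function names, the scatter criterion, and the synthetic eye orientations are illustrative assumptions, not the authors' analysis code.

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def gaze_in_frame(gaze_dirs_space, frame_rots):
    # Rotate space-fixed gaze directions into a candidate intrinsic frame
    # (eye-, head-, or body-centered), trial by trial.
    return np.stack([rot.inv().apply(g) for g, rot in zip(gaze_dirs_space, frame_rots)])

def endpoint_scatter(endpoints):
    # Spread of evoked endpoints: the intrinsic frame is the one in which
    # all endpoints converge on a single spatial goal (minimal scatter).
    return float(np.mean(np.var(endpoints, axis=0)))

# Synthetic example: 20 stimulation trials with random initial eye-in-head
# orientations and a goal fixed in eye coordinates (an eye-centered site).
rng = np.random.default_rng(0)
eye_rots = [R.from_euler("zyx", rng.uniform(-20, 20, 3), degrees=True)
            for _ in range(20)]
goal_in_eye = np.array([0.94, 0.26, 0.22])                        # gaze direction
gaze_space = np.stack([r.apply(goal_in_eye) for r in eye_rots])   # measured in space

print("space-frame scatter:", endpoint_scatter(gaze_space))
print("eye-frame scatter:  ", endpoint_scatter(gaze_in_frame(gaze_space, eye_rots)))
```

For an eye-centered site like this synthetic one, scatter collapses to near zero only in the eye frame, which is the logic by which each site's intrinsic frame can be identified.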

3.
Choi WY, Guitton D. Neuron. 2006;50(3):491-505.
A prominent hypothesis in motor control is that endpoint errors are minimized because motor commands are updated in real time via internal feedback loops. We investigated in monkeys whether orienting saccadic gaze shifts made in the dark with coordinated eye-head movements are controlled by feedback. We recorded from superior colliculus fixation neurons (SCFNs) that fired tonically during fixation and were silent during gaze shifts. When we briefly […]

4.
Heading estimation involves both inertial and visual cues. Inertial motion is sensed by the labyrinth, somatic sensation by the body, and optic flow by the retina. Because the eye and head are mobile, these stimuli are sensed relative to different reference frames, and it remains unclear whether perception occurs in a common reference frame. Recent neurophysiologic evidence has suggested that the reference frames remain separate even at higher levels of processing, but has not addressed the resulting perception. Seven human subjects experienced a 2-s, 16-cm/s translation and/or a visual stimulus corresponding with this translation. For each condition, 72 stimuli (360° in 5° increments) were delivered in random order. After each stimulus the subject identified the perceived heading using a mechanical dial. Some trial blocks included interleaved conditions in which the influence of ±28° of gaze and/or head position was examined. The observations were fit using a two-degree-of-freedom population vector decoder (PVD) model that considered the relative sensitivity to lateral motion and the coordinate-system offset. For visual stimuli, gaze shifts caused shifts in perceived heading estimates in the direction opposite the gaze shift in all subjects. These perceptual shifts averaged 13 ± 2° for eye-only gaze shifts and 17 ± 2° for eye-head gaze shifts. This finding indicates that visual headings are biased towards retinal coordinates. Similar gaze and head direction shifts prior to inertial headings had no significant influence on heading direction. Thus inertial headings are perceived in body-centered coordinates. Combined visual and inertial stimuli yielded intermediate results.
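The two-degree-of-freedom PVD fit described above can be sketched as follows; this is a minimal illustration assuming the two parameters are a lateral-motion gain and a coordinate-frame offset, with simulated dial responses standing in for data. The functional form and parameter values are assumptions, not the study's actual model.

```python
import numpy as np
from scipy.optimize import least_squares

def pvd_model(theta_deg, lateral_gain, frame_offset_deg):
    # Two-degree-of-freedom decoder sketch: an over-weighting of lateral
    # motion (lateral_gain) plus a rotation of the coordinate frame
    # (frame_offset_deg) map true heading onto perceived heading.
    th = np.radians(theta_deg - frame_offset_deg)
    return np.degrees(np.arctan2(lateral_gain * np.sin(th), np.cos(th))) + frame_offset_deg

def wrapped_residuals(params, theta, reported):
    err = pvd_model(theta, *params) - reported
    return (err + 180.0) % 360.0 - 180.0      # wrap residuals to [-180, 180)

# Hypothetical dial responses: 72 headings (360 deg in 5-deg increments).
true_heading = np.arange(0.0, 360.0, 5.0)
rng = np.random.default_rng(1)
reported = pvd_model(true_heading, 1.3, 15.0) + rng.normal(0.0, 3.0, true_heading.size)

fit = least_squares(wrapped_residuals, x0=(1.0, 0.0), args=(true_heading, reported))
print(f"fitted lateral gain {fit.x[0]:.2f}, frame offset {fit.x[1]:.1f} deg")
```

Wrapping the residuals keeps the fit well behaved at the circular discontinuity, which matters when headings span the full 360°.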

5.
Horizontal displacements of gaze in cats with the head unrestrained were studied using the magnetic search coil method. Three types of eye-head coordination were found when cats oriented gaze towards visual targets. Maximal velocities of gaze, head, and eye movements in the orbits depend linearly on the amplitudes of their displacements in the range of up to 20 degrees. Gaze velocity peaked at about 0.3 of the total movement duration. The data support the idea of saccadic-vestibular summation during coordinated eye-head movements in cats.

6.
Limb movement is smooth, and corrections of movement trajectory and amplitude are barely noticeable midflight. This suggests that skeletomuscular motor commands are smooth in transition, such that the rate of change of acceleration (or jerk) is minimized. Here we applied the methodology of minimum-jerk submovement decomposition to a member of the skeletomuscular family, the head movement. We examined the submovement composition of three types of horizontal head movements generated by nonhuman primates: head-alone tracking, head-gaze pursuit, and eye-head combined gaze shifts. The first two types of head movements tracked a moving target, whereas the last type oriented the head with rapid gaze shifts toward a target fixed in space. During head tracking, the head movement was composed of a series of episodes, each consisting of a distinct, bell-shaped velocity profile (submovement) that rarely overlapped with the others. There was no specific magnitude order in the peak velocities of these submovements. In contrast, during eye-head combined gaze shifts, the head movement was often composed of overlapping submovements, in which the peak velocity of the primary submovement was always higher than that of the subsequent submovement, consistent with the two-component strategy observed in goal-directed limb movements. These results extend previous submovement composition studies from the limb to head movements, suggesting that submovement composition provides a biologically plausible approach to characterizing head motor recruitment that can vary depending on task demand.
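The decomposition rests on the bell-shaped minimum-jerk speed profile v(tau) = (30A/D) * tau^2 * (1 - tau)^2, with amplitude A and duration D. Below is a minimal Python sketch that fits a head velocity trace as a sum of two possibly overlapping minimum-jerk submovements; the least-squares setup, bounds, and synthetic trace are illustrative assumptions, not the paper's fitting procedure.

```python
import numpy as np
from scipy.optimize import least_squares

def minjerk_velocity(t, t0, dur, amp):
    # Bell-shaped minimum-jerk speed profile: v = (30*amp/dur) * tau^2 * (1-tau)^2,
    # where tau is normalized time within [t0, t0 + dur].
    tau = np.clip((t - t0) / dur, 0.0, 1.0)
    return (30.0 * amp / dur) * tau**2 * (1.0 - tau)**2

def two_submovement_fit(t, v):
    # Fit the velocity trace as a sum of two (possibly overlapping)
    # minimum-jerk submovements: params = (t0, dur, amp) per submovement.
    def resid(p):
        return minjerk_velocity(t, *p[:3]) + minjerk_velocity(t, *p[3:]) - v
    p0 = [0.1, 0.3, 20.0, 0.3, 0.3, 10.0]
    bounds = ([0.0, 0.05, 0.0] * 2, [1.0, 1.0, 90.0] * 2)
    return least_squares(resid, p0, bounds=bounds).x

# Synthetic gaze-shift-like head movement: a large primary submovement
# overlapped by a smaller subsequent one.
t = np.linspace(0.0, 1.0, 200)
v = minjerk_velocity(t, 0.05, 0.35, 30.0) + minjerk_velocity(t, 0.25, 0.30, 8.0)
print(np.round(two_submovement_fit(t, v), 2))   # recovers t0, dur, amp of each
```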

7.
The double magnetic induction (DMI) method has successfully been used to record head-unrestrained gaze shifts in human subjects (Bremen et al., J Neurosci Methods 160:75-84, 2007a; J Neurophysiol 98:3759-3769, 2007b). This method employs a small golden ring placed on the eye that, when positioned within oscillating magnetic fields, induces orientation-dependent voltages in a pickup coil in front of the eye. Here we develop and test a streamlined calibration routine for use with experimental animals, in particular monkeys. The calibration routine requires only that the animal accurately follow visual targets presented at random locations in the visual field. Animals can readily learn this task. In addition, we use the fact that the pickup coil can be fixed rigidly and reproducibly on implants on the animal's skull; therefore, accumulation of calibration data leads to increasing accuracy. As a first step, we simulated gaze shifts and the resulting DMI signals. Our simulations showed that the complex DMI signals can be effectively calibrated with the use of random target sequences, which elicit substantial decoupling of eye and head orientations in a natural way. Subsequently, we tested our paradigm on three macaque monkeys. Our results show that the data for a successful calibration can be collected in a single recording session, in which the monkey makes about 1,500-2,000 goal-directed saccades. We obtained a resolution of 30 arc minutes (measurement range [-60,+60]°). This resolution is comparable to the fixation resolution of the monkey's oculomotor system and to that of the standard scleral search-coil method.

8.
The relationship of retrobulbar hematomas to vision in cynomolgus monkeys
An experimental model has been developed to measure the effect of retrobulbar hematomas on functional vision in cynomolgus monkeys. In this model, functional vision was quantitated using flash-evoked visual potentials in five monkeys following creation of retrobulbar hematomas. In one monkey used as a control, functional vision remained impaired for 180 minutes following induction of retinal ischemia by increased intraorbital pressure. In two monkeys in which increased intraorbital pressure was relieved by anterior chamber paracentesis following 15 minutes of retinal ischemia, flash-evoked visual potentials promptly returned to baseline levels. In two additional monkeys in which increased intraorbital pressure was relieved following 30 minutes of retinal ischemia, flash-evoked visual potentials improved but never returned to baseline levels. This study demonstrates the usefulness of flash-evoked visual potentials in measuring functional vision in cynomolgus monkeys. This experimental model should prove useful in evaluating the effects of increased intraorbital pressure on functional vision and the effect of intervention on impaired vision due to retrobulbar hematomas. Further studies with larger numbers of animals are needed to extend these preliminary findings and document the longer-term effects of retinal ischemia secondary to retrobulbar hematomas.

9.
Diurnal flying animals such as birds depend primarily on vision to coordinate their flight path during goal-directed flight tasks. To extract the spatial structure of the surrounding environment, birds are thought to use retinal image motion (optical flow) that is primarily induced by motion of their head. It is unclear what gaze behaviors birds perform to support visuomotor control during rapid maneuvering flight in which they continuously switch between flight modes. To analyze this, we measured the gaze behavior of rapidly turning lovebirds in a goal-directed task: take off and fly away from a perch, turn on a dime, and fly back and land on the same perch. High-speed flight recordings revealed that rapidly turning lovebirds perform a remarkable, stereotypical gaze behavior with peak saccadic head turns up to 2700 degrees per second, as fast as insects, enabled by fast neck muscles. In between saccades, gaze orientation is held constant. By comparing saccade and wingbeat phase, we find that these super-fast saccades are coordinated with the downstroke, when the lateral visual field is occluded by the wings. Lovebirds thus maximize visual perception by overlapping the behaviors that impair vision, which helps coordinate maneuvers. Before the turn, lovebirds keep a high-contrast edge in their visual midline. Similarly, before landing, the lovebirds stabilize the center of the perch in their visual midline. The perch on which the birds land swings, like a branch in the wind, and we find that the retinal size of the perch is the most parsimonious visual cue to initiate landing. Our observations show that rapidly maneuvering birds use precisely timed, stereotypic gaze behaviors consisting of rapid head turns and frontal feature stabilization, which facilitates optical-flow-based flight control. Similar gaze behaviors have been reported for visually navigating humans. This finding can inspire more effective vision-based autopilots for drones.

10.
The accuracy of pointing movements performed under different head positions to remembered target locations in 3-D space was studied in healthy persons. The subjects fixated a visual target, then closed their eyes and, after 1.0 s, performed the targeted movement with their right arm. The target (a point light source) was presented in random order by a programmable robot arm at one of five spatial locations. The accuracy of pointing movements was examined in a spherical coordinate system centered on the shoulder of the responding arm. The pointing movements were most accurate under natural eye-head coordination. With the head fixed in the straight-ahead position, both the 3-D absolute error and its standard deviation increased significantly. At the same time, individual components of spatial error (directional and radial) did not change significantly. With the head turned to the rightmost or leftmost position, pointing accuracy was disturbed to a larger extent than under the head-fixed condition. The main contributor to the 3-D absolute error was the change in azimuth error. The latter depended on the direction of the head turn: the rightmost turn either increased leftward or decreased rightward shift of the target-directed movements, and conversely, the leftmost turn increased rightward shift or decreased leftward shift. It is suggested that the increased inaccuracy of pointing under the head-fixed condition reflected impairment of the eye-head coordination underlying gaze orientation, while the increased inaccuracy under the head-turned condition may be explained by changes in the internal representation of the head and target position in space. Neirofiziologiya/Neurophysiology, Vol. 26, No. 2, pp. 122–131, March–April, 1994.

11.
Numerous studies have addressed the issue of where people look when they perform hand movements. Yet very little is known about how visuomotor performance is affected by fixation location. Previous studies investigating the accuracy of actions performed in the visual periphery have revealed inconsistent results. While movements performed under full visual feedback (closed-loop) seem to remain surprisingly accurate, open-loop as well as memory-guided movements usually show a distinct bias (i.e., overestimation of target eccentricity) when executed in the periphery. In this study, we aimed to investigate whether gaze position affects movements that are performed under full vision but cannot be corrected based on a direct comparison between the hand and target position. To do so, we employed a classical visuomotor reaching task in which participants were required to move their hand through a gap between two obstacles into a target area. Participants performed the task in four gaze conditions: free viewing (no restrictions on gaze), central fixation, or fixation on one of the two obstacles. Our findings show that obstacle-avoidance behaviour is moderated by fixation position. Specifically, participants tended to select movement paths that veered away from the fixated obstacle, indicating that perceptual errors persist in closed-loop vision conditions if they cannot be corrected effectively based on visual feedback. Moreover, by measuring eye movements in a free-viewing task (Experiment 2), we confirmed that participants naturally prefer to move their eyes and hand to the same spatial location.

12.

Background

Several psychophysical experiments have found evidence for the involvement of gaze-centered and/or body-centered coordinates in arm-movement planning and execution. Here we aimed to investigate the frames of reference involved in the visuomotor transformations for reaching towards visual targets in space, taking target eccentricity and the performing hand into account.

Methodology/Principal Findings

We examined several performance measures while subjects reached, in complete darkness, towards memorized targets situated at different locations relative to the gaze and/or to the body, thus distinguishing between an eye-centered and a body-centered frame of reference involved in the computation of the movement vector. The errors seem to be mainly affected by the visual hemifield of the target, independently of its location relative to the body, with an overestimation error in the horizontal reaching dimension (retinal exaggeration effect). The use of several target locations within the perifoveal visual field allowed us to reveal a novel finding, namely a positive linear correlation between horizontal overestimation errors and target retinal eccentricity. In addition, we found an independent influence of the performing hand on the visuomotor transformation process, with each hand misreaching towards the ipsilateral side.

Conclusions

While supporting the existence of an internal mechanism of target-effector integration in multiple frames of reference, the present data, especially the linear overshoot at small target eccentricities, clearly indicate the primary role of gaze-centered coding of target location in the visuomotor transformation for reaching.
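As a small numerical illustration of the reported linear relation between horizontal overestimation error and target retinal eccentricity, the sketch below fits a line to hypothetical perifoveal data; the slope, intercept, and noise level are invented for illustration only.

```python
import numpy as np

# Hypothetical perifoveal targets: horizontal overestimation error assumed
# to grow linearly with retinal eccentricity (all numbers illustrative).
ecc_deg = np.array([2.0, 4.0, 6.0, 8.0, 10.0])
rng = np.random.default_rng(2)
overshoot_deg = 0.35 * ecc_deg + 0.2 + rng.normal(0.0, 0.1, ecc_deg.size)

slope, intercept = np.polyfit(ecc_deg, overshoot_deg, 1)
r = np.corrcoef(ecc_deg, overshoot_deg)[0, 1]
print(f"slope {slope:.2f} deg/deg, intercept {intercept:.2f} deg, r = {r:.2f}")
```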

13.
For humans, social cues often guide the focus of attention. Although many nonhuman primates, like humans, live in large, complex social groups, the extent to which human and nonhuman primates share fundamental mechanisms of social attention remains unexplored. Here, we show that, when viewing a rhesus macaque looking in a particular direction, both rhesus macaques and humans reflexively and covertly orient their attention in the same direction. Specifically, when performing a peripheral visual target detection task, viewing a monkey with either its eyes alone or with both its head and eyes averted to one side facilitated the detection of peripheral targets when they randomly appeared on the same side. Moreover, viewing images of a monkey with averted gaze evoked small but systematic shifts in eye position in the direction of gaze in the image. The similar magnitude and temporal dynamics of response facilitation and eye deviation in monkeys and humans suggest shared neural circuitry mediating social attention.

14.
Wilkie RM, Wann JP. Current Biology. 2002;12(23):2014-2017.
We have the ability to locomote at high speeds, and we usually negotiate bends safely, even when visual information is degraded, for example when driving at night. There are three sources of visual information that could support successful steering. An observer fixating a steering target that is eccentric to the current heading must rotate their gaze. The gaze rotation may be detected by using head and eye movement signals (extra-retinal direction: ERD) or their retinal counterpart, visual direction (VD). The gaze rotation also transforms the global retinal flow (RF) field, which may enable direct steering judgments. In this study, we manipulated VD and RF to determine their contribution toward steering a curved path in the presence of ERD. The results suggest a model that uses a weighted combination of all three information sources, but also suggest that this weighting may change under reduced visibility, such as in low-light conditions.
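A minimal sketch of the weighted-combination account suggested above, assuming the steering rotation estimate is a linear mixture of the three sources (ERD, VD, RF) whose weights are re-normalized when retinal flow becomes unreliable; the weights and the reliability factor are illustrative assumptions, not fitted values from the study.

```python
import numpy as np

def steering_estimate(erd, vd, rf, weights=(0.4, 0.3, 0.3)):
    # Weighted combination of the three rotation estimates (deg/s).
    return np.dot(weights, (erd, vd, rf))

def reweight_for_low_light(weights, rf_reliability=0.2):
    # Down-weight retinal flow when visibility is degraded (e.g. at night),
    # then re-normalize so the weights still sum to one.
    w = np.array(weights) * np.array([1.0, 1.0, rf_reliability])
    return w / w.sum()

w_day = (0.4, 0.3, 0.3)
w_night = reweight_for_low_light(w_day)
print("daylight estimate:", steering_estimate(5.0, 5.5, 4.0, w_day))
print("night weights:    ", np.round(w_night, 2))
```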

15.
The success of the human species in interacting with the environment depends on the ability to maintain spatial stability despite the continuous changes in sensory and motor inputs owing to movements of eyes, head and body. In this paper, I will review recent advances in the understanding of how the brain deals with the dynamic flow of sensory and motor information in order to maintain spatial constancy of movement goals. The first part summarizes studies in the saccadic system, showing that spatial constancy is governed by a dynamic feed-forward process, by gaze-centred remapping of target representations in anticipation of and across eye movements. The subsequent sections relate to other oculomotor behaviour, such as eye-head gaze shifts, smooth pursuit and vergence eye movements, and their implications for feed-forward mechanisms for spatial constancy. Work that studied the geometric complexities in spatial constancy and saccadic guidance across head and body movements, distinguishing between self-generated and passively induced motion, indicates that both feed-forward and sensory feedback processing play a role in spatial updating of movement goals. The paper ends with a discussion of the behavioural mechanisms of spatial constancy for arm motor control and their physiological implications for the brain. Taken together, the emerging picture is that the brain computes an evolving representation of three-dimensional action space, whose internal metric is updated in a nonlinear way, by optimally integrating noisy and ambiguous afferent and efferent signals.

16.
We examined the performance of a dynamic neural network that replicates much of the psychophysics and neurophysiology of eye–head gaze shifts without relying on gaze feedback control. For example, our model generates gaze shifts with ocular components that do not exceed 35° in amplitude, whatever the size of the gaze shifts (up to 75° in our simulations), without relying on a saturating nonlinearity to accomplish this. It reproduces the natural patterns of eye–head coordination in that head contributions increase and ocular contributions decrease together with the size of gaze shifts and this without compromising the accuracy of gaze realignment. It also accounts for the dependence of the relative contributions of the eyes and the head on the initial positions of the eyes, as well as for the position sensitivity of saccades evoked by electrical stimulation of the superior colliculus. Finally, it shows why units of the saccadic system could appear to carry gaze-related signals even if they do not operate within a gaze control loop and do not receive head-related information.

17.
This study examined adaptive changes of eye-hand coordination during a visuomotor rotation task under terminal visual feedback. Young adults made reaching movements to targets on a digitizer while looking at targets on a monitor, where the rotated feedback (a cursor) of the hand movement appeared after each movement. Three rotation angles (30°, 75°, and 150°) were examined in three groups in order to vary task difficulty. The results showed that the 30° group gradually reduced direction errors of reaching with practice and adapted well to the visuomotor rotation. The 75° group made large direction errors of reaching, and the 150° group applied a 180° reversal shift from early practice. The 75° and 150° groups, however, overcompensated for the respective rotations at the end of practice. Despite these group differences in adaptive changes of reaching, all groups gradually shifted gaze directions prior to reaching from the target area to the areas related to the final positions of reaching during the course of practice. The adaptive changes of both hand and eye movements in all groups mainly reflected adjustments of movement directions based on explicit knowledge of the applied rotation acquired through practice. Only the 30° group showed small implicit adaptation in both effectors. The results suggest that, by adapting gaze directions from the target to the final position of reaching based on explicit knowledge of the visuomotor rotation, the oculomotor system supports the limb-motor system in making precise preplanned adjustments of reaching directions during learning of a visuomotor rotation under terminal visual feedback.
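To make the task concrete, here is a minimal Python sketch of terminal feedback under a visuomotor rotation, together with the explicit compensation strategy the abstract describes (aiming opposite the imposed rotation); the coordinate conventions and numbers are illustrative assumptions.

```python
import numpy as np

def rotated_cursor(hand_xy, rotation_deg):
    # Terminal feedback: the cursor shown after the movement is the hand
    # endpoint rotated about the start position by the imposed angle.
    th = np.radians(rotation_deg)
    rot = np.array([[np.cos(th), -np.sin(th)],
                    [np.sin(th),  np.cos(th)]])
    return rot @ np.asarray(hand_xy)

def explicit_aim(target_dir_deg, rotation_deg):
    # Full explicit compensation: aim opposite the imposed rotation so the
    # rotated cursor lands on the target.
    return target_dir_deg - rotation_deg

print(rotated_cursor([10.0, 0.0], 30.0))   # reach at 0 deg is displayed at 30 deg
print(explicit_aim(0.0, 30.0))             # so aim at -30 deg to hit the target
```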

18.
Kaiser M, Lappe M. Neuron. 2004;41(2):293-300.
Saccadic eye movements transiently distort perceptual space. Visual objects flashed shortly before or during a saccade are mislocalized along the saccade direction, resembling a compression of space around the saccade target. These mislocalizations reflect transient errors of processes that construct spatial stability across eye movements. They may arise from errors of reference signals associated with saccade direction and amplitude or from visual or visuomotor remapping processes focused on the saccade target's position. The second case would predict apparent position shifts toward the target also in directions orthogonal to the saccade. We report that such orthogonal mislocalization indeed occurs. Surprisingly, however, the orthogonal mislocalization is restricted to only part of the visual field. This part comprises distant positions in saccade direction but does not depend on the target's position. Our findings can be explained by a combination of directional and positional reference signals that varies in time course across the visual field.

19.
Classification of neural signals at the single-trial level and the study of their relevance in affective and cognitive neuroscience are still in their infancy. Here we investigated the neurophysiological correlates of conditions of increasing social scene complexity using 3D human models as targets of attention, which may also be important in autism research. Challenging single-trial statistical classification of EEG neural signals was attempted for the detection of oddball stimuli with increasing social scene complexity. Stimuli had an oddball structure and were as follows: 1) flashed schematic eyes; 2) simple 3D faces flashed between averted and non-averted gaze (only eye position changing); 3) simple 3D faces flashed between averted and non-averted gaze (head and eye position changing); 4) an animated avatar alternating its gaze direction to the left and to the right (head and eye position); 5) an environment with four animated avatars, all of which change gaze and one of which is the target of attention. We found a late (>300 ms) neurophysiological oddball correlate for all conditions irrespective of their complexity, as assessed by repeated-measures ANOVA. We attempted single-trial detection of this signal with automatic classifiers and obtained a significant balanced classification accuracy of around 79%, which is noteworthy given the amount of scene complexity. Lateralization analysis showed a specific right lateralization only for the more complex, realistic social scenes. In sum, complex ecological animations with social content elicit neurophysiological events that can be characterized even at the single-trial level, and these signals are right-lateralized. These findings pave the way for studies in affective neuroscience based on complex social scenes and, given the detectability at the single-trial level, suggest the feasibility of brain-computer interfaces applicable to social cognition disorders such as autism.

20.
Using experiments on eye-head coordination elicited by target motion, this paper measures and analyzes the dynamic characteristics of head movement in order to investigate the mechanism controlling it. The results reveal a dual-mode control mechanism for head movement during coordinated eye-head gaze shifts: linear proportional control in the small-amplitude range, and maximum-effort bang-bang switching control in the large-amplitude range.
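A minimal sketch of the dual-mode scheme described above, assuming a proportional regime below a switching amplitude and a saturated, maximum-effort (bang-bang) regime above it; the gain, saturation level, and switching threshold are illustrative assumptions, not values measured in the study.

```python
import numpy as np

def head_drive(error_deg, k=8.0, u_max=300.0, switch_deg=20.0):
    # Dual-mode controller: linear proportional drive for small-amplitude
    # movements, saturated bang-bang drive for large-amplitude movements.
    if abs(error_deg) <= switch_deg:
        return k * error_deg
    return float(np.sign(error_deg)) * u_max

for e in (5.0, 15.0, 40.0, -60.0):
    print(f"error {e:6.1f} deg -> drive {head_drive(e):7.1f} deg/s")
```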
