Similar Documents
20 similar documents found (search time: 593 ms)
1.
We investigated coordinated movements between the eyes and head (“eye-head coordination”) in relation to vision for action. Several studies have measured eye and head movements during a single gaze shift, focusing on the mechanisms of motor control during eye-head coordination. However, in everyday life, gaze shifts occur sequentially and are accompanied by movements of the head and body. Under such conditions, visual cognitive processing influences eye movements and might also influence eye-head coordination because sequential gaze shifts include cycles of visual processing (fixation) and data acquisition (gaze shifts). In the present study, we examined how the eyes and head move in coordination during visual search in a large visual field. Subjects moved their eyes, head, and body without restriction inside a 360° visual display system. We found patterns of eye-head coordination that differed from those observed in single gaze-shift studies. First, we frequently observed multiple saccades during one continuous head movement, and the contribution of head movement to gaze shifts increased as the number of saccades increased. This relationship between head movements and sequential gaze shifts suggests eye-head coordination over several saccade-fixation sequences; this could be related to cognitive processing because saccade-fixation cycles are the result of visual cognitive processing. Second, the distribution bias of eye position during gaze fixation was highly correlated with head orientation. The distribution peak of eye position was biased in the same direction as head orientation. This influence of head orientation suggests that eye-head coordination is involved in gaze fixation, when the visual system processes retinal information. This further supports the role of eye-head coordination in visual cognitive processing.

2.
Heading estimation involves both inertial and visual cues. Inertial motion is sensed by the labyrinth, somatic sensation by the body, and optic flow by the retina. Because the eye and head are mobile, these stimuli are sensed relative to different reference frames, and it remains unclear whether perception occurs in a common reference frame. Recent neurophysiologic evidence has suggested that the reference frames remain separate even at higher levels of processing, but has not addressed the resulting perception. Seven human subjects experienced a 2 s, 16 cm/s translation and/or a visual stimulus corresponding with this translation. For each condition 72 stimuli (360° in 5° increments) were delivered in random order. After each stimulus the subject identified the perceived heading using a mechanical dial. Some trial blocks included interleaved conditions in which the influence of ±28° of gaze and/or head position was examined. The observations were fit using a two degree-of-freedom population vector decoder (PVD) model, which considered the relative sensitivity to lateral motion and the coordinate-system offset. For visual stimuli, gaze shifts caused shifts in perceived heading estimates in the direction opposite the gaze shift in all subjects. These perceptual shifts averaged 13 ± 2° for eye-only gaze shifts and 17 ± 2° for eye-head gaze shifts. This finding indicates that visual headings are biased towards retinal coordinates. Similar gaze and head direction shifts prior to inertial headings had no significant influence on heading direction. Thus inertial headings are perceived in body-centered coordinates. Combined visual and inertial stimuli yielded intermediate results.
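The frame-mixing result in this abstract can be sketched numerically. The function below is a hypothetical illustration, not the authors' fitted PVD model (the linear form and the weight value are my assumptions): a perceived visual heading is pulled opposite a gaze shift by a fixed fraction of that shift.

```python
def perceived_visual_heading(true_heading_deg, gaze_shift_deg, retinal_weight=0.46):
    """Hypothetical linear mixing of body- and retina-centered frames.

    retinal_weight = 0 gives a purely body-centered percept (as reported
    above for inertial headings); retinal_weight = 1 gives a purely
    retina-centered percept. A 28 deg gaze shift with weight ~0.46
    biases the percept by ~13 deg opposite the gaze shift, matching the
    eye-only average reported above.
    """
    return (true_heading_deg - retinal_weight * gaze_shift_deg) % 360.0
```

With `retinal_weight` raised to about 0.6, the same form reproduces the 17° average reported for combined eye-head gaze shifts.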

3.
How the brain maintains an accurate and stable representation of visual target locations despite the occurrence of saccadic gaze shifts is a classical problem in oculomotor research. Here we test and dissociate the predictions of different conceptual models for head-unrestrained gaze-localization behavior of macaque monkeys. We adopted the double-step paradigm with rapid eye-head gaze shifts to measure localization accuracy in response to flashed visual stimuli in darkness. We presented the second target flash either before (static), or during (dynamic) the first gaze displacement. In the dynamic case the brief visual flash induced a small retinal streak of up to about 20 deg at an unpredictable moment and retinal location during the eye-head gaze shift, which provides serious challenges for the gaze-control system. However, for both stimulus conditions, monkeys localized the flashed targets with accurate gaze shifts, which rules out several models of visuomotor control. First, these findings exclude the possibility that gaze-shift programming relies on retinal inputs only. Instead, they support the notion that accurate eye-head motor feedback updates the gaze-saccade coordinates. Second, in dynamic trials the visuomotor system cannot rely on the coordinates of the planned first eye-head saccade either, which rules out remapping on the basis of a predictive corollary gaze-displacement signal. Finally, because gaze-related head movements were also goal-directed, requiring continuous access to eye-in-head position, we propose that our results best support a dynamic feedback scheme for spatial updating in which visuomotor control incorporates accurate signals about instantaneous eye- and head positions rather than relative eye- and head displacements.
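The dynamic feedback scheme argued for here reduces, in its simplest one-dimensional form, to a single line of arithmetic. The sketch below is illustrative only (scalar signals; the names are mine, not the authors'): the motor error for the corrective gaze shift is the flashed target's retinal location compensated by the gaze displacement accumulated since the flash.

```python
def gaze_motor_error(retinal_target_deg, gaze_at_flash_deg, current_gaze_deg):
    """Feedback updating: desired gaze displacement toward a flashed target.

    A retinal-only scheme would return retinal_target_deg unchanged and
    mislocalize whenever gaze moves after the flash; subtracting the
    intervening gaze displacement keeps the response spatially accurate.
    """
    gaze_displacement = current_gaze_deg - gaze_at_flash_deg
    return retinal_target_deg - gaze_displacement
```

For example, a target flashed 10° right of the fovea while gaze was at 5° still requires only a 10° rightward gaze shift if gaze has not moved, but a 5° leftward shift once gaze has already advanced to 20°.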

4.
Choi WY, Guitton D. Neuron. 2006;50(3):491-505
A prominent hypothesis in motor control is that endpoint errors are minimized because motor commands are updated in real time via internal feedback loops. We investigated in monkey whether orienting saccadic gaze shifts made in the dark with coordinated eye-head movements are controlled by feedback. We recorded from superior colliculus fixation neurons (SCFNs) that fired tonically during fixation and were silent during gaze shifts. When we briefly (

5.
Limb movement is smooth, and corrections of movement trajectory and amplitude are barely noticeable midflight. This suggests that skeletomuscular motor commands are smooth in transition, such that the rate of change of acceleration (or jerk) is minimized. Here we applied the methodology of minimum-jerk submovement decomposition to a member of the skeletomuscular family, the head movement. We examined the submovement composition of three types of horizontal head movements generated by nonhuman primates: head-alone tracking, head-gaze pursuit, and eye-head combined gaze shifts. The first two types of head movements tracked a moving target, whereas the last type oriented the head with rapid gaze shifts toward a target fixed in space. During head tracking, the head movement was composed of a series of episodes, each consisting of a distinct, bell-shaped velocity profile (submovement) that rarely overlapped with the others. There was no specific magnitude order in the peak velocities of these submovements. In contrast, during eye-head combined gaze shifts, the head movement often comprised overlapping submovements, in which the peak velocity of the primary submovement was always higher than that of the subsequent submovement, consistent with the two-component strategy observed in goal-directed limb movements. These results extend previous submovement-composition studies from the limb to head movements, suggesting that submovement composition provides a biologically plausible approach to characterizing head motor recruitment that can vary depending on task demand.
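The minimum-jerk submovement referred to above has a well-known closed form: position x(τ) = A(10τ³ − 15τ⁴ + 6τ⁵) over normalized time τ, whose derivative is the bell-shaped velocity profile used in the decomposition. A minimal sketch of that profile and of linearly summed overlapping submovements (the function names and the linear-summation assumption are mine):

```python
def min_jerk_velocity(t, onset, duration, amplitude):
    """Velocity of one minimum-jerk submovement: the bell-shaped profile
    (A/T) * (30*tau**2 - 60*tau**3 + 30*tau**4), zero outside the
    interval [onset, onset + duration]."""
    tau = (t - onset) / duration
    if tau < 0.0 or tau > 1.0:
        return 0.0
    return amplitude / duration * (30 * tau**2 - 60 * tau**3 + 30 * tau**4)


def composite_velocity(t, submovements):
    """Overlapping submovements are assumed to sum linearly, as in the
    two-component gaze shifts described above; each submovement is an
    (onset, duration, amplitude) triple."""
    return sum(min_jerk_velocity(t, *sm) for sm in submovements)
```

The peak velocity of a single submovement is 1.875·A/T and occurs at its temporal midpoint, which is why peak velocity scales with amplitude for fixed duration.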

6.
Horizontal displacements of gaze in cats with unrestrained heads were studied using the magnetic search coil method. Three types of eye-head coordination were found when cats oriented gaze towards visual targets. Maximal velocities of gaze, head, and eye movements in the orbits depended linearly on the amplitudes of their displacements in the range of up to 20 degrees. Gaze velocity reached its peak at about 0.3 of the total movement time. The data support the idea of saccadic-vestibular summation during coordinated eye-head movements in cats.

7.
Using experiments on eye-head coordination elicited by target motion, we measured and analyzed the dynamic characteristics of head movement to explore its control mechanism. The results reveal a dual-mode control mechanism for head movement during coordinated eye-head gaze shifts: linear proportional control in the small-amplitude range, and Bang-Bang (switching) control applying maximal force in the large-amplitude range.

8.
9.
Bremner LR, Andersen RA. Neuron. 2012;75(2):342-351
Competing models of sensorimotor computation predict different topological constraints in the brain. Some models propose population coding of particular reference frames in anatomically distinct nodes, whereas others require no such dedicated subpopulations and instead predict that regions will simultaneously code in multiple, intermediate, reference frames. Current empirical evidence is conflicting, partly due to difficulties involved in identifying underlying reference frames. Here, we independently varied the locations of hand, gaze, and target over many positions while recording from the dorsal aspect of parietal area 5. We find that the target is represented in a predominantly hand-centered reference frame here, contrasting with the relative code seen in dorsal premotor cortex and the mostly gaze-centered reference frame in the parietal reach region. This supports the hypothesis that different nodes of the sensorimotor circuit contain distinct and systematic representations, and this constrains the types of computational model that are neurobiologically relevant.

10.
Identifying protein-coding regions in DNA sequences is an active issue in computational biology. In this study, we present a self-adaptive spectral rotation (SASR) approach, which visualizes coding regions in DNA sequences based on the Triplet Periodicity property, without any preceding training process. It is proposed to help with rough coding-region prediction when there is no extra information available for the training required by other outstanding methods. In this approach, at each position in the DNA sequence, a Fourier spectrum is calculated from the posterior subsequence. From these spectra, a random walk in the complex plane is generated as the SASR's graphic output. Applications of the SASR to real DNA data show that patterns in the graphic output reveal the locations of coding regions and the frame shifts between them: arcs indicate coding regions, stable points indicate non-coding regions, and the shapes of corners reveal frame shifts. Tests on a genomic data set from Saccharomyces cerevisiae reveal that the graphic patterns for coding and non-coding regions differ to a great extent, so that coding regions can be visually distinguished. Meanwhile, a time-cost test shows that the SASR can be easily implemented with a computational complexity of O(N).
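The period-3 Fourier component underlying triplet periodicity is easy to compute. The sketch below is a simplification of the idea, not the published SASR algorithm (the window size, step, and single-nucleotide indicator are my assumptions): windowed period-3 components are accumulated as a walk in the complex plane, whose arcs and stationary stretches qualitatively mirror the coding/non-coding patterns described above.

```python
import cmath

def period3_component(seq, base="G"):
    """Fourier component at frequency 1/3 of the 0/1 indicator sequence
    for one nucleotide -- the triplet-periodicity signal: strong and
    phase-stable in coding regions, weak in non-coding ones."""
    w = cmath.exp(-2j * cmath.pi / 3.0)
    return sum(w ** i for i, c in enumerate(seq) if c == base)

def spectral_walk(seq, window=60, step=3):
    """Cumulative sum of windowed period-3 components: an arc-forming
    walk over coding-like regions, near-stationary over non-coding-like
    ones; a frame shift changes the accumulated phase direction."""
    z, path = 0j, []
    for start in range(0, len(seq) - window + 1, step):
        z += period3_component(seq[start:start + window])
        path.append(z)
    return path
```

On a perfectly periodic sequence such as `"GAT" * 20`, the component magnitude equals the number of in-frame occurrences of the base, whereas a base that never appears contributes zero.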

11.
Whenever we shift our gaze, any location information encoded in the retinocentric reference frame that is predominant in the visual system is obliterated. How is spatial memory retained across gaze changes? Two different explanations have been proposed: Retinocentric information may be transformed into a gaze-invariant representation through a mechanism consistent with gain fields observed in parietal cortex, or retinocentric information may be updated in anticipation of the shift expected with every gaze change, a proposal consistent with neural observations in LIP. The explanations were considered incompatible with each other, because retinocentric update is observed before the gaze shift has terminated. Here, we show that a neural dynamic mechanism for coordinate transformation can also account for retinocentric updating. Our model postulates an extended mechanism of reference frame transformation that is based on bidirectional mapping between a retinocentric and a body-centered representation and that enables transforming multiple object locations in parallel. The dynamic coupling between the two reference frames generates a shift of the retinocentric representation for every gaze change. We account for the predictive nature of the observed remapping activity by using the same kind of neural mechanism to generate an internal representation of gaze direction that is predictively updated based on corollary discharge signals. We provide evidence for the model by accounting for a series of behavioral and neural experimental observations.
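In its simplest scalar form, the bidirectional mapping at the heart of this account reduces to adding and subtracting a gaze estimate. The sketch below is a deliberately stripped-down illustration (the model itself uses dynamic neural fields, not symbolic arithmetic, and the function names are mine):

```python
def to_body_centered(retinal_deg, gaze_deg):
    """Forward mapping: a retinocentric location plus the current gaze
    direction yields a gaze-invariant, body-centered location."""
    return retinal_deg + gaze_deg

def remapped_retinal(retinal_deg, gaze_deg, predicted_gaze_deg):
    """Predictive remapping via the same mapping run backwards: hold the
    body-centered location fixed while the internal gaze estimate is
    updated (e.g., by corollary discharge) before the eye moves."""
    return to_body_centered(retinal_deg, gaze_deg) - predicted_gaze_deg
```

Because the remapped retinal location is derived from the predictively updated gaze estimate rather than the actual eye position, the retinocentric shift can appear before the gaze shift terminates, as the abstract notes.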

12.
We have reviewed evidence that suggests that the target for limb motion is encoded in a retinocentric frame of reference. Errors in pointing that are elicited by an illusion that distorts the perceived motion of a target are strongly correlated with errors in gaze position. The modulations in the direction and speed of ocular smooth pursuit and of the hand show remarkable similarities, even though the inertia of the arm is much larger than that of the eye. We have suggested that ocular motion is constrained so that gaze provides an appropriate target signal for the hand. Finally, ocular and manual tracking deficits in patients with cerebellar ataxia are very similar. These deficits are also consistent with the idea that a gaze signal provides the target for hand motion; in some cases limb ataxia would be a consequence of optic ataxia rather than reflecting a deficit in the control of limb motion per se. These results, as well as neurophysiological data summarized here, have led us to revise a hypothesis we have previously put forth to account for the initial stages of sensorimotor transformations underlying targeted limb motions. In the original hypothesis, target location and initial arm posture were ultimately encoded in a common frame of reference tied to somatosensation, i.e. a body-centered frame of reference, and a desired change in posture was derived from the difference between the two. In our new scheme, a movement vector is derived from the difference between variables encoded in a retinocentric frame of reference. Accordingly, gaze, with its exquisite ability to stabilize a target image even under dynamic conditions, would be used as a reference signal. Consequently, this scheme would facilitate the processing of information under conditions in which the body and the target are moving relative to each other.

13.
We investigated the effect of background scene on the human visual perception of depth orientation (i.e., azimuth angle) of three-dimensional common objects. Participants evaluated the depth orientation of objects. The objects were surrounded by scenes with an apparent axis of the global reference frame, such as a sidewalk scene. When a scene axis was slightly misaligned with the gaze line, object orientation perception was biased, as if the gaze line had been assimilated into the scene axis (Experiment 1). When the scene axis was slightly misaligned with the object, evaluated object orientation was biased, as if it had been assimilated into the scene axis (Experiment 2). This assimilation may be due to confusion between the orientation of the scene and object axes (Experiment 3). Thus, the global reference frame may influence object orientation perception when its orientation is similar to that of the gaze line or object.

14.

Background

A key aspect of representations for object recognition and scene analysis in the ventral visual stream is the spatial frame of reference, be it a viewer-centered, object-centered, or scene-based coordinate system. Coordinate transforms from retinocentric space to other reference frames involve combining neural visual responses with extraretinal postural information.

Methodology/Principal Findings

We examined whether such spatial information is available to anterior inferotemporal (AIT) neurons in the macaque monkey by measuring the effect of eye position on responses to a set of simple 2D shapes. We report, for the first time, a significant eye position effect in over 40% of recorded neurons with small gaze angle shifts from central fixation. Although eye position modulates responses, it does not change shape selectivity.

Conclusions/Significance

These data demonstrate that spatial information is available in AIT for the representation of objects and scenes within a non-retinocentric frame of reference. More generally, the availability of spatial information in AIT calls into question the classic dichotomy in visual processing that associates object shape processing with ventral structures such as AIT but places spatial processing in a separate anatomical stream projecting to dorsal structures.

15.
We examined the performance of a dynamic neural network that replicates much of the psychophysics and neurophysiology of eye–head gaze shifts without relying on gaze feedback control. For example, our model generates gaze shifts with ocular components that do not exceed 35° in amplitude, whatever the size of the gaze shifts (up to 75° in our simulations), without relying on a saturating nonlinearity to accomplish this. It reproduces the natural patterns of eye–head coordination in that head contributions increase and ocular contributions decrease together with the size of gaze shifts and this without compromising the accuracy of gaze realignment. It also accounts for the dependence of the relative contributions of the eyes and the head on the initial positions of the eyes, as well as for the position sensitivity of saccades evoked by electrical stimulation of the superior colliculus. Finally, it shows why units of the saccadic system could appear to carry gaze-related signals even if they do not operate within a gaze control loop and do not receive head-related information.

16.
17.
Electrophysiological recording in the anterior superior temporal sulcus (STS) of monkeys has demonstrated separate cell populations responsive to direct and averted gaze. Human functional imaging has demonstrated posterior STS activation in gaze processing, particularly in coding the intentions conveyed by gaze, but to date has provided no evidence of dissociable coding of different gaze directions. Because the spatial resolution typical of group-based fMRI studies (approximately 6-10 mm) exceeds the size of cellular patches sensitive to different facial characteristics (1-4 mm in monkeys), a more sensitive technique may be required. We therefore used fMRI adaptation, which is considered to offer superior resolution, to investigate whether the human anterior STS contains representations of different gaze directions, as suggested by non-human primate research. Subjects viewed probe faces gazing left, directly ahead, or right. Adapting to leftward gaze produced a reduction in BOLD response to left relative to right (and direct) gaze probes in the anterior STS and inferior parietal cortex; rightward gaze adaptation produced a corresponding reduction to right gaze probes. Consistent with these findings, averted gaze in the adapted direction was misidentified as direct. Our study provides the first human evidence of dissociable neural systems for left and right gaze.

18.
19.
The results of the Russian-Austrian space experiment Monimir, part of the international space program Austromir, are presented. The characteristics of the horizontal gaze fixation reaction (hGFR) to visual targets were studied during long-term space flights. Seven crewmembers of the space station Mir participated in the experiment. The subjects were tested four times before the flight, five times during the flight, and three to four times after landing. During the flight and after its completion, the characteristics of the gaze fixation reaction changed systematically: the reaction time and the gain of the vestibulo-ocular reflex increased, while the velocities of eye and head movements variously increased or decreased. These changes indicate disturbed control of the vestibulo-ocular reflex under microgravity conditions because of the variability of vestibular input activity. Cosmonauts with flight and non-flight professional specializations differed in their strategies of adaptation to microgravity. In the former, exposure to microgravity was accompanied by gaze hypermetria and inhibition of head movements; conversely, in the latter, the velocity of head movements increased, whereas that of saccades decreased.

20.

Copyright © Beijing Qinyun Technology Development Co., Ltd. 京ICP备09084417号