Similar Articles
20 similar articles found.
1.
Patients with optic ataxia (OA), who are missing the caudal portion of their superior parietal lobule (SPL), have difficulty performing visually-guided reaches towards extra-foveal targets. Such gaze and hand decoupling also occurs in commonly performed non-standard visuomotor transformations such as the use of a computer mouse. In this study, we test two unilateral OA patients in conditions of 1) a change in the physical location of the visual stimulus relative to the plane of the limb movement, 2) a cue that signals a required limb movement 180° opposite to the cued visual target location, or 3) both of these situations combined. In these non-standard visuomotor transformations, the OA deficit does not appear as the well-documented field-dependent misreach. Instead, OA patients make additional eye movements to update hand and goal location during motor execution in order to complete these slow movements. Overall, the OA patients struggled whenever they had to guide centrifugal movements in peripheral vision, even when the instructing visual stimuli could be foveated. We propose that an intact caudal SPL is crucial for any visuomotor control that involves updating ongoing hand location in space without foveating it, i.e., from peripheral vision, proprioception, or prediction.

2.
This study examined adaptive changes of eye-hand coordination during a visuomotor rotation task. Young adults made aiming movements to targets on a horizontal plane, while looking at the rotated feedback (cursor) of hand movements on a monitor. To vary the task difficulty, three rotation angles (30°, 75°, and 150°) were tested in three groups. All groups shortened hand movement time and trajectory length with practice. However, control strategies used were different among groups. The 30° group used proportionately more implicit adjustments of hand movements than other groups. The 75° group used more on-line feedback control, whereas the 150° group used explicit strategic adjustments. Regarding eye-hand coordination, timing of gaze shift to the target was gradually changed with practice from the late to early phase of hand movements in all groups, indicating an emerging gaze-anchoring behavior. Gaze locations prior to the gaze anchoring were also modified with practice from the cursor vicinity to an area between the starting position and the target. Reflecting various task difficulties, these changes occurred fastest in the 30° group, followed by the 75° group. The 150° group persisted in gazing at the cursor vicinity. These results suggest that the function of gaze control during visuomotor adaptation changes from a reactive control for exploring the relation between cursor and hand movements to a predictive control for guiding the hand to the task goal. That gaze-anchoring behavior emerged in all groups despite various control strategies indicates a generality of this adaptive pattern for eye-hand coordination in goal-directed actions.
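As a point of reference for this and the later rotation studies (items 6 and 7), the cursor feedback in such tasks is conventionally the hand position rotated about the movement origin. A minimal sketch, assuming the standard rotation about the start position; the function name and values are illustrative, not taken from the study:

```python
import numpy as np

def rotated_cursor(hand_xy, origin_xy, angle_deg):
    """Map a hand position to cursor feedback rotated about the start
    position, as in standard visuomotor-rotation paradigms (sketch)."""
    theta = np.radians(angle_deg)
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    return origin_xy + R @ (hand_xy - origin_xy)

# e.g., for the 30° group: a straight-ahead hand movement of 10 cm appears
# on screen rotated by 30° counter-clockwise about the start position.
print(rotated_cursor(np.array([0.0, 10.0]), np.array([0.0, 0.0]), 30))
```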

3.
The proximity of visual landmarks impacts reaching performance
The control of goal-directed reaching movements is thought to rely upon egocentric visual information derived from the visuomotor networks of the dorsal visual pathway. However, recent research (Krigolson and Heath, 2004) suggests it is also possible to make allocentric comparisons between a visual background and a target object to facilitate reaching accuracy. Here we sought to determine if the effectiveness of these allocentric comparisons is reduced as the distance between a visual background and a target object increases. To accomplish this, participants completed memory-guided reaching movements to targets presented in an otherwise empty visual background or positioned within a proximal, medial, or distal visual background. Our results indicated that the availability of a proximal or medial visual background reduced endpoint variability relative to reaches made without a visual background. Interestingly, we found that endpoint variability was not reduced when participants reached to targets framed within a distal visual background. Such findings suggest that allocentric visual information is used to facilitate reaching performance; however, the fidelity with which such cues are used appears linked to their proximity to the veridical target location. Importantly, these data also suggest that information from both the dorsal and ventral visual streams can be integrated to facilitate the online control of reaching movements.

4.
We compared sensorimotor adaptation in the visual and the auditory modality. Subjects pointed to visual targets while receiving direct visual information about fingertip position, pointed to visual targets while receiving indirect visual information about fingertip position, or pointed to auditory targets while receiving indirect auditory information about fingertip position. Feedback was laterally shifted to induce adaptation, and aftereffects were tested with both target modalities and both hands. We found that aftereffects of adaptation were smaller when tested with the non-adapted hand, i.e., intermanual transfer was incomplete. Furthermore, aftereffects were smaller when tested in the non-adapted target modality, i.e., intermodal transfer was incomplete. Aftereffects were smaller following adaptation with indirect rather than direct feedback, but they were not smaller following adaptation with auditory rather than visual targets. From this we conclude that the magnitude of adaptive recalibration depends on the method of feedback delivery (direct versus indirect) rather than on the modality of feedback (visual versus auditory).

5.

Background

Several psychophysical experiments have found evidence for the involvement of gaze-centered and/or body-centered coordinates in arm-movement planning and execution. Here we investigated the frames of reference involved in the visuomotor transformations for reaching towards visual targets in space, taking target eccentricity and the performing hand into account.

Methodology/Principal Findings

We examined several performance measures while subjects reached, in complete darkness, toward memorized targets situated at different locations relative to the gaze and/or to the body, thus distinguishing between an eye-centered and a body-centered frame of reference involved in the computation of the movement vector. The errors seem to be mainly affected by the visual hemifield of the target, independently of its location relative to the body, with an overestimation error in the horizontal reaching dimension (retinal exaggeration effect). The use of several target locations within the perifoveal visual field allowed us to reveal a novel finding, namely a positive linear correlation between horizontal overestimation errors and target retinal eccentricity. In addition, we found an independent influence of the performing hand on the visuomotor transformation process, with each hand misreaching towards the ipsilateral side.

Conclusions

While supporting the existence of an internal mechanism of target-effector integration in multiple frames of reference, the present data, especially the linear overshoot at small target eccentricities, clearly indicate the primary role of gaze-centered coding of target location in the visuomotor transformation for reaching.

6.
This study examined adaptive changes of eye-hand coordination during a visuomotor rotation task under the use of terminal visual feedback. Young adults made reaching movements to targets on a digitizer while looking at targets on a monitor where the rotated feedback (a cursor) of hand movements appeared after each movement. Three rotation angles (30°, 75°, and 150°) were examined in three groups in order to vary the task difficulty. The results showed that the 30° group gradually reduced direction errors of reaching with practice and adapted well to the visuomotor rotation. The 75° group made large direction errors of reaching, and the 150° group applied a 180° reversal shift from early practice. The 75° and 150° groups, however, overcompensated the respective rotations at the end of practice. Despite these group differences in adaptive changes of reaching, all groups gradually adapted gaze directions prior to reaching from the target area to the areas related to the final positions of reaching during the course of practice. The adaptive changes of both hand and eye movements in all groups mainly reflected adjustments of movement directions based on explicit knowledge of the applied rotation acquired through practice. Only the 30° group showed small implicit adaptation in both effectors. The results suggest that by adapting gaze directions from the target to the final position of reaching based on explicit knowledge of the visuomotor rotation, the oculomotor system supports the limb-motor system in making precise preplanned adjustments of reaching directions during learning of visuomotor rotation under terminal visual feedback.

7.
We can adapt movements to a novel dynamic environment (e.g., tool use, microgravity, and perturbation) by acquiring an internal model of the dynamics. Although multiple environments can be learned simultaneously if each environment is experienced with different limb-movement kinematics, it remains controversial whether multiple internal models for a particular movement can be learned and flexibly retrieved according to behavioral context. Here, we address this issue using a novel visuomotor task. While participants reached to each of two targets located at a clockwise or counter-clockwise position, a gradually increasing visual rotation was applied in the clockwise or counter-clockwise direction, respectively, to the on-screen cursor representing the unseen hand position. This procedure implicitly led participants to perform physically identical pointing movements irrespective of their intentions (i.e., movement plans) to move their hand toward two distinct visual targets. Surprisingly, if each identical movement was executed according to a distinct movement plan, participants could readily adapt these movements to two opposing force fields simultaneously. The results demonstrate that multiple motor memories can be learned and flexibly retrieved, even for physically identical movements, according to distinct motor plans in visual space.
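The key manipulation can be stated as one line of geometry. In a sketch under assumed angles (the ±15° values are illustrative, not the paper's parameters), the hand direction required to put a rotated cursor on a target is the target direction minus the applied rotation, so opposite rotations paired with opposite targets demand the same physical movement:

```python
# Worked geometry behind "physically identical movements" (angles in deg,
# counter-clockwise positive; values are illustrative assumptions).
def required_hand_direction(target_deg, rotation_deg):
    """Hand direction that makes the rotated cursor land on the target."""
    return target_deg - rotation_deg

# Counter-clockwise target with counter-clockwise rotation, and clockwise
# target with clockwise rotation, converge on the same physical movement:
print(required_hand_direction(+15.0, +15.0))   # plan A -> hand at 0 deg
print(required_hand_direction(-15.0, -15.0))   # plan B -> hand at 0 deg
```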

8.
How the brain maintains an accurate and stable representation of visual target locations despite the occurrence of saccadic gaze shifts is a classical problem in oculomotor research. Here we test and dissociate the predictions of different conceptual models of head-unrestrained gaze-localization behavior in macaque monkeys. We adopted the double-step paradigm with rapid eye-head gaze shifts to measure localization accuracy in response to flashed visual stimuli in darkness. We presented the second target flash either before (static) or during (dynamic) the first gaze displacement. In the dynamic case the brief visual flash induced a small retinal streak of up to about 20 deg at an unpredictable moment and retinal location during the eye-head gaze shift, which poses serious challenges for the gaze-control system. However, for both stimulus conditions, monkeys localized the flashed targets with accurate gaze shifts, which rules out several models of visuomotor control. First, these findings exclude the possibility that gaze-shift programming relies on retinal inputs only. Instead, they support the notion that accurate eye-head motor feedback updates the gaze-saccade coordinates. Second, in dynamic trials the visuomotor system cannot rely on the coordinates of the planned first eye-head saccade either, which rules out remapping on the basis of a predictive corollary gaze-displacement signal. Finally, because gaze-related head movements were also goal-directed, requiring continuous access to eye-in-head position, we propose that our results best support a dynamic feedback scheme for spatial updating in which visuomotor control incorporates accurate signals about instantaneous eye and head positions rather than relative eye and head displacements.
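A one-dimensional sketch of the dynamic feedback scheme favoured by these results (names and numbers are illustrative simplifications, not the authors' model): the remaining gaze motor error is computed from the target's spatial location and instantaneous eye-in-head and head-in-space position signals, rather than from planned or relative displacements.

```python
# 1-D sketch of dynamic-feedback spatial updating (all angles in deg).
def gaze_motor_error(target_in_space, eye_in_head, head_in_space):
    """Remaining gaze shift to a remembered flash, computed from
    instantaneous eye- and head-position feedback."""
    current_gaze = eye_in_head + head_in_space
    return target_in_space - current_gaze

# Flash stored at 30 deg; sampled mid-movement with the eye at 10 deg in
# the head and the head at 5 deg in space, 15 deg of gaze shift remains.
print(gaze_motor_error(30.0, 10.0, 5.0))
```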

9.
The central program of a targeted movement includes a component intended to compensate for the weight of the arm; this is why the accuracy of pointing to a memorized position of a visual target in darkness depends on the orientation of the moving limb relative to the vertical axis. Transition from the vertical to the horizontal body position is accompanied by a shift of the final hand position along the body axis towards the head. We studied how pointing errors and visual localization of the target are modified by adaptation to the horizontal body position; targeted movements to a real target were repeatedly performed during the adaptation period. Three types of experiments were performed: a basic experiment and two experiments with adaptation realized under somewhat dissimilar conditions. In the first adaptation experiment, subjects received no visual information about the hand's position in space, so targeted arm movements to a luminous target could be corrected using proprioceptive information only. With this paradigm, the accuracy of pointing to memorized visual targets showed no adaptation-related changes. In the second adaptation experiment, subjects were allowed to continuously view a marker (a light-emitting diode taped to the fingertip). After such adaptation practice, the accuracy of pointing movements to memorized targets increased: both constant and variable errors, as well as both components of the constant error (i.e., X and Y errors), dropped significantly. Testing the accuracy of visual localization of the targets by visual/verbal adjustment, performed after this adaptation experiment, showed that the pattern of errors did not change compared with the basic experiment. We therefore conclude that sensorimotor adaptation to the horizontal position develops much more successfully when the subject obtains visual information about the working-point position; such adaptation is not related to modifications in the system of visual localization of the target.

10.
To obtain a coherent perception of the world, our senses need to be in alignment. When we encounter misaligned cues from two sensory modalities, the brain must infer which cue is faulty and recalibrate the corresponding sense. We examined whether and how the brain uses cue reliability to identify the miscalibrated sense by measuring the audiovisual ventriloquism aftereffect for stimuli of varying visual reliability. To adjust for modality-specific biases, visual stimulus locations were chosen based on perceived alignment with auditory stimulus locations for each participant. During an audiovisual recalibration phase, participants were presented with bimodal stimuli with a fixed perceptual spatial discrepancy; they localized one modality, cued after stimulus presentation. Unimodal auditory and visual localization was measured before and after the audiovisual recalibration phase. We compared participants' behavior to the predictions of three models of recalibration: (a) Reliability-based: each modality is recalibrated based on its relative reliability, so less reliable cues are recalibrated more; (b) Fixed-ratio: the degree of recalibration for each modality is fixed; (c) Causal-inference: recalibration is directly determined by the discrepancy between a cue and its estimate, which in turn depends on the reliability of both cues, and inference about how likely the two cues derive from a common source. Vision was hardly recalibrated by audition. Auditory recalibration by vision changed idiosyncratically as visual reliability decreased: the extent of auditory recalibration either decreased monotonically, peaked at medium visual reliability, or increased monotonically. The latter two patterns cannot be explained by either the reliability-based or fixed-ratio models. Only the causal-inference model of recalibration captures the idiosyncratic influences of cue reliability on recalibration. We conclude that cue reliability, causal inference, and modality-specific biases guide cross-modal recalibration indirectly by determining the perception of audiovisual stimuli.
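To make model (a) concrete, here is a minimal sketch of a single reliability-based recalibration step. The function name, learning rate, and variance weighting are assumptions for illustration; the study's fitted models are more elaborate:

```python
def reliability_based_update(aud_loc, vis_loc, sigma_aud, sigma_vis, rate=0.1):
    """One recalibration step under the reliability-based model (sketch):
    each modality shifts toward the other in proportion to its relative
    variance, so the less reliable cue is recalibrated more."""
    w_vis = sigma_aud**2 / (sigma_aud**2 + sigma_vis**2)  # weight on vision
    discrepancy = vis_loc - aud_loc
    aud_shift = rate * w_vis * discrepancy          # audition moves toward vision
    vis_shift = -rate * (1 - w_vis) * discrepancy   # vision moves toward audition
    return aud_shift, vis_shift

# Reliable vision (small sigma_vis): audition absorbs most of the discrepancy.
print(reliability_based_update(0.0, 10.0, sigma_aud=5.0, sigma_vis=1.0))
```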

11.
EEG-based communication and control: speed-accuracy relationships
People can learn to control mu (8–12 Hz) or beta (18–25 Hz) rhythm amplitude in the EEG recorded over sensorimotor cortex and use it to move a cursor to a target on a video screen. In our current EEG-based brain–computer interface (BCI) system, cursor movement is a linear function of mu or beta rhythm amplitude. In order to maximize the participant's control over the direction of cursor movement, the intercept in this equation is kept equal to the mean amplitude of recent performance. Selection of the optimal slope, or gain, which determines the magnitude of the individual cursor movements, is a more difficult problem. This study examined the relationship between gain and accuracy in a 1-dimensional EEG-based cursor movement task in which individuals select among 2 or more choices by holding the cursor at the desired choice for a fixed period of time (i.e., the dwell time). With 4 targets arranged in a vertical column on the screen, large gains favored the end targets whereas smaller gains favored the central targets. In addition, manipulating gain and dwell time within participants produces results that are in agreement with simulations based on a simple theoretical model of performance. Optimal performance occurs when correct selection of targets is uniform across position. Thus, it is desirable to remove any trend in the function relating accuracy to target position. We evaluated a controller that is designed to minimize the linear and quadratic trends in the accuracy with which participants hit the 4 targets. These results indicate that gain should be adjusted to the individual participants, and suggest that continual online gain adaptation could increase the speed and accuracy of EEG-based cursor control.
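A sketch of the linear update rule described above, with the intercept tied to the running mean of recent amplitudes. The gain, window length, and sample amplitudes are illustrative assumptions, not the study's values:

```python
from collections import deque

class BCICursorController:
    """Sketch of the linear cursor-update rule: each step moves the cursor
    by gain * (rhythm amplitude - running mean of recent amplitudes)."""
    def __init__(self, gain=0.5, history=30):
        self.gain = gain
        self.recent = deque(maxlen=history)

    def step(self, amplitude):
        # Intercept = mean of recent performance, so displacement is signed.
        baseline = sum(self.recent) / len(self.recent) if self.recent else amplitude
        self.recent.append(amplitude)
        return self.gain * (amplitude - baseline)

ctrl = BCICursorController(gain=0.5)
for amp in [4.0, 5.0, 7.0, 3.0]:          # simulated mu-rhythm amplitudes
    print(round(ctrl.step(amp), 3))
```

Holding the cursor at a choice for the dwell time would then complete a selection; as the study reports, the gain setting shifts which target positions are easiest to hit.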

12.
Numerous studies have addressed the issue of where people look when they perform hand movements. Yet, very little is known about how visuomotor performance is affected by fixation location. Previous studies investigating the accuracy of actions performed in the visual periphery have revealed inconsistent results. While movements performed under full visual feedback (closed-loop) seem to remain surprisingly accurate, open-loop as well as memory-guided movements usually show a distinct bias (i.e., overestimation of target eccentricity) when executed in the periphery. In this study, we aimed to investigate whether gaze position affects movements that are performed under full vision but cannot be corrected based on a direct comparison between the hand and target position. To do so, we employed a classical visuomotor reaching task in which participants were required to move their hand through a gap between two obstacles into a target area. Participants performed the task in four gaze conditions: free-viewing (no restrictions on gaze), central fixation, or fixation on one of the two obstacles. Our findings show that obstacle avoidance behaviour is moderated by fixation position. Specifically, participants tended to select movement paths that veered away from the fixated obstacle, indicating that perceptual errors persist in closed-loop vision conditions if they cannot be corrected effectively based on visual feedback. Moreover, by measuring eye movements in a free-viewing task (Experiment 2), we confirmed that participants naturally prefer to move their eyes and hand to the same spatial location.

13.
In natural conversation, the minimal gaps and overlaps between turns at talk indicate accurate regulation of the timing of the turn-taking system. Here we studied how turn-taking affects the gaze of a non-involved viewer of a two-person conversation. The subjects were presented with a video of a conversation while their eye gaze was tracked with an infrared camera. As controls, the video was presented without sound, and the sound was presented with a still image of the speakers. Turns at talk directed the gaze behaviour of the viewers; the gaze followed, rather than predicted, the change of speaker around the turn transition. Both visual and auditory cues presented alone also induced gaze shifts towards the speaking person, although significantly less often and later than when cues from both modalities were available. These results show that the organization of turn-taking has a strong influence on the gaze patterns of even non-involved viewers of a conversation, and that visual and auditory cues are partly redundant in guiding the viewers' gaze.

14.
Close behavioural coupling of visual orientation may provide a range of adaptive benefits to social species. To investigate the natural properties of gaze-following between pedestrians, we displayed an attractive stimulus in a frequently trafficked corridor within which a hidden camera was placed to detect directed gaze from passers-by. Visual cues towards the stimulus from nearby pedestrians increased the probability of passers-by looking as well. In contrast to cueing paradigms used in laboratory research, however, we found that individuals were more responsive to changes in the visual orientation of those walking in the same direction in front of them (i.e., viewing head direction from behind). In fact, visual attention towards the stimulus diminished when oncoming pedestrians had previously looked. Information was therefore transferred more effectively behind, rather than in front of, gaze cues. Further analyses show that neither crowding nor group interactions were driving these effects, suggesting that, within natural settings, gaze-following is strongly mediated by social interaction and facilitates the acquisition of environmentally relevant information.

15.
Electrocorticograms (ECoG) were recorded using subdural grid electrodes over the forearm sensorimotor cortex of six human subjects. The subjects performed three visuomotor tasks: tracking a moving visual target with a joystick-controlled cursor, threading pieces of tubing, and pinching the fingers sequentially against the thumb. Control conditions were resting and active wrist extension. ECoGs were recorded at 14 sites in the hand and arm sensorimotor areas, functionally identified with electrical stimulation. For each behavior we computed the spectral power of the ECoG at each site and the coherence for all pairs of sites. In three of the six subjects, gamma oscillations were observed when the subjects started the tasks. All subjects showed a widespread power decrease in the 11–20 Hz range and a power increase in the 31–60 Hz range during performance of the visuomotor tasks. The changes in gamma-range power were more vigorous during the tracking and threading tasks than during wrist extension. Coherence analysis also showed similar task-related changes in coherence estimates. In contrast to the power changes, coherence estimates increased not only in the gamma range but also at lower frequencies during the manipulative visuomotor tasks. Paired sites with significant increases in coherence estimates were located within and between sensory and motor areas. These results support the hypothesis that coherent cortical activity may play a role in sensorimotor integration or attention.
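For readers wanting to reproduce the band-wise measures, a minimal sketch of coherence estimation between two channels using SciPy on synthetic data; the sampling rate and signal construction are assumptions, with band edges matching those named above:

```python
import numpy as np
from scipy.signal import coherence

fs = 1000                                  # assumed sampling rate, Hz
t = np.arange(0, 10, 1 / fs)
shared = np.sin(2 * np.pi * 40 * t)        # common 40 Hz (gamma) component
ch1 = shared + 0.5 * np.random.randn(t.size)   # two noisy "channels"
ch2 = shared + 0.5 * np.random.randn(t.size)

# Magnitude-squared coherence spectrum, then average within each band.
f, Cxy = coherence(ch1, ch2, fs=fs, nperseg=1024)
for name, lo, hi in [("11-20 Hz", 11, 20), ("31-60 Hz", 31, 60)]:
    band = (f >= lo) & (f <= hi)
    print(name, round(Cxy[band].mean(), 3))
```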

16.
Humans can learn and store multiple visuomotor mappings (dual adaptation) when feedback for each is provided alternately. Moreover, learned context cues associated with each mapping can be used to switch between the stored mappings. However, little is known about the associative learning between a cue and the required visuomotor mapping, and about how learning generalises to novel but similar conditions. To investigate these questions, participants performed a rapid target-pointing task while we manipulated the offset between visual feedback and movement end-points. The visual feedback was presented with horizontal offsets of different amounts, dependent on the target's shape. Participants thus needed to use different visuomotor mappings between target location and required motor response, depending on the target shape, in order to “hit” it. The target shapes were taken from a continuous set of shapes, morphed between spiky and circular. After training we tested participants' performance, without feedback, on target shapes that had not been trained. We compared two hypotheses. First, we hypothesised that participants could (explicitly) extract the linear relationship between target shape and visuomotor mapping and generalise accordingly. Second, building on previous findings in visuomotor learning, we developed an (implicit) Bayesian learning model that predicts generalisation more consistent with categorisation (i.e., use one mapping or the other). The experimental results show that, although learning the associations requires explicit awareness of the cues' role, participants apply the mapping corresponding to the trained shape most similar to the current one, consistent with the Bayesian learning model. Furthermore, the Bayesian learning model predicts that learning should slow down with an increased number of training pairs, which was confirmed by the present results. In short, we found a good correspondence between the Bayesian learning model and the empirical results, indicating that this model offers a possible mechanism for simultaneously learning multiple visuomotor mappings.
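The two hypotheses can be contrasted in a few lines. Coding each morphed shape as a scalar and each mapping as a feedback offset (both illustrative assumptions, not the study's parameterisation), the explicit-linear rule interpolates and extrapolates, while the categorisation-like prediction of the Bayesian model applies the nearest trained mapping:

```python
import numpy as np

trained_shapes = np.array([0.2, 0.8])      # spiky-ish and circular-ish cues
trained_offsets = np.array([-3.0, 3.0])    # cm of visual-feedback offset

def linear_rule(shape):
    """Explicit hypothesis: extrapolate the linear shape-to-offset relation."""
    coeffs = np.polyfit(trained_shapes, trained_offsets, 1)
    return np.polyval(coeffs, shape)

def categorisation_rule(shape):
    """Bayesian-model prediction: apply the mapping of the most similar
    trained shape (the behaviour the study reports participants show)."""
    nearest = np.argmin(np.abs(trained_shapes - shape))
    return trained_offsets[nearest]

for s in [0.1, 0.45, 0.9]:                 # untrained probe shapes
    print(s, round(float(linear_rule(s)), 2), categorisation_rule(s))
```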

17.
We investigated the effects of spatial and temporal factors on the manual localization of a visual target by measuring accuracy, precision, and bias. Spatial factors were manipulated by presenting displays with or without distracters, with invariant or variant distracters, and with near or far distracters in Experiments 1, 2, and 3, respectively. The target and distracters were 1° dots differing only in luminance; they were presented concurrently for 150 or 1000 ms while observers had to memorize the target location while maintaining fixed gaze. The observers' task was to reproduce the location of the target with a mouse cursor that became available 150 ms after stimulus offset. Results from all experiments showed that localization performance for a briefly exposed target was as accurate and precise as that for a long-exposed target. Moreover, manipulation of spatial factors had no systematic effects on accuracy and precision, except that near distracters yielded higher precision. Interestingly, localization performance was unbiased in the 150 ms condition when there were distracters in the display, while being biased towards the fovea in the 1000 ms condition regardless of their presence or absence. These results suggest temporal dynamics in the dominance and suppression of egocentric and exocentric cues in the construction of memory for location.

18.
Others' gaze and emotional facial expressions are important cues for the process of attention orienting. Here, we investigated with magnetoencephalography (MEG) whether the combination of averted gaze and a fearful expression elicits a selectively early effect of attention orienting on the brain responses to targets. We used the gaze direction of centrally presented fearful and happy faces as the spatial attention-orienting cue in a Posner-like paradigm in which subjects had to detect a target checkerboard presented at gazed-at (valid trials) or non-gazed-at (invalid trials) locations on the screen. We showed that the combination of averted gaze and fearful expression resulted in a very early attention-orienting effect, in the form of additional parietal activity between 55 and 70 ms for valid versus invalid targets following fearful gaze cues. No such effect was obtained for targets following happy gaze cues. This early cue-target validity effect, selective to fearful gaze cues, involved the left superior parietal region and the left lateral middle occipital region. These findings provide the first evidence for an effect of attention orienting induced by fearful gaze in the time range of the C1 component. In doing so, they demonstrate the selective impact of combined gaze and fearful-expression cues in the process of attention orienting.

19.
F. Mars, J. Navarro. PLoS ONE 2012, 7(8): e43858
Current theories on the role of visuomotor coordination in driving agree that active sampling of the road by the driver informs the arm-motor system in charge of performing actions on the steering wheel. Still under debate, however, is the nature of the visual cues and gaze strategies used by drivers. In particular, the tangent point hypothesis, which states that drivers look at a specific point on the inside edge line, has recently become the object of controversy. An alternative hypothesis proposes that drivers orient gaze toward the desired future path, which happens to be often situated in the vicinity of the tangent point. The present study contributed to this debate through analyses of the distribution of gaze orientation with respect to the tangent point. The results revealed that drivers sampled the roadway in the close vicinity of the tangent point rather than the tangent point proper. This supports the idea that drivers look at the boundary of a safe trajectory envelope near the inside edge line. Furthermore, the study investigated for the first time the reciprocal influence of manual control on gaze control in the context of driving. This was achieved through the comparison of gaze behavior when drivers actively steered the vehicle or when steering was performed by an automatic controller. The results showed an increase in look-ahead fixations in the direction of the bend exit and a small but consistent reduction in the time spent looking in the area of the tangent point when steering was passive. This may be the consequence of a change in the balance between cognitive and sensorimotor anticipatory gaze strategies. It might also reflect bidirectional coordination between the eye and arm-motor systems, which goes beyond the common assumption that the eyes lead the hands when driving.

20.
The gaze control system governs distinct gaze behaviors, including visual fixation and gaze reorientations. Transitions between these gaze behaviors are frequent and smooth in healthy individuals. This study models these gaze-behavior transitions for different numbers of gaze degrees of freedom. Eye/head gaze behaviors have twice the number of degrees of freedom as eye-only gaze behaviors. Each gaze behavior is observable in the system dynamics and is correlated with neuronal behaviors in several coordinated neural centers, including the vestibular nuclei. The coordination among the neural centers establishes a sensorimotor state which maintains each gaze behavior. This study develops a mathematical framework for synthesizing the coordination among neural centers in gaze sensorimotor states and focuses on the role of vestibular nuclei neurons in gaze sensorimotor state transitions. Received: 17 December 1999 / Accepted in revised form: 3 May 2001

