Similar Articles
Found 20 similar articles; search time: 765 ms
1.
We investigate the role of obstacle avoidance in visually guided reaching and grasping movements. We report on a human study in which subjects performed prehensile motion with obstacle avoidance, where the position of the obstacle was systematically varied across trials. These experiments suggest that reaching with obstacle avoidance is organized sequentially, with the obstacle acting as an intermediary target. Furthermore, we demonstrate that the workspace traversed by the hand is embedded explicitly in a forward planning scheme, which is actively involved in detecting obstacles along the way during reaching. We find that gaze proactively coordinates the pattern of eye–arm motion during obstacle avoidance. This study also provides a quantitative assessment of the coupling of eye–arm–hand motion. We show that the coupling follows regular phase dependencies and is unaltered during obstacle avoidance. These observations provide a basis for the design of a computational model. Our controller extends the coupled dynamical systems framework and provides fast and synchronous control of the eyes, the arm and the hand within a single and compact framework, mimicking the control system found in humans. We validate our model for visuomotor control of a humanoid robot.

2.
This study examined adaptive changes of eye-hand coordination during a visuomotor rotation task under the use of terminal visual feedback. Young adults made reaching movements to targets on a digitizer while looking at targets on a monitor where the rotated feedback (a cursor) of hand movements appeared after each movement. Three rotation angles (30°, 75° and 150°) were examined in three groups in order to vary the task difficulty. The results showed that the 30° group gradually reduced direction errors of reaching with practice and adapted well to the visuomotor rotation. The 75° group made large direction errors of reaching, and the 150° group applied a 180° reversal shift from early practice. The 75° and 150° groups, however, overcompensated the respective rotations at the end of practice. Despite these group differences in adaptive changes of reaching, all groups gradually adapted gaze directions prior to reaching from the target area to the areas related to the final positions of reaching during the course of practice. The adaptive changes of both hand and eye movements in all groups mainly reflected adjustments of movement directions based on explicit knowledge of the applied rotation acquired through practice. Only the 30° group showed small implicit adaptation in both effectors. The results suggest that by adapting gaze directions from the target to the final position of reaching based on explicit knowledge of the visuomotor rotation, the oculomotor system supports the limb-motor system to make precise preplanned adjustments of reaching directions during learning of visuomotor rotation under terminal visual feedback.
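The rotated-feedback mapping used in such tasks can be sketched as a planar rotation of the hand displacement; the function names and the compensation rule below are illustrative, not taken from the study:

```python
import math

def rotated_cursor(dx, dy, rotation_deg):
    """Cursor displacement shown on the monitor for a hand
    displacement (dx, dy), rotated by rotation_deg (sketch)."""
    a = math.radians(rotation_deg)
    return (dx * math.cos(a) - dy * math.sin(a),
            dx * math.sin(a) + dy * math.cos(a))

def compensating_direction(target_dir_deg, rotation_deg):
    """Hand direction that makes the cursor land on the target:
    aim opposite to the imposed rotation."""
    return (target_dir_deg - rotation_deg) % 360.0
```

Under a 150° rotation the compensating direction lies more than 90° from the target, which is consistent with participants adopting an explicit reversal strategy rather than gradual implicit adaptation.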

3.
We can adapt movements to a novel dynamic environment (e.g., tool use, microgravity, and perturbation) by acquiring an internal model of the dynamics. Although multiple environments can be learned simultaneously if each environment is experienced with different limb movement kinematics, it is controversial as to whether multiple internal models for a particular movement can be learned and flexibly retrieved according to behavioral contexts. Here, we address this issue by using a novel visuomotor task. While participants reached to each of two targets located at a clockwise or counter-clockwise position, a gradually increasing visual rotation was applied in the clockwise or counter-clockwise direction, respectively, to the on-screen cursor representing the unseen hand position. This procedure implicitly led participants to perform physically identical pointing movements irrespective of their intentions (i.e., movement plans) to move their hand toward two distinct visual targets. Surprisingly, if each identical movement was executed according to a distinct movement plan, participants could readily adapt these movements to two opposing force fields simultaneously. The results demonstrate that multiple motor memories can be learned and flexibly retrieved, even for physically identical movements, according to distinct motor plans in a visual space.

4.
This study examined adaptive changes of eye-hand coordination during a visuomotor rotation task. Young adults made aiming movements to targets on a horizontal plane, while looking at the rotated feedback (cursor) of hand movements on a monitor. To vary the task difficulty, three rotation angles (30°, 75°, and 150°) were tested in three groups. All groups shortened hand movement time and trajectory length with practice. However, control strategies used were different among groups. The 30° group used proportionately more implicit adjustments of hand movements than other groups. The 75° group used more on-line feedback control, whereas the 150° group used explicit strategic adjustments. Regarding eye-hand coordination, timing of gaze shift to the target was gradually changed with practice from the late to early phase of hand movements in all groups, indicating an emerging gaze-anchoring behavior. Gaze locations prior to the gaze anchoring were also modified with practice from the cursor vicinity to an area between the starting position and the target. Reflecting various task difficulties, these changes occurred fastest in the 30° group, followed by the 75° group. The 150° group persisted in gazing at the cursor vicinity. These results suggest that the function of gaze control during visuomotor adaptation changes from a reactive control for exploring the relation between cursor and hand movements to a predictive control for guiding the hand to the task goal. That gaze-anchoring behavior emerged in all groups despite various control strategies indicates a generality of this adaptive pattern for eye-hand coordination in goal-directed actions.

5.
We investigated coordinated movements between the eyes and head (“eye-head coordination”) in relation to vision for action. Several studies have measured eye and head movements during a single gaze shift, focusing on the mechanisms of motor control during eye-head coordination. However, in everyday life, gaze shifts occur sequentially and are accompanied by movements of the head and body. Under such conditions, visual cognitive processing influences eye movements and might also influence eye-head coordination because sequential gaze shifts include cycles of visual processing (fixation) and data acquisition (gaze shifts). In the present study, we examined how the eyes and head move in coordination during visual search in a large visual field. Subjects moved their eyes, head, and body without restriction inside a 360° visual display system. We found patterns of eye-head coordination that differed from those observed in single gaze-shift studies. First, we frequently observed multiple saccades during one continuous head movement, and the contribution of head movement to gaze shifts increased as the number of saccades increased. This relationship between head movements and sequential gaze shifts suggests eye-head coordination over several saccade-fixation sequences; this could be related to cognitive processing because saccade-fixation cycles are the result of visual cognitive processing. Second, the distribution bias of eye position during gaze fixation was highly correlated with head orientation. The distribution peak of eye position was biased in the same direction as head orientation. This influence of head orientation suggests that eye-head coordination is involved in gaze fixation, when the visual system processes retinal information. This further supports the role of eye-head coordination in visual cognitive processing.

6.
Patients with optic ataxia (OA), who are missing the caudal portion of their superior parietal lobule (SPL), have difficulty performing visually-guided reaches towards extra-foveal targets. Such gaze and hand decoupling also occurs in commonly performed non-standard visuomotor transformations such as the use of a computer mouse. In this study, we test two unilateral OA patients in conditions of 1) a change in the physical location of the visual stimulus relative to the plane of the limb movement, 2) a cue that signals a required limb movement 180° opposite to the cued visual target location, or 3) both of these situations combined. In these non-standard visuomotor transformations, the OA deficit is not observed as the well-documented field-dependent misreach. Instead, OA patients make additional eye movements to update hand and goal location during motor execution in order to complete these slow movements. Overall, the OA patients struggled when having to guide centrifugal movements in peripheral vision, even when they were instructed from visual stimuli that could be foveated. We propose that an intact caudal SPL is crucial for any visuomotor control that involves updating ongoing hand location in space without foveating it, i.e. from peripheral vision, proprioceptive or predictive information.

7.
The impulse discharges of neurons in the inferior parietal association cortex (area 7) were studied in the alert, behaving rhesus monkey, trained to fixate and follow visual targets. Four classes of cells related to visual or visuomotor function were found. Cells of one of these are sensitive to visual stimuli and have large, contralateral receptive fields with maximal sensitivity in the far temporal quadrants. Cells of the other three classes are related to visuomotor functions: visual fixation, tracking, and saccades. They are neither sensory nor motor in the usual sense for they are activated only by interested fixation of gaze or tracking, or before visually evoked saccadic eye movements. They are not activated during the spontaneous saccades and fixations that the monkey makes while casually exploring his environment. It is hypothesized that the light-sensitive neurons provide the visual input to the visuomotor cells that, in turn, produce a command signal for the direction of visual attention and for shifting the focus of attention from one target to another.

8.
Most object manipulation tasks involve a series of actions demarcated by mechanical contact events, and gaze is usually directed to the locations of these events as the task unfolds. Typically, gaze foveates the target 200 ms in advance of the contact. This strategy improves manual accuracy through visual feedback and the use of gaze-related signals to guide the hand/object. Many studies have investigated eye-hand coordination in experimental and natural tasks; most of them highlighted a strong link between eye movements and hand or object kinematics. In this experiment, we analyzed gaze strategies in a collision task in a very challenging dynamical context. Participants performed collisions while they were exposed to alternating episodes of microgravity, hypergravity and normal gravity. First, by isolating the effects of inertia in microgravity, we found that peak hand acceleration marked the transition between two modes of grip force control. Participants exerted grip forces that paralleled load force profiles, and then increased grip up to a maximum occurring shortly after the collision. Second, we found that the oculomotor strategy adapted to provide visual feedback of the controlled object around the collision, as demonstrated by longer durations of fixation after collision in new gravitational environments. Finally, despite large variability of arm dynamics in altered gravity, we found that saccades were remarkably time-locked to the peak hand acceleration in all conditions. In conclusion, altered gravity shed light on the predictive mechanisms used by the central nervous system to coordinate gaze, hand and grip motor actions during a mixed task that involved transport of an object and high impact loads.

9.
This study investigated whether training-related improvements in facial expression categorization are facilitated by spontaneous changes in gaze behaviour in adults and nine-year-old children. Four sessions of a self-paced, free-viewing training task required participants to categorize happy, sad and fear expressions with varying intensities. No instructions about eye movements were given. Eye movements were recorded in the first and fourth training session. New faces were introduced in session four to establish transfer effects of learning. Adults focused most on the eyes in all sessions, and the increase in expression categorization accuracy after training coincided with a strengthening of this eye-bias in gaze allocation. In children, training-related behavioural improvements coincided with an overall shift in gaze-focus towards the eyes (resulting in more adult-like gaze-distributions) and towards the mouth for happy faces in the second fixation. Gaze-distributions were not influenced by the expression intensity or by the introduction of new faces. It was proposed that training enhanced the use of a uniform, predominantly eyes-biased, gaze strategy in children in order to optimise extraction of relevant cues for discrimination between subtle facial expressions.

10.
Many of the brain structures involved in performing real movements also have increased activity during imagined movements or during motor observation, and this could be the neural substrate underlying the effects of motor imagery in motor learning or motor rehabilitation. In the absence of any objective physiological method of measurement, it is currently impossible to be sure that the patient is indeed performing the task as instructed. Eye gaze recording during a motor imagery task could be a possible way to “spy” on the activity an individual is really engaged in. The aim of the present study was to compare the pattern of eye movement metrics during motor observation, visual and kinesthetic motor imagery (VI, KI), target fixation, and mental calculation. Twenty-two healthy subjects (16 females and 6 males) were required to perform tests in five conditions using imagery in the Box and Block Test tasks following the procedure described by Liepert et al. Eye movements were analysed by a non-invasive oculometric measure (SMI RED250 system). Two parameters describing gaze pattern were calculated: the index of ocular mobility (saccade duration over saccade + fixation duration) and the number of midline crossings (i.e. the number of times the subject's gaze crossed the midline of the screen when performing the different tasks). Both parameters were significantly different between visual imagery and kinesthetic imagery, visual imagery and mental calculation, and visual imagery and target fixation. For the first time we were able to show that eye movement patterns are different during VI and KI tasks. Our results suggest that gaze metric parameters could be used as an objective unobtrusive approach to assess engagement in a motor imagery task. Further studies should define how oculomotor parameters could be used as an indicator of the rehabilitation task a patient is engaged in.
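The two gaze parameters defined above translate directly into code; this is a minimal sketch with function names of our own choosing:

```python
def index_of_ocular_mobility(saccade_dur, fixation_dur):
    """Index of ocular mobility as defined in the abstract:
    total saccade duration over saccade + fixation duration."""
    total = saccade_dur + fixation_dur
    if total == 0:
        raise ValueError("no eye-movement data")
    return saccade_dur / total

def midline_crossings(x_positions, midline_x):
    """Count how many times the gaze x-coordinate crosses the
    screen midline across successive samples."""
    sides = [x > midline_x for x in x_positions if x != midline_x]
    return sum(1 for a, b in zip(sides, sides[1:]) if a != b)
```

For example, 2 s of saccades within 10 s of recording gives a mobility index of 0.2, and a gaze trace alternating between the two halves of the screen yields one crossing per alternation.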

11.
Training has been shown to improve perceptual performance on limited sets of stimuli. However, whether training can generally improve top-down biasing of visual search in a target-nonspecific manner remains unknown. We trained subjects over ten days on a visual search task, challenging them with a novel target (top-down goal) on every trial, while bottom-up uncertainty (distribution of distractors) remained constant. We analyzed the changes in saccade statistics and visual behavior over the course of training by recording eye movements as subjects performed the task. Subjects became experts at this task, with twofold increased performance, decreased fixation duration, and stronger tendency to guide gaze toward items with color and spatial frequency (but not necessarily orientation) that resembled the target, suggesting improved general top-down biasing of search.

12.
Pesaran B, Nelson MJ, Andersen RA. Neuron. 2006;51(1):125-134.
When reaching to grasp an object, we often move our arm and orient our gaze together. How are these movements coordinated? To investigate this question, we studied neuronal activity in the dorsal premotor area (PMd) and the medial intraparietal area (area MIP) of two monkeys while systematically varying the starting position of the hand and eye during reaching. PMd neurons encoded the relative position of the target, hand, and eye. MIP neurons encoded target location with respect to the eye only. These results indicate that whereas MIP encodes target locations in an eye-centered reference frame, PMd uses a relative position code that specifies the differences in locations between all three variables. Such a relative position code may play an important role in coordinating hand and eye movements by computing their relative position.
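The two coding schemes contrasted here can be written down directly; a one-dimensional sketch, with variable and function names of our own choosing:

```python
def mip_code(target, eye):
    """Eye-centered code attributed to area MIP: target location
    relative to the eye only."""
    return target - eye

def pmd_code(target, hand, eye):
    """Relative position code attributed to PMd: the pairwise
    differences among target, hand, and eye positions."""
    return (target - hand, target - eye, hand - eye)
```

Note that the PMd-style code changes when the hand moves even if the target stays fixed on the retina, whereas the MIP-style code does not, which is the behavioral signature the recordings distinguish.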

13.
Neurons in posterior parietal cortex of the awake, trained monkey respond to passive visual and/or somatosensory stimuli. In general, the receptive fields of these cells are large and nonspecific. When these neurons are studied during visually guided hand movements and eye movements, most of their activity can be accounted for by passive sensory stimulation. However, for some visual cells, the response to a stimulus is enhanced when it is to be the target for a saccadic eye movement. This enhancement is selective for eye movements into the visual receptive field since it does not occur with eye movements to other parts of the visual field. Cells that discharge in association with a visual fixation task have foveal receptive fields and respond to the spots of light used as fixation targets. Cells discharging selectively in association with different directions of tracking eye movements have directionally selective responses to moving visual stimuli. Every cell in our sample discharging in association with movement could be driven by passive sensory stimuli. We conclude that the activity of neurons in posterior parietal cortex is dependent on and indicative of external stimuli but not predictive of movement.

14.
The introduction of non-target objects into a workspace leads to temporal and spatial adjustments of reaching trajectories towards a target. If the non-target is obstructing the path of the hand towards the target, the reach is adjusted such that collision with the non-target, or obstacle, is avoided. Little is known about how features irrelevant to the execution of the movement, such as color similarity between target and non-target objects, influence avoidance movements. Eye movement studies have revealed that the similarity of non-targets influences oculomotor competition. Because of the tight neural and behavioral coupling between the gaze and reaching systems, our aim was to determine the contribution of similarity between target and non-target to avoidance movements. We performed two experiments in which participants had to reach to grasp a target object while a non-target was present in the workspace. These non-targets could be either similar or dissimilar in color to the target. The results indicate that the non-spatial feature of similarity can further modify the avoidance response and therefore the spatial path of the reach. Indeed, we find that dissimilar pairs have a stronger effect on reaching-to-grasp movements than similar pairs. This effect was most pronounced when the non-target was on the outside of the reaching hand, where it served as more of an obstacle to the trailing arm. We propose that the increased capture of attention by the dissimilar obstacle is responsible for the more robust avoidance response.

15.
We investigated how visual attentional resources are allocated during reaching movements. In particular, this study examined whether or not the direction of the reaching movement affected visual attention resource allocation. Participants held a stylus pen to reach their hand toward a target stimulus on a graphics tablet as quickly and accurately as possible. The direction of the hand movement was either from near to far space or the reverse. They observed visual stimuli and a cursor, which represented the hand position, on a perpendicularly positioned display, instead of directly seeing their hand movements. Regardless of the movement direction, the participants responded more quickly to target stimuli located far from the start position than to those located near it. These results led us to conclude that attentional resources were preferentially allocated to the areas far from the start position of reaching movements. These findings not only provide important information for basic research on attention, but may also contribute to a decrease of human errors in manipulation tasks performed with visual feedback on visual display terminals.

16.
Our ability to interact with the environment hinges on creating a stable visual world despite the continuous changes in retinal input. To achieve visual stability, the brain must distinguish the retinal image shifts caused by eye movements from shifts due to movements of the visual scene. This process appears not to be flawless: during saccades, we often fail to detect whether visual objects remain stable or move, which is called saccadic suppression of displacement (SSD). How does the brain evaluate the memorized information of the presaccadic scene and the actual visual feedback of the postsaccadic visual scene in the computations for visual stability? Using an SSD task, we test how participants localize the presaccadic position of the fixation target, the saccade target or a peripheral non-foveated target that was displaced parallel or orthogonal to the saccade direction during a horizontal saccade, and subsequently viewed for three different durations. Results showed different localization errors for the three targets, depending on the viewing time of the postsaccadic stimulus and its spatial separation from the presaccadic location. We modeled the data through a Bayesian causal inference mechanism, in which at the trial level an optimal mixing of two possible strategies, integration vs. separation of the presaccadic memory and the postsaccadic sensory signals, is applied. Fits of this model generally outperformed other plausible decision strategies for producing SSD. Our findings suggest that humans exploit a Bayesian inference process with two causal structures to mediate visual stability.
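A minimal one-dimensional sketch of such a Bayesian causal inference mixture follows; the parameter values, the uniform displacement prior, and all names are our own illustrative assumptions, not the fitted model from the study:

```python
import math

def gauss_pdf(x, mu, sigma):
    """Normal probability density."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

def localize_presaccadic(pre_memory, post_obs, sigma_mem, sigma_vis,
                         p_stable=0.5, displacement_range=40.0):
    """Bayesian causal inference sketch for SSD (1D).
    'Stable' (one cause): memory and postsaccadic vision are
    integrated, weighted by reliability.  'Displaced' (two causes):
    the presaccadic position is estimated from memory alone.  The
    final estimate averages the two strategies by the posterior
    probability that the target stayed put."""
    # Likelihood of the observed pre/post discrepancy under stability
    # (convolution of the two noise sources).
    s = math.sqrt(sigma_mem ** 2 + sigma_vis ** 2)
    like_stable = gauss_pdf(post_obs - pre_memory, 0.0, s)
    # Under displacement, model the shift as uniform over a plausible range.
    like_moved = 1.0 / displacement_range
    post_p = (p_stable * like_stable) / (p_stable * like_stable
                                         + (1.0 - p_stable) * like_moved)
    # Reliability-weighted integration (common-cause estimate).
    w = sigma_vis ** 2 / (sigma_mem ** 2 + sigma_vis ** 2)  # weight on memory
    integrated = w * pre_memory + (1.0 - w) * post_obs
    # Model averaging over the two causal structures.
    return post_p * integrated + (1.0 - post_p) * pre_memory
```

Small displacements yield a high stability posterior, so the localization is pulled toward the postsaccadic stimulus (producing SSD), while large displacements are attributed to target motion and the memory estimate dominates.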

17.
Interacting in the peripersonal space requires coordinated arm and eye movements to visual targets in depth. In primates, the medial posterior parietal cortex (PPC) represents a crucial node in the process of visual-to-motor signal transformations. The medial PPC area V6A is a key region engaged in the control of these processes because it jointly processes visual information, eye position and arm movement related signals. However, to date, there is no evidence in the medial PPC of spatial encoding in three dimensions. Here, using single neuron recordings in behaving macaques, we studied the neural signals related to binocular eye position in a task that required the monkeys to perform saccades and fixate targets at different locations in peripersonal and extrapersonal space. A significant proportion of neurons were modulated by both gaze direction and depth, i.e., by the location of the foveated target in 3D space. The population activity of these neurons displayed a strong preference for peripersonal space in a time interval around the saccade that preceded fixation and during fixation as well. This preference for targets within reaching distance during both target capturing and fixation suggests that binocular eye position signals are implemented functionally in V6A to support its role in reaching and grasping.

18.
How the brain maintains an accurate and stable representation of visual target locations despite the occurrence of saccadic gaze shifts is a classical problem in oculomotor research. Here we test and dissociate the predictions of different conceptual models for head-unrestrained gaze-localization behavior of macaque monkeys. We adopted the double-step paradigm with rapid eye-head gaze shifts to measure localization accuracy in response to flashed visual stimuli in darkness. We presented the second target flash either before (static), or during (dynamic) the first gaze displacement. In the dynamic case the brief visual flash induced a small retinal streak of up to about 20 deg at an unpredictable moment and retinal location during the eye-head gaze shift, which provides serious challenges for the gaze-control system. However, for both stimulus conditions, monkeys localized the flashed targets with accurate gaze shifts, which rules out several models of visuomotor control. First, these findings exclude the possibility that gaze-shift programming relies on retinal inputs only. Instead, they support the notion that accurate eye-head motor feedback updates the gaze-saccade coordinates. Second, in dynamic trials the visuomotor system cannot rely on the coordinates of the planned first eye-head saccade either, which rules out remapping on the basis of a predictive corollary gaze-displacement signal. Finally, because gaze-related head movements were also goal-directed, requiring continuous access to eye-in-head position, we propose that our results best support a dynamic feedback scheme for spatial updating in which visuomotor control incorporates accurate signals about instantaneous eye- and head positions rather than relative eye- and head displacements.

19.
Recent studies provide evidence for task-specific influences on saccadic eye movements. For instance, saccades exhibit higher peak velocity when the task requires coordinating eye and hand movements. The current study shows that the need to process task-relevant visual information at the saccade endpoint can be, in itself, sufficient to cause such effects. In this study, participants performed a visual discrimination task which required a saccade for successful completion. We compared the characteristics of these task-related saccades to those of classical target-elicited saccades, which required participants to fixate a visual target without performing a discrimination task. The results show that task-related saccades are faster and initiated earlier than target-elicited saccades. Differences between both saccade types are also noted in their saccade reaction time distributions and their main sequences, i.e., the relationship between saccade velocity, duration, and amplitude.
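The main sequence mentioned above is commonly modeled with a saturating exponential for peak velocity and an approximately linear law for duration; the parameter values below are typical illustrative numbers, not values from this study:

```python
import math

def peak_velocity(amplitude_deg, v_max=500.0, angle_const=14.0):
    """Main-sequence peak velocity (deg/s), saturating exponential
    form: V = v_max * (1 - exp(-A / angle_const))."""
    return v_max * (1.0 - math.exp(-amplitude_deg / angle_const))

def duration_ms(amplitude_deg, intercept=21.0, slope=2.2):
    """Main-sequence duration (ms), roughly linear in amplitude:
    D = intercept + slope * A."""
    return intercept + slope * amplitude_deg
```

A task-specific velocity increase, as reported here, would appear as a shift of the fitted v_max rather than a departure from the main-sequence form itself.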

20.
The aim of this study was to clarify the nature of visual processing deficits caused by cerebellar disorders. We studied the performance of two types of visual search (top-down visual scanning and bottom-up visual scanning) in 18 patients with pure cerebellar types of spinocerebellar degeneration (SCA6: 11; SCA31: 7). The gaze fixation position was recorded with an eye-tracking device while the subjects performed two visual search tasks in which they looked for a target Landolt figure among distractors. In the serial search task, the target was similar to the distractors and the subject had to search for the target by processing each item with top-down visual scanning. In the pop-out search task, the target and distractor were clearly discernible and the visual salience of the target allowed the subjects to detect it by bottom-up visual scanning. The saliency maps clearly showed that the serial search task required top-down visual attention and the pop-out search task required bottom-up visual attention. In the serial search task, the search time to detect the target was significantly longer in SCA patients than in normal subjects, whereas the search time in the pop-out search task was comparable between the two groups. These findings suggested that SCA patients cannot efficiently scan a target using a top-down attentional process, whereas scanning with a bottom-up attentional process is not affected. In the serial search task, the amplitude of saccades was significantly smaller in SCA patients than in normal subjects. The variability of saccade amplitude (saccadic dysmetria), number of re-fixations, and unstable fixation (nystagmus) were larger in SCA patients than in normal subjects, accounting for a substantial proportion of scattered fixations around the items. Saccadic dysmetria, re-fixation, and nystagmus may play important roles in the impaired top-down visual scanning in SCA, hampering precise visual processing of individual items.
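The serial vs. pop-out contrast has a standard quantitative signature: search time grows with set size in serial search but stays roughly flat in pop-out search. A toy sketch, with illustrative slope and intercept values of our own choosing:

```python
def search_time_s(n_items, mode, base=0.5, per_item=0.25):
    """Predicted search time (s): serial (top-down) scanning inspects
    items one by one, so time grows with set size; pop-out (bottom-up)
    search detects the salient target regardless of set size."""
    if mode == "serial":
        return base + per_item * n_items
    if mode == "pop-out":
        return base
    raise ValueError("mode must be 'serial' or 'pop-out'")
```

On this view, the SCA patients' deficit corresponds to an inflated per-item cost in the serial regime, while the flat pop-out function is preserved.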

