Similar Articles
20 similar articles found.
1.
Perception–action coupling model for human locomotor pointing
How do humans achieve the precise positioning of the feet during walking, for example, to reach the first step of a stairway? We addressed this question at the visuomotor integration level. Based on the optical specification of the required adaptation, a dynamical system model of the visuomotor control of human locomotor pointing was devised for the positioning of a foot on a visible target on the floor during walking. Visuomotor integration consists of directly linking optical information to a motor command that specifically modulates step length in accordance with the ongoing dynamics of locomotor pattern generation. The adaptation of locomotion emerges from a perception-action coupling type of control based on temporal information rather than on feedforward planning of movements. The proposed model reproduces experimental results obtained for human locomotor pointing.
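
The control principle lends itself to a toy simulation. The sketch below is our illustration, not the paper's model: it assumes step length is a blend of the nominal step and the step that would divide the remaining distance evenly, with a correction weight that grows as the target draws near (the late-correction, temporal-information signature of perception-action coupling). All numbers are invented.

```python
# Toy sketch (our assumptions, not the paper's equations) of step-length
# modulation: corrections are small early and absorb the residual error late.
nominal_step = 0.7        # m, unperturbed step length (invented)
target_distance = 5.2     # m, foot-to-target distance at trial start (invented)
total_steps = 8

position = 0.0
for n in range(total_steps, 0, -1):          # n = steps still to go
    remaining = target_distance - position
    required_step = remaining / n            # step that evens out the error
    w = 1.0 - (n - 1) / total_steps          # correction weight: ~0 early, 1 late
    step = (1 - w) * nominal_step + w * required_step
    position += step
    print(f"step {total_steps - n + 1}: length = {step:.3f} m")

print(f"foot-to-target error: {target_distance - position:.3f} m")  # ~0
```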

2.
Numerous studies have investigated the effects of alcohol consumption on controlled and automatic cognitive processes. Such studies have shown that alcohol impairs performance on tasks requiring conscious, intentional control, while leaving automatic performance relatively intact. Here, we sought to extend these findings to aspects of visuomotor control by investigating the effects of alcohol in a visuomotor pointing paradigm that allowed us to separate the influence of controlled and automatic processes. Six male participants were assigned to an experimental “correction” condition in which they were instructed to point at a visual target as quickly and accurately as possible. On a small percentage of trials, the target “jumped” to a new location. On these trials, the participants’ task was to amend their movement such that they pointed to the new target location. A second group of six participants was assigned to a “countermanding” condition, in which they were instructed to terminate their movements upon detection of target “jumps”. In both the correction and countermanding conditions, participants served as their own controls, taking part in alcohol and no-alcohol conditions on separate days. Alcohol had no effect on participants’ ability to correct movements “in flight”, but impaired the ability to withhold such automatic corrections. Our data support the notion that alcohol selectively impairs controlled processes in the visuomotor domain.

3.
Sensorimotor learning has been shown to depend on both prior expectations and sensory evidence in a way that is consistent with Bayesian integration. Thus, prior beliefs play a key role during the learning process, especially when only ambiguous sensory information is available. Here we develop a novel technique to estimate the covariance structure of the prior over visuomotor transformations (the mapping between the actual and visual location of the hand) during a learning task. Subjects performed reaching movements under multiple visuomotor transformations in which they received visual feedback of their hand position only at the end of the movement. After experiencing a particular transformation for one reach, subjects have insufficient information to determine the exact transformation, and so their second reach reflects a combination of their prior over visuomotor transformations and the sensory evidence from the first reach. We developed a Bayesian observer model in order to infer the covariance structure of the subjects' prior, which was found to give high probability to parameter settings consistent with visuomotor rotations. Therefore, although the set of visuomotor transformations experienced had little structure, the subjects had a strong tendency to interpret ambiguous sensory evidence as arising from rotation-like transformations. We then exposed the same subjects to a highly structured set of visuomotor transformations, designed to be very different from the set of visuomotor rotations. During this exposure the prior was found to have changed significantly, to a covariance structure that no longer favored rotation-like transformations. In summary, we have developed a technique which can estimate the full covariance structure of a prior in a sensorimotor task and have shown that the prior over visuomotor transformations favors a rotation-like structure. Moreover, through experience of a novel task structure, participants can appropriately alter the covariance structure of their prior.
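
The core inference in such an observer model can be sketched as a linear-Gaussian update. The 2-D parameterisation (a rotation-like and a stretch-like component) and every covariance value below are illustrative assumptions, not the paper's fitted prior.

```python
import numpy as np

# Prior over visuomotor transformation parameters t (illustrative 2-D
# parameterisation: t[0] ~ rotation-like component, t[1] ~ stretch-like).
mu_prior = np.zeros(2)
Sigma_prior = np.array([[4.0, 0.0],     # large variance on rotation-like axis
                        [0.0, 0.5]])    # small variance on stretch-like axis

# Sensory evidence from the first reach: a noisy observation of t.
Sigma_noise = np.eye(2) * 1.0
t_observed = np.array([1.2, 1.2])       # ambiguous: could be either component

# Linear-Gaussian update: posterior precision is the sum of prior and
# likelihood precisions; the posterior mean is their precision-weighted mix.
P_prior = np.linalg.inv(Sigma_prior)
P_noise = np.linalg.inv(Sigma_noise)
Sigma_post = np.linalg.inv(P_prior + P_noise)
mu_post = Sigma_post @ (P_prior @ mu_prior + P_noise @ t_observed)

print(mu_post)   # ~[0.96, 0.40]: the ambiguous evidence is read as rotation-like
```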

4.
How the brain constructs a coherent representation of the environment from noisy visual input remains poorly understood. Here, we explored whether awareness of the stimulus plays a role in the integration of local features into a representation of global shape. Participants were primed with a shape defined either by position or orientation cues, and performed a shape-discrimination task on a subsequently presented probe shape. Crucially, the probe could either be defined by the same or different cues as the prime, which allowed us to distinguish the effect of priming by local features and global shape. We found a robust priming benefit for visible primes, with response times being faster when the probe and prime were the same shape, regardless of the defining cue. However, rendering the prime invisible uncovered a dissociation: position-defined primes produced behavioural benefit only for probes of the same cue type. Surprisingly, orientation-defined primes afforded an enhancement only for probes of the opposite cue. In further experiments, we showed that the effect of priming was confined to retinotopic coordinates and that there was no priming effect by invisible orientation cues in an orientation-discrimination task. This explains the absence of priming by the same cue in our shape-discrimination task. In summary, our findings show that while in the absence of awareness orientation signals can recruit retinotopic circuits (e.g. intrinsic lateral connections), conscious processing is necessary to interpret local features as global shape.

5.
Preparing a goal-directed movement often requires a detailed analysis of our environment. When picking up an object, its orientation, size and relative distance are relevant parameters when preparing a successful grasp. It would therefore be beneficial if the motor system were able to influence early perception such that the information processing needs of action control are met at the earliest possible stage. However, only a few studies have reported (indirect) evidence for action-induced improvements in visual perception. We therefore aimed to provide direct evidence for a feature-specific perceptual modulation during the planning phase of a grasping action. Human subjects were instructed to either grasp or point to a bar while simultaneously performing an orientation discrimination task. The bar could slightly change its orientation during grasping preparation. By analyzing discrimination response probabilities, we found increased perceptual sensitivity to orientation changes when subjects were instructed to grasp the bar, rather than point to it. As a control, the same experiment was repeated using bar luminance changes, a feature that is not relevant for either grasping or pointing. Here, no differences in visual sensitivity between grasping and pointing were found. The present results constitute the first direct evidence for increased perceptual sensitivity to a visual feature that is relevant for a certain skeletomotor act during the movement preparation phase. We speculate that such action-induced perception improvements are controlled by neuronal feedback mechanisms from cortical motor planning areas to early visual cortex, similar to what was recently established for spatial perception improvements shortly before eye movements.

6.
Previous studies show that the congruency sequence effect can result from both the conflict adaptation effect (CAE) and the feature integration effect, which can be observed as the repetition priming effect (RPE) and the feature overlap effect (FOE) depending on the experimental conditions. Evidence from neuroimaging studies suggests a close correlation between the neural mechanisms of alertness-related modulations and the congruency sequence effect. However, little is known about whether and how alertness mediates the congruency sequence effect. In Experiment 1, the Attentional Networks Test (ANT) and a modified flanker task were used to evaluate whether the alertness of the attentional functions correlated with the CAE and the RPE. In Experiment 2, the ANT and another modified flanker task were used to investigate whether alertness correlates with the CAE and the FOE. In Experiment 1, correlational analysis revealed a significant positive correlation between alertness and the CAE, and a negative correlation between alertness and the RPE. Moreover, a significant negative correlation existed between the CAE and the RPE. In Experiment 2, we found a marginally significant negative correlation between the CAE and the RPE, but the correlations between alertness and the FOE, and between the CAE and the FOE, were not significant. These results suggest that alertness can modulate conflict adaptation and feature integration in opposite ways. Participants in the high-alertness group may tend to use a top-down cognitive processing strategy, whereas participants in the low-alertness group tend to use a bottom-up processing strategy.

7.
The posterior medial frontal cortex (pMFC) is thought to play a pivotal role in enabling the control of attention during periods of distraction. In line with this view, pMFC activity is ubiquitously greater in incongruent trials of response-interference (e.g., Stroop) tasks than in congruent trials. Nonetheless, the process underlying this congruency effect remains highly controversial. We therefore sought to distinguish between two competing accounts of the congruency effect. The conflict monitoring account posits that the effect indexes a process that detects conflict between competing response alternatives, which is indexed by trial-specific reaction time (RT). The time on task account posits that the effect indexes a process whose recruitment increases with time on task independent of response conflict (e.g., sustained attention, arousal, effort, etc.). To distinguish between these accounts, we used functional MRI to record brain activity in twenty-four healthy adults while they performed two tasks: a response-interference task and a simple RT task with only one possible response. We reasoned that demands on a process that detects response conflict should increase with RT in the response-interference task but not in the simple RT task. In contrast, demands on a process whose recruitment increases with time on task independent of response conflict should increase with RT in both tasks. Trial-by-trial analyses revealed that pMFC activity increased with RT in both tasks. Moreover, pMFC activity increased with RT in the simple RT task enough to fully account for the congruency effect in the response-interference task. These findings appear more consistent with the time on task account of the congruency effect than with the conflict monitoring account.
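
The logic of that comparison can be made concrete with synthetic data: fit the activity-RT slope in the simple RT task, then ask whether that slope alone reproduces the congruency effect in the interference task. Everything below (the slope, RT distributions, and noise) is invented for illustration; it is not the paper's analysis pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)
slope = 0.004                      # activity units per ms (assumption)
rt_simple = rng.normal(350, 40, 200)
rt_congruent = rng.normal(450, 50, 200)
rt_incongruent = rng.normal(520, 60, 200)

def activity(rt):
    """Synthetic pMFC activity: pure time-on-task (depends only on RT)."""
    return slope * rt + rng.normal(0, 0.1, rt.shape)

# Fit the activity-RT slope in the simple RT task only.
b, a = np.polyfit(rt_simple, activity(rt_simple), 1)

# Under the time-on-task account, the simple-task slope should fully
# predict the congruency effect in the interference task.
predicted = b * (rt_incongruent.mean() - rt_congruent.mean())
observed = activity(rt_incongruent).mean() - activity(rt_congruent).mean()
print(f"predicted {predicted:.3f}, observed {observed:.3f}")  # ~equal
```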

8.
In order to determine precisely the location of a tactile stimulus presented to the hand, it is necessary to know not only which part of the body has been stimulated, but also where that part of the body lies in space. This involves the multisensory integration of visual, tactile, proprioceptive, and even auditory cues regarding limb position. In recent years, researchers have become increasingly interested in the question of how these various sensory cues are weighted and integrated to enable people to localize tactile stimuli, as well as to give rise to the 'felt' position of our limbs, and ultimately the multisensory representation of 3-D peripersonal space. We highlight recent research on this topic using the crossmodal congruency task, in which participants make speeded elevation discrimination responses to vibrotactile targets presented to the thumb or index finger, while simultaneously trying to ignore irrelevant visual distractors presented from either the same (i.e., congruent) or a different (i.e., incongruent) elevation. Crossmodal congruency effects (calculated as the difference in performance between incongruent and congruent trials) are greatest when visual and vibrotactile stimuli are presented from the same azimuthal location, thus providing an index of common position across different sensory modalities. The crossmodal congruency task has been used to investigate a number of questions related to the representation of space in both normal participants and brain-damaged patients. In this review, we detail the major findings from this research, and highlight areas of convergence with other cognitive neuroscience disciplines.
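
The congruency index itself is simple arithmetic: mean incongruent RT minus mean congruent RT. A toy computation with invented values:

```python
# Toy computation of the crossmodal congruency effect (CCE).
# RTs in ms; trial values are invented for illustration.
trials = [
    {"congruent": True,  "rt": 510},
    {"congruent": True,  "rt": 495},
    {"congruent": False, "rt": 590},
    {"congruent": False, "rt": 610},
]

def mean_rt(trials, congruent):
    rts = [t["rt"] for t in trials if t["congruent"] == congruent]
    return sum(rts) / len(rts)

# CCE = incongruent - congruent; larger values indicate stronger
# interference from the visual distractor.
cce = mean_rt(trials, False) - mean_rt(trials, True)
print(f"CCE = {cce:.1f} ms")   # 600.0 - 502.5 = 97.5 ms
```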

9.
Numerous studies have addressed the issue of where people look when they perform hand movements. Yet very little is known about how visuomotor performance is affected by fixation location. Previous studies investigating the accuracy of actions performed in visual periphery have revealed inconsistent results. While movements performed under full visual feedback (closed-loop) seem to remain surprisingly accurate, open-loop as well as memory-guided movements usually show a distinct bias (i.e. overestimation of target eccentricity) when executed in periphery. In this study, we aimed to investigate whether gaze position affects movements that are performed under full vision but cannot be corrected based on a direct comparison between the hand and target position. To do so, we employed a classical visuomotor reaching task in which participants were required to move their hand through a gap between two obstacles into a target area. Participants performed the task in four gaze conditions: free viewing (no restrictions on gaze), central fixation, or fixation on one of the two obstacles. Our findings show that obstacle avoidance behaviour is moderated by fixation position. Specifically, participants tended to select movement paths that veered away from the fixated obstacle, indicating that perceptual errors persist in closed-loop vision conditions if they cannot be corrected effectively based on visual feedback. Moreover, by measuring eye movements in a free-viewing task (Experiment 2), we confirmed that participants naturally prefer to move their eyes and hand to the same spatial location.

10.
To identify subdivisions of the human parietal cortex, we collected fMRI data while ten subjects performed six tasks: grasping, pointing, saccades, attention, calculation, and phoneme detection. Examination of task intersections revealed a systematic anterior-to-posterior organization of activations associated with grasping only, grasping and pointing, all visuomotor tasks, attention and saccades, and saccades only. Calculation yielded two distinct activations: one unique to calculation in the bilateral anterior IPS mesial to the supramarginal gyrus and the other shared with phoneme detection in the left IPS mesial to the angular gyrus. These results suggest human homologs of the monkey areas AIP, MIP, V6A, and LIP and imply a large cortical expansion of the inferior parietal lobule correlated with the development of human language and calculation abilities.

11.
It has been argued that visual perception and the visual control of action depend upon functionally distinct and anatomically separable brain systems. Electrophysiological evidence indicates that binocular vision may be particularly important for the visuomotor processing within the posterior parietal cortex, and neuropsychological and psychophysical studies confirm that binocular vision is crucial for the accurate planning and control of prehension movements. An unresolved issue concerns the consequences for visuomotor processing of removing binocular vision. By one account, monocular viewing leads to reliance upon pictorial visual cues to calibrate grasping and results in disruption to normal size-constancy mechanisms. This proposal is based on the finding that maximum grip apertures are reduced with monocular vision. By a second account, monocular viewing results in the loss of binocular visual cues and leads to strategic changes in visuomotor processing by way of altered safety margins. This proposal is based on the finding that maximum grip apertures are increased with monocular vision. We measured both grip aperture and grip force during prehension movements executed with binocular and monocular viewing. We demonstrate that each of the above accounts may be correct and can be observed within the same task. Specifically, we show that, while grip apertures increase with monocular vision, consistent with altered visuomotor safety margins, maximum grip force is nevertheless reduced, consistent with a misperception of object size. These results are related to differences in visual processing required for calibrating grip aperture and grip force during reaching.

12.
When a visual stimulus is continuously moved behind a small stationary window, the window appears displaced in the direction of motion of the stimulus. In this study we showed that the magnitude of this illusion depends on (i) whether a perceptual or visuomotor task is used for judging the location of the window, (ii) the directional signature of the stimulus, and (iii) whether or not there is a significant delay between the end of the visual presentation and the initiation of the localization measure. Our stimulus was a drifting sinusoidal grating windowed in space by a stationary, two-dimensional Gaussian envelope (σ = 1 cycle of the sinusoid). Localization measures were made following either a short (200 ms) or long (4.2 s) post-stimulus delay. The visuomotor localization error was up to three times greater than the perceptual error for a short delay. However, the visuomotor and perceptual localization measures were similar for a long delay. Our results provide evidence in support of the hypothesis that separate cortical pathways exist for visual perception and visually guided action, and that delayed actions rely on stored perceptual information.
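
The stimulus is a standard Gabor patch: a drifting sinusoidal carrier under a stationary Gaussian window. A construction sketch follows; the abstract fixes only σ = 1 cycle of the sinusoid, so the spatial frequency, grid, and drift speed below are our assumptions.

```python
import numpy as np

f = 1.0                      # carrier spatial frequency, cycles/deg (assumption)
sigma = 1.0 / f              # sigma = 1 cycle of the sinusoid (from the abstract)
speed = 2.0                  # carrier drift, deg/s (assumption)

x = np.linspace(-3, 3, 256)  # degrees of visual angle (assumption)
X, Y = np.meshgrid(x, x)
envelope = np.exp(-(X**2 + Y**2) / (2 * sigma**2))  # stationary window

def frame(t):
    """Stimulus at time t: only the carrier moves; the window stays put."""
    carrier = np.sin(2 * np.pi * f * (X - speed * t))
    return envelope * carrier

# The illusion: with the carrier drifting, the stationary window's
# apparent position shifts in the drift direction.
stim = frame(0.2)
```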

13.
Brain regions involved with processing dynamic visuomotor representational transformation are investigated using fMRI. The perceptual-motor task involved flying (or observing) a plane through a simulated Red Bull Air Race course in first person and third person chase perspective. The third person perspective is akin to remote operation of a vehicle. The ability of humans to remotely operate vehicles likely has its roots in neural processes related to imitation, in which visuomotor transformation is necessary to interpret the action goals in an egocentric manner suitable for execution. In this experiment, for the 3rd person perspective, the visuomotor transformation changes dynamically in accordance with the orientation of the plane. It was predicted that 3rd person remote flying, compared with 1st person, would utilize brain regions composing the 'Mirror Neuron' system, which is thought to be intimately involved with imitation in both execution and observation tasks. Consistent with this prediction, differential brain activity was present for 3rd person over 1st person perspectives for both execution and observation tasks in left ventral premotor cortex, right dorsal premotor cortex, and inferior parietal lobule bilaterally (Mirror Neuron System) (behaviorally: 1st > 3rd). These regions additionally showed greater activity for flying (execution) over watching (observation) conditions. Even though visual and motor aspects of the tasks were controlled for, differential activity was also found in brain regions involved with tool use, motion perception, and body perspective, including left cerebellum, temporo-occipital regions, lateral occipital cortex, medial temporal region, and extrastriate body area. This experiment successfully demonstrates that a complex perceptual-motor real-world task can be utilized to investigate visuomotor processing. This approach (Aviation Cerebral Experimental Sciences, ACES), which focuses on direct application to lab and field settings, contrasts with standard methodology in which tasks and conditions are reduced to their simplest forms, remote from daily-life experience.

14.
Humans can learn and store multiple visuomotor mappings (dual-adaptation) when feedback for each is provided alternately. Moreover, learned context cues associated with each mapping can be used to switch between the stored mappings. However, little is known about the associative learning between cue and required visuomotor mapping, and how learning generalises to novel but similar conditions. To investigate these questions, participants performed a rapid target-pointing task while we manipulated the offset between visual feedback and movement end-points. The visual feedback was presented with horizontal offsets of different amounts, depending on the target's shape. Participants thus needed to use different visuomotor mappings between target location and required motor response, depending on the target's shape, in order to “hit” it. The target shapes were taken from a continuous set of shapes, morphed between spiky and circular. After training, we tested participants' performance, without feedback, on different target shapes that had not been learned previously. We compared two hypotheses. First, we hypothesised that participants could (explicitly) extract the linear relationship between target shape and visuomotor mapping and generalise accordingly. Second, building on previous findings of visuomotor learning, we developed an (implicit) Bayesian learning model that predicts generalisation more consistent with categorisation (i.e. use one mapping or the other). The experimental results show that, although learning the associations requires explicit awareness of the cues' role, participants apply the mapping corresponding to the trained shape that is most similar to the current one, consistent with the Bayesian learning model. Furthermore, the Bayesian learning model predicts that learning should slow down with increased numbers of training pairs, which was confirmed by the present results. In short, we found a good correspondence between the Bayesian learning model and the empirical results, indicating that this model offers a possible mechanism for simultaneously learning multiple visuomotor mappings.
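
The categorisation-like generalisation the model predicts can be sketched as soft cue-to-mapping assignment followed by a MAP choice: a novel shape inherits the mapping of the trained cue it most resembles, rather than a linearly interpolated one. The Gaussian cue likelihood, the morph scale, and the offsets below are illustrative assumptions.

```python
import numpy as np

# Trained cue-mapping pairs: shape morph level in [0, 1]
# (0 = spiky, 1 = circular) -> horizontal feedback offset (cm). Invented values.
trained = {0.2: -3.0, 0.8: +3.0}

def select_mapping(shape, cue_sd=0.15):
    """Posterior responsibility of each trained mapping for a novel shape,
    assuming a Gaussian likelihood over morph level and a flat prior."""
    cues = np.array(list(trained.keys()))
    offsets = np.array(list(trained.values()))
    logp = -0.5 * ((shape - cues) / cue_sd) ** 2
    p = np.exp(logp - logp.max())
    p /= p.sum()
    # Categorisation-like rule: apply the MAP mapping, not a linear
    # interpolation between the two offsets.
    return offsets[np.argmax(p)], p

offset, p = select_mapping(0.45)   # novel shape, never trained
print(offset, p)                   # picks the mapping of the nearer cue (-3.0)
```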

15.
Existing visual search research has demonstrated that the receipt of reward benefits subsequent perceptual and attentional processing of features that have characterized targets, but is detrimental for the processing of features that have characterized irrelevant distractors. Here we report a similar effect of reward on location. Observers completed a visual search task in which they selected a target, ignored a salient distractor, and received random-magnitude reward for correct performance. Results show that when target selection garnered a rewarding outcome, attention is subsequently (a) primed to return to the target location, and (b) biased away from the location that was occupied by the salient, task-irrelevant distractor. These results suggest that in addition to priming features, reward acts to guide visual search by priming the contextual locations of visual stimuli.

16.
The current study examined the time course of implicit processing of distinct facial features and the associated event-related potential (ERP) components. To this end, we used a masked priming paradigm to investigate implicit processing of the eyes and mouth in upright and inverted faces, using a prime duration of 33 ms. Two types of prime-target pairs were used: 1. congruent (e.g., open eyes only in both prime and target, or open mouth only in both prime and target); 2. incongruent (e.g., open mouth only in prime and open eyes only in target, or vice versa). The identity of the faces changed between prime and target. Participants pressed one button when the target face had the eyes open and another button when the target face had the mouth open. The behavioral results showed faster RTs for the eyes in upright faces than for the eyes in inverted faces or the mouth in upright and inverted faces. Moreover, they revealed a congruent priming effect for the mouth in upright faces. The ERP findings showed a face orientation effect across all ERP components studied (P1, N1, N170, P2, N2, P3) starting at about 80 ms, and a congruency/priming effect on late components (P2, N2, P3) starting at about 150 ms. Crucially, the results showed that the orientation effect was driven by the eye region (N170, P2) and that the congruency effect started earlier (P2) for the eyes than for the mouth (N2). These findings mark the time course of the processing of internal facial features and provide further evidence that the eyes are automatically processed and that they are very salient facial features that strongly affect the amplitude, latency, and distribution of neural responses to faces.

17.
Visuomotor interference occurs when the execution of an action is facilitated by the concurrent observation of the same action and hindered by the concurrent observation of a different action. There is evidence that visuomotor interference can be modulated top-down by higher cognitive functions, depending on whether own performed actions or observed actions are selectively attended. Here, we studied whether these effects of cognitive context on visuomotor interference are also dependent on the point-of-view of the observed action. We employed a delayed go/no-go task known to induce visuomotor interference. Static images of hand gestures in either egocentric or allocentric perspective were presented as “go” stimuli after participants were pre-cued to prepare either a matching (congruent) or non-matching (incongruent) action. Participants performed this task in two different cognitive contexts: In one, they focused on the visual image of the hand gesture shown as the go stimulus (image context), whereas in the other they focused on the hand gesture they performed (action context). We analyzed reaction times to initiate the prepared action upon presentation of the gesture image and found evidence of visuomotor interference in both contexts and for both perspectives. Strikingly, results show that the effect of cognitive context on visuomotor interference also depends on the perspective of observed actions. When focusing on own-actions, visuomotor interference was significantly less for gesture images in allocentric perspective than in egocentric perspective; when focusing on observed actions, visuomotor interference was present regardless of the perspective of the gesture image. Overall these data suggest that visuomotor interference may be modulated by higher cognitive processes, so that when we are specifically attending to our own actions, images depicting others’ actions (allocentric perspective) have much less interference on our own actions.

18.
An extensive neuroimaging literature has helped characterize the brain regions involved in navigating a spatial environment. Far less is known, however, about the brain networks involved when learning a spatial layout from a cartographic map. To compare the two means of acquiring a spatial representation, participants learned spatial environments either by directly navigating them or learning them from an aerial-view map. While undergoing functional magnetic resonance imaging (fMRI), participants then performed two different tasks to assess knowledge of the spatial environment: a scene and orientation dependent perceptual (SOP) pointing task and a judgment of relative direction (JRD) of landmarks pointing task. We found three brain regions showing significant effects of route vs. map learning during the two tasks. Parahippocampal and retrosplenial cortex showed greater activation following route compared to map learning during the JRD but not SOP task while inferior frontal gyrus showed greater activation following map compared to route learning during the SOP but not JRD task. We interpret our results to suggest that parahippocampal and retrosplenial cortex were involved in translating scene and orientation dependent coordinate information acquired during route learning to a landmark-referenced representation while inferior frontal gyrus played a role in converting primarily landmark-referenced coordinates acquired during map learning to a scene and orientation dependent coordinate system. Together, our results provide novel insight into the different brain networks underlying spatial representations formed during navigation vs. cartographic map learning and provide additional constraints on theoretical models of the neural basis of human spatial representation.

19.
In baboons trained to perform visuomotor pointing movements, unilateral electrolytic lesions are performed in the dentate nucleus. The consequences of these lesions on the following movement parameters are then studied: temporal parameters (reaction time and movement time) and spatial parameters (pointing area and directional errors). From the observations taken 3 months after the dentate nucleus exclusion, a distinction can be made between parameters showing recovery phenomena (reaction time and pointing area) and components definitively affected (movement time and directional error).

20.

Background

Several psychophysical experiments have found evidence for the involvement of gaze-centered and/or body-centered coordinates in arm-movement planning and execution. Here we aimed to investigate the frames of reference involved in the visuomotor transformations for reaching towards visual targets in space, taking target eccentricity and the performing hand into account.

Methodology/Principal Findings

We examined several performance measures while subjects reached, in complete darkness, towards memorized targets situated at different locations relative to the gaze and/or to the body, thus distinguishing between an eye-centered and a body-centered frame of reference involved in the computation of the movement vector. The errors appear to be mainly affected by the visual hemifield of the target, independently of its location relative to the body, with an overestimation error in the horizontal reaching dimension (retinal exaggeration effect). The use of several target locations within the perifoveal visual field allowed us to reveal a novel finding: a positive linear correlation between horizontal overestimation errors and target retinal eccentricity. In addition, we found an independent influence of the performing hand on the visuomotor transformation process, with each hand misreaching towards the ipsilateral side.

Conclusions

While supporting the existence of an internal mechanism of target-effector integration in multiple frames of reference, the present data, especially the linear overshoot at small target eccentricities, clearly indicate the primary role of gaze-centered coding of target location in the visuomotor transformation for reaching.
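
The two reported effects can be summarised in one simple linear model; the notation below is ours, not the paper's:

```latex
% E_x : horizontal reach error; \theta_ret : target retinal eccentricity;
% h = +1 (right hand) or -1 (left hand). beta > 0 captures the reported
% positive error-eccentricity slope; delta > 0 the ipsilateral hand bias.
E_x = \beta\,\theta_{\mathrm{ret}} + \delta\,h + \varepsilon,
\qquad \beta > 0,\ \delta > 0
```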
