Similar Documents
20 similar documents retrieved.
1.

Background

In contrast to traditional views that consider smooth pursuit as a relatively automatic process, evidence has been reported for the importance of attention for accurate pursuit performance. However, the exact role that attention might play in the maintenance of pursuit remains unclear.

Methodology/Principal Findings

We analysed the neuronal activity associated with healthy subjects executing smooth pursuit eye movements (SPEM) while concurrently and attentively tracking a moving sound source that was either in phase or in antiphase with the executed eye movements. Assuming that attentional resources must be allocated to the moving sound source, the simultaneous execution of SPEM and auditory tracking in diverging directions should increase the load on common attentional resources. By using an auditory rather than a visual stimulus as the distractor, we ensured that the cortical activity could not be caused by conflicts between two simultaneous visual motion stimuli. Our results revealed that the smooth pursuit task with divided attention led to significantly higher activations bilaterally in the posterior parietal cortex and in the lateral and medial frontal cortex, presumably containing the parietal, frontal and supplementary eye fields respectively.

Conclusions

The additional cortical activation in these areas is apparently due to the process of dividing attention between the execution of SPEM and the covert tracking of the auditory target. On the other hand, even though attention had to be divided, the attentional resources did not seem to be exhausted, since the identification of the direction of the auditory target and the quality of SPEM were unaffected by the congruence between visual and auditory motion stimuli. Finally, we found that this form of task-related attention modulated not only the cortical pursuit network in general but also modality-specific and supramodal attention regions.

2.

Background

Attention is used to enhance neural processing of selected parts of a visual scene. It increases neural responses to stimuli near target locations and is usually coupled to eye movements. Covert attention shifts, however, decouple the attentional focus from gaze, allowing attention to be directed to a peripheral location without moving the eyes. We tested whether covert attention shifts modulate ongoing neuronal activity in cortical area V6A, an area that provides a bridge between visual signals and arm-motor control.

Methodology/Principal Findings

We performed single-cell recordings from three Macaca fascicularis monkeys trained to fixate straight ahead while shifting attention outward to a peripheral cue and inward again to the fixation point. We found that neurons in V6A are influenced by spatial attention. The attentional modulation occurs without gaze shifts and cannot be explained by visual stimulation. Visual, motor, and attentional responses can occur in combination in single neurons.

Conclusions/Significance

This modulation in an area primarily involved in visuo-motor transformation for reaching may form a neural basis for coupling attention to the preparation of reaching movements. Our results show that cortical processes of attention are related not only to eye-movements, as many studies have shown, but also to arm movements, a finding that has been suggested by some previous behavioral findings. Therefore, the widely-held view that spatial attention is tightly intertwined with—and perhaps directly derived from—motor preparatory processes should be extended to a broader spectrum of motor processes than just eye movements.

3.
This study examined adaptive changes of eye-hand coordination during a visuomotor rotation task under the use of terminal visual feedback. Young adults made reaching movements to targets on a digitizer while looking at targets on a monitor where the rotated feedback (a cursor) of hand movements appeared after each movement. Three rotation angles (30°, 75° and 150°) were examined in three groups in order to vary the task difficulty. The results showed that the 30° group gradually reduced direction errors of reaching with practice and adapted well to the visuomotor rotation. The 75° group made large direction errors of reaching, and the 150° group applied a 180° reversal shift from early practice. The 75° and 150° groups, however, overcompensated the respective rotations at the end of practice. Despite these group differences in adaptive changes of reaching, all groups gradually adapted gaze directions prior to reaching from the target area to the areas related to the final positions of reaching during the course of practice. The adaptive changes of both hand and eye movements in all groups mainly reflected adjustments of movement directions based on explicit knowledge of the applied rotation acquired through practice. Only the 30° group showed small implicit adaptation in both effectors. The results suggest that by adapting gaze directions from the target to the final position of reaching based on explicit knowledge of the visuomotor rotation, the oculomotor system supports the limb-motor system to make precise preplanned adjustments of reaching directions during learning of visuomotor rotation under terminal visual feedback.
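The rotated terminal feedback in such a task amounts to a plain 2-D rotation of the hand's displacement from the start position. The following toy sketch is illustrative only; the function and variable names are not from the study:

```python
import math

def rotated_cursor(start, hand, angle_deg):
    """Terminal feedback under a visuomotor rotation: the hand's displacement
    from the start position is rotated by angle_deg (counter-clockwise
    positive) before being shown as the cursor endpoint."""
    dx, dy = hand[0] - start[0], hand[1] - start[1]
    a = math.radians(angle_deg)
    return (start[0] + dx * math.cos(a) - dy * math.sin(a),
            start[1] + dx * math.sin(a) + dy * math.cos(a))

# A movement straight ahead, seen through a 30° rotation, lands off to the side:
cx, cy = rotated_cursor((0.0, 0.0), (0.0, 10.0), 30.0)
```

To cancel the perturbation, the learner must aim roughly the same angle in the opposite direction, which is the compensation the 30° group gradually achieved.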

4.
We compared the alpha-band EEG depression (event-related desynchronization, ERD) level in two tasks involving activation of different attentional processes: visual search for a deviant relevant stimulus among many similar ones, and a visual oddball task. Control data for the visual search task consisted of simple viewing of several stimuli of the same shape as the relevant stimulus in the search trials. Gaze position was verified by eye tracking. We interpreted alpha-band ERD as a correlate of the activation of attentional processes. Fixating the target in the visual search task caused a significantly larger ERD than fixating the same stimuli in control trials over all leads. We suppose this to be related to task and visual-environment complexity. The frontal ERD dominance may indicate attentional control over the execution of voluntary movements (top-down attention). The caudal ERD may be related to the updating of visual information as a result of the search process (bottom-up attention). Both relevant and irrelevant stimuli in the oddball task also induced alpha-band ERD, but it was larger in response to the relevant one and reached its maximum level over occipital leads. The dominance of caudal ERD in the oddball task is taken to indicate bottom-up attention processes.

5.
Attention governs action in the primate frontal eye field
Schafer RJ, Moore T. Neuron 2007, 56(3):541-551
While the motor and attentional roles of the frontal eye field (FEF) are well documented, the relationship between them is unknown. We exploited the known influence of visual motion on the apparent positions of targets, and measured how this illusion affects saccadic eye movements during FEF microstimulation. Without microstimulation, saccades to a moving grating are biased in the direction of motion, consistent with the apparent-position illusion. Here we show that microstimulation of spatially aligned FEF representations increases the influence of this illusion on saccades. Rather than simply imposing a fixed-vector signal, subthreshold stimulation directed saccades away from the FEF movement field and instead more strongly in the direction of visual motion. These results demonstrate that the attentional effects of FEF stimulation govern visually guided saccades, and suggest that the two roles of the FEF work together to select both the features of a target and the appropriate movement to foveate it.

6.
In the presence of vision, finalized motor acts can trigger spatial remapping, i.e., reference-frame transformations that allow for better interaction with targets. However, it is still unclear how peripersonal space is encoded and remapped depending on the availability of visual feedback and on the target position within the individual's reachable space, and which cerebral areas subserve such processes. Here, functional magnetic resonance imaging (fMRI) was used to examine neural activity while healthy young participants performed reach-to-grasp movements with and without visual feedback and at different distances of the target from the effector (near the hand, about 15 cm from the starting position, vs. far from the hand, about 30 cm from the starting position). Brain response in the superior parietal lobule bilaterally, in the right dorsal premotor cortex, and in the anterior part of the right inferior parietal lobule was significantly greater during visually guided grasping of targets located at the far distance compared to grasping of targets located near the hand. In the absence of visual feedback, the inferior parietal lobule exhibited greater activity during grasping of targets at the near compared to the far distance. The results suggest that, in the presence of visual feedback, a visuo-motor circuit integrates visuo-motor information when targets are located farther away. Conversely, in the absence of visual feedback, encoding of space may demand multisensory remapping processes, even in the case of more proximal targets.

7.
Huang TR, Watanabe T. PLoS One 2012, 7(4):e35946
Attention plays a fundamental role in visual learning and memory. One well-established principle of visual attention is that the harder a central task is, the more attentional resources are used to perform it, and the less attention is allocated to peripheral processing because of limited attentional capacity. Here we show that this principle holds true in a dual-task setting but not in a paradigm of task-irrelevant perceptual learning. In Experiment 1, eight participants were asked to identify either bright or dim number targets at the screen center and to remember concurrently presented scene backgrounds. Their recognition performance for scenes paired with dim/hard targets was worse than for scenes paired with bright/easy targets. In Experiment 2, eight participants were asked to identify either bright or dim letter targets at the screen center while task-irrelevant coherent motion was concurrently presented in the background. After five days of training on letter identification, participants' motion sensitivity improved for the direction paired with hard/dim targets but not for the direction paired with easy/bright targets. Taken together, these results suggest that task-irrelevant stimuli are not subject to the attentional control mechanisms that govern task-relevant stimuli.

8.
We can adapt movements to a novel dynamic environment (e.g., tool use, microgravity, and perturbation) by acquiring an internal model of the dynamics. Although multiple environments can be learned simultaneously if each environment is experienced with different limb movement kinematics, it is controversial as to whether multiple internal models for a particular movement can be learned and flexibly retrieved according to behavioral contexts. Here, we address this issue by using a novel visuomotor task. While participants reached to each of two targets located at a clockwise or counter-clockwise position, a gradually increasing visual rotation was applied in the clockwise or counter-clockwise direction, respectively, to the on-screen cursor representing the unseen hand position. This procedure implicitly led participants to perform physically identical pointing movements irrespective of their intentions (i.e., movement plans) to move their hand toward two distinct visual targets. Surprisingly, if each identical movement was executed according to a distinct movement plan, participants could readily adapt these movements to two opposing force fields simultaneously. The results demonstrate that multiple motor memories can be learned and flexibly retrieved, even for physically identical movements, according to distinct motor plans in a visual space.

9.
This study investigated attentional processes in a sample of captive gibbons. An initial aim of the research was to examine subjects' ability to co-orient with photographic images of both conspecific and human models. The gibbons' expectancies about the focus of another's attention were then also assessed, with an expectancy-violation paradigm revealing subjects' sensitivity to an incompatibility between visual orientation and the position of a target object. The gibbons were exposed to two conditions: consistent sequences in which the stimulus individual directed attention towards a target object, and inconsistent sequences in which the model's attentional focus was incompatible with the location of this object. Analyses of the subjects' responses were made according to the direction of gazes and the time spent inspecting the depicted model in each of these conditions. The results reveal a tendency for visual co-orientation with both conspecific and human models, suggesting that gibbons are competent in detecting the visual orientation of other species as well as their own. Furthermore, the subjects' tendency to look longer and check back to the depicted model in response to violations of the relationship between an agent and an object (the target appearing in the opposite direction to the model's gaze) suggests that they possess some knowledge of how visual gaze direction relates to external stimuli.

10.
Disruption of state estimation in the human lateral cerebellum
The cerebellum has been proposed to be a crucial component in the state estimation process that combines information from motor efferent and sensory afferent signals to produce a representation of the current state of the motor system. Such a state estimate of the moving human arm would be expected to be used when the arm is rapidly and skillfully reaching to a target. We now report the effects of transcranial magnetic stimulation (TMS) over the ipsilateral cerebellum as healthy humans were made to interrupt a slow voluntary movement to rapidly reach towards a visually defined target. Errors in the initial direction and in the final finger position of this reach-to-target movement were significantly higher for cerebellar stimulation than they were in control conditions. The average directional errors in the cerebellar TMS condition were consistent with the reaching movements being planned and initiated from an estimated hand position that was 138 ms out of date. We suggest that these results demonstrate that the cerebellum is responsible for estimating the hand position over this time interval and that TMS disrupts this state estimate.
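The "138 ms out of date" figure can be read as a forward extrapolation of hand position across a sensorimotor delay. Below is a minimal caricature of such a state estimate; the linear form and all names are illustrative assumptions, not the paper's model (only the 138 ms delay comes from the abstract):

```python
def estimated_hand_position(pos, vel, delay_s=0.138):
    """Forward state estimate by linear extrapolation: where the hand will be
    after delay_s, given its current position (m) and velocity (m/s).
    Illustrative sketch only; the 138 ms default is the paper's fitted delay."""
    return tuple(p + v * delay_s for p, v in zip(pos, vel))

# Hand at (0.10, 0.20) m moving at 0.5 m/s along x:
est = estimated_hand_position((0.10, 0.20), (0.5, 0.0))
```

Under cerebellar TMS, planning from a position estimate that lacks this extrapolation would produce exactly the kind of stale-state directional errors the study reports.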

11.
Reaching movements towards an object are continuously guided by visual information about the target and the arm. Such guidance increases precision and allows one to adjust the movement if the target unexpectedly moves. Ongoing arm movements are also influenced by motion in the surroundings. Fast responses to motion in the surroundings could help cope with moving obstacles and with the consequences of changes in one's eye orientation and vantage point. To further evaluate how motion in the surroundings influences interceptive movements, we asked subjects to tap a moving target when it reached a second, static target. We varied the direction and location of motion in the surroundings, as well as details of the stimuli that are known to influence eye movements. Subjects were most sensitive to motion in the background when such motion was near the targets. Whether or not the eyes were moving, and the direction of the background motion in relation to the direction in which the eyes were moving, had very little influence on the response to the background motion. We conclude that the responses to background motion are driven by motion near the target rather than by a global analysis of the optic flow and its relation with other information about self-motion.

12.
Brown LE, Doole R, Malfait N. PLoS One 2011, 6(12):e28999
Some visual-tactile (bimodal) cells have visual receptive fields (vRFs) that overlap and extend moderately beyond the skin of the hand. Neurophysiological evidence suggests, however, that a vRF will grow to encompass a hand-held tool following active tool use but not after passive holding. Why does active tool use, and not passive holding, lead to spatial adaptation near a tool? We asked whether spatial adaptation could be the result of motor or visual experience with the tool, and we distinguished between these alternatives by isolating motor from visual experience with the tool. Participants learned to use a novel, weighted tool. The active training group received both motor and visual experience with the tool, the passive training group received visual experience with the tool, but no motor experience, and finally, a no-training control group received neither visual nor motor experience using the tool. After training, we used a cueing paradigm to measure how quickly participants detected targets, varying whether the tool was placed near or far from the target display. Only the active training group detected targets more quickly when the tool was placed near, rather than far, from the target display. This effect of tool location was not present for either the passive-training or control groups. These results suggest that motor learning influences how visual space around the tool is represented.

13.
Numerous studies have addressed the issue of where people look when they perform hand movements. Yet very little is known about how visuomotor performance is affected by fixation location. Previous studies investigating the accuracy of actions performed in visual periphery have revealed inconsistent results. While movements performed under full visual feedback (closed-loop) seem to remain surprisingly accurate, open-loop as well as memory-guided movements usually show a distinct bias (i.e. overestimation of target eccentricity) when executed in periphery. In this study, we aimed to investigate whether gaze position affects movements that are performed under full vision but cannot be corrected based on a direct comparison between the hand and target position. To do so, we employed a classical visuomotor reaching task in which participants were required to move their hand through a gap between two obstacles into a target area. Participants performed the task in four gaze conditions: free viewing (no restrictions on gaze), central fixation, or fixation on one of the two obstacles. Our findings show that obstacle-avoidance behaviour is moderated by fixation position. Specifically, participants tended to select movement paths that veered away from the fixated obstacle, indicating that perceptual errors persist in closed-loop vision conditions if they cannot be corrected effectively based on visual feedback. Moreover, by measuring eye movements in a free-viewing task (Experiment 2), we confirmed that participants naturally prefer to move their eyes and hand to the same spatial location.

14.
Neurons in posterior parietal cortex of the awake, trained monkey respond to passive visual and/or somatosensory stimuli. In general, the receptive fields of these cells are large and nonspecific. When these neurons are studied during visually guided hand movements and eye movements, most of their activity can be accounted for by passive sensory stimulation. However, for some visual cells, the response to a stimulus is enhanced when it is to be the target for a saccadic eye movement. This enhancement is selective for eye movements into the visual receptive field since it does not occur with eye movements to other parts of the visual field. Cells that discharge in association with a visual fixation task have foveal receptive fields and respond to the spots of light used as fixation targets. Cells discharging selectively in association with different directions of tracking eye movements have directionally selective responses to moving visual stimuli. Every cell in our sample discharging in association with movement could be driven by passive sensory stimuli. We conclude that the activity of neurons in posterior parietal cortex is dependent on and indicative of external stimuli but not predictive of movement.

15.
Brain areas exist that appear to be specialized for the coding of visual space surrounding the body (peripersonal space). In marked contrast to neurons in earlier visual areas, cells have been reported in parietal and frontal lobes that effectively respond only when visual stimuli are located in spatial proximity to a particular body part (for example, face, arm or hand) [1-4]. Despite several single-cell studies, the representation of near visual space has scarcely been investigated in humans. Here we focus on the neuropsychological phenomenon of visual extinction following unilateral brain damage. Patients with this disorder may respond well to a single stimulus in either visual field; however, when two stimuli are presented concurrently, the contralesional stimulus is disregarded or poorly identified. Extinction is commonly thought to reflect a pathological bias in selective vision favoring the ipsilesional side under competitive conditions, as a result of the unilateral brain lesion [5-7]. We examined a parietally damaged patient (D.P.) to determine whether visual extinction is modulated by the position of the hands in peripersonal space. We measured the severity of visual extinction in a task which held constant visual and spatial information about stimuli, while varying the distance between hands and stimuli. We found that selection in the affected visual field was remarkably more efficient when visual events were presented in the space near the contralesional finger than far from it. However, the amelioration of extinction dissolved when hands were covered from view, implying that the effect of hand position was not mediated purely through proprioception. These findings illustrate the importance of the spatial relationship between hand position and object location for the internal construction of visual peripersonal space in humans.

16.
In this work we have studied what mechanisms might underlie arm trajectory modification when reaching toward visual targets. The double-step target displacement paradigm was used with inter-stimulus intervals (ISIs) in the range of 10-300 ms. For short ISIs, a high percentage of the movements were found to be initially directed in between the first and second target locations (averaged trajectories). The initial direction of motion was found to depend on the target configuration and on the time difference between the presentation of the second stimulus and movement onset. To account for the kinematic features of the averaged trajectories, two modification schemes were compared: the superposition scheme and the abort-replan scheme. According to the superposition scheme, the modified trajectories result from the vectorial addition of two elemental motions: one for moving between the initial hand position and an intermediate location, and a second one for moving between that intermediate location and the final target. According to the abort-replan scheme, the initial plan for moving toward the intermediate location is aborted and smoothly replaced by a new plan for moving from the hand position at the time the trajectory is modified to the final target location. In both tested schemes we hypothesized that, due to the quick displacement of the stimulus, the internally specified intermediate goal might be influenced by both stimuli and may differ from the location of the first stimulus. The statistically most successful model in accounting for the measured data was based on the superposition scheme. It is suggested that superposition of simple independent elemental motions might be a general principle for the generation of modified motions, which allows for efficient, parallel planning.
For increasing values of this time difference, the inferred locations of the intermediate targets were found to gradually shift from the first toward the second target locations along a path that curved toward the initial hand position. These inferred locations show a strong resemblance to the intermediate locations of saccades generated in a similar double-step paradigm. These similarities in the specification of target locations used in the generation of eye and hand movements may serve to simplify visuomotor integration. Received: 22 June 1994 / Accepted in revised form: 15 September 1994
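The superposition scheme described in this abstract can be sketched as the vector sum of two smooth elemental motions. Here the elemental motions are given minimum-jerk time courses, a common modeling assumption; all parameter names are illustrative, not the paper's notation:

```python
def min_jerk(t, T):
    """Minimum-jerk progress function on [0, T]: 0 before onset, 1 after T."""
    if t <= 0.0:
        return 0.0
    if t >= T:
        return 1.0
    tau = t / T
    return 10 * tau**3 - 15 * tau**4 + 6 * tau**5

def superposed_position(t, start, mid, end, T1, T2, onset2):
    """Superposition scheme sketch: the modified trajectory is the vector sum
    of an elemental motion start->mid (duration T1) and a second elemental
    motion mid->end (duration T2) that begins at time onset2."""
    s1 = min_jerk(t, T1)
    s2 = min_jerk(t - onset2, T2)
    return tuple(p0 + s1 * (m - p0) + s2 * (e - m)
                 for p0, m, e in zip(start, mid, end))

# After both elemental motions finish, the hand is exactly at the final target:
print(superposed_position(5.0, (0.0, 0.0), (1.0, 0.0), (1.0, 1.0), 1.0, 1.0, 0.3))
# → (1.0, 1.0)
```

Because the two elemental motions are planned independently and simply added, the scheme permits the efficient, parallel planning the abstract argues for: no plan needs to be aborted when the target jumps.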

17.
Auditory feedback is required to maintain fluent speech. At present, it is unclear how attention modulates auditory feedback processing during ongoing speech. In this event-related potential (ERP) study, participants vocalized /a/ while they heard their vocal pitch suddenly shifted downward by half a semitone in both single- and dual-task conditions. During the single-task condition participants passively viewed a visual stream for cues to start and stop vocalizing. In the dual-task condition, participants vocalized while they identified target stimuli in a visual stream of letters. The presentation rate of the visual stimuli was manipulated in the dual-task condition in order to produce a low, intermediate, and high attentional load. Visual target identification accuracy was lowest in the high attentional-load condition, indicating that attentional load was successfully manipulated. Results further showed that participants who were exposed to the single-task condition prior to the dual-task condition produced larger vocal compensations during the single-task condition. Thus, when participants' attention was divided, less attention was available for the monitoring of their auditory feedback, resulting in smaller compensatory vocal responses. However, P1-N1-P2 ERP responses were not affected by divided attention, suggesting that attentional load did not affect the auditory processing of pitch-altered feedback but instead interfered with the integration of auditory and motor information, or with motor control itself.

18.
Eye movements affect object localization and object recognition. Around saccade onset, briefly flashed stimuli appear compressed towards the saccade target, receptive fields dynamically change position, and the recognition of objects near the saccade target is improved. These effects have been attributed to different mechanisms. We provide a unifying account of peri-saccadic perception explaining all three phenomena by a quantitative computational approach simulating cortical cell responses on the population level. Contrary to the common view of spatial attention as a spotlight, our model suggests that oculomotor feedback alters the receptive field structure in multiple visual areas at an intermediate level of the cortical hierarchy to dynamically recruit cells for processing a relevant part of the visual field. The compression of visual space occurs at the expense of this locally enhanced processing capacity.

19.
Recent studies in motor control have shown that visuomotor rotations for reaching have narrow generalization functions: what we learn during movements in one direction only affects subsequent movements in nearby directions. Here we wanted to measure the generalization functions for wrist movements. To do so we had seven subjects perform an experiment holding a mobile phone in their dominant hand. The mobile phone's built-in acceleration sensor provided a convenient way to measure wrist movements and to run the behavioral protocol. Subjects moved a cursor on the screen by tilting the phone. Movements on the screen toward the training target were rotated, and we then measured how learning of the rotation in the training direction affected subsequent movements in other directions. We find that generalization is local and similar to the generalization patterns of visuomotor rotation for reaching.
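A narrow, local generalization function of the kind measured here is often summarized as a Gaussian over angular distance from the trained direction. This is an illustrative sketch only, not the paper's fit; the 30° width and all names are assumptions:

```python
import math

def generalization(test_deg, trained_deg, learned_deg, width_deg=30.0):
    """Gaussian generalization: adaptation (in degrees) expressed at a test
    direction falls off with angular distance from the trained direction.
    The width parameter is an illustrative assumption."""
    d = (test_deg - trained_deg + 180.0) % 360.0 - 180.0  # signed angular distance
    return learned_deg * math.exp(-d * d / (2.0 * width_deg**2))

print(round(generalization(0.0, 0.0, 20.0), 1))   # at the trained direction → 20.0
print(round(generalization(90.0, 0.0, 20.0), 1))  # 90° away, little transfer → 0.2
```

"Local" generalization corresponds to a small width: adaptation expressed at test directions far from the trained one is close to zero, which is the pattern reported for both reaching and wrist movements.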

20.

Background

Tool use in humans requires that multisensory information is integrated across different locations, from objects seen to be distant from the hand, but felt indirectly at the hand via the tool. We tested the hypothesis that using a simple tool to perceive vibrotactile stimuli results in the enhanced processing of visual stimuli presented at the distal, functional part of the tool. Such a finding would be consistent with a shift of spatial attention to the location where the tool is used.

Methodology/Principal Findings

We tested this hypothesis by scanning healthy human participants' brains using functional magnetic resonance imaging, while they used a simple tool to discriminate between target vibrations, accompanied by congruent or incongruent visual distractors, on the same or opposite side to the tool. The attentional hypothesis was supported: BOLD response in occipital cortex, particularly in the right hemisphere lingual gyrus, varied significantly as a function of tool position, increasing contralaterally, and decreasing ipsilaterally to the tool. Furthermore, these modulations occurred despite the fact that participants were repeatedly instructed to ignore the visual stimuli, to respond only to the vibrotactile stimuli, and to maintain visual fixation centrally. In addition, the magnitude of multisensory (visual-vibrotactile) interactions in participants' behavioural responses significantly predicted the BOLD response in occipital cortical areas that were also modulated as a function of both visual stimulus position and tool position.

Conclusions/Significance

These results show that using a simple tool to locate and to perceive vibrotactile stimuli is accompanied by a shift of spatial attention to the location where the functional part of the tool is used, resulting in enhanced processing of visual stimuli at that location, and decreased processing at other locations. This was most clearly observed in the right hemisphere lingual gyrus. Such modulations of visual processing may reflect the functional importance of visuospatial information during human tool use.
