Similar Literature
 20 similar documents retrieved.
1.
The population-vector analysis was applied to visualize neuronal processes of sensory-to-motor transformation in the prefrontal cortex while two monkeys performed two types of oculomotor delayed-response (ODR) tasks. In the standard ODR task, monkeys were required to make a quick eye movement to where the visual cue had been presented 3 s before, whereas in the R-ODR task, monkeys were required to make an eye movement 90° clockwise from the direction in which the visual cue had been presented. In both tasks, the directions of population vectors calculated from cue- and response-period activity were almost the same as the cue directions and the saccade directions, respectively, indicating that population vectors of cue- and response-period activity represent information about visual inputs and motor outputs, respectively. To visualize the neuronal processes of information transformation, population vectors were calculated every 250 ms during the whole trial. In the ODR task, population vectors pointed in the same direction as the cue throughout the delay period. In the R-ODR task, however, the population vector rotated gradually from a direction close to the cue direction toward the saccade direction during the delay period. These results indicate that visual-to-motor transformation occurs during the delay period and that this process can be visualized by the population-vector analysis.
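The population-vector computation referred to above follows the standard scheme in which each neuron contributes a unit vector along its preferred direction, weighted by its firing rate, and the vector sum gives the represented direction. The sketch below is a minimal illustration of that scheme only; the function name, baseline subtraction, and example numbers are assumptions for illustration, since the abstract does not give the study's exact normalization.

```python
import numpy as np

def population_vector(rates, baselines, preferred_dirs_deg):
    """Population vector from a set of direction-tuned neurons.

    rates              -- firing rates in the analysis window (spikes/s), shape (n,)
    baselines          -- baseline firing rates of the same neurons, shape (n,)
    preferred_dirs_deg -- each neuron's preferred direction in degrees, shape (n,)

    Each neuron contributes a unit vector along its preferred direction,
    weighted by its baseline-subtracted rate; the vector sum is the
    direction represented by the population.
    """
    weights = np.asarray(rates, dtype=float) - np.asarray(baselines, dtype=float)
    angles = np.deg2rad(np.asarray(preferred_dirs_deg, dtype=float))
    x = np.sum(weights * np.cos(angles))
    y = np.sum(weights * np.sin(angles))
    direction = np.rad2deg(np.arctan2(y, x)) % 360
    magnitude = np.hypot(x, y)
    return direction, magnitude

# Hypothetical example: four neurons tuned to 0, 90, 180 and 270 degrees
print(population_vector([30, 12, 5, 10], [8, 8, 8, 8], [0, 90, 180, 270]))
```

Computing this in successive 250-ms windows across a trial, as described above, yields a time series of directions whose gradual rotation from the cue direction to the saccade direction can then be plotted.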

2.
Temporal information is often contained in multi-sensory stimuli, but it is currently unknown how the brain combines, for example, visual and auditory cues into a coherent percept of time. Existing studies of cross-modal time perception mainly support the "modality appropriateness hypothesis", i.e. the dominance of auditory temporal cues over visual ones because of the higher precision of audition for time perception. However, these studies suffer from methodological problems and conflicting results. We introduce a novel experimental paradigm to examine cross-modal time perception by combining an auditory time-perception task with a visually guided motor task, requiring participants to follow an elliptic movement on a screen with a robotic manipulandum. We find that subjective duration is distorted according to the speed of the visually observed movement: the faster the visual motion, the longer the perceived duration. In contrast, the actual execution of the arm movement does not contribute to this effect, but it impairs discrimination performance through dual-task interference. We also show that additional training on the motor task attenuates the interference but does not affect the distortion of subjective duration. The study demonstrates a direct influence of visual motion on auditory temporal representations that is independent of attentional modulation. At the same time, it provides causal support for the notion that time perception and continuous motor timing rely on separate mechanisms, a proposal that was formerly supported by correlational evidence only. The results constitute a counterexample to the modality appropriateness hypothesis and are best explained by Bayesian integration of modality-specific temporal information into a centralized "temporal hub".
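The "Bayesian integration" invoked in the conclusion is, in its simplest Gaussian form, reliability-weighted averaging of the modality-specific duration estimates. The sketch below illustrates only that generic computation; it is not the authors' fitted model, and the numbers in the example are hypothetical.

```python
import numpy as np

def integrate_durations(d_auditory, sigma_a, d_visual, sigma_v):
    """Reliability-weighted (Gaussian-Bayesian) fusion of two duration estimates.

    Each modality provides a duration estimate d with standard deviation sigma;
    the fused estimate weights each cue by its inverse variance.
    """
    w_a = 1.0 / sigma_a**2
    w_v = 1.0 / sigma_v**2
    d_fused = (w_a * d_auditory + w_v * d_visual) / (w_a + w_v)
    sigma_fused = np.sqrt(1.0 / (w_a + w_v))  # never larger than either cue alone
    return d_fused, sigma_fused

# A precise auditory estimate (sigma = 0.05 s) is still pulled toward a biased,
# noisier visual estimate (sigma = 0.15 s), as with fast visual motion:
print(integrate_durations(1.00, 0.05, 1.30, 0.15))
```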

3.

Background

Visually determining what is reachable in peripersonal space requires not only information about the egocentric location of objects but also information about the possibilities of action with the body, which are context dependent. The aim of the present study was to test the role of motor representations in the visual perception of peripersonal space.

Methodology

Seven healthy participants underwent a TMS study while performing a right-left decision (control) task or perceptually judging whether or not a visual target was reachable with their right hand. An actual grasping movement task was also included. Single-pulse TMS was delivered on 80% of the trials over the left motor and premotor cortex and over a control site (the temporo-occipital area), at 90% of the resting motor threshold and at different SOAs (50 ms, 100 ms, 200 ms, or 300 ms).

Principal Findings

Results showed a facilitation effect of TMS on reaction times in all tasks, whatever the site stimulated, and up to 200 ms after stimulus presentation. However, the facilitation effect was on average 34 ms smaller when the motor cortex was stimulated in the perceptual judgement task, especially for stimuli located at the boundary of peripersonal space.

Conclusion

This study provides the first evidence that motor brain areas participate in the visual determination of what is reachable. We discuss how motor representations may feed the perceptual system with information about possible interactions with nearby objects and may thus contribute to the perception of the boundary of peripersonal space.

4.
To further elucidate the mechanisms underlying multisensory integration, this study examines the controversial issue of whether congruent inputs from three different sensory sources can enhance the perception of hand movement. Illusory sensations of clockwise rotations of the right hand were induced by either separately or simultaneously stimulating the visual, tactile and muscle proprioceptive channels at various intensity levels. For this purpose, mechanical vibrations were applied to the pollicis longus muscle group at the subjects' wrists, and a textured disk was rotated under the palmar skin of the subjects' right hands while a background visual scene was projected onto the rotating disk. The elicited kinaesthetic illusions were copied by the subjects in real time, and the EMG activity in the adductor and abductor wrist muscles was recorded. The results show that the velocity of the perceived movements and the amplitude of the corresponding motor responses were modulated by the nature and intensity of the stimulation. Combining two sensory modalities resulted in faster movement illusions, except in the case of visuo-tactile co-stimulation. When a third sensory input was added to the bimodal combinations, the perceptual responses increased only when a muscle proprioceptive stimulation was added to a visuo-tactile combination. Otherwise, trisensory stimulation did not override bimodal conditions that already included a muscle proprioceptive stimulation. We confirmed that vision or touch alone can encode the kinematic parameters of hand movement, as is known for muscle proprioception. When these three sensory modalities are available, they contribute unequally to kinaesthesia. In addition to muscle proprioception, the complementary kinaesthetic content of visual or tactile inputs may optimize the velocity estimation of an ongoing movement, whereas the redundant kinaesthetic content of the visual and tactile inputs may rather improve the latency of the perception.

5.
Fukui T, Gomi H. PLoS ONE 2012, 7(5): e34985
Previous studies demonstrated that human motor actions are not always monitored by perceptual awareness and that implicit motor control plays a key role in performing actions. In addition, appropriate evaluation of our own motor behavior is vital for human life. Here we combined a reaching task with a visual backward-masking paradigm to induce an implicit motor response that is congruent or incongruent with the visual percept. We used this to investigate (i) how we evaluate such an implicit motor response, which may be inconsistent with perceptual awareness, and (ii) the possible contributions of reaching error, external visual cues, and internal sensorimotor information to this evaluation. Participants were instructed, after each trial, to rate their own reaching performance on a 5-point scale (i.e., smooth-clumsy). They also needed to identify a color presented at a fixation point that could change just after reaching onset. The color was linked to the prime-mask congruency (i.e., congruent-green, incongruent-blue) in the practice phase, and inconsistent pairs (congruent-blue or incongruent-green) were then introduced in the test phase. We found early trajectory deviations induced by the invisible prime stimulus, and these implicit motor responses were significantly correlated with the action evaluation score. The results suggest that the "conscious" action evaluation properly monitors the online sensory outcomes produced by implicit motor control. Furthermore, statistical path analyses showed that internal sensorimotor information from the motor behavior modulated by the invisible prime was the predominant cue for the action evaluation, while the color-cue association learned in the practice phase in some cases biased the action evaluation in the test phase.

6.
Abnormalities in motor skills have been regarded as part of the symptomatology characterizing autism spectrum disorder (ASD). It has been estimated that 80% of subjects with autism display "motor dyspraxia" or clumsiness that is not readily identified in a routine neurological examination. In this study we used behavioral measures, event-related potentials (ERP), and the lateralized readiness potential (LRP) to study cognitive and motor preparation deficits contributing to the dyspraxia of autism. A modified Posner cueing task was used to analyze motor preparation abnormalities in children with autism and in typically developing children (N = 30 per group). In this task, subjects prepare a motor response based on a visual cue and then execute a motor movement based on the subsequent imperative stimulus. The experimental conditions, such as the validity of the cue and the spatial location of the target stimuli, were manipulated to influence motor response selection, preparation, and execution. Reaction time and accuracy benefited from validly cued targets in both groups, while main effects of target spatial position were more pronounced in the autism group. The main ERP findings were prolonged and more negative early frontal potentials in the ASD group in incongruent trials at both spatial locations. The LRP amplitude was larger in incongruent trials, and this effect was stronger in the children with ASD. These effects were better expressed at the earlier stages of the LRP, specifically those related to response selection, and indicated difficulties at the cognitive phase of stimulus processing rather than at the motor execution stage. The LRP measures at different stages reflect the chronology of the cognitive aspects of movement preparation and are sensitive to manipulations of cue correctness, thus representing a very useful biomarker in autism dyspraxia research. Future studies may use more advanced and diverse manipulations of movement-preparation demands, testing more refined specifics of dyspraxia symptoms, to investigate the functional connectivity abnormalities underlying motor skills deficits in autism.
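The lateralized readiness potential mentioned above is conventionally derived with the double-subtraction method over the lateral motor electrodes (typically C3/C4): contralateral-minus-ipsilateral activity is averaged across left- and right-hand response trials so that stimulus-driven asymmetries cancel. The abstract does not give the authors' exact derivation, so the following is a generic sketch with hypothetical inputs.

```python
import numpy as np

def lateralized_readiness_potential(c3, c4, response_hand):
    """Double-subtraction LRP over motor electrodes C3 and C4.

    c3, c4        -- ERP epochs at electrodes C3 and C4, shape (n_trials, n_samples)
    response_hand -- array of 'left'/'right' labels per trial, shape (n_trials,)

    For each trial, take the activity contralateral minus ipsilateral to the
    responding hand, then average the two hand conditions so that
    stimulus-driven hemispheric asymmetries cancel out.
    """
    c3, c4 = np.asarray(c3), np.asarray(c4)
    hand = np.asarray(response_hand)
    right = (c3 - c4)[hand == 'right'].mean(axis=0)  # contralateral = C3
    left = (c4 - c3)[hand == 'left'].mean(axis=0)    # contralateral = C4
    return 0.5 * (right + left)
```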

7.
The present study investigated the interactions between motor action and cognitive processing, with particular reference to kanji-culture individuals. Kanji-culture individuals often move their finger as if they were writing when solving cognitive tasks, for example when they try to recall the spelling of English words. This behavior is called kusho, meaning air-writing in Japanese. However, its functional role is still unknown. To reveal the role of kusho behavior in cognitive processing, we conducted a series of experiments employing two different cognitive tasks, a construction task and a stroke count task. To distinguish the effects of the kinetic aspects of kusho behavior, we set three hand conditions in the tasks: participants were instructed to perform kusho, to make unrelated finger movements, or to do nothing during the response time. To isolate possible visual effects, two visual conditions were introduced: one in which participants saw their hand and one in which they did not. We used the number of correct responses and the response time as measures of task performance. The results showed that kusho behavior has different functional roles in the two types of cognitive tasks. In the construction task, the visual feedback from finger movement facilitated identifying a character, whereas the kinetic feedback or motor commands for the behavior did not help to solve the task. In the stroke count task, by contrast, the kinetic aspects of the finger movements influenced counting performance depending on the type of finger movement. Regardless of the visual condition, kusho behavior improved task performance and unrelated finger movements degraded it. These results indicate that motor behavior contributes to cognitive processes. We discuss possible mechanisms of this modality-dependent contribution. These findings may lead to a better understanding of the complex interaction between action and cognition in daily life.

8.
To further characterize the role of frontal and parietal cortices in rat cognition, we recorded action potentials simultaneously from multiple sites in the medio-dorsal frontal cortex and posterior parietal cortex of rats while they performed a two-choice auditory detection task. We quantified neural correlates of task performance, including response movements, perception of a target tone, and the differentiation between stimuli with distinct features (different pitches or durations). A minority of units—15% in frontal cortex, 23% in parietal cortex—significantly distinguished hit trials (successful detections, response movement to the right) from correct rejection trials (correct leftward response to the absence of the target tone). Estimating the contribution of movement-related activity to these responses suggested that more than half of these units were likely signaling correct perception of the auditory target, rather than merely movement direction. In addition, we found a smaller and mostly not overlapping population of units that differentiated stimuli based on task-irrelevant details. The detection-related spiking responses we observed suggest that correlates of perception in the rat are sparsely represented among neurons in the rat's frontal-parietal network, without being concentrated preferentially in frontal or parietal areas.

9.
The aim of this study was to verify the contribution of haptic and auditory cues to the quick discrimination of an object's mass. Ten subjects had to use their right hand to brake the movement of a cup caused by the impact of a falling object that could have one of two different masses. They were asked to perform a quick left-hand movement if, according to the proprioceptive and auditory cues they received when the object contacted the cup, the object had the prescribed mass, and not to react to the other object. Three conditions were established: with both proprioceptive and auditory cues, with only the proprioceptive cue, or with only the auditory cue. When proprioceptive information was available, subjects advanced their response times to the impact of the heavy object compared with that of the light object. The addition of an auditory cue did not increase this advancement for the heavy object. We conclude that when a motor response has to be chosen according to different combinations of auditory and proprioceptive load-related information, subjects rely mainly on haptic information to respond quickly, and that auditory cues do not add relevant information that could speed up a correct response.

10.

Background

In the continuum between a stroke and a circle, which includes all possible ellipses, some eccentricities seem more "biologically preferred" by the motor system than others, probably because they imply less demanding coordination patterns. Based on the idea that biological motion perception relies on knowledge of the laws that govern the motor system, we investigated whether motorically preferential and non-preferential eccentricities are visually discriminated differently. In contrast with previous studies, which examined the effect of kinematic/temporal features of movements on their visual perception, we focused on geometric/spatial features and therefore used a static visual display.

Methodology/Principal Findings

In a dual-task paradigm, participants visually discriminated 13 static ellipses of various eccentricities while performing a finger-thumb opposition sequence with either the dominant or the non-dominant hand. Our assumption was that because the movements used to trace ellipses are strongly lateralized, a motor task performed with the dominant hand should affect the simultaneous visual discrimination more strongly. We found that visual discrimination was not affected when the motor task was performed by the non-dominant hand. Conversely, it was impaired when the motor task was performed with the dominant hand, but only for the ellipses that we defined as preferred by the motor system, based on an assessment of individual preferences during an independent graphomotor task.

Conclusions/Significance

Visual discrimination of ellipses depends on the state of the motor neural networks controlling the dominant hand, but only when their eccentricity is "biologically preferred". Importantly, this effect emerges on the basis of a static display, suggesting that what we call "biological geometry", i.e., geometric features resulting from preferential movements, is relevant information for the visual processing of two-dimensional shapes.

11.
Animals can make faster behavioral responses to multisensory stimuli than to unisensory stimuli. The superior colliculus (SC), which receives multiple inputs from different sensory modalities, is considered to be involved in the initiation of motor responses. However, the mechanism by which multisensory information facilitates motor responses is not yet understood. Here, we demonstrate that multisensory information modulates competition among SC neurons to elicit faster responses. We conducted multiunit recordings from the SC of rats performing a two-alternative spatial discrimination task using auditory and/or visual stimuli. We found that a large population of SC neurons showed direction-selective activity before the onset of movement in response to the stimuli, irrespective of the stimulation modality. Trial-by-trial correlation analysis showed that the premovement activity of many SC neurons increased with faster reaction speed for the contraversive movement, whereas the premovement activity of another population of neurons decreased with faster reaction speed for the ipsiversive movement. When visual and auditory stimuli were presented simultaneously, the premovement activity of one population of neurons for the contraversive movement was enhanced, whereas the premovement activity of another population of neurons for the ipsiversive movement was depressed. Unilateral inactivation of the SC using muscimol prolonged the reaction times of contraversive movements but shortened those of ipsiversive movements. These findings suggest that the difference in activity between the two SC hemispheres regulates the reaction speed of motor responses, and that multisensory information enlarges this activity difference, resulting in faster responses.
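The trial-by-trial correlation analysis described above reduces, in its simplest form, to correlating each unit's premovement spike count with reaction speed across trials. The sketch below shows only that reduced version; the authors' exact analysis window and statistics are not specified in the abstract, so the function and parameter names are illustrative.

```python
import numpy as np
from scipy import stats

def premovement_rt_correlation(spike_counts, reaction_times):
    """Correlate each unit's premovement spike count with reaction speed (1/RT).

    spike_counts   -- shape (n_trials, n_units), spikes in a fixed premovement window
    reaction_times -- shape (n_trials,), reaction times in seconds

    Returns (r, p) arrays with one value per unit; a positive r means higher
    premovement activity on trials with faster responses.
    """
    speed = 1.0 / np.asarray(reaction_times, dtype=float)
    counts = np.asarray(spike_counts, dtype=float)
    results = [stats.pearsonr(counts[:, u], speed) for u in range(counts.shape[1])]
    r = np.array([res[0] for res in results])
    p = np.array([res[1] for res in results])
    return r, p
```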

12.
Reaction time (RT) and the number of correct responses to letter stimuli were measured in 49 male and 47 female subjects under crossed and uncrossed lateralization of the stimulus relative to the hand performing the motor reaction. Gender differences in the hemispheric organization of task performance were detected on the basis of RT. Males reacted more rapidly to stimuli presented in the right visual field, while females demonstrated no lateral effects. For the males' right hand, the difference between crossed and uncrossed lateralization of the visual stimuli was significant and exceeded that observed in the females and for the males' left hand. There were no gender differences in the number of correct responses. Whichever hand performed the motor response, the number of correct responses was significantly higher when the stimuli were presented in the right visual field.

13.
Attention is crucial for visual perception because it allows the visual system to effectively use its limited resources by selecting behaviorally and cognitively relevant stimuli from the large amount of information impinging on the eyes. Reflexive, stimulus-driven attention is essential for successful interactions with the environment because it can, for example, speed up responses to life-threatening events. It is commonly believed that exogenous attention operates in the retinotopic coordinates of the early visual system. Here, using a novel experimental paradigm [1], we show that a nonretinotopic cue improves both accuracy and reaction times in a visual search task. Furthermore, the influence of the cue is limited both in space and time, a characteristic typical of exogenous cueing. These and other recent findings show that many more aspects of vision are processed nonretinotopically than previously thought.

14.
To produce skilled movements, the brain flexibly adapts to different task requirements and movement contexts. Two core abilities underlie this flexibility. First, depending on the task, the motor system must rapidly switch the way it produces motor commands and corrects movements online, i.e. it switches between different (feedback) control policies. Second, it must also adapt to environmental changes for different tasks separately. Here we show that these two abilities are related. In a bimanual movement task, we show that participants can switch on a movement-by-movement basis between two feedback control policies, depending only on a static visual cue. When this cue indicates that the hands control separate objects, reactions to force-field perturbations of each arm are purely unilateral. In contrast, when the visual cue indicates a commonly controlled object, reactions are shared across the hands. Participants are also able to learn different force fields associated with a visual cue. This is, however, only the case when the visual cue is associated with different feedback control policies. These results indicate that when the motor system can flexibly switch between different control policies, it is also able to adapt separately to the dynamics of different environmental contexts. In contrast, visual cues that are not associated with different control policies are not effective for learning different task dynamics.

15.
In this study we compared tactile and visual feedback for a motor imagery-based brain–computer interface (BCI) in five healthy subjects. A vertical green bar extending from the central fixation cross to the edge of the screen was used as visual feedback. Vibration motors placed on the forearms of the right and left hands and on the back of the subject's neck were used as tactile feedback. A vibration signal was used to confirm the correct classification of the EEG patterns of motor imagery of right- and left-hand movements and of the rest task. The accuracy of classification of the three states (right-hand movement, left-hand movement, and rest) in the BCI without feedback exceeded chance level (33% for three states) for all subjects and was rather high (67.8% ± 13.4%, mean ± standard deviation). Including visual or tactile feedback in the BCI did not significantly change the mean accuracy of recognition of mental states across subjects (70.5% ± 14.8% for visual feedback and 65.9% ± 12.4% for tactile feedback). Analysis of the dynamics of the motor imagery skill in BCI users with tactile and visual feedback showed no significant differences between these types of feedback. Thus, tactile feedback can be used in a motor imagery-based BCI instead of the commonly used visual feedback, which greatly expands the possibilities for practical application of BCIs.
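The abstract does not specify how the three mental states were classified from the EEG. A common baseline for motor imagery BCIs is band-power features from sensorimotor channels fed into a linear classifier; the sketch below illustrates that generic pipeline on synthetic data (channel count, frequency band, and classifier are assumptions, not the authors' method).

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

def bandpower_features(epochs, fs, band=(8.0, 30.0)):
    """Log band power per channel via FFT; epochs shape (n_epochs, n_channels, n_samples)."""
    freqs = np.fft.rfftfreq(epochs.shape[-1], d=1.0 / fs)
    spectrum = np.abs(np.fft.rfft(epochs, axis=-1)) ** 2
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return np.log(spectrum[..., mask].mean(axis=-1))

# Synthetic EEG trials; labels: 0 = right hand, 1 = left hand, 2 = rest
rng = np.random.default_rng(0)
epochs = rng.standard_normal((90, 8, 500))  # 90 trials, 8 channels, 1 s at 500 Hz
labels = np.repeat([0, 1, 2], 30)

features = bandpower_features(epochs, fs=500)
accuracy = cross_val_score(LinearDiscriminantAnalysis(), features, labels, cv=5)
print(f"mean accuracy: {accuracy.mean():.2f} (chance = 0.33)")
```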

16.
Franklin DW, So U, Burdet E, Kawato M. PLoS ONE 2007, 2(12): e1336

Background

When learning to perform a novel sensorimotor task, humans integrate multi-modal sensory feedback such as vision and proprioception in order to make the appropriate adjustments to successfully complete the task. Sensory feedback is used both during movement to control and correct the current movement, and to update the feed-forward motor command for subsequent movements. Previous work has shown that adaptation to stable dynamics is possible without visual feedback. However, it is not clear to what degree visual information during movement contributes to this learning or whether it is essential to the development of an internal model or impedance controller.

Methodology/Principal Findings

We examined the effects of removing visual feedback during movement on the learning of both stable and unstable dynamics, in comparison with the case when both vision and proprioception are available. Subjects were able to learn to make smooth movements in both types of novel dynamics after learning with or without visual feedback. By examining the endpoint stiffness and force after learning, it could be shown that subjects adapted to both types of dynamics in the same way whether or not they were provided with visual feedback of their trajectory. The main effects of visual feedback were to increase the success rate of movements, slightly straighten the path, and significantly reduce variability near the end of the movement.

Conclusions/Significance

These findings suggest that visual feedback of the hand during movement is not necessary for adaptation to either stable or unstable novel dynamics. Instead, vision appears to be used to fine-tune corrections of the hand trajectory at the end of reaching movements.
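Endpoint stiffness, as examined above, is typically estimated by applying small positional perturbations during movement and regressing the measured restoring forces against the displacements (F ≈ −K Δx). The sketch below shows a least-squares version of that estimate on synthetic data; it is not the authors' exact procedure, and all names and numbers are illustrative.

```python
import numpy as np

def estimate_endpoint_stiffness(displacements, restoring_forces):
    """Least-squares estimate of the 2x2 endpoint stiffness matrix K.

    displacements    -- shape (n_perturbations, 2), small hand displacements (m)
    restoring_forces -- shape (n_perturbations, 2), measured restoring forces (N)

    Model: F = -K @ dx  =>  solve  dx @ K.T = -F  in the least-squares sense.
    """
    dx = np.asarray(displacements, dtype=float)
    f = np.asarray(restoring_forces, dtype=float)
    k_t, *_ = np.linalg.lstsq(dx, -f, rcond=None)
    return k_t.T

# Synthetic check: recover a known stiffness matrix from noisy measurements
rng = np.random.default_rng(1)
K_true = np.array([[300.0, 50.0], [50.0, 500.0]])  # N/m
dx = rng.uniform(-0.008, 0.008, size=(40, 2))
f = -(dx @ K_true.T) + rng.normal(0, 0.05, size=(40, 2))
print(estimate_endpoint_stiffness(dx, f))
```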

17.
Human sensory and motor systems provide the natural means for the exchange of information between individuals, and, hence, the basis for human civilization. The recent development of brain-computer interfaces (BCI) has provided an important element for the creation of brain-to-brain communication systems, and precise brain stimulation techniques are now available for the realization of non-invasive computer-brain interfaces (CBI). These technologies, BCI and CBI, can be combined to realize the vision of non-invasive, computer-mediated brain-to-brain (B2B) communication between subjects (hyperinteraction). Here we demonstrate the conscious transmission of information between human brains through the intact scalp and without intervention of motor or peripheral sensory systems. Pseudo-random binary streams encoding words were transmitted between the minds of emitter and receiver subjects separated by great distances, representing the realization of the first human brain-to-brain interface. In a series of experiments, we established internet-mediated B2B communication by combining a BCI based on voluntary motor imagery-controlled electroencephalographic (EEG) changes with a CBI inducing the conscious perception of phosphenes (light flashes) through neuronavigated, robotized transcranial magnetic stimulation (TMS), with special care taken to block sensory (tactile, visual or auditory) cues. Our results provide a critical proof-of-principle demonstration for the development of conscious B2B communication technologies. More fully developed, related implementations will open new research venues in cognitive, social and clinical neuroscience and the scientific study of consciousness. We envision that hyperinteraction technologies will eventually have a profound impact on the social structure of our civilization and raise important ethical issues.

18.
Two parts of a geometrical figure are presented consecutively to healthy adult subjects in the left and right visual fields; the subjects have to compare them mentally and decide whether these parts form a standard figure. The correctness of the response is controlled by a computer, which displays the word "good" or "error" on the screen. The number of correct decisions in this visual-spatial task does not depend on the hemisphere to which the information is addressed. The reaction time is substantially shorter if the information comes "directly" to the right hemisphere. Owing to better training of the left hemisphere, the interhemispheric difference in reaction time gradually disappears over repeated tests. Training in mental "constructing" takes place only in the tests following a positive feedback stimulus. Analysis of the amplitude and temporal parameters of the P300 wave shows that, during correct decisions in the visual-spatial task, the level of activation in the right hemisphere is higher than in the left one.

19.

Background

The value of a predicted reward can be estimated based on the conjunction of both the intrinsic reward value and the length of time to obtain it. The question we addressed is how the two aspects, reward size and proximity to reward, influence the responses of neurons in rostral anterior cingulate cortex (rACC), a brain region thought to play an important role in reward processing.

Methods and Findings

We recorded from single neurons while two monkeys performed a multi-trial reward schedule task. The monkeys performed 1–4 sequential color discrimination trials to obtain a reward of 1–3 liquid drops. There were two task conditions: a valid cue condition, in which the number of trials and the reward amount were associated with visual cues, and a random cue condition, in which the cue was picked from the cue set at random. In the valid cue condition, neuronal firing was strongly modulated by the predicted reward proximity during the trials, whereas information about the predicted reward amount was almost absent at those times. In substantial subpopulations, the neuronal responses decreased or increased gradually as the schedule progressed toward the predicted outcome. These two gradually modulating signals could be used to calculate the effect of time on the perception of reward value. In the random cue condition, little information about reward proximity or reward amount was encoded during the course of the trial before reward delivery, but when the reward was actually delivered the responses reflected both the reward proximity and the reward amount.

Conclusions

Our results suggest that rACC neurons encode information about reward proximity and amount in a manner that depends on the utility of the reward information. The way in which this information is represented could be used in the moment-to-moment calculation of the effect of time and amount on the predicted outcome value.

20.

Background

Navigation based on chemosensory information is one of the most important skills in the animal kingdom. Studies on odor localization suggest that humans have lost this ability. However, the experimental approaches used so far were limited to explicit judgements, which might ignore a residual ability for directional smelling on an implicit level without conscious appraisal.

Methods

A novel cueing paradigm was developed in order to determine whether an implicit ability for directional smelling exists. Participants performed a visual two-alternative forced choice task in which the target was preceded either by a side-congruent or a side-incongruent olfactory spatial cue. An explicit odor localization task was implemented in a second experiment.

Results

No effect of cue congruency on mean reaction times was found. However, a time-by-condition interaction emerged, with significantly slower responses to congruently than to incongruently cued targets at the beginning of the experiment. This cueing effect gradually disappeared over the course of the experiment. In addition, participants performed at chance level in the explicit odor localization task, thus confirming the results of previous research.

Conclusion

The implicit cueing task suggests the existence of spatial information processing in the olfactory system. Response slowing after a side-congruent olfactory cue is interpreted as a cross-modal attentional interference effect. In addition, habituation might have led to a gradual disappearance of the cueing effect. It is concluded that under immobile conditions with passive monorhinal stimulation, humans are unable to explicitly determine the location of a pure odorant. Implicitly, however, odor localization seems to exert an influence on human behaviour. To our knowledge, these data are the first to show implicit effects of odor localization on overt human behaviour and thus support the hypothesis of residual directional smelling in humans.
