Similar Articles
20 similar articles retrieved.
1.
There has been a recent and dramatic growth of interest in the psychological and neural mechanisms of multisensory integration between different sensory modalities. Much of this recent research has focused specifically on how multisensory representations of body parts, and of the 'peripersonal' space immediately around them, are constructed. Research has also focused on how this may lead to multisensorially determined perceptions of body parts, to action execution, and even to attributions of agency and self-ownership for the body parts in question. Converging evidence from animal and human studies suggests that the primate brain constructs various body-part-centred representations of space, based on the integration of visual, tactile and proprioceptive information. These representations can change plastically following active tool-use that extends reachable space and also modifies the representation of peripersonal space. These new results indicate that a modern cognitive neuroscience approach to the classical concept of the 'body schema' may now be within reach.

2.
In the presence of vision, finalized motor acts can trigger spatial remapping, i.e., reference-frame transformations that allow for better interaction with targets. However, it remains unclear how peripersonal space is encoded and remapped depending on the availability of visual feedback and on the target position within the individual's reachable space, and which cerebral areas subserve such processes. Here, functional magnetic resonance imaging (fMRI) was used to examine neural activity while healthy young participants performed reach-to-grasp movements with and without visual feedback and at different distances of the target from the effector (near the hand, about 15 cm from the starting position, vs. far from the hand, about 30 cm from the starting position). Brain responses in the superior parietal lobule bilaterally, in the right dorsal premotor cortex, and in the anterior part of the right inferior parietal lobule were significantly greater during visually guided grasping of targets located at the far distance compared to grasping of targets located near the hand. In the absence of visual feedback, the inferior parietal lobule exhibited greater activity during grasping of targets at the near compared to the far distance. Results suggest that, in the presence of visual feedback, a visuo-motor circuit integrates visuo-motor information when targets are located farther away. Conversely, in the absence of visual feedback, encoding of space may demand multisensory remapping processes, even in the case of more proximal targets.

3.

Background

A stimulus approaching the body requires fast processing and appropriate motor reactions. In monkeys, fronto-parietal networks are involved both in integrating multisensory information within a limited space surrounding the body (i.e. peripersonal space, PPS) and in action planning and execution, suggesting an overlap between sensory representations of space and motor representations of action. In the present study we investigate whether these overlapping representations also exist in the human brain.

Methodology/Principal Findings

We recorded motor-evoked potentials (MEPs) from hand muscles, induced by single-pulse transcranial magnetic stimulation (TMS), after presenting an auditory stimulus either near the hand or in far space. MEPs recorded 50 ms after the onset of near sounds were enhanced compared to MEPs evoked after far sounds. This near-far modulation faded at longer inter-stimulus intervals, and reversed completely for MEPs recorded 300 ms after sound onset; at that time point, higher motor excitability was associated with far sounds. Such auditory modulation of hand motor representation was specific to a hand-centred, not a body-centred, reference frame.

Conclusions/Significance

This pattern of corticospinal modulation highlights the relation between space and time in the PPS representation: an early facilitation for near stimuli may reflect immediate motor preparation, whereas, at later time intervals, motor preparation relates to distant stimuli potentially approaching the body.

4.
When we look at a stationary object, the perceived direction of gaze (where we are looking) is aligned with the physical direction of the eyes (where our eyes are oriented) by which the object is foveated. However, this alignment may not hold in a dynamic situation. Our experiments assessed the perceived locations of two brief stimuli (1 ms) simultaneously displayed at two different physical locations during a saccade. The first stimulus was at the instantaneous location to which the eyes were oriented, and the second was always at the same location as the initial fixation point. When the timing of these stimuli was changed intra-saccadically, their perceived locations were dissociated. The first stimulus was consistently perceived near the target that would be foveated at saccade termination. The second stimulus, once perceived near the target location, shifted in the direction opposite to that of the saccade as its latency from the saccade increased. These results suggest that gaze orientation is adjusted independently of the physical orientation of the eyes during saccades. The spatial dissociation of the two stimuli may reflect sensorimotor control of gaze during saccades.

5.
Neuropsychological and functional MRI data suggest that two functionally and anatomically dissociable streams of visual processing exist: a ventral perception-related stream and a dorsal action-related stream. However, relatively little is known about how the two streams interact in the intact brain during the production of adaptive behavior. Using functional MRI and a virtual three-dimensional paradigm, we examined whether the parieto-occipital junction (POJ) acts as an interface for the integration and processing of information between the dorsal and ventral streams in near and far space processing. Virtual-reality three-dimensional near and far space was defined by manipulating binocular disparity, with -68.76 arcmin crossed disparity for near space and +68.76 arcmin uncrossed disparity for far space. Our results showed that the POJ and the bilateral superior occipital gyrus (SOG) showed relatively increased activity when responding to targets presented in near space compared with far space, independent of the retinotopic and perceived sizes of the target. Furthermore, the POJ showed enhanced functional connectivity with both the dorsal and ventral streams during far space processing irrespective of target size, supporting the view that the POJ acts as an interface between the dorsal and ventral streams in disparity-defined near and far space processing. In contrast, the bilateral SOG showed enhanced functional connectivity only with the ventral stream when the retinotopic sizes of targets in near and far space were matched, suggesting a functional dissociation between the POJ and the bilateral SOG.
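For readers unfamiliar with disparity-defined depth, the sketch below illustrates how a crossed (negative) or uncrossed (positive) disparity value of the kind quoted above follows from simple viewing geometry. It is an illustrative approximation only; the fixation distance and interocular distance used here are hypothetical and are not taken from the study.

```python
import math

def disparity_arcmin(target_dist_m, fixation_dist_m, iod_m=0.063):
    """Approximate relative binocular disparity (arcmin) of a target seen while
    fixating at fixation_dist_m, using the small-angle approximation.
    Negative values = crossed disparity (target nearer than fixation);
    positive values = uncrossed disparity (target farther than fixation)."""
    disparity_rad = iod_m * (1.0 / fixation_dist_m - 1.0 / target_dist_m)
    return math.degrees(disparity_rad) * 60.0

# Hypothetical geometry: fixation/screen at 0.57 m, interocular distance 6.3 cm.
print(disparity_arcmin(0.45, 0.57))  # nearer target  -> crossed (negative) disparity
print(disparity_arcmin(0.77, 0.57))  # farther target -> uncrossed (positive) disparity
```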

6.
Manipulation of hand posture, such as crossing the hands, has frequently been used to study how the body and its immediately surrounding space are represented in the brain. Abundant data show that a crossed-arms posture impairs remapping of tactile stimuli from a somatotopic to an external-space reference frame and deteriorates performance on several tactile processing tasks. Here we investigated how impaired tactile remapping affects illusory self-touch induced by the non-visual variant of the rubber hand illusion (RHI) paradigm. In this paradigm, blindfolded participants (Experiment 1) had their hands either uncrossed or crossed over the body midline. The strength of illusory self-touch was measured with questionnaire ratings and proprioceptive drift. Our results showed that, during synchronous tactile stimulation, the strength of illusory self-touch increased when the hands were crossed compared to the uncrossed posture. Follow-up experiments showed that the increase in illusion strength was not related to unfamiliar hand position (Experiment 2) and that the illusion was equally strengthened regardless of where in peripersonal space the hands were crossed (Experiment 3). However, while the boosting effect of crossing the hands was evident in subjective ratings, proprioceptive drift was not modulated by the crossed posture. Finally, in contrast to the illusion increase in the non-visual RHI, crossed hand postures did not alter illusory ownership or proprioceptive drift in the classical, visuo-tactile version of the RHI (Experiment 4). We argue that the increase in illusory self-touch is related to misalignment of somatotopic and external reference frames and consequently inadequate tactile-proprioceptive integration, leading to re-weighting of the tactile and proprioceptive signals. The present study not only shows that illusory self-touch can be induced by crossing the hands, but, importantly, that this posture is associated with a stronger illusion.

7.

Background

We physically interact with external stimuli when they occur within a limited space immediately surrounding the body, i.e., peripersonal space (PPS). In the primate brain, specific fronto-parietal areas are responsible for the multisensory representation of PPS, integrating tactile, visual and auditory information occurring on and near the body. Dynamic stimuli are particularly relevant for PPS representation, as they may signal potential threats approaching the body. However, behavioural tasks for studying PPS representation with moving stimuli are lacking. Here we propose a new dynamic audio-tactile interaction task in order to assess the extent of PPS in a more functionally and ecologically valid condition.

Methodology/Principal Findings

Participants vocally responded to a tactile stimulus administered at the hand at different delays from the onset of task-irrelevant dynamic sounds, which gave the impression of a sound source either approaching or receding from the subject's hand. Results showed that a moving auditory stimulus sped up the processing of a tactile stimulus at the hand as long as the sound was perceived at a limited distance from the hand, that is, within the boundaries of the PPS representation. The audio-tactile interaction effect was stronger when sounds were approaching than when sounds were receding.

Conclusion/Significance

This study provides a new method to dynamically assess PPS representation: the function describing the relationship between tactile processing and the position of sounds in space can be used to estimate the location of the PPS boundary, along a spatial continuum between far and near space, in a valuable and ecologically significant way.
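As a rough illustration of how such a function can be used, the sketch below fits a sigmoid to hypothetical tactile reaction times as a function of sound distance and reads the PPS boundary off the sigmoid's central point. The data, parameter names, and starting values are invented for the example and are not taken from the study.

```python
import numpy as np
from scipy.optimize import curve_fit

def sigmoid(d, rt_near, rt_far, center, slope):
    """Tactile RT as a function of sound distance d: fast (rt_near) when the sound
    is close to the hand, slow (rt_far) when it is far; 'center' marks the estimated
    PPS boundary and 'slope' controls how sharply RT changes around it."""
    return rt_near + (rt_far - rt_near) / (1.0 + np.exp(-(d - center) / slope))

# Hypothetical data: sound distance from the hand (cm) vs. mean tactile RT (ms).
distance = np.array([5, 15, 25, 35, 45, 55, 65], dtype=float)
rt = np.array([355, 358, 366, 384, 398, 401, 403], dtype=float)

params, _ = curve_fit(sigmoid, distance, rt, p0=[350.0, 400.0, 35.0, 5.0])
print(f"Estimated PPS boundary: {params[2]:.1f} cm from the hand")
```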

8.
The neurobiology of reaching has been extensively studied in human and non-human primates. However, the mechanisms that allow a subject to decide, without engaging in explicit action, whether an object is reachable are not fully understood. Some studies conclude that decisions near the reach limit depend on motor simulations of the reaching movement. Others have shown that the body schema plays a role in explicit and implicit distance estimation, especially after motor practice with a tool. In this study we evaluate the causal role of multisensory body representations in the perception of reachable space. We reasoned that if the body schema is used to estimate reach, an illusion of finger size induced by proprioceptive stimulation should propagate to the perception of reaching distances. To test this hypothesis, we induced a proprioceptive illusion of extension or shrinkage of the right index finger while participants judged a series of LEDs as reachable or non-reachable without actual movement. Our results show that reach distance estimation depends on the illusory perceived size of the finger: illusory elongation produced a shift of reaching distance away from the body, whereas illusory shrinkage produced the opposite effect. Combining these results with previous findings, we suggest that deciding whether a target is reachable requires an integration of body inputs in higher-order multisensory parietal areas that engage in movement simulations through connections with frontal premotor areas.

9.

Background

Visually determining what is reachable in peripersonal space requires not only information about the egocentric location of objects but also information about the possibilities of action with the body, which are context dependent. The aim of the present study was to test the role of motor representations in the visual perception of peripersonal space.

Methodology

Seven healthy participants underwent a TMS study while performing a right-left decision (control) task or perceptually judging whether a visual target was reachable with their right hand. An actual grasping movement task was also included. Single-pulse TMS was delivered on 80% of the trials over the left motor and premotor cortex and over a control site (the temporo-occipital area), at 90% of the resting motor threshold and at different SOAs (50 ms, 100 ms, 200 ms, or 300 ms).

Principal Findings

Results showed a facilitation effect of TMS on reaction times in all tasks, whatever the site stimulated, up to 200 ms after stimulus presentation. However, the facilitation effect was on average 34 ms smaller when stimulating the motor cortex in the perceptual judgement task, especially for stimuli located at the boundary of peripersonal space.

Conclusion

This study provides the first evidence that motor areas of the brain participate in the visual determination of what is reachable. We discuss how motor representations may feed the perceptual system with information about possible interactions with nearby objects and thus may contribute to the perception of the boundary of peripersonal space.

10.
Current accounts of spatial cognition and human-object interaction suggest that the representation of peripersonal space depends on an action-specific system that remaps its representation according to action requirements. Here we demonstrate that this mechanism is sensitive to knowledge about properties of objects. In two experiments we explored the interaction between physical distance and object attributes (functionality, desirability, graspability, etc.) through a reaching estimation task in which participants indicated whether objects were near enough to be reached. Using both a real and a cutting-edge digital scenario, we demonstrate that perceived reaching distance is influenced by ease of grasp and by the affective valence of an object. Objects with a positive affective valence tend to be perceived as reachable at locations at which neutral or negative objects are perceived as non-reachable. In addition, reaction times to distant (non-reachable) positive objects suggest a bias to perceive positive objects as closer than negative and neutral objects (Experiment 2). These results highlight the importance of the affective valence of objects in the action-specific mapping of the peripersonal/extrapersonal space system.

11.
Visual perception is based on both incoming sensory signals and information about ongoing actions. Recordings from single neurons have shown that corollary discharge signals can influence visual representations in parietal, frontal and extrastriate visual cortex, as well as the superior colliculus (SC). In each of these areas, visual representations are remapped in conjunction with eye movements. Remapping provides a mechanism for creating a stable, eye-centred map of salient locations. Temporal and spatial aspects of remapping are highly variable from cell to cell and area to area. Most neurons in the lateral intraparietal area remap stimulus traces, as do many neurons in closely allied areas such as the frontal eye fields, the SC and extrastriate area V3A. Remapping is not purely a cortical phenomenon: stimulus traces are remapped from one hemifield to the other even when direct cortico-cortical connections are removed. The neural circuitry that produces remapping is distinguished by significant plasticity, suggesting that updating of salient stimuli is fundamental for spatial stability and visuospatial behaviour. These findings provide new evidence that a unified and stable representation of visual space is constructed by redundant circuitry, comprising cortical and subcortical pathways, with a remarkable capacity for reorganization.

12.
Spatial updating in human parietal cortex
Merriam EP, Genovese CR, Colby CL. Neuron 2003, 39(2):361-373
Single neurons in monkey parietal cortex update visual information in conjunction with eye movements. This remapping of stimulus representations is thought to contribute to spatial constancy. We hypothesized that a similar process occurs in human parietal cortex and that we could visualize it with functional MRI. We scanned subjects during a task that involved remapping of visual signals across hemifields. We observed an initial response in the hemisphere contralateral to the visual stimulus, followed by a remapped response in the hemisphere ipsilateral to the stimulus. We ruled out the possibility that this remapped response resulted from either eye movements or visual stimuli alone. Our results demonstrate that updating of visual information occurs in human parietal cortex.

13.
Brown LE, Doole R, Malfait N. PLoS ONE 2011, 6(12):e28999
Some visual-tactile (bimodal) cells have visual receptive fields (vRFs) that overlap and extend moderately beyond the skin of the hand. Neurophysiological evidence suggests, however, that a vRF will grow to encompass a hand-held tool following active tool use but not after passive holding. Why does active tool use, and not passive holding, lead to spatial adaptation near a tool? We asked whether spatial adaptation could be the result of motor or visual experience with the tool, and we distinguished between these alternatives by isolating motor from visual experience with the tool. Participants learned to use a novel, weighted tool. The active-training group received both motor and visual experience with the tool, the passive-training group received visual experience with the tool but no motor experience, and a no-training control group received neither visual nor motor experience with the tool. After training, we used a cueing paradigm to measure how quickly participants detected targets, varying whether the tool was placed near or far from the target display. Only the active-training group detected targets more quickly when the tool was placed near, rather than far from, the target display. This effect of tool location was not present for either the passive-training or control groups. These results suggest that motor learning influences how visual space around the tool is represented.

14.
Disruption of space perception due to cortical lesions
Landis T. Spatial Vision 2000, 13(2-3):179-191
Space control has long been considered a unitary function, and the structure associated with this function was the right parietal lobe. Hemispheric specialization for space appeared to make it automatically a human-specific function. However, recent primate research shows different regions of the parietal lobes to be differently involved in space control. A review of the literature, together with our own cases, shows ample evidence for a modular organization of space control in humans, on the basis of specific deficits subsequent to circumscribed cerebral lesions. Lesions differentially influence retinotopic, spatiotopic, egocentric, and allocentric frames of reference. They also differentially influence attention to far or near space, and to global or local features of space. Moreover, preattentive processes can be studied in the neglected hemispace of humans and prove to be sensitive to the meaning of visual stimuli. Space representations, and the attentional mechanisms that seem to operate on them, are organized in our brain in a very modular fashion, similar to the modularity of visual submodalities. There is probably not a unified space representation in the parietal lobes, but rather distributed functional modules. Thus, the study of visual object recognition, whether by the brain or by machines, is inconceivable without considering space, attention and awareness.

15.
In Experiment I, rats were trained on a discriminated Y-maze active avoidance task following administration of saline or one of three dosages (0.75, 1.50, or 3.0 mg/kg) of d-amphetamine. The six measures recorded simultaneously during each session indicated that the avoidance facilitation produced by d-amphetamine was due to attenuation of shock-induced behavioral suppression, resulting in a behavioral baseline more compatible with the animal's associating running with shock avoidance. Results from Experiment II showed that the avoidance decrement following drug termination depends on the training dosage and on whether the drug is abruptly or gradually withdrawn. This experiment further suggested that the disruption is due to dissociation between the drug and non-drug states and could be attenuated by gradually withdrawing the drug over training sessions.

16.
Sparse representation of sounds in the unanesthetized auditory cortex
How do neuronal populations in the auditory cortex represent acoustic stimuli? Although sound-evoked neural responses in the anesthetized auditory cortex are mainly transient, recent experiments in the unanesthetized preparation have emphasized subpopulations with other response properties. To quantify the relative contributions of these different subpopulations in the awake preparation, we estimated the representation of sounds across the neuronal population using a representative ensemble of stimuli. We used cell-attached recording with a glass electrode, a method for which single-unit isolation does not depend on neuronal activity, to quantify the fraction of neurons engaged by acoustic stimuli (tones, frequency-modulated sweeps, white-noise bursts, and natural stimuli) in the primary auditory cortex of awake head-fixed rats. We find that the population response is sparse, with stimuli typically eliciting high firing rates (>20 spikes/second) in fewer than 5% of neurons at any instant. Some neurons had very low spontaneous firing rates (<0.01 spikes/second). At the other extreme, some neurons had driven rates in excess of 50 spikes/second. Interestingly, the overall population response was well described by a lognormal distribution, rather than the exponential distribution that is often reported. Our results represent, to our knowledge, the first quantitative evidence for sparse representations of sounds in the unanesthetized auditory cortex. Our results are compatible with a model in which most neurons are silent much of the time, and in which representations are composed of small dynamic subsets of highly active neurons.
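For readers wanting a concrete sense of the lognormal-versus-exponential comparison mentioned above, the sketch below fits both distributions to a synthetic set of firing rates and compares their log-likelihoods. The data and parameters are invented for illustration and do not reproduce the study's analysis.

```python
import numpy as np
from scipy import stats

# Synthetic stimulus-evoked firing rates (spikes/s) across a hypothetical population.
rng = np.random.default_rng(0)
rates = rng.lognormal(mean=0.5, sigma=1.2, size=200)

# Fit both candidate distributions (location fixed at zero) and compare log-likelihoods.
ln_shape, ln_loc, ln_scale = stats.lognorm.fit(rates, floc=0)
ex_loc, ex_scale = stats.expon.fit(rates, floc=0)

ll_lognorm = stats.lognorm.logpdf(rates, ln_shape, ln_loc, ln_scale).sum()
ll_expon = stats.expon.logpdf(rates, ex_loc, ex_scale).sum()
print(f"log-likelihood: lognormal = {ll_lognorm:.1f}, exponential = {ll_expon:.1f}")
```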

17.
The experiments reported herein probe the visual cortical mechanisms that control near-far percepts in response to two-dimensional stimuli. Figural contrast is found to be a principal factor for the emergence of percepts of near versus far in pictorial stimuli, especially when stimulus duration is brief. Pictorial factors such as interposition (Experiment 1) and partial occlusion (Experiments 2 and 3) may cooperate, as generally predicted by cue combination models, or compete with contrast factors in the manner predicted by the FACADE model. In particular, if the geometrical configuration of an image favors activation of cortical bipole grouping cells, as at the top of a T-junction, then this advantage can cooperate with the contrast of the configuration to facilitate a near-far percept at a lower contrast than at an X-junction. Varying the exposure duration of the stimuli shows that the more balanced bipole competition in the X-junction case takes longer exposures to resolve than the bipole competition in the T-junction case (Experiment 3).

18.
In two experiments, human participants searched in dynamic three-dimensional virtual-environment rectangular enclosures. Unlike previous studies involving learning of features and geometry, we trained features and geometry separately before placing them in conflict. Specifically, participants learned to respond to rewarded features located along the principal axis of a rectangular search space and to respond to rewarded geometry of a rectangular search space in separate training phases, followed by a single test trial. During the test trial, features and geometry were placed in conflict by situating bins rewarded during feature training in geometric corners unrewarded during geometry training, and bins unrewarded during feature training in geometric corners rewarded during geometry training. Results of Experiment 1 indicated that although all participants learned features and geometry at an equivalent rate and to an equivalent level, performance during the test trial indicated no preferential responding to features or geometry. However, choice reaction time was significantly longer during the test trial compared to that of the last feature and last geometry training trials. Experiment 2 attempted to dissociate the information content of features and geometry from their acquired associative strength by rewarding only one geometric corner during geometry training. Results of Experiment 2 indicated that although features had presumably acquired greater associative strength relative to that of geometry by the end of training, performance during the test trial indicated no preferential responding to features or geometry. As in Experiment 1, choice reaction time was significantly longer during the test trial compared to that of the last feature and last geometry training trials. Collectively, the results seem to provide converging evidence against a view-based matching account of spatial learning, appear inconsistent with standard associative-based accounts of spatial learning, and suggest that the information content of spatial cues may play an important role in spatial learning.

19.
Different samples occasioning the same reinforced comparison response in matching-to-sample are interchangeable for one another outside of original training. The present studies were designed to verify the role of these common responses in producing acquired sample equivalence by explicitly varying the presence or absence of this commonality during training. In each of the two experiments, one group of pigeons made the same reinforced choice response following multiple sample stimuli, whereas controls either made different reinforced choices following each sample (Experiment 1) or made reinforced choices after only two of four center-key stimuli (Experiment 2). Later, two of the original samples/center-key stimuli were established as conditional cues for new comparison responses, after which the ability of the remaining samples/center-key stimuli to occasion those new responses was assessed. Following common-response training, matching accuracy was higher on class-consistent than on class-inconsistent transfer tests, whereas accuracy in the controls was generally intermediate between these two extremes, a pattern similar to that reported in the human paired-associate literature. These findings confirm that occasioning the same reinforced choice response is one means by which disparate samples become functionally equivalent.

20.
Figuring space by time
Ahissar E, Arieli A. Neuron 2001, 32(2):185-201
Sensory information is encoded both in space and in time. Spatial encoding is based on the identity of activated receptors, while temporal encoding is based on the timing of activation. In order to generate accurate internal representations of the external world, the brain must decode both types of encoded information, even when processing stationary stimuli. We review here evidence in support of a parallel processing scheme for spatially and temporally encoded information in the tactile system and discuss the advantages and limitations of sensory-derived temporal coding in both the tactile and visual systems. Based on a large body of data, we propose a dynamic theory for vision, which avoids the impediments of previous dynamic theories.
