Similar Articles (20 results)
1.

Background

We physically interact with external stimuli when they occur within a limited space immediately surrounding the body, i.e., peripersonal space (PPS). In the primate brain, specific fronto-parietal areas are responsible for the multisensory representation of PPS, integrating tactile, visual, and auditory information occurring on and near the body. Dynamic stimuli are particularly relevant for PPS representation, as they may signal potential threats approaching the body. However, behavioural tasks for studying PPS representation with moving stimuli are lacking. Here we propose a new dynamic audio-tactile interaction task to assess the extension of PPS under more functionally and ecologically valid conditions.

Methodology/Principal Findings

Participants vocally responded to a tactile stimulus administered to the hand at different delays from the onset of task-irrelevant dynamic sounds, which gave the impression of a sound source either approaching or receding from the subject’s hand. Results showed that a moving auditory stimulus sped up the processing of a tactile stimulus at the hand as long as the sound was perceived within a limited distance of the hand, that is, within the boundaries of the PPS representation. The audio-tactile interaction effect was stronger for approaching than for receding sounds.

Conclusion/Significance

This study provides a new method for dynamically assessing PPS representation: the function relating tactile processing to the position of sounds in space can be used to estimate the location of PPS boundaries, along a spatial continuum between far and near space, in an ecologically meaningful way.
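A hedged sketch of this boundary-estimation idea: fit a sigmoidal function to mean tactile reaction times as a function of the sound's distance from the hand, and take the function's central point as the PPS boundary estimate. The data, parameter names, and starting values below are invented for illustration; this is not the authors' analysis code.

```python
import numpy as np
from scipy.optimize import curve_fit

def sigmoid(d, rt_near, rt_far, boundary, slope):
    """Tactile RT as a sigmoidal function of sound distance d (cm).

    The central point `boundary` serves as the PPS boundary estimate.
    """
    return rt_near + (rt_far - rt_near) / (1.0 + np.exp((boundary - d) / slope))

# Hypothetical mean tactile RTs (ms) at six sound-source distances (cm):
# RTs are fastest when the sound is near the hand (audio-tactile facilitation)
distance = np.array([15.0, 30.0, 45.0, 60.0, 75.0, 90.0])
mean_rt = np.array([355.0, 358.0, 384.0, 409.0, 419.0, 421.0])

params, _ = curve_fit(sigmoid, distance, mean_rt, p0=[350.0, 420.0, 50.0, 10.0])
print(f"Estimated PPS boundary: {params[2]:.1f} cm")
```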

2.

Background

Visually determining what is reachable in peripersonal space requires information about the egocentric location of objects but also information about the possibilities of action with the body, which are context dependent. The aim of the present study was to test the role of motor representations in the visual perception of peripersonal space.

Methodology

Seven healthy participants underwent a TMS study while performing a right-left decision (control) task or perceptually judging whether a visual target was reachable with their right hand. An actual grasping movement task was also included. Single-pulse TMS was delivered on 80% of trials over the left motor and premotor cortex or over a control site (the temporo-occipital area), at 90% of the resting motor threshold and at different stimulus-onset asynchronies (50 ms, 100 ms, 200 ms, or 300 ms).

Principal Findings

Results showed a facilitation effect of TMS on reaction times in all tasks, regardless of the stimulated site, for stimulation delivered up to 200 ms after stimulus presentation. However, the facilitation effect was on average 34 ms smaller when the motor cortex was stimulated during the perceptual judgement task, especially for stimuli located at the boundary of peripersonal space.

Conclusion

This study provides the first evidence that brain motor areas participate in the visual determination of what is reachable. We discuss how motor representations may feed the perceptual system with information about possible interactions with nearby objects and may thus contribute to the perception of the boundary of peripersonal space.

3.

Background

Corticospinal excitability of the primary motor cortex (M1) representation of the hand muscles is depressed by bilateral lower limb muscle fatigue. The effects of fatiguing unilateral lower limb contraction on corticospinal excitability and transcallosal inhibition in the M1 hand areas remain unclear. The purpose of this study was to determine the effects of fatiguing unilateral plantar flexions on corticospinal excitability in the M1 hand areas and on transcallosal inhibition originating from the M1 hand area contralateral to the fatigued ankle.

Methods

Ten healthy volunteers (26.2 ± 3.8 years) participated in the study. Using transcranial magnetic stimulation, we examined motor evoked potentials (MEPs) and interhemispheric inhibition (IHI) recorded from the resting first dorsal interosseous (FDI) muscles before, immediately after, and 10 min after a fatiguing unilateral lower limb muscle contraction, which consisted of 40 intermittent unilateral maximal isometric plantar flexions (2 s of contraction followed by 1 s of rest).

Results

MEPs in the FDI muscle ipsilateral to the fatigued ankle showed no significant changes after the fatiguing contraction, whereas IHI from the M1 hand area contralateral to the fatigued ankle to the ipsilateral M1 hand area decreased. MEPs in the FDI muscle contralateral to the fatigued ankle were increased after the fatiguing contraction.

Conclusions

These results suggest that fatiguing unilateral lower limb muscle contraction differentially influences corticospinal excitability of the contralateral M1 hand area and IHI from the contralateral to the ipsilateral M1 hand area. Although fatiguing unilateral lower limb muscle contraction increases corticospinal excitability of the ipsilateral M1 hand area, this increase is not associated with the decreased IHI.
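For readers unfamiliar with how IHI is quantified in paired-pulse TMS protocols of this kind, it is conventionally expressed as the ratio of the conditioned test-MEP amplitude to the unconditioned test-MEP amplitude, with values below 1 indicating inhibition. The sketch below only illustrates that convention with invented amplitudes; it is not the study's analysis.

```python
import numpy as np

# Hypothetical peak-to-peak MEP amplitudes (mV) from the resting FDI:
# test pulse alone vs. test pulse conditioned by a pulse to the opposite M1
test_meps = np.array([1.20, 1.05, 1.32, 1.18, 1.25])
conditioned_meps = np.array([0.62, 0.70, 0.58, 0.66, 0.61])

ihi = conditioned_meps.mean() / test_meps.mean()
print(f"IHI ratio (conditioned / test): {ihi:.2f}")  # < 1 indicates inhibition
```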

4.

Background

Decoding of frequency-modulated (FM) sounds is essential for phoneme identification. This study investigates selectivity to FM direction in the human auditory system.

Methodology/Principal Findings

Magnetoencephalography was recorded in 10 adults during a two-tone adaptation paradigm with a 200-ms interstimulus interval. Stimuli were pairs of sweeps with either the same or different frequency-modulation direction. To verify that FM repetition effects could not be accounted for by onset and offset properties, we additionally assessed responses to pairs of unmodulated tones with either the same or different frequency composition. For the FM sweeps, N1m event-related magnetic field components were found at 103 and 130 ms after the onset of the first (S1) and second stimulus (S2), respectively. This was followed by a sustained component starting at about 200 ms after S2. The sustained response was significantly stronger for stimulation with the same compared to a different FM direction. This effect was not observed for the unmodulated control stimuli.

Conclusions/Significance

Low-level processing of FM sounds was characterized by repetition enhancement to stimulus pairs with same versus different FM directions. This effect was FM-specific; it did not occur for unmodulated tones. The present findings may reflect specific interactions between frequency separation and temporal distance in the processing of consecutive FM sweeps.

5.
Yamamoto K, Kawabata H. PLoS ONE 2011, 6(12): e29414

Background

We ordinarily perceive our own voice as occurring simultaneously with vocal production, but this sense of simultaneity in vocalization is easily disrupted by delayed auditory feedback (DAF). DAF causes normally fluent speakers to have difficulty speaking fluently but helps people who stutter to improve speech fluency. However, the temporal mechanism underlying the integration of the motor production of voice and the auditory perception of vocal sound remains unclear. In this study, we used an adaptation technique to investigate the temporal tuning mechanism that integrates motor sensation and vocal sound under DAF.

Methods and Findings

Participants produced a single voice sound repeatedly under DAF with specific delay times (0, 66, or 133 ms) for three minutes to induce 'lag adaptation'. They then judged the simultaneity between the motor sensation and the vocal sound given as feedback. We found that lag adaptation shifted simultaneity responses toward the adapted auditory delays. This indicates that the temporal tuning mechanism in vocalization can be recalibrated after prolonged exposure to delayed vocal sounds. Furthermore, we found that this temporal recalibration is affected by the average delay time experienced in the adaptation phase.

Conclusions

These findings suggest that vocalization is finely tuned by the temporal recalibration mechanism, which acutely monitors the integration of temporal delays between motor sensation and vocal sound.
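One common way to quantify such a recalibration, sketched below under the assumption of a Gaussian-shaped simultaneity window, is to fit the proportion of 'simultaneous' responses across probe delays and take the fitted peak as the point of subjective simultaneity (PSS); a peak shift toward the adapted delay is the lag-adaptation effect. The response data here are invented, not the study's.

```python
import numpy as np
from scipy.optimize import curve_fit

def simultaneity_window(delay, amplitude, pss, width):
    """Gaussian model of P('simultaneous') as a function of feedback delay (ms)."""
    return amplitude * np.exp(-((delay - pss) ** 2) / (2.0 * width ** 2))

# Hypothetical proportions of 'simultaneous' judgements at each probe delay
delays = np.array([0.0, 33.0, 66.0, 100.0, 133.0, 166.0, 200.0])
p_simultaneous = np.array([0.55, 0.75, 0.90, 0.80, 0.60, 0.35, 0.15])

params, _ = curve_fit(simultaneity_window, delays, p_simultaneous,
                      p0=[0.9, 70.0, 60.0])
print(f"Estimated PSS: {params[1]:.0f} ms")  # expected to shift toward the adapted delay
```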

6.

Background

Many situations involving animal communication are dominated by recurring, stereotyped signals. How do receivers optimally distinguish between frequently recurring signals and novel ones? Cortical auditory systems are known to be pre-attentively sensitive to short-term delivery statistics of artificial stimuli, but it is unknown if this phenomenon extends to the level of behaviorally relevant delivery patterns, such as those used during communication.

Methodology/Principal Findings

We recorded and analyzed complete auditory scenes of spontaneously communicating zebra finch (Taeniopygia guttata) pairs over a week-long period, and show that they can produce tens of thousands of short-range contact calls per day. Individual calls recur at time scales (median interval 1.5 s) matching those at which mammalian sensory systems are sensitive to recent stimulus history. Next, we presented anesthetized birds with sequences of frequently recurring calls interspersed with rare ones, and recorded, in parallel, action potential and local field potential responses in the medio-caudal auditory forebrain at 32 unique sites. Variation in call recurrence rate over natural ranges led to widespread and significant modulation in the strength of neural responses. Such modulation was highly call-specific in secondary auditory areas, but not in the main thalamo-recipient, primary auditory area.

Conclusions/Significance

Our results support the hypothesis that pre-attentive neural sensitivity to short-term stimulus recurrence is involved in the analysis of auditory scenes at the level of delivery patterns of meaningful sounds. This may enable birds to efficiently and automatically distinguish frequently recurring vocalizations from other events in their auditory scene.
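The recurrence statistic reported above (median inter-call interval of 1.5 s) amounts to taking the median of the first differences of the call onset times; a minimal sketch with invented timestamps:

```python
import numpy as np

# Hypothetical onset times (s) of one bird's contact calls in a recording
call_onsets = np.array([0.0, 1.2, 2.9, 4.1, 5.8, 7.2, 9.0, 10.4])

intervals = np.diff(call_onsets)  # intervals between successive calls
print(f"Median inter-call interval: {np.median(intervals):.2f} s")
```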

7.

Background

Most research on the role of auditory information and its interaction with vision has focused on perceptual performance. Little is known about the effects of sound cues on visually guided hand movements.

Methodology/Principal Findings

We recorded the sound produced by the fingers upon contact as participants grasped stimulus objects covered with different materials. In a further session, the pre-recorded contact sounds were delivered to participants via headphones before or after the initiation of reach-to-grasp movements towards the stimulus objects. Reach-to-grasp movement kinematics were measured under the following conditions: (i) congruent, in which the presented contact sound corresponded to the contact sound elicited by the to-be-grasped stimulus; (ii) incongruent, in which the presented contact sound differed from that generated by the stimulus upon contact; and (iii) control, in which a synthetic sound not associated with a real event was presented. Facilitation effects were found for congruent trials; interference effects were found for incongruent trials. In a second experiment, the upper and lower parts of the stimulus were covered with different materials. The presented sound was always congruent with the material covering either the upper or the lower half of the stimulus. Participants consistently placed their fingers on the half of the stimulus that corresponded to the presented contact sound.

Conclusions/Significance

Altogether, these findings contribute substantially to the current debate about the type of object representations elicited by auditory stimuli and about the multisensory nature of the sensorimotor transformations underlying action.

8.

Background

Paired associative stimulation (PAS), consisting of repeated application of transcranial magnetic stimulation (TMS) pulses and contingent exteroceptive stimuli, has been shown to induce neuroplastic effects in the motor and somatosensory systems. Our objective was to investigate whether the auditory system can also be modulated by PAS.

Methods

Acoustic stimuli (4 kHz) were paired with TMS of the auditory cortex at intervals of either 45 ms (PAS(45 ms)) or 10 ms (PAS(10 ms)). Two hundred paired stimuli were applied at 0.1 Hz, and the effects were compared with low-frequency repetitive TMS (rTMS) at 0.1 Hz (200 stimuli) and 1 Hz (1000 stimuli) in eleven healthy students. Auditory cortex excitability was measured before and after the interventions by long-latency auditory evoked potentials (AEPs) for the tone used in the pairing (4 kHz) and a control tone (1 kHz) in a within-subjects design.

Results

Amplitudes of the N1-P2 complex were reduced for the 4 kHz tone after both PAS(45 ms) and PAS(10 ms), but not after the 0.1 Hz and 1 Hz rTMS protocols, with more pronounced effects for PAS(45 ms). Similar, but less pronounced, effects were observed for the 1 kHz control tone.

Conclusion

These findings indicate that paired associative stimulation may induce both tonotopically specific and tone-unspecific plasticity in the human auditory cortex.
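The N1-P2 amplitude used as the excitability measure here is typically taken peak-to-peak from the averaged AEP: the most positive deflection in a P2 window minus the most negative deflection in an N1 window. The sketch below applies that convention to a synthetic waveform; the sampling rate and search windows are assumptions, not the study's parameters.

```python
import numpy as np

fs = 1000.0                          # assumed sampling rate (Hz)
t = np.arange(-0.1, 0.5, 1.0 / fs)   # time relative to tone onset (s)

# Synthetic averaged AEP (V): N1 trough near 100 ms, P2 peak near 200 ms
aep = (-4e-6 * np.exp(-((t - 0.10) ** 2) / (2 * 0.02 ** 2))
       + 3e-6 * np.exp(-((t - 0.20) ** 2) / (2 * 0.03 ** 2)))

n1_window = (t >= 0.08) & (t <= 0.14)  # assumed N1 search window
p2_window = (t >= 0.16) & (t <= 0.26)  # assumed P2 search window

n1_p2 = aep[p2_window].max() - aep[n1_window].min()
print(f"N1-P2 amplitude: {n1_p2 * 1e6:.2f} uV")
```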

9.

Background

Theories of embodied language suggest that the motor system is differentially recruited when processing motor-related versus abstract words or sentences. It has recently been shown that processing negative-polarity action-related sentences modulates neural activity in premotor and motor cortices.

Methods and Findings

We sought to determine whether reading negative-polarity sentences brought about differential modulation of corticospinal motor excitability depending on whether the sentences were hand-action-related or abstract. Facilitatory paired-pulse transcranial magnetic stimulation (pp-TMS) was applied to the primary motor representation of the right hand, and the amplitude of the induced motor-evoked potentials (MEPs) was used to index M1 activity during passive reading of either hand-action-related or abstract sentences presented in both negative and affirmative polarity. Results showed that corticospinal excitability was affected by sentence polarity only in the hand-action-related condition. Indeed, in keeping with previous TMS studies, reading affirmative hand-action-related sentences suppressed corticospinal reactivity; this effect was absent when reading hand-action-related negative-polarity sentences. Moreover, no modulation of corticospinal reactivity was associated with either negative- or affirmative-polarity abstract sentences.

Conclusions

Our results indicate that grammatical cues prompting motor negation reduce the corticospinal suppression associated with reading affirmative action sentences, suggesting that the motor simulative processes underlying embodiment may extend even to syntactic features of language.

10.

Background

The duration of sounds can affect the perceived duration of co-occurring visual stimuli. However, it is unclear whether this is limited to amodal processes of duration perception or affects other non-temporal qualities of visual perception.

Methodology/Principal Findings

Here, we tested the hypothesis that visual sensitivity - rather than only the perceived duration of visual stimuli - can be affected by the duration of co-occurring sounds. We found that visual detection sensitivity (d’) for unimodal stimuli was higher for stimuli of longer duration. Crucially, in a cross-modal condition, we replicated previous unimodal findings, observing that visual sensitivity was shaped by the duration of co-occurring sounds. When short visual stimuli (∼24 ms) were accompanied by sounds of matching duration, visual sensitivity was decreased relative to the unimodal visual condition. However, when the same visual stimuli were accompanied by longer auditory stimuli (∼60–96 ms), visual sensitivity was increased relative to the performance for ∼24 ms auditory stimuli. Across participants, this sensitivity enhancement was observed within a critical time window of ∼60–96 ms. Moreover, the amplitude of this effect correlated with visual sensitivity enhancement found for longer lasting visual stimuli across participants.

Conclusions/Significance

Our findings show that the duration of co-occurring sounds affects visual perception; it changes visual sensitivity in a similar way as altering the (actual) duration of the visual stimuli does.
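The sensitivity index d' used above comes from standard signal detection theory: d' = z(hit rate) − z(false-alarm rate), where z is the inverse of the standard normal CDF. A minimal sketch with hypothetical trial counts (rates are clipped away from 0 and 1, one common convention); the numbers are not from the study.

```python
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Signal-detection sensitivity from a yes/no detection task."""
    hit_rate = hits / (hits + misses)
    fa_rate = false_alarms / (false_alarms + correct_rejections)
    clip = lambda p: min(max(p, 0.01), 0.99)  # avoid infinite z-scores
    return norm.ppf(clip(hit_rate)) - norm.ppf(clip(fa_rate))

# Hypothetical counts from one visual-detection condition
print(f"d' = {d_prime(70, 30, 15, 85):.2f}")
```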

11.

Objective

Brain-computer interfaces (BCIs) provide a non-muscular communication channel for patients with late-stage motoneuron disease (e.g., amyotrophic lateral sclerosis (ALS)) or otherwise motor-impaired people, and are also used for motor rehabilitation in chronic stroke. The ability to use a BCI varies from person to person and from session to session. A reliable predictor of aptitude would allow suitable BCI paradigms to be selected. We therefore investigated whether P300 BCI aptitude can be predicted from a short experiment with a standard auditory oddball.

Methods

Forty healthy participants performed an electroencephalography (EEG)-based visual and auditory P300 BCI spelling task in a single session. In addition, a standard auditory oddball was presented prior to each session. Features extracted from the auditory oddball were analyzed with respect to their predictive power for BCI aptitude.

Results

Correlating the auditory oddball response with P300 BCI accuracy revealed a strong relationship between accuracy and the N2 amplitude, and between accuracy and the amplitude of a late ERP component between 400 and 600 ms. Interestingly, the P3 amplitude of the auditory oddball response was not correlated with accuracy.

Conclusions

Event-related potentials recorded during a standard auditory oddball session moderately predict aptitude in an auditory P300 BCI and strongly predict aptitude in a visual P300 BCI. The predictor will allow for faster paradigm selection.

Significance

Our method will reduce strain on patients because unsuccessful training may be avoided, provided the results can be generalized to the patient population.
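The predictor analysis reported here reduces, at its core, to correlating a single oddball ERP feature with later spelling accuracy across participants. A minimal sketch with invented values for one such feature (N2 amplitude); the numbers are not from the study.

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical per-participant N2 amplitudes (uV) and BCI spelling accuracies (%)
n2_amplitude = np.array([-6.1, -4.8, -7.3, -3.9, -5.5, -8.0, -4.2, -6.7])
bci_accuracy = np.array([88.0, 72.0, 95.0, 60.0, 80.0, 97.0, 66.0, 91.0])

r, p = pearsonr(n2_amplitude, bci_accuracy)
print(f"Pearson r = {r:.2f}, p = {p:.3f}")  # larger (more negative) N2 -> higher accuracy
```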

12.

Background

Recent neuroimaging studies have revealed that putatively unimodal regions of visual cortex can be activated during auditory tasks in sighted as well as blind subjects. However, the task determinants and functional significance of auditory occipital activations (AOAs) remain unclear.

Methodology/Principal Findings

We examined AOAs in an intermodal selective attention task to distinguish whether they were stimulus-bound or recruited by higher-level cognitive operations associated with auditory attention. Cortical surface mapping showed that auditory occipital activations were localized to retinotopic visual cortex subserving the far peripheral visual field. AOAs depended strictly on the sustained engagement of auditory attention and were enhanced in more difficult listening conditions. In contrast, unattended sounds produced no AOAs regardless of their intensity, spatial location, or frequency.

Conclusions/Significance

Auditory attention, but not passive exposure to sounds, routinely activated peripheral regions of visual cortex when subjects attended to sound sources outside the visual field. Functional connections between auditory cortex and visual cortex subserving the peripheral visual field appear to underlie the generation of AOAs, which may reflect the priming of visual regions to process soon-to-appear objects associated with unseen sound sources.

13.

Background

There is evidence that interventions aimed at modulating motor cortex activity lead to pain reduction. To further understand the role of the motor cortex in pain modulation, we compared the behavioral (pressure pain threshold) and neurophysiological effects (transcranial magnetic stimulation (TMS)-indexed cortical excitability) of three different motor tasks.

Methodology/Principal Findings

Fifteen healthy male subjects were enrolled in this randomized, controlled, blinded, cross-over study. Three tasks were tested: motor learning with visual feedback, motor learning without visual feedback, and simple hand movements. Cortical excitability was assessed using single- and paired-pulse TMS measures: resting motor threshold (RMT), motor-evoked potential (MEP), intracortical facilitation (ICF), short intracortical inhibition (SICI), and cortical silent period (CSP). All tasks produced a significant reduction in pain perception, indexed by an increase in pressure pain threshold compared to the control condition (untrained hand). ANOVA indicated a difference among the three tasks in motor cortex excitability change: there was a significant increase in motor cortex excitability (indexed by MEP increase and CSP shortening) for the simple hand movements.

Conclusions/Significance

Although motor tasks involving motor learning with and without visual feedback and simple hand movements appear to change pain perception similarly, the underlying neural mechanisms are likely not the same, as evidenced by the differential effects on motor cortex excitability induced by these tasks. In addition, TMS-indexed motor excitability measures are unlikely to be good markers of the effects of motor-based tasks on pain perception in healthy subjects, as neural networks beyond the primary motor cortex may be involved in pain modulation during motor training.

14.

Background

Barn owls integrate spatial information across frequency channels to localize sounds in space.

Methodology/Principal Findings

We presented barn owls with synchronous sounds that contained different bands of frequencies (3–5 kHz and 7–9 kHz) from different locations in space. When the owls were confronted with the conflicting localization cues from two synchronous sounds of equal level, their orienting responses were dominated by one of the sounds: they oriented toward the location of the low frequency sound when the sources were separated in azimuth; in contrast, they oriented toward the location of the high frequency sound when the sources were separated in elevation. We identified neural correlates of this behavioral effect in the optic tectum (OT, superior colliculus in mammals), which contains a map of auditory space and is involved in generating orienting movements to sounds. We found that low frequency cues dominate the representation of sound azimuth in the OT space map, whereas high frequency cues dominate the representation of sound elevation.

Conclusions/Significance

We argue that the dominance hierarchy of localization cues reflects several factors: 1) the relative amplitude of the sound providing the cue, 2) the resolution with which the auditory system measures the value of a cue, and 3) the spatial ambiguity in interpreting the cue. These same factors may contribute to the relative weighting of sound localization cues in other species, including humans.
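The second factor, the resolution with which a cue is measured, is often formalized as reliability weighting: in a maximum-likelihood combination scheme, each cue's weight is inversely proportional to its variance. The sketch below is that textbook model with invented numbers, offered only as an illustration of how a more reliable cue comes to dominate, not as the owls' demonstrated computation.

```python
import numpy as np

def ml_combine(estimates, variances):
    """Maximum-likelihood cue combination: weights inversely proportional to variance."""
    weights = 1.0 / np.asarray(variances, dtype=float)
    weights /= weights.sum()
    return float(np.dot(weights, estimates))

# Hypothetical azimuth estimates (deg) from a low- and a high-frequency cue
combined = ml_combine(estimates=[-20.0, 10.0], variances=[4.0, 25.0])
print(f"Combined azimuth estimate: {combined:.1f} deg")  # dominated by the reliable cue
```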

15.

Background

A paradoxical enhancement of the magnitude of the N1 wave of the auditory event-related potential (ERP) has been described when auditory stimuli are presented at very short (<400 ms) inter-stimulus intervals (ISIs). Here, we examined whether this enhancement is specific to the auditory system or whether it also affects ERPs elicited by stimuli of other sensory modalities.

Methodology and Principal Findings

We recorded ERPs elicited by auditory and somatosensory stimuli in 13 healthy subjects. For each sensory modality, 4800 stimuli were presented. Auditory stimuli consisted of brief tones presented binaurally, and somatosensory stimuli consisted of constant-current electrical pulses applied to the right median nerve. Stimuli were delivered continuously, and the ISI varied randomly between 100 and 1000 ms. We found that the ISI had a similar effect on both auditory and somatosensory ERPs. In both sensory modalities, the ISI had opposite effects on the magnitudes of the N1 and P2 waves: the magnitude of the auditory and somatosensory N1 was significantly increased at ISIs ≤200 ms, while the magnitude of the auditory and somatosensory P2 was significantly decreased at ISIs ≤200 ms.

Conclusion and Significance

The observation that both the auditory and the somatosensory N1 are enhanced at short ISIs indicates that this phenomenon reflects a physiological property common across sensory systems rather than, as previously suggested, one unique to the auditory system. Two of the hypotheses most frequently put forward to explain this observation, namely (i) a decreased contribution of inhibitory postsynaptic potentials to the recorded scalp ERPs and (ii) a decreased contribution of 'latent inhibition', are discussed. Because neither of these hypotheses can satisfactorily account for the concomitant reduction of the auditory and somatosensory P2, we propose a third, novel hypothesis: the modulation of a single neural component contributing to both the N1 and P2 waves.

16.

Background

Hearing ability is essential for normal speech development; however, the precise mechanisms linking auditory input and the improvement of speaking ability remain poorly understood. Auditory feedback during speech production is believed to play a critical role by providing the nervous system with information about speech outcomes that is used to learn and subsequently fine-tune speech motor output. Surprisingly, few studies have directly investigated such auditory-motor learning in the speech production of typically developing children.

Methodology/Principal Findings

In the present study, we manipulated auditory feedback during speech production in a group of 9- to 11-year-old children, as well as in adults. Following a period of speech practice under altered auditory feedback, compensatory changes in speech production and perception were examined. Consistent with prior studies, the adults exhibited compensatory changes in both their speech motor output and their perceptual representations of speech sound categories. The children exhibited compensatory changes in the motor domain, with a change in speech output similar in magnitude to that of the adults; however, the children showed no reliable compensatory effect on their perceptual representations.

Conclusions

The results indicate that 9- to 11-year-old children, whose speech motor and perceptual abilities are still not fully developed, are nonetheless capable of auditory-feedback-based sensorimotor adaptation, supporting a role for such learning processes in speech motor development. Auditory feedback may play a more limited role, however, in the fine-tuning of children's perceptual representations of speech sound categories.

17.

Background

Recent research has addressed the suppression of cortical sensory responses to altered auditory feedback that occurs at utterance onset during speech. However, there is reason to assume that the mechanisms underlying sensorimotor processing at mid-utterance differ from those involved in sensorimotor control at utterance onset. The present study examined the dynamics of event-related potentials (ERPs) to different acoustic versions of auditory feedback at mid-utterance.

Methodology/Principal findings

Subjects produced a vowel sound while hearing, at mid-utterance via headphones, their pitch-shifted voice (100 cents), a sum of their vocalization and pure tones, or a sum of their vocalization and white noise. Subjects also passively listened to playback of what they heard during active vocalization. Cortical ERPs were recorded in response to the different acoustic versions of feedback during both active vocalization and passive listening. The results showed that, relative to passive listening, active vocalization yielded enhanced P2 responses to the 100-cent pitch shifts, whereas P2 responses were suppressed when voice auditory feedback was distorted by pure tones or white noise.

Conclusion/Significance

The present findings demonstrate, for the first time, a dynamic modulation of cortical activity as a function of the quality of acoustic feedback at mid-utterance, suggesting that auditory cortical responses can be enhanced or suppressed to distinguish self-produced speech from externally produced sounds.
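The 100-cent shift applied to the voice feedback corresponds to one equal-tempered semitone: by the standard definition of the cent, a shift of c cents multiplies the fundamental frequency by 2^(c/1200). A quick check with a hypothetical 200 Hz voice fundamental:

```python
f0 = 200.0    # hypothetical fundamental frequency (Hz)
cents = 100   # pitch shift used in the study
shifted = f0 * 2.0 ** (cents / 1200.0)
print(f"{f0:.0f} Hz shifted by {cents} cents -> {shifted:.1f} Hz")  # ~211.9 Hz
```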

18.

Background

Previous work on the human auditory cortex has revealed areas specialized in spatial processing, but how the neurons in these areas represent the location of a sound source remains unknown.

Methodology/Principal Findings

Here, we performed a magnetoencephalography (MEG) experiment with the aim of revealing the neural code of auditory space implemented by the human cortex. In a stimulus-specific adaptation paradigm, realistic spatial sound stimuli were presented in pairs of adaptor and probe locations. We found that the attenuation of the N1m response depended strongly on the spatial arrangement of the two sound sources. These location-specific effects showed that sounds originating from locations within the same hemifield activated the same neuronal population regardless of the spatial separation between the sound sources. In contrast, sounds originating from opposite hemifields activated separate groups of neurons.

Conclusions/Significance

These results are highly consistent with a rate code of spatial location formed by two opponent populations, one tuned to locations on the left and the other to those on the right. This indicates that the neuronal code of sound source location implemented by the human auditory cortex is similar to that previously found in other primates.
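An opponent-population rate code of this kind is often illustrated with a two-channel readout in which perceived azimuth is derived from the normalized difference of broadly tuned left- and right-preferring population rates. The sketch below is that textbook model with invented tuning parameters, not a fit to the MEG data.

```python
import numpy as np

def channel_rate(azimuth_deg, preferred_side):
    """Broadly tuned hemifield channel: sigmoidal rate as a function of azimuth."""
    sign = 1.0 if preferred_side == "right" else -1.0
    return 1.0 / (1.0 + np.exp(-sign * azimuth_deg / 20.0))

def decode(azimuth_deg):
    """Normalized opponent-channel difference as the location readout."""
    right = channel_rate(azimuth_deg, "right")
    left = channel_rate(azimuth_deg, "left")
    return (right - left) / (right + left)

for az in (-60, -20, 0, 20, 60):
    print(f"azimuth {az:+4d} deg -> readout {decode(az):+.2f}")
```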

19.

Background

A reduction of dopamine release or D2 receptor blockade in the terminal fields of the mesolimbic system clearly reduces conditioned fear. Injections of haloperidol, a preferential D2 receptor antagonist, into the inferior colliculus (IC) enhance the processing of unconditioned aversive information. However, a clear characterization of the role of D2 receptors in mediating unconditioned and conditioned fear is still lacking.

Methods

The present study investigated the effects of intra-IC injections of the D2 receptor-selective antagonist sulpiride on behavior in the elevated plus maze (EPM), auditory-evoked potentials (AEPs) to loud sounds recorded from the IC, fear-potentiated startle (FPS), and conditioned freezing.

Results

Intra-IC injections of sulpiride caused clear proaversive effects in the EPM and enhanced AEPs induced by loud auditory stimuli. Intra-IC sulpiride administration did not affect FPS or conditioned freezing.

Conclusions

Dopamine D2-like receptors of the inferior colliculus play a role in the modulation of unconditioned aversive information but not in the fear-potentiated startle response.

20.
M Schaefer, HJ Heinze, M Rotte. PLoS ONE 2012, 7(8): e42308

Background

An increasing body of evidence has demonstrated that, in contrast to the classical understanding, the primary somatosensory cortex (SI) responds to merely seen touch (in the absence of any real touch on one's own body). Based on these results, it has been proposed that SI may play a role in understanding touch seen on other bodies. To further examine this understanding of observed touch, the current study tested whether mirror-like responses in SI are affected by the perspective of the seen touch. We therefore presented touch on a hand, and close to the hand, in either first-person or third-person perspective.

Principal Findings

Results of functional magnetic resonance imaging (fMRI) revealed stronger vicarious brain responses in SI/BA2 for touch seen in first-person perspective. Surprisingly, the third-person viewpoint revealed activation in SI both when subjects viewed a hand being stimulated and when the space close to the hand was being touched.

Conclusions/Significance

Based on these results, we conclude that vicarious somatosensory responses in SI/BA2 are affected by the viewpoint of the seen hand. Furthermore, we argue that mirror-like responses in SI reflect not only seen touch but also the peripersonal space surrounding the seen body (in third-person perspective). We discuss these findings in relation to recent studies on mirror responses to action observation in peripersonal space.
