Similar Documents
20 similar documents retrieved (search time: 140 ms)
1.
In the present study, we used transcranial magnetic stimulation (TMS) to investigate the influence of phonological and lexical properties of verbal items on the excitability of the tongue's cortical motor representation during passive listening. In particular, we aimed to clarify if the difference in tongue motor excitability found during listening to words and pseudo-words [Fadiga, L., Craighero, L., Buccino, G., Rizzolatti, G., 2002. Speech listening specifically modulates the excitability of tongue muscles: a TMS study. European Journal of Neuroscience 15, 399-402] is due to lexical frequency or to the presence of a meaning per se. In order to do this, we investigated the time-course of tongue motor-evoked potentials (MEPs) during listening to frequent words, rare words, and pseudo-words embedded with a double consonant requiring relevant tongue movements for its pronunciation. Results showed that at the later stimulation intervals (200 and 300 ms from the double consonant) listening to rare words evoked much larger MEPs than listening to frequent words. Moreover, by comparing pseudo-words embedded with a double consonant that did or did not require tongue movements, we found that a pure phonological motor resonance was present only 100 ms after the double consonant. Thus, while the phonological motor resonance appears very early, the lexical-dependent motor facilitation takes more time to appear and depends on the frequency of the stimuli. The present results indicate that the motor system responsible for phonoarticulatory movements during speech production is also involved during speech listening in a strictly specific way. This motor facilitation reflects both the difference in the phonoarticulatory characteristics and the difference in the frequency of occurrence of the verbal material.

2.
Transcranial magnetic stimulation (TMS) has proven to be a useful tool in investigating the role of the articulatory motor cortex in speech perception. Researchers have used single-pulse and repetitive TMS to stimulate the lip representation in the motor cortex. The excitability of the lip motor representation can be investigated by applying single TMS pulses over this cortical area and recording TMS-induced motor evoked potentials (MEPs) via electrodes attached to the lip muscles (electromyography; EMG). Larger MEPs reflect increased cortical excitability. Studies have shown that excitability increases during listening to speech as well as during viewing speech-related movements. TMS can also be used to disrupt the lip motor representation. A 15-min train of low-frequency sub-threshold repetitive stimulation has been shown to suppress motor excitability for a further 15-20 min. This TMS-induced disruption of the motor lip representation impairs subsequent performance in demanding speech perception tasks and modulates auditory-cortex responses to speech sounds. These findings are consistent with the suggestion that the motor cortex contributes to speech perception. This article describes how to localize the lip representation in the motor cortex and how to define the appropriate stimulation intensity for carrying out both single-pulse and repetitive TMS experiments.
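Computationally, the MEP measurement this protocol describes reduces to taking the peak-to-peak EMG amplitude in a post-stimulus window. A minimal sketch, assuming Python/NumPy and a simulated trace; the 5 kHz rate and 15-45 ms window are illustrative choices, not values prescribed by the article:

```python
import numpy as np

def mep_amplitude(emg, fs, window=(0.015, 0.045)):
    """Peak-to-peak MEP amplitude within a post-stimulus latency
    window (in seconds), with the TMS pulse at t = 0."""
    start = int(round(window[0] * fs))
    stop = int(round(window[1] * fs))
    segment = emg[start:stop]
    return segment.max() - segment.min()

# Simulated single-trial lip EMG at 5 kHz: background noise plus an
# MEP-like oscillatory burst centred 25 ms after the pulse.
fs = 5000
t = np.arange(0, 0.1, 1 / fs)
rng = np.random.default_rng(0)
emg = 0.01 * rng.standard_normal(t.size)
emg += (0.5 * np.exp(-((t - 0.025) ** 2) / (2 * 0.003 ** 2))
        * np.sin(2 * np.pi * 150 * (t - 0.025)))
amp = mep_amplitude(emg, fs)  # larger values = higher excitability
```

In a real experiment this quantity would be computed per trial and per muscle, then averaged across trials within each listening condition.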

3.
This study used the transcranial magnetic stimulation/motor evoked potential (TMS/MEP) technique to pinpoint when the automatic tendency to mirror someone else's action becomes anticipatory simulation of a complementary act. TMS was delivered to the left primary motor cortex corresponding to the hand to induce the highest level of MEP activity from the abductor digiti minimi (ADM; the muscle serving little finger abduction) as well as the first dorsal interosseus (FDI; the muscle serving index finger flexion/extension) muscles. A neuronavigation system was used to maintain the position of the TMS coil, and electromyographic (EMG) activity was recorded from the right ADM and FDI muscles. Producing original data with regard to motor resonance, the combined TMS/MEP technique has taken research on the perception-action coupling mechanism a step further. Specifically, it has answered the questions of how and when observing another person's actions produces motor facilitation in an onlooker's corresponding muscles and in what way corticospinal excitability is modulated in social contexts.

4.
The complicated muscle activity of the human tongue and the resultant surface shapes can give us important clues about speech motor control and pathological tongue motion. This study uses tagged magnetic resonance imaging to provide a 2D surface deformation analysis of the tongue, as well as a 4D compression–expansion analysis, during utterances of four different syllables (/ba/, /ta/, /sha/ and /ga/). All speech tasks were performed several times to confirm the repeatability of the motion analysis. The results showed that the tongue has unique motion patterns for utterances of different syllables, and these differences, which may not be observed by a simple surface analysis, can be examined thoroughly by a 4D motion model-based analysis of the tongue muscles.

5.
Certain regions of the human brain are activated both during action execution and action observation. This so-called ‘mirror neuron system’ has been proposed to enable an observer to understand an action through a process of internal motor simulation. Although there has been much speculation about the existence of such a system from early in life, to date there is little direct evidence that young infants recruit brain areas involved in action production during action observation. To address this question, we identified the individual frequency range in which sensorimotor alpha-band activity was attenuated in nine-month-old infants' electroencephalograms (EEGs) during elicited reaching for objects, and measured whether activity in this frequency range was also modulated by observing others' actions. We found that observing a grasping action resulted in motor activation in the infant brain, but that this activity began prior to observation of the action, once it could be anticipated. These results demonstrate not only that infants, like adults, display overlapping neural activity during execution and observation of actions, but that this activation, rather than being directly induced by the visual input, is driven by infants' understanding of a forthcoming action. These results provide support for theories implicating the motor system in action prediction.

6.
As we talk, we unconsciously adjust our speech to ensure it sounds the way we intend it to sound. However, because speech production involves complex motor planning and execution, no two utterances of the same sound will be exactly the same. Here, we show that auditory cortex is sensitive to natural variations in self-produced speech from utterance to utterance. We recorded event-related potentials (ERPs) from ninety-nine subjects while they uttered “ah” and while they listened to those speech sounds played back. Subjects' utterances were sorted based on their formant deviations from the previous utterance. Typically, the N1 ERP component is suppressed during talking compared to listening. By comparing ERPs to the least and most variable utterances, we found that N1 was less suppressed to utterances that differed greatly from their preceding neighbors. In contrast, an utterance's difference from the median formant values did not affect N1. Trial-to-trial pitch (f0) deviation and pitch difference from the median similarly did not affect N1. We discuss mechanisms that may underlie the change in N1 suppression resulting from trial-to-trial formant change. Deviant utterances require additional auditory cortical processing, suggesting that speaking-induced suppression mechanisms are optimally tuned for a specific production.
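The trial-sorting step (deviation from the immediately preceding utterance versus deviation from the session median) can be sketched as follows, assuming Python/NumPy and simulated F1 values; in the actual analysis the resulting index sets would select ERP epochs for averaging:

```python
import numpy as np

# Hypothetical per-trial first-formant (F1) values in Hz for 100
# repetitions of "ah" (simulated; real values would come from formant
# tracking of each recorded utterance).
rng = np.random.default_rng(1)
f1 = 700 + 30 * rng.standard_normal(100)

# Deviation of each utterance from its immediately preceding neighbour,
# and from the session median (trials 2..100, so both have 99 values).
prev_dev = np.abs(np.diff(f1))
median_dev = np.abs(f1[1:] - np.median(f1))

# Median split on trial-to-trial deviation: indices of the "least
# variable" and "most variable" utterances.
order = np.argsort(prev_dev)
least, most = order[: order.size // 2], order[order.size // 2 :]
```

The study's contrast is then between ERPs averaged over the `least` and `most` trial sets, with the same split repeated on `median_dev` as a control.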

7.
Among topics related to the evolution of language, the evolution of speech is particularly fascinating. Early theorists believed that it was the ability to produce articulate speech that set the stage for the evolution of the ‘special’ speech processing abilities that exist in modern-day humans. Prior to the evolution of speech production, speech processing abilities were presumed not to exist. The data reviewed here support a different view. Two lines of evidence, one from young human infants and the other from infrahuman species, neither of whom can produce articulate speech, show that in the absence of speech production capabilities, the perception of speech sounds is robust and sophisticated. Human infants and non-human animals evidence auditory perceptual categories that conform to those defined by the phonetic categories of language. These findings suggest the possibility that in evolutionary history the ability to perceive rudimentary speech categories preceded the ability to produce articulate speech. This in turn suggests that it may be audition that structured, at least initially, the formation of phonetic categories.

8.

Background

The observation of conspecifics influences our bodily perceptions and actions: contagious yawning, contagious itching, and empathy for pain are all examples of mechanisms based on resonance between our own body and the bodies of others. While there is evidence for the involvement of the mirror neuron system in the processing of motor, auditory and tactile information, it has not yet been associated with the perception of self-motion.

Methodology/Principal Findings

We investigated whether viewing our own body, the body of another, and an object in motion influences self-motion perception. We found a visual-vestibular congruency effect for self-motion perception when observing self and object motion, and a reduction in this effect when observing someone else's body motion. The congruency effect was correlated with empathy scores, revealing the importance of empathy in mirroring mechanisms.

Conclusions/Significance

The data show that vestibular perception is modulated by agent-specific mirroring mechanisms. The observation of conspecifics in motion is an essential component of social life, and self-motion perception is crucial for the distinction between the self and the other. Finally, our results hint at the presence of a “vestibular mirror neuron system”.

9.
Cortical processing associated with orofacial somatosensory function in speech has received limited experimental attention due to the difficulty of providing precise and controlled stimulation. This article introduces a technique for recording somatosensory event-related potentials (ERP) that uses a novel mechanical stimulation method involving skin deformation using a robotic device. Controlled deformation of the facial skin is used to modulate kinesthetic inputs through excitation of cutaneous mechanoreceptors. By combining somatosensory stimulation with electroencephalographic recording, somatosensory evoked responses can be successfully measured at the level of the cortex. Somatosensory stimulation can be combined with the stimulation of other sensory modalities to assess multisensory interactions. For speech, orofacial stimulation is combined with speech sound stimulation to assess the contribution of multi-sensory processing including the effects of timing differences. The ability to precisely control orofacial somatosensory stimulation during speech perception and speech production with ERP recording is an important tool that provides new insight into the neural organization and neural representations for speech.

10.
Gentner R, Classen J. Neuron. 2006;52(4):731-742
The motor system may generate automated movements, such as walking, by combining modular spinal motor synergies. However, it remains unknown whether a modular neuronal architecture is sufficient to generate the unique flexibility of human finger movements, which rely on cortical structures. Here we show that finger movements evoked by transcranial magnetic stimulation (TMS) of the primary motor cortex reproduced distinctive features of the spatial representation of voluntary movements as identified in previous neuroimaging studies, consistent with naturalistic activation of neuronal elements. Principal component analysis revealed that the dimensionality of TMS-evoked movements was low. Principal components extracted from TMS-induced finger movements resembled those derived from end-postures of voluntary movements performed to grasp imagined objects, and a small subset of them was sufficient to reconstruct these movements with remarkable fidelity. The motor system may coordinate even the most dexterous movements by using a modular architecture involving cortical components.
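The low-dimensionality claim can be illustrated with a toy principal component analysis, assuming Python/NumPy; the eight-dimensional posture vectors and the two underlying "synergies" are simulated stand-ins for measured finger joint angles, chosen only to show how low-dimensional structure appears in the PCA spectrum:

```python
import numpy as np

# Hypothetical data: each row is one TMS-evoked movement described by
# 8 posture dimensions, built from 2 latent "synergies" plus noise,
# i.e. an intrinsically low-dimensional set.
rng = np.random.default_rng(0)
synergies = rng.standard_normal((2, 8))
weights = rng.standard_normal((200, 2))
movements = weights @ synergies + 0.05 * rng.standard_normal((200, 8))

# PCA via SVD on the mean-centred data
centred = movements - movements.mean(axis=0)
_, s, _ = np.linalg.svd(centred, full_matrices=False)
explained = s ** 2 / np.sum(s ** 2)

# Low dimensionality: the first two components capture nearly all variance
cumulative = np.cumsum(explained)
```

With real posture data the same spectrum would indicate how many components suffice to reconstruct the grasping end-postures.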

11.
It has been suggested that social impairments observed in individuals with autism spectrum disorder (ASD) can be partly explained by an abnormal mirror neuron system (MNS) [1, 2]. Studies on monkeys have shown that mirror neurons are cells in premotor area F5 that discharge when a monkey executes or sees a specific action or when it hears the corresponding action-related sound [3-5]. Evidence for the presence of a MNS in humans comes in part from studies using transcranial magnetic stimulation (TMS), where a change in the amplitude of the TMS-induced motor-evoked potentials (MEPs) during action observation has been demonstrated [6-9]. These data suggest that actions are understood when the representation of that action is mapped onto the observer's own motor structures [10]. To determine if the neural mechanism matching action observation and execution is anomalous in individuals with ASD, TMS was applied over the primary motor cortex (M1) during observation of intransitive, meaningless finger movements. We show that overall modulation of M1 excitability during action observation is significantly lower in individuals with ASD compared with matched controls. In addition, we find that basic motor cortex abnormalities do not underlie this impairment.

12.
Repetitive mirror-symmetric bilateral upper limb movement may be a suitable priming technique for upper limb rehabilitation after stroke. Here we demonstrate neurophysiological and behavioural after-effects in healthy participants after priming with 20 minutes of repetitive active-passive bimanual wrist flexion and extension in a mirror-symmetric pattern with respect to the body midline (MIR) compared to a control priming condition with alternating flexion-extension (ALT). Transcranial magnetic stimulation (TMS) indicated that corticomotor excitability (CME) of the passive hemisphere remained elevated compared to baseline for at least 30 minutes after MIR but not ALT, evidenced by an increase in the size of motor evoked potentials in the extensor carpi radialis (ECR) and flexor carpi radialis (FCR). Short- and long-latency intracortical inhibition (SICI, LICI), short afferent inhibition (SAI) and interhemispheric inhibition (IHI) were also examined using pairs of stimuli. LICI differed between patterns, with less LICI after MIR than after ALT, and there was an effect of pattern on IHI, with reduced IHI in the passive FCR 15 minutes after MIR compared with ALT and baseline. There was no effect of pattern on SAI or the FCR H-reflex. Similarly, SICI remained unchanged after 20 minutes of MIR. We then had participants complete a timed manual dexterity motor learning task with the passive hand during, immediately after, and 24 hours after MIR or control priming. The rate of task completion was faster with MIR priming compared to control conditions. Finally, ECR and FCR MEPs were examined within a pre-movement facilitation paradigm of wrist extension before and after MIR. ECR, but not FCR, MEPs were consistently facilitated before and after MIR, demonstrating no degradation of selective muscle activation. In summary, mirror-symmetric active-passive bimanual movement increases CME and can enhance motor learning without degradation of muscle selectivity. These findings rationalise the use of mirror-symmetric bimanual movement as a priming modality in post-stroke upper limb rehabilitation.

13.
Although the acoustic variability of speech is often described as a problem for phonetic recognition, there is little research examining acoustic-phonetic variability over time. We measured naturally occurring acoustic variability in speech production at nine specific time points (three per day over three days) to examine daily change in production as well as change across days for citation-form vowels. Productions of seven different vowels (/EE/, /IH/, /AH/, /UH/, /AE/, /OO/, /EH/) were recorded at 9AM, 3PM and 9PM over the course of each testing day on three different days, every other day, over a span of five days. Results indicate significant systematic change in F1 and F0 values over the course of a day for each of the seven vowels recorded, whereas F2 and F3 remained stable. Despite this systematic change within a day, however, talkers did not show significant changes in F0, F1, F2, and F3 between days, demonstrating that speakers are capable of producing vowels with great reliability over days without any extrinsic feedback besides their own auditory monitoring. The data show that in spite of substantial day-to-day variability in the specific listening and speaking experiences of these participants and thus exposure to different acoustic tokens of speech, there is a high degree of internal precision and consistency for the production of citation form vowels.
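The within-day versus between-day contrast can be sketched numerically, assuming Python/NumPy; the 3 x 3 grid of simulated F1 values carries an artificial 20 Hz within-day drift purely to illustrate the bookkeeping, not the study's actual effect sizes:

```python
import numpy as np

# Hypothetical F1 measurements (Hz) for one vowel: 3 days x 3 times of
# day (9AM, 3PM, 9PM), simulated with a systematic within-day rise but
# stable day-to-day means, the pattern reported for F1 and F0.
rng = np.random.default_rng(2)
base = 700.0
within_day_shift = np.array([0.0, 10.0, 20.0])  # 9AM, 3PM, 9PM
f1 = base + within_day_shift[None, :] + 2 * rng.standard_normal((3, 3))

time_of_day_means = f1.mean(axis=0)  # averaged over days
day_means = f1.mean(axis=1)          # averaged over times of day

within_day_range = time_of_day_means.max() - time_of_day_means.min()
between_day_range = day_means.max() - day_means.min()
```

On real data the same summaries would feed the significance tests the abstract reports (systematic within-day change, no between-day change).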

14.
Transcranial Magnetic Stimulation (TMS) is an effective method for establishing a causal link between a cortical area and cognitive/neurophysiological effects. Specifically, by creating a transient interference with the normal activity of a target region and measuring changes in an electrophysiological signal, we can establish a causal link between the stimulated brain area or network and the electrophysiological signal that we record. If target brain areas are functionally defined with a prior fMRI scan, TMS can be used to link the fMRI activations with the recorded evoked potentials. However, conducting such experiments presents significant technical challenges given the high-amplitude artifacts introduced into the EEG signal by the magnetic pulse, and the difficulty of successfully targeting areas that were functionally defined by fMRI. Here we describe a methodology for combining these three common tools: TMS, EEG, and fMRI. We explain how to guide the stimulator's coil to the desired target area using anatomical or functional MRI data, how to record EEG during concurrent TMS, how to design an ERP study suitable for EEG-TMS combination and how to extract reliable ERPs from the recorded data. We will provide representative results from a previously published study, in which fMRI-guided TMS was used concurrently with EEG to show that the face-selective N1 and the body-selective N1 components of the ERP are associated with distinct neural networks in extrastriate cortex. This method allows us to combine the high spatial resolution of fMRI with the high temporal resolution of TMS and EEG and therefore obtain a comprehensive understanding of the neural basis of various cognitive processes.
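One of the technical challenges mentioned above, the high-amplitude TMS artifact in the EEG, is often handled by excising and interpolating the samples around the pulse. A minimal single-channel sketch, assuming Python/NumPy; the window limits and linear interpolation are illustrative, and real pipelines typically apply more sophisticated artifact correction:

```python
import numpy as np

def interpolate_tms_artifact(eeg, fs, pulse_time, window=(-0.002, 0.010)):
    """Replace samples in a window around the TMS pulse with a linear
    interpolation between the window edges (single-channel 1-D signal)."""
    start = int(round((pulse_time + window[0]) * fs))
    stop = int(round((pulse_time + window[1]) * fs))
    out = eeg.copy()
    out[start:stop] = np.linspace(eeg[start - 1], eeg[stop], stop - start)
    return out

# Clean 10 Hz background with a large pulse artifact added around 0.5 s
fs = 1000
t = np.arange(0, 1, 1 / fs)
eeg = np.sin(2 * np.pi * 10 * t)
eeg_art = eeg.copy()
eeg_art[498:510] += 50.0
cleaned = interpolate_tms_artifact(eeg_art, fs, pulse_time=0.5)
```

The excised window must cover the whole pulse transient; samples outside it are left untouched so the underlying ERP is preserved.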

15.
Beat gestures—spontaneously produced biphasic movements of the hand—are among the most frequently encountered co-speech gestures in human communication. They are closely temporally aligned to the prosodic characteristics of the speech signal, typically occurring on lexically stressed syllables. Despite their prevalence across speakers of the world's languages, how beat gestures impact spoken word recognition is unclear. Can these simple ‘flicks of the hand’ influence speech perception? Across a range of experiments, we demonstrate that beat gestures influence the explicit and implicit perception of lexical stress (e.g. distinguishing OBject from obJECT), and in turn can influence what vowels listeners hear. Thus, we provide converging evidence for a manual McGurk effect: relatively simple and widely occurring hand movements influence which speech sounds we hear.

16.
17.
We aimed to investigate whether motor learning induces different excitability changes in the human motor cortex (M1) between two different muscle contraction states (before voluntary contraction [static] or during voluntary contraction [dynamic]). To this end, using motor evoked potentials (MEPs) obtained by transcranial magnetic stimulation (TMS), we compared excitability changes during these two states after pinch-grip motor skill learning. The participants performed a force output tracking task by pinch grip on a computer screen. TMS was applied prior to the pinch grip (static) and after initiation of voluntary contraction (dynamic). MEPs of the following muscles were recorded: first dorsal interosseous (FDI), thenar muscle (Thenar), flexor carpi radialis (FCR), and extensor carpi radialis (ECR) muscles. During both states, motor skill training led to significant improvement of motor performance. During the static state, MEPs of the FDI muscle were significantly facilitated after motor learning; however, during the dynamic state, MEPs of the FDI, Thenar, and FCR muscles were significantly decreased. Based on the results of this study, we concluded that excitability changes in the human M1 are differentially influenced during different voluntary contraction states (static and dynamic) after motor learning.

18.
The human brain tracks amplitude fluctuations of both speech and music, which reflects acoustic processing in addition to the encoding of higher-order features and one’s cognitive state. Comparing neural tracking of speech and music envelopes can elucidate stimulus-general mechanisms, but direct comparisons are confounded by differences in their envelope spectra. Here, we use a novel method of frequency-constrained reconstruction of stimulus envelopes using EEG recorded during passive listening. We expected to see music reconstruction match speech in a narrow range of frequencies, but instead we found that speech was reconstructed better than music for all frequencies we examined. Additionally, models trained on all stimulus types performed as well as or better than the stimulus-specific models at higher modulation frequencies, suggesting a common neural mechanism for tracking speech and music. However, speech envelope tracking at low frequencies, below 1 Hz, was associated with increased weighting over parietal channels, which was not present for the other stimuli. Our results highlight the importance of low-frequency speech tracking and suggest an origin from speech-specific processing in the brain.
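The stimulus-reconstruction (backward-model) approach underlying this kind of analysis can be sketched with ridge regression, assuming Python/NumPy; this toy version uses white simulated signals and omits the frequency-constrained step and time-lagged features, so it shows only the regression scaffolding:

```python
import numpy as np

# Simulated setup: a stimulus envelope leaks into 16 "EEG" channels
# with random per-channel weights plus noise.
rng = np.random.default_rng(3)
n, n_ch = 2000, 16
envelope = rng.standard_normal(n)
mixing = rng.standard_normal(n_ch)
eeg = envelope[:, None] * mixing[None, :] + 0.5 * rng.standard_normal((n, n_ch))

# Ridge solution w = (X'X + lambda I)^-1 X'y on the first half (training)
X_tr, y_tr = eeg[: n // 2], envelope[: n // 2]
lam = 1.0
w = np.linalg.solve(X_tr.T @ X_tr + lam * np.eye(n_ch), X_tr.T @ y_tr)

# Score the reconstruction on the held-out second half
pred = eeg[n // 2 :] @ w
r = np.corrcoef(pred, envelope[n // 2 :])[0, 1]
```

A frequency-constrained variant would band-pass filter both the envelope and the prediction before computing `r`, giving one score per modulation-frequency band.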

19.
Désy MC, Théoret H. PLoS ONE. 2007;2(10):e971
The passive observation of hand actions is associated with increased motor cortex excitability, presumably reflecting activity within the human mirror neuron system (MNS). Recent data show that in-group ethnic membership increases motor cortex excitability during observation of culturally relevant hand gestures, suggesting that physical similarity with an observed body part may modulate MNS responses. Here, we ask whether the MNS is preferentially activated by passive observation of hand actions that are similar or dissimilar to self in terms of sex and skin color. Transcranial magnetic stimulation-induced motor evoked potentials were recorded from the first dorsal interosseus muscle while participants viewed videos depicting index finger movements made by female or male participants with black or white skin color. Forty-eight participants equally distributed in terms of sex and skin color participated in the study. Results show an interaction between self-attributes and physical attributes of the observed hand in the right motor cortex of female participants, where corticospinal excitability is increased during observation of hand actions in a different skin color than that of the observer. Our data show that specific physical properties of an observed action modulate motor cortex excitability and we hypothesize that in-group/out-group membership and self-related processes underlie these effects.

20.
Mirror neurons are theorized to serve as a neural substrate for spoken language in humans, but the existence and functions of auditory–vocal mirror neurons in the human brain remain largely matters of speculation. Songbirds resemble humans in their capacity for vocal learning and depend on their learned songs to facilitate courtship and individual recognition. Recent neurophysiological studies have detected putative auditory–vocal mirror neurons in a sensorimotor region of the songbird's brain that plays an important role in expressive and receptive aspects of vocal communication. This review discusses the auditory and motor-related properties of these cells, considers their potential role in song learning and communication in relation to classical studies of birdsong, and points to the circuit and developmental mechanisms that may give rise to auditory–vocal mirroring in the songbird's brain.


Copyright © Beijing Qinyun Technology Development Co., Ltd. 京ICP备09084417号