Similar Articles
20 similar articles retrieved (search time: 31 ms).
1.
Mochida T, Gomi H, Kashino M. PLoS ONE. 2010;5(11):e13866.

Background

There has been plentiful evidence of kinesthetically induced rapid compensation for unanticipated perturbation in speech articulatory movements. However, the role of auditory information in stabilizing articulation has been little studied except for the control of voice fundamental frequency, voice amplitude and vowel formant frequencies. Although the influence of auditory information on the articulatory control process is evident in unintended speech errors caused by delayed auditory feedback, the direct and immediate effect of auditory alteration on the movements of articulators has not been clarified.

Methodology/Principal Findings

This work examined whether temporal changes in the auditory feedback of bilabial plosives immediately affect the subsequent lip movement. We conducted experiments with an auditory feedback alteration system that enabled us to replace or block speech sounds in real time. Participants were asked to produce the syllable /pa/ repeatedly at a constant rate. During the repetition, normal auditory feedback was interrupted, and one of three pre-recorded syllables /pa/, /ɸa/, or /pi/, spoken by the same participant, was presented once at a timing different from the anticipated production onset, while no feedback was presented for subsequent repetitions. Comparisons of the labial distance trajectories under altered and normal feedback conditions indicated that the movement quickened during the short period immediately after the alteration onset when /pa/ was presented 50 ms before the expected timing. No significant change was observed under the other feedback conditions we tested.

Conclusions/Significance

The earlier articulation rapidly induced by the progressive auditory input suggests that a compensatory mechanism helps to maintain a constant speech rate by detecting errors between the internally predicted and actually provided auditory information associated with self-movement. The timing- and context-dependent effects of feedback alteration suggest that sensory error detection operates in a temporally asymmetric window in which acoustic features of the syllable to be produced may be coded.

2.
When we speak, we provide ourselves with auditory speech input. Efficient monitoring of speech is often hypothesized to depend on matching the predicted sensory consequences of internal motor commands (the forward model) with actual sensory feedback. In this paper we tested the forward model hypothesis using functional magnetic resonance imaging. We administered an overt picture naming task in which we parametrically reduced the quality of verbal feedback by noise masking. Presentation of the same auditory input in the absence of overt speech served as a listening control condition. Our results suggest that a match between predicted and actual sensory feedback results in inhibition or cancellation of auditory activity, because speaking with normal, unmasked feedback reduced activity in the auditory cortex compared to the listening control condition. Moreover, during self-generated speech, activation in auditory cortex increased as the feedback quality of the self-generated speech decreased. We conclude that, during speaking, early auditory cortex is involved in matching external signals with an internally generated model or prediction of sensory consequences, the locus of which may reside in auditory or higher-order brain areas. Matching at early auditory cortex may provide a very sensitive monitoring mechanism that highlights speech production errors at very early levels of processing and may efficiently determine the self-agency of speech input.

3.
Hearing one’s own voice is critical for fluent speech production as it allows for the detection and correction of vocalization errors in real time. This behavior, known as the auditory feedback control of speech, is impaired in various neurological disorders ranging from stuttering to aphasia; however, the underlying neural mechanisms are still poorly understood. Computational models of speech motor control suggest that, during speech production, the brain uses an efference copy of the motor command to generate an internal estimate of the speech output. When actual feedback differs from this internal estimate, an error signal is generated to correct the internal estimate and update the motor commands necessary to produce the intended speech. We were able to localize the auditory error signal using electrocorticographic recordings from neurosurgical participants during a delayed auditory feedback (DAF) paradigm. In this task, participants heard their voice with a time delay as they produced words and sentences (similar to an echo on a conference call), a manipulation well known to disrupt fluency by causing slow and stutter-like speech in humans. We observed a significant response enhancement in auditory cortex that scaled with the duration of feedback delay, indicating an auditory speech error signal. Immediately following auditory cortex, the dorsal precentral gyrus (dPreCG), a region not previously implicated in auditory feedback processing, exhibited a markedly similar response enhancement, suggesting a tight coupling between the two regions. Critically, response enhancement in dPreCG occurred only during articulation of long utterances, owing to a continuous mismatch between produced speech and reafferent feedback. These results suggest that dPreCG plays an essential role in processing auditory error signals during speech production to maintain fluency.

Hearing one’s own voice is critical for fluent speech production, allowing detection and correction of vocalization errors in real time. This study shows that the dorsal precentral gyrus is a critical component of a cortical network that monitors auditory feedback to produce fluent speech; this region is engaged specifically when speech production is effortful, as during articulation of long utterances.
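The delay manipulation at the core of this DAF paradigm is conceptually simple: the microphone signal is buffered and played back a fixed number of milliseconds later. The sketch below illustrates the idea with a plain ring buffer in Python; it is not the authors' recording setup, and the sample rate, block size, and 200 ms delay are assumed example values.

```python
# Minimal sketch of a delayed-auditory-feedback (DAF) loop using a ring
# buffer. Illustration only -- not the study's apparatus; parameters are
# assumed example values.
import numpy as np

def apply_daf(mic_blocks, sample_rate=16000, delay_ms=200):
    """Yield feedback blocks delayed by delay_ms relative to the input."""
    delay_samples = int(sample_rate * delay_ms / 1000)
    ring = np.zeros(delay_samples, dtype=np.float32)
    write_pos = 0
    for block in mic_blocks:
        out = np.empty_like(block)
        for i, sample in enumerate(block):
            out[i] = ring[write_pos]      # sample captured delay_ms ago
            ring[write_pos] = sample      # store the incoming sample
            write_pos = (write_pos + 1) % delay_samples
        yield out

# Example: a 1 s, 150 Hz tone standing in for an utterance, in 10 ms blocks.
sr = 16000
t = np.arange(sr) / sr
voice = np.sin(2 * np.pi * 150 * t).astype(np.float32)
delayed = np.concatenate(list(apply_daf(np.split(voice, 100), sr)))
```

Scaling `delay_ms` upward in such a loop mirrors the study's manipulation, in which longer delays produced larger auditory-cortex response enhancements.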

4.
Previous empirical observations have led researchers to propose that auditory feedback (the auditory perception of self-produced sounds when speaking) functions abnormally in the speech motor systems of persons who stutter (PWS). Researchers have theorized that an important neural basis of stuttering is the aberrant integration of auditory information into incipient speech motor commands. Because of the circumstantial support for these hypotheses and the differences and contradictions between them, there is a need for carefully designed experiments that directly examine auditory-motor integration during speech production in PWS. In the current study, we used real-time manipulation of auditory feedback to directly investigate whether the speech motor system of PWS utilizes auditory feedback abnormally during articulation and to characterize potential deficits of this auditory-motor integration. Twenty-one PWS and 18 fluent control participants were recruited. Using a short-latency formant-perturbation system, we examined participants' compensatory responses to unanticipated perturbation of auditory feedback of the first formant frequency during the production of the monophthong [ε]. The PWS showed compensatory responses that were qualitatively similar to the controls' and had close-to-normal latencies (~150 ms), but the magnitudes of their responses were substantially and significantly smaller than those of the control participants (by 47% on average, p<0.05). Measurements of auditory acuity indicated that the weaker-than-normal compensatory responses in PWS were not attributable to a deficit in low-level auditory processing. These findings are consistent with the hypothesis that stuttering is associated with functional defects in the inverse models responsible for the transformation from the domain of auditory targets and auditory error information into the domain of speech motor commands.
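One way to make the reported effect size concrete: compensation can be expressed as the mean deviation of the perturbed first-formant (F1) track from baseline, taken in a window after the ~150 ms response latency and divided by the size of the imposed shift. The sketch below is a hedged illustration of that computation; the window bounds, variable names, and numbers are invented for the example and are not the study's analysis code.

```python
# Hedged sketch: quantify compensation to an F1 perturbation as a fraction
# of the imposed shift. All numbers are illustrative assumptions.
import numpy as np

def compensation_magnitude(f1_perturbed, f1_baseline, perturbation_hz,
                           t, window=(0.150, 0.300)):
    """Mean F1 deviation from baseline in a post-latency window, as a
    fraction of the perturbation (positive = opposing the shift)."""
    mask = (t >= window[0]) & (t < window[1])
    deviation = (f1_perturbed - f1_baseline)[mask].mean()
    return -deviation / perturbation_hz

# Example: a +200 Hz upward F1 shift met by a -60 Hz response after 150 ms.
t = np.linspace(0.0, 0.4, 400)
baseline = np.full_like(t, 600.0)               # flat 600 Hz F1 track
response = baseline - 60.0 * (t > 0.15)         # partial opposing response
print(compensation_magnitude(response, baseline, 200.0, t))  # -> 0.3
```

Under a measure like this, the PWS group's responses would register at roughly half the fraction shown by fluent controls.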

5.
Along with visual feedback, somatosensory feedback provides the nervous system with information regarding movement performance. Somatosensory system damage disrupts the normal feedback process, which can lead to a pins-and-needles sensation, or paresthesia, and impaired movement control. The present study assessed the impact of temporarily induced median nerve paresthesia, in individuals with otherwise intact sensorimotor function, on goal-directed reaching and grasping movements. Healthy, right-handed participants performed reach and grasp movements to five wooden Efron shapes, of which three were selected for analysis. Participants performed the task without online visual feedback and in two somatosensory conditions: 1) normal and 2) disrupted somatosensory feedback. Disrupted somatosensory feedback was induced temporarily using a Digitimer DS7AH constant-current stimulator. Participants’ movements to shapes 15 or 30 cm to the right of the hand’s start position were recorded using a 3D motion analysis system at 300 Hz (Optotrak 3D Investigator). Analyses revealed no significant differences for reaction time. Main effects of paresthesia were observed for temporal and spatial aspects of both the reach and grasp components of the movements. Although participants scaled their grip aperture to shape size under paresthesia, the movements were smaller and more variable. Overall, participants behaved as though they perceived they were performing larger and faster movements than they actually were. We suggest that the temporarily induced paresthesia affected online control by disrupting somatosensory feedback of the reach and grasp movements, ultimately leading to smaller forces and fewer corrective movements.

6.
Evidence regarding visually guided limb movements suggests that the motor system learns and maintains neural maps between motor commands and sensory feedback. Such systems are hypothesized to be used in a feed-forward control strategy that permits precision and stability without the delays of direct feedback control. Human vocalizations involve precise control over vocal and respiratory muscles. However, little is known about the sensorimotor representations underlying speech production. Here, we manipulated the heard fundamental frequency of the voice during speech to demonstrate learning of auditory-motor maps. Mandarin speakers repeatedly produced words with specific pitch patterns (tone categories). On each successive utterance, the frequency of their auditory feedback was increased by 1/100 of a semitone until they heard their feedback one full semitone above their true pitch. Subjects automatically compensated for these changes by lowering their vocal pitch. When feedback was unexpectedly returned to normal, speakers significantly increased the pitch of their productions beyond their initial baseline frequency. This adaptation was found to generalize to the production of another tone category. However, results indicate that a more robust adaptation was produced for the tone that was spoken during feedback alteration. The immediate aftereffects suggest a global remapping of the auditory-motor relationship after an extremely brief training period. However, this learning does not represent a complete transformation of the mapping; rather, it is in part target dependent.
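The feedback schedule described above has a simple numerical form, since a shift of c cents (hundredths of a semitone) scales frequency by 2^(c/1200). A minimal sketch of the per-utterance schedule follows; the 200 Hz speaking F0 is an invented example value.

```python
# Incremental feedback-shift schedule: +1 cent per utterance, capped at
# 100 cents (one full semitone). The 200 Hz F0 is an assumed example.
def shifted_f0(true_f0_hz, utterance_index, cents_per_utterance=1.0,
               max_cents=100.0):
    cents = min(utterance_index * cents_per_utterance, max_cents)
    return true_f0_hz * 2.0 ** (cents / 1200.0)

for n in (0, 50, 100):
    print(n, round(shifted_f0(200.0, n), 2))
# 0 -> 200.0   50 -> 205.86   100 -> 211.89 (= 200 * 2**(1/12))
```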

7.

Background

Hearing ability is essential for normal speech development; however, the precise mechanisms linking auditory input to the improvement of speaking ability remain poorly understood. Auditory feedback during speech production is believed to play a critical role by providing the nervous system with information about speech outcomes that is used to learn and subsequently fine-tune speech motor output. Surprisingly, few studies have directly investigated such auditory-motor learning in the speech production of typically developing children.

Methodology/Principal Findings

In the present study, we manipulated auditory feedback during speech production in a group of 9- to 11-year-old children, as well as in adults. Following a period of speech practice under conditions of altered auditory feedback, compensatory changes in speech production and perception were examined. Consistent with prior studies, the adults exhibited compensatory changes in both their speech motor output and their perceptual representations of speech sound categories. The children exhibited compensatory changes in the motor domain, with a change in speech output similar in magnitude to that of the adults; however, the children showed no reliable compensatory effect on their perceptual representations.

Conclusions

The results indicate that 9- to 11-year-old children, whose speech motor and perceptual abilities are still not fully developed, are nonetheless capable of auditory-feedback-based sensorimotor adaptation, supporting a role for such learning processes in speech motor development. Auditory feedback may play a more limited role, however, in the fine-tuning of children's perceptual representations of speech sound categories.

8.
The potential role of a size-scaling principle in orofacial movements for speech was examined by using between-group (adults vs. 5-yr-old children) as well as within-group correlational analyses. Movements of the lower lip and jaw were recorded during speech production, and anthropometric measures of orofacial structures were made. Adult women produced speech movements of equal amplitude and velocity to those of adult men. The children produced speech movement amplitudes equal to those of adults, but they had significantly lower peak velocities of orofacial movement. Thus we found no evidence supporting a size-scaling principle for orofacial speech movements. Young children have a relatively large-amplitude, low-velocity movement strategy for speech production compared with young adults. This strategy may reflect the need for more time to plan speech movement sequences and an increased reliance on sensory feedback as young children develop speech motor control processes.

9.
Experimental manipulations of sensory feedback during complex behavior have provided valuable insights into the computations underlying motor control and sensorimotor plasticity [1]. Consistent sensory perturbations result in compensatory changes in motor output, reflecting changes in feedforward motor control that reduce the experienced feedback error. By quantifying how different sensory feedback errors affect human behavior, prior studies have explored how visual signals are used to recalibrate arm movements [2,3] and auditory feedback is used to modify speech production [4-7]. The strength of this approach rests on the ability to mimic naturalistic errors in behavior, allowing the experimenter to observe how experienced errors in production are used to recalibrate motor output.

Songbirds provide an excellent animal model for investigating the neural basis of sensorimotor control and plasticity [8,9]. The songbird brain provides a well-defined circuit in which the areas necessary for song learning are spatially separated from those required for song production, and neural recording and lesion studies have made significant advances in understanding how different brain areas contribute to vocal behavior [9-12]. However, the lack of a naturalistic error-correction paradigm - in which a known acoustic parameter is perturbed by the experimenter and then corrected by the songbird - has made it difficult to understand the computations underlying vocal learning or how different elements of the neural circuit contribute to the correction of vocal errors [13].

The technique described here gives the experimenter precise control over auditory feedback errors in singing birds, allowing the introduction of arbitrary sensory errors that can be used to drive vocal learning. Online sound-processing equipment is used to introduce a known perturbation to the acoustics of song, and a miniaturized headphones apparatus is used to replace a songbird's natural auditory feedback with the perturbed signal in real time. We have used this paradigm to perturb the fundamental frequency (pitch) of auditory feedback in adult songbirds, providing the first demonstration that adult birds maintain vocal performance using error correction [14]. The present protocol can be used to implement a wide range of sensory feedback perturbations (including but not limited to pitch shifts) to investigate the computational and neurophysiological basis of vocal learning.
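For readers who want to prototype the kind of pitch perturbation this headphones paradigm delivers, an offline shift is a reasonable stand-in. The sketch below uses librosa's offline pitch shifter; the real apparatus performs the shift online with dedicated sound-processing hardware, and the file name and shift size here are invented example values.

```python
# Offline illustration of a one-semitone upward pitch shift of recorded
# song. The actual paradigm shifts feedback in real time; this is only a
# prototyping stand-in. "song_motif.wav" is a hypothetical file.
import librosa

song, sr = librosa.load("song_motif.wav", sr=None)
shifted = librosa.effects.pitch_shift(song, sr=sr, n_steps=1.0,
                                      bins_per_octave=12)
# `shifted` approximates what the bird would hear through the headphones
# in place of its natural auditory feedback.
```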

10.
We describe an illusion in which a stranger's voice, when presented as the auditory concomitant of a participant's own speech, is perceived as a modified version of their own voice. When the congruence between utterance and feedback breaks down, the illusion is also broken. Compared to a baseline condition in which participants heard their own voice as feedback, hearing a stranger's voice induced robust changes in the fundamental frequency (F0) of their production. Moreover, the shift in F0 appears to be feedback dependent, since shift patterns depended reliably on the relationship between the participant's own F0 and the stranger-voice F0. The shift in F0 was evident both when the illusion was present and after it was broken, suggesting that auditory feedback from production may be used separately for self-recognition and for vocal motor control. Our findings indicate that self-recognition of voices, like other body attributes, is malleable and context dependent.

11.
Seeing the articulatory gestures of the speaker (“speech reading”) enhances speech perception, especially in noisy conditions. Recent neuroimaging studies tentatively suggest that speech reading activates the speech motor system, which then influences superior-posterior temporal lobe auditory areas via an efference copy. Here, nineteen healthy volunteers were presented with silent video clips of a person articulating the Finnish vowels /a/, /i/ (non-targets), and /o/ (targets) during event-related functional magnetic resonance imaging (fMRI). Speech reading significantly activated visual cortex, posterior fusiform gyrus (pFG), posterior superior temporal gyrus and sulcus (pSTG/S), and the speech motor areas, including premotor cortex, parts of the inferior (IFG) and middle (MFG) frontal gyri extending into frontal polar (FP) structures, somatosensory areas, and supramarginal gyrus (SMG). Structural equation modelling (SEM) of these data suggested that information flows first from extrastriate visual cortex to pFG and from there, in parallel, to pSTG/S and MFG/FP. From pSTG/S, information flow continues to IFG or SMG and eventually to somatosensory areas. Feedback connectivity was estimated to run from MFG/FP to IFG and pSTG/S. The direct functional connection from pFG to MFG/FP and the feedback connection from MFG/FP to pSTG/S and IFG support the hypothesis that prefrontal speech motor areas influence auditory speech processing in pSTG/S via an efference copy.

12.
Snakes are frequently described in both popular and technical literature as either deaf or able to perceive only groundborne vibrations. Physiological studies have shown that snakes are actually most sensitive to airborne vibrations. Snakes are able to detect both airborne and groundborne vibrations using their body surface (termed somatic hearing) as well as from their inner ears. The central auditory pathways for these two modes of "hearing" remain unknown. Recent experimental evidence has shown that snakes can respond behaviorally to both airborne and groundborne vibrations. The ability of snakes to contextualize the sounds and respond with consistent predatory or defensive behaviors suggests that auditory stimuli may play a larger role in the behavioral ecology of snakes than was previously realized. Snakes produce sounds in a variety of ways, and there appear to be multiple acoustic Batesian mimicry complexes among snakes. Analyses of the proclivity for sound production and the acoustics of the sounds produced within a habitat or phylogeny specific context may provide insights into the behavioral ecology of snakes. The relatively low information content in the sounds produced by snakes suggests that these sounds are not suitable for intraspecific communication. Nevertheless, given the diversity of habitats in which snakes are found, and their dual auditory pathways, some form of intraspecific acoustic communication may exist in some species.

13.
The study of the production of co-speech gestures (CSGs), i.e., meaningful hand movements that often accompany speech during everyday discourse, provides an important opportunity to investigate the integration of language, action, and memory because of the semantic overlap between gesture movements and speech content. Behavioral studies of CSGs and speech suggest that they have a common base in memory and predict that overt production of both speech and CSGs would be preceded by neural activity related to memory processes. However, to date the neural correlates and timing of CSG production are still largely unknown. In the current study, we addressed these questions with magnetoencephalography and a semantic association paradigm in which participants overtly produced speech or gesture responses that were either meaningfully related to a stimulus or not. Using spectral and beamforming analyses to investigate the neural activity preceding the responses, we found a desynchronization in the beta band (15–25 Hz), which originated 900 ms prior to the onset of speech and was localized to motor and somatosensory regions in the cortex and cerebellum, as well as right inferior frontal gyrus. Beta desynchronization is often seen as an indicator of motor processing and thus reflects motor activity related to the hand movements that gestures add to speech. Furthermore, our results show oscillations in the high gamma band (50–90 Hz), which originated 400 ms prior to speech onset and were localized to the left medial temporal lobe. High gamma oscillations have previously been found to be involved in memory processes and we thus interpret them to be related to contextual association of semantic information in memory. The results of our study show that high gamma oscillations in medial temporal cortex play an important role in the binding of information in human memory during speech and CSG production.
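The beta-band desynchronization reported here is conventionally quantified by band-limiting the signal and tracking its power envelope. Below is a minimal single-channel sketch of that step using a 15–25 Hz band-pass filter and the Hilbert envelope; the sampling rate and synthetic data are assumed example values, and the study itself used spectral and beamforming analyses across sensors.

```python
# Hedged sketch: instantaneous beta-band (15-25 Hz) power of one channel.
# A drop in this envelope before movement onset is a desynchronization.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def beta_power(signal, fs, low=15.0, high=25.0, order=4):
    b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
    return np.abs(hilbert(filtfilt(b, a, signal))) ** 2

fs = 600.0                                  # assumed sampling rate (Hz)
t = np.arange(int(2 * fs)) / fs
channel = np.sin(2 * np.pi * 20 * t) + 0.5 * np.random.randn(t.size)
power = beta_power(channel, fs)             # instantaneous beta power
```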

14.
Noninformative vision improves haptic spatial perception
Previous studies have attempted to map somatosensory space via haptic matching tasks and have shown that individuals make large and systematic matching errors, the magnitude and angular direction of which vary systematically through the workspace. Based on such demonstrations, it has been suggested that haptic space is non-Euclidean. This conclusion assumes that spatial perception is modality specific, and it largely ignores the fact that tactile matching tasks involve active, exploratory arm movements. Here we demonstrate that, when individuals match two bar stimuli (i.e., make them parallel) in circumstances favoring extrinsic (visual) coordinates, providing noninformative visual information significantly increases the accuracy of haptic perception. In contrast, when individuals match the same bar stimuli in circumstances favoring the coding of movements in intrinsic (limb-based) coordinates, providing identical noninformative visual information either has no effect or decreases the accuracy of haptic perception. These results are consistent with optimal models of sensory integration, in which the weighting given to visual and somatosensory signals depends upon the precision of each, and provide important evidence for the task-dependent integration of visual and somatosensory signals during the construction of a representation of peripersonal space.

15.
American water shrews (Sorex palustris) are aggressive predators that dive into streams and ponds to find prey at night. They do not use eyesight for capturing fish or for discriminating shapes. Instead, they use their vibrissae to detect and attack water movements generated by active prey and to detect the form of stationary prey. Tactile investigations are supplemented with underwater sniffing. This remarkable behavior consists of exhaling air bubbles that spread onto objects and are then re-inhaled. Recordings of ultrasound both above and below water provide no evidence for echolocation or sonar, and presentation of electric fields and anatomical investigations provide no evidence for electroreception. Counts of myelinated fibers show that by far the largest volume of sensory information comes from the trigeminal nerve, compared to the optic and cochlear nerves. This is in turn reflected in the organization of the water shrew’s neocortex, which contains two large somatosensory areas and much smaller visual and auditory areas. The shrew’s small brain with few cortical areas may allow exceptional speed in processing sensory information and producing motor output. Water shrews can accurately attack the source of a water disturbance in only 50 ms, perhaps outpacing any other mammalian predator.

16.
The posterior inner perisylvian region, including the secondary somatosensory cortex (area SII) and the adjacent region of posterior insular cortex (pIC), has been implicated in haptic processing through the integration of somato-motor information during hand manipulation, both in humans and in non-human primates. However, motor-related properties during hand manipulation are still largely unknown. To investigate motor-related activity in the hand region of SII/pIC, two macaque monkeys were trained to perform a hand-manipulation task requiring 3 different grip types (precision grip, finger exploration, side grip) in both light and dark conditions. Our results showed that 70% (n = 33/48) of task-related neurons within SII/pIC were activated only during the monkeys’ active hand manipulation. Of those 33 neurons, 15 (45%) began to discharge before hand-target contact, while the remaining neurons were tonically active after contact. Thirty percent (n = 15/48) of the studied neurons responded both to passive somatosensory stimulation and to the motor task. A substantial percentage of task-related neurons in SII/pIC was selectively activated during finger exploration (FE) and precision grasping (PG) execution, suggesting that they play a pivotal role in the control of skilled finger movements. Furthermore, hand-manipulation-related neurons also responded when visual feedback was absent in the dark. Altogether, our results suggest that somato-motor neurons in SII/pIC contribute to haptic processing from the initial to the final phase of grasping and object manipulation. Such motor-related activity could also provide the somato-motor binding principle enabling the translation of diachronic somatosensory inputs into a coherent image of the explored object.

17.
Speech perception often benefits from vision of the speaker's lip movements when they are available. One potential mechanism underlying this gain from audio-visual integration is on-line prediction. In this study we address whether preceding speech context in a single modality can improve audiovisual processing and whether this improvement is based on on-line information transfer across sensory modalities. In the experiments presented here, during each trial, a speech fragment (context) presented in a single sensory modality (voice or lips) was immediately continued by an audiovisual target fragment. Participants made speeded judgments about whether voice and lips were in agreement in the target fragment. The leading single-modality context and the subsequent audiovisual target fragment could be continuous in one modality only, in both modalities (context in one modality continues into both modalities in the target fragment), or in neither modality (i.e., discontinuous). The results showed quicker audiovisual matching responses when context was continuous with the target within either the visual or auditory channel (Experiment 1). Critically, prior visual context also provided an advantage when it was cross-modally continuous (with the auditory channel in the target), but auditory-to-visual cross-modal continuity resulted in no advantage (Experiment 2). This suggests that visual speech information can provide an on-line benefit for processing the upcoming auditory input through the use of predictive mechanisms. We hypothesize that this benefit is expressed at an early level of speech analysis.

18.
Species-specific vocalizations fall into two broad categories: those that emerge during maturation, independent of experience, and those that depend on early life interactions with conspecifics. Human language and the communication systems of a small number of other species, including songbirds, fall into this latter class of vocal learning. Self-monitoring has been assumed to play an important role in the vocal learning of speech, and studies demonstrate that perception of one's own voice is crucial for both the development and lifelong maintenance of vocalizations in humans and songbirds. Experimental modifications of auditory feedback can also change vocalizations in both humans and songbirds. However, with the exception of large manipulations of timing, no study to date has directly examined the use of auditory feedback in speech production under the age of 4. Here we use a real-time formant perturbation task to compare the responses of toddlers, children, and adults to altered feedback. Children and adults reacted to this manipulation by changing their vowels in a direction opposite to the perturbation. Surprisingly, toddlers' speech did not change in response to altered feedback, suggesting that long-held assumptions regarding the role of self-perception in articulatory development need to be reconsidered.

19.
Auditory feedback is required to maintain fluent speech. At present, it is unclear how attention modulates auditory feedback processing during ongoing speech. In this event-related potential (ERP) study, participants vocalized /a/ while they heard their vocal pitch suddenly shifted downward by ½ semitone in both single- and dual-task conditions. During the single-task condition, participants passively viewed a visual stream for cues to start and stop vocalizing. In the dual-task condition, participants vocalized while they identified target stimuli in a visual stream of letters. The presentation rate of the visual stimuli was manipulated in the dual-task condition in order to produce low, intermediate, and high attentional loads. Visual target identification accuracy was lowest in the high attentional load condition, indicating that attentional load was successfully manipulated. Results further showed that participants who were exposed to the single-task condition prior to the dual-task condition produced larger vocal compensations during the single-task condition. Thus, when participants’ attention was divided, less attention was available for monitoring their auditory feedback, resulting in smaller compensatory vocal responses. However, P1-N1-P2 ERP responses were not affected by divided attention, suggesting that the effect of attentional load was not on the auditory processing of pitch-altered feedback; instead, it interfered with the integration of auditory and motor information, or with motor control itself.

20.
Tschida KA, Mooney R. Neuron. 2012;73(5):1028-1039.
Hearing loss prevents vocal learning and causes learned vocalizations to deteriorate, but how vocalization-related auditory feedback acts on neural circuits that control vocalization remains poorly understood. We deafened adult zebra finches, which rely on auditory feedback to maintain their learned songs, to test the hypothesis that deafening modifies synapses on neurons in a sensorimotor nucleus important to song production. Longitudinal in vivo imaging revealed that deafening selectively decreased the size and stability of dendritic spines on neurons that provide input to a striatothalamic pathway important to audition-dependent vocal plasticity, and changes in spine size preceded and predicted subsequent vocal degradation. Moreover, electrophysiological recordings from these neurons showed that structural changes were accompanied by functional weakening of both excitatory and inhibitory synapses, increased intrinsic excitability, and changes in spontaneous action potential output. These findings shed light on where and how auditory feedback acts within sensorimotor circuits to shape learned vocalizations.
