Similar Articles
20 similar articles found (search time: 15 ms)
1.
Prelingually deafened children with cochlear implants stand a good chance of developing satisfactory speech performance. Nevertheless, their eventual language performance is highly variable and not fully explainable by the duration of deafness and hearing experience. In this study, two groups of cochlear implant users (CI groups) with very good basic hearing abilities but non-overlapping speech performance (very good or very poor) were matched according to hearing age and age at implantation. We assessed whether these CI groups differed with regard to their phoneme discrimination ability and auditory sensory memory capacity, as suggested by earlier studies. These functions were measured behaviorally and with the mismatch negativity (MMN). Phoneme discrimination ability was comparable between the good performers and matched healthy controls, both of whom outperformed the poor performers. Source analyses revealed larger MMN activity (155–225 ms) in good than in poor performers, generated in the frontal cortex and positively correlated with measures of working memory. In the poor performers, this was followed by increased activation of left temporal regions from 225 to 250 ms, focused on the auditory cortex. These results indicate that the two CI groups developed different auditory speech processing strategies, and they underscore the role of phonological functions of auditory sensory memory and the prefrontal cortex in the successful development of speech perception and production.

2.
Althen H, Grimm S, Escera C. PLoS ONE 2011, 6(12): e28522
The detection of deviant sounds is a crucial function of the auditory system and is reflected by the automatically elicited mismatch negativity (MMN), an auditory evoked potential occurring 100 to 250 ms from stimulus onset. It has recently been shown that rarely occurring frequency and location deviants in an oddball paradigm trigger a more negative response than standard sounds at very early latencies, within the middle latency response of the human auditory evoked potential. This fast, early capability of the auditory system is corroborated by the finding of neurons in the animal auditory cortex and subcortical structures that restore their adapted responsiveness to standard sounds when a rare change in a sound feature occurs. In this study, we investigated whether the detection of intensity deviants is also reflected at shorter latencies than those of the MMN. Auditory evoked potentials in response to click sounds were analyzed with respect to the auditory brainstem response, the middle latency response (MLR) and the MMN. Rare stimuli with a lower intensity level than standard stimuli elicited (in addition to an MMN) a more negative potential in the MLR, at the transition from the Na to the Pa component at about 24 ms from stimulus onset. This finding, together with the studies on frequency and location changes, suggests that the early automatic detection of deviant sounds in an oddball paradigm is a general property of the auditory system.
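The oddball paradigm used in studies like this one is easy to simulate. Below is a minimal Python sketch that generates a pseudo-random oddball label sequence; the 10% deviant probability and the rule that two deviants never occur back to back are illustrative assumptions common in MMN designs, not parameters reported in this abstract.

```python
import random

def oddball_sequence(n_trials=1000, p_deviant=0.10, seed=0):
    """Generate a pseudo-random oddball label sequence.

    The 'no two deviants in a row' constraint is a common MMN design
    choice assumed here for illustration; it slightly lowers the
    effective deviant rate below p_deviant.
    """
    rng = random.Random(seed)
    seq = []
    for _ in range(n_trials):
        if seq and seq[-1] == "deviant":
            seq.append("standard")  # enforce at least one standard between deviants
        elif rng.random() < p_deviant:
            seq.append("deviant")
        else:
            seq.append("standard")
    return seq

seq = oddball_sequence()
print(sum(s == "deviant" for s in seq) / len(seq))  # effective deviant rate
```

In a real experiment each label would be mapped to a stimulus (here, a standard or lower-intensity click) presented at a fixed stimulus-onset asynchrony.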

3.
Examination of the cortical auditory evoked potentials to complex tones changing in pitch and timbre suggests a useful new method for investigating higher auditory processes, in particular those concerned with 'streaming' and auditory object formation. The main conclusions were: (i) the N1 evoked by a sudden change in pitch or timbre was more posteriorly distributed than the N1 at the onset of the tone, indicating at least partial segregation of the neuronal populations responsive to sound onset and spectral change; (ii) the T-complex was consistently larger over the right hemisphere, consistent with clinical and PET evidence for particular involvement of the right temporal lobe in the processing of timbral and musical material; (iii) responses to timbral change were relatively unaffected by increasing the rate of interspersed changes in pitch, suggesting a mechanism for detecting the onset of a new voice in a constantly modulated sound stream; (iv) responses to onset, offset and pitch change of complex tones were relatively unaffected by interfering tones when the latter were of a different timbre, suggesting these responses must be generated subsequent to auditory stream segregation.

4.
Timbre is the attribute of sound that allows humans and other animals to distinguish among different sound sources. Studies based on psychophysical judgments of musical timbre, ecological analyses of sound's physical characteristics as well as machine learning approaches have all suggested that timbre is a multifaceted attribute that invokes both spectral and temporal sound features. Here, we explored the neural underpinnings of musical timbre. We used a neuro-computational framework based on spectro-temporal receptive fields, recorded from over a thousand neurons in the mammalian primary auditory cortex as well as from simulated cortical neurons, augmented with a nonlinear classifier. The model was able to perform robust instrument classification irrespective of pitch and playing style, with an accuracy of 98.7%. Using the same front end, the model was also able to reproduce perceptual distance judgments between timbres as perceived by human listeners. The study demonstrates that joint spectro-temporal features, such as those observed in the mammalian primary auditory cortex, are critical to provide the rich-enough representation necessary to account for perceptual judgments of timbre by human listeners, as well as recognition of musical instruments.
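The "joint spectro-temporal features" this abstract refers to can be illustrated, very loosely, with a toy computation: the 2-D Fourier transform of a log-spectrogram measures energy jointly across temporal modulations (rate) and spectral modulations (scale). The sketch below is not the study's cortical STRF model; the window sizes and the amplitude-modulated test tone are arbitrary choices for illustration.

```python
import numpy as np

def modulation_features(signal, win=256, hop=128):
    """Toy spectro-temporal feature sketch: log-spectrogram -> 2-D FFT.

    The 2-D Fourier transform of a spectrogram captures energy at joint
    combinations of temporal modulation (along time) and spectral
    modulation (along frequency), a crude stand-in for STRF-like filters.
    """
    # Short-time Fourier magnitude spectrogram (Hann-windowed frames).
    n_frames = 1 + (len(signal) - win) // hop
    window = np.hanning(win)
    frames = np.stack([signal[i * hop:i * hop + win] * window
                       for i in range(n_frames)])
    logspec = np.log1p(np.abs(np.fft.rfft(frames, axis=1)))  # (time, freq)
    # Joint modulation spectrum over the time and frequency axes.
    return np.abs(np.fft.fft2(logspec))

# Example: a 440 Hz tone with 8 Hz amplitude modulation, 1 s at 16 kHz.
sr = 16000
t = np.arange(sr) / sr
x = (1 + 0.5 * np.sin(2 * np.pi * 8 * t)) * np.sin(2 * np.pi * 440 * t)
feats = modulation_features(x)
print(feats.shape)
```

A classifier operating on such features would see both where the energy sits in frequency and how it fluctuates in time, which is the kind of joint representation the study argues is necessary for timbre.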

5.
Auditory event-related potentials (ERPs) were recorded to dichotically presented white-noise stimuli (duration 1500 ms, band 150-1200 Hz). An abrupt or gradual change of interaural time difference in the middle of the stimulus (750 ms after sound onset) was perceived as an instant relocation or motion of the apparent auditory image (AI) from the midline toward one of the ears. In response to these stimuli, two ERPs were observed: one to the sound onset, and a second to the onset of motion or AI relocation. ERPs to AI relocation differed from those to sound onset in longer component latencies (123 ms versus 105 ms for N1; 227 ms versus 190 ms for P2). In responses to AI motion, component latencies were even longer (N1: 137 ms, P2: 240 ms); N1 amplitude was greater at sites contralateral to the direction of AI motion.

6.
Cornella M, Leung S, Grimm S, Escera C. PLoS ONE 2012, 7(8): e43604
Auditory deviance detection in humans is indexed by the mismatch negativity (MMN), a component of the auditory evoked potential (AEP) of the electroencephalogram (EEG) occurring at a latency of 100-250 ms after stimulus onset. However, by using classic oddball paradigms, differential responses to regularity violations of simple auditory features have been found at the level of the middle latency response (MLR) of the AEP occurring within the first 50 ms after stimulus (deviation) onset. These findings suggest the existence of fast deviance detection mechanisms for simple feature changes, but it is not clear whether deviance detection among more complex acoustic regularities could be observed at such early latencies. To test this, we examined the pre-attentive processing of rare stimulus repetitions in a sequence of tones alternating in frequency in both long and middle latency ranges. Additionally, we introduced occasional changes in the interaural time difference (ITD), so that a simple-feature regularity could be examined in the same paradigm. MMN was obtained for both repetition and ITD deviants, occurring at 150 ms and 100 ms after stimulus onset, respectively. At the level of the MLR, a difference was observed between standards and ITD deviants at the Na component (20-30 ms after stimulus onset) for 800 Hz tones, but not for repetition deviants. These findings suggest that detection mechanisms for deviants to simple regularities, but not to more complex regularities, are already activated in the MLR range, supporting the view that the auditory deviance detection system is organized in a hierarchical manner.

7.
Growing evidence indicates that syntax and semantics are basic aspects of music. After the onset of a chord, initial music-syntactic processing can be observed at about 150-400 ms and processing of musical semantics at about 300-500 ms. Processing of musical syntax activates inferior frontolateral cortex, ventrolateral premotor cortex and presumably the anterior part of the superior temporal gyrus. These brain structures have been implicated in sequencing of complex auditory information, identification of structural relationships, and serial prediction. Processing of musical semantics appears to activate posterior temporal regions. The processes and brain structures involved in the perception of syntax and semantics in music have considerable overlap with those involved in language perception, underlining intimate links between music and language in the human brain.

8.
Dance and music often co-occur, as evidenced when viewing choreographed dances or singers moving while performing. This study investigated how the viewing of dance motions shapes sound perception. Previous research has shown that dance reflects the temporal structure of its accompanying music, communicating musical meter (i.e. a hierarchical organization of beats) via coordinated movement patterns that indicate where strong and weak beats occur. The experiments here investigated the effects of dance cues on meter perception, hypothesizing that dance could embody the musical meter, thereby shaping participant reaction times (RTs) to sound targets occurring at different metrical positions.

In experiment 1, participants viewed a video with dance choreography indicating 4/4 meter (dance condition) or a series of color changes repeated in sequences of four to indicate 4/4 meter (picture condition). A soundtrack accompanied these videos and participants reacted to timbre targets at different metrical positions. Participants had the slowest RTs at the strongest beats in the dance condition only. In experiment 2, participants viewed the choreography of the horse-riding dance from Psy's “Gangnam Style” to examine how a familiar dance might affect meter perception. Participants in this experiment were divided into a group with experience dancing this choreography and a group without. Results again showed slower RTs at stronger metrical positions, and the experienced group demonstrated a more refined perception of the metrical hierarchy. These results likely stem from the temporally selective division of attention between the auditory and visual domains. This study has implications for understanding (1) the impact of splitting attention among different sensory modalities and (2) the impact of embodiment on the perception of musical meter. Viewing dance may interfere with sound processing, particularly at critical metrical positions, but embodied familiarity with dance choreography may facilitate meter awareness. These results shed light on the processing of multimedia environments.

9.
The objective and subjective indexes of sound stimulus discrimination were studied to gain insight into individual stages of signal processing in the human brain. The experiment employed two methods: electrophysiological (mismatch negativity, or MMN, recording) and psychophysical (two-alternative forced choice). Two types of spatial sound stimuli simulated gradual and abrupt sound motion away from the head midline. The subjective discrimination between the gradual and abrupt motions was estimated as a function of the stimulus trajectory length. MMN, as an objective index of spatial discrimination, was obtained in response to both subthreshold and suprathreshold levels of psychophysical discrimination. An increase in the angular displacement of the moving stimuli resulted in an increase in both the MMN amplitude and the subjective discrimination, although their correlation remained below the significance level. The results are discussed from the point of view of preconscious perception of auditory spatial information.

10.
The present article outlines the contribution of the mismatch negativity (MMN), and its magnetic equivalent MMNm, to our understanding of the perception of speech sounds in the human brain. MMN data indicate that each sound, both speech and non-speech, develops its neural representation corresponding to the percept of this sound in the neurophysiological substrate of auditory sensory memory. The accuracy of this representation, determining the accuracy of the discrimination between different sounds, can be probed with MMN separately for any auditory feature or stimulus type such as phonemes. Furthermore, MMN data show that the perception of phonemes, and probably also of larger linguistic units (syllables and words), is based on language-specific phonetic traces developed in the posterior part of the left-hemisphere auditory cortex. These traces serve as recognition models for the corresponding speech sounds in listening to speech.

11.
The perception of a regular beat is fundamental to music processing. Here we examine whether the detection of a regular beat is pre-attentive for metrically simple, acoustically varying stimuli using the mismatch negativity (MMN), an ERP response elicited by violations of acoustic regularity irrespective of whether subjects are attending to the stimuli. Both musicians and non-musicians were presented with a varying rhythm with a clear accent structure in which occasionally a sound was omitted. We compared the MMN response to the omission of identical sounds in different metrical positions. Most importantly, we found that omissions in strong metrical positions, on the beat, elicited higher amplitude MMN responses than omissions in weak metrical positions, not on the beat. This suggests that the detection of a beat is pre-attentive when highly beat-inducing stimuli are used. No effects of musical expertise were found. Our results suggest that for metrically simple rhythms with clear accents, beat processing does not require attention or musical expertise. In addition, we discuss how the use of acoustically varying stimuli may influence ERP results when studying beat processing.

12.
Decoding human speech requires both perception and integration of brief, successive auditory stimuli entering the central nervous system, as well as the allocation of attention to language-relevant signals. This study assesses the role of attention in processing rapid transient stimuli in adults and children. Cortical responses (EEG/ERPs), specifically mismatch negativity (MMN) responses, to paired tones (standard 100–100 Hz; deviant 100–300 Hz) separated by a 300, 70 or 10 ms silent gap (ISI) were recorded under Ignore and Attend conditions in 21 adults and 23 children (6–11 years old). In adults, an attention-related enhancement was found for all rate conditions, and laterality effects (L>R) were observed. In children, two auditory discrimination-related peaks were identified from the difference wave (deviant minus standard): an early peak (eMMN) at about 100–300 ms indexing sensory processing, and a later peak (LDN) at about 400–600 ms, thought to reflect reorientation to the deviant stimuli or “second-look” processing. Results revealed differing patterns of activation and attention modulation for the eMMN in children as compared to the MMN in adults: the eMMN had a more frontal topography than in adults, and attention played a significantly greater role in children's rate processing. The pattern of findings for the LDN was consistent with hypothesized mechanisms related to further processing of complex stimuli. The differences between the eMMN and LDN observed here support the premise that separate cognitive processes and mechanisms underlie these ERP peaks. These findings are the first to show that the eMMN and LDN differ under different temporal and attentional conditions, and that a more complete understanding of children's responses to rapid successive auditory stimulation requires examination of both peaks.
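The difference wave (deviant minus standard) mentioned in this abstract is the standard way to extract the MMN from averaged EEG epochs. The sketch below shows the computation on synthetic data; the epoch counts, sampling rate and simulated negativity are made-up values for illustration only, not parameters from the study.

```python
import numpy as np

def difference_wave(epochs, labels):
    """Deviant-minus-standard average ERP; the MMN appears as the
    negative deflection in the returned waveform."""
    labels = np.asarray(labels)
    deviant_avg = epochs[labels == "deviant"].mean(axis=0)
    standard_avg = epochs[labels == "standard"].mean(axis=0)
    return deviant_avg - standard_avg

# Synthetic demo: 500 Hz sampling, 250-sample (0.5 s) epochs, with a
# made-up negativity at ~150 ms added only to the deviant trials.
rng = np.random.default_rng(0)
sr, n = 500, 250
t = np.arange(n) / sr
negativity = -np.exp(-((t - 0.15) ** 2) / (2 * 0.02 ** 2))
standards = rng.normal(0.0, 0.05, (200, n))
deviants = rng.normal(0.0, 0.05, (40, n)) + negativity
epochs = np.vstack([standards, deviants])
labels = ["standard"] * 200 + ["deviant"] * 40

mmn = difference_wave(epochs, labels)
print(t[np.argmin(mmn)])  # latency of the simulated MMN peak, near 0.15 s
```

Real pipelines add filtering, baseline correction and artifact rejection before averaging, but the subtraction itself is this simple.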

13.
Perception of our environment is a multisensory experience; information from different sensory systems like the auditory, visual and tactile is constantly integrated. Complex tasks that require high temporal and spatial precision of multisensory integration put strong demands on the underlying networks, but it is largely unknown how task experience shapes multisensory processing. Long-term musical training is an excellent model for brain plasticity because it shapes the human brain at functional and structural levels, affecting a network of brain areas. In the present study we used magnetoencephalography (MEG) to investigate how audio-tactile perception is integrated in the human brain and whether musicians show enhanced corresponding activation compared to non-musicians. Using a paradigm that allowed the investigation of combined and separate auditory and tactile processing, we found a multisensory incongruency response, generated in frontal, cingulate and cerebellar regions, an auditory mismatch response generated mainly in the auditory cortex and a tactile mismatch response generated in frontal and cerebellar regions. The influence of musical training was seen in the audio-tactile as well as in the auditory condition, indicating enhanced higher-order processing in musicians, while the sources of the tactile MMN were not influenced by long-term musical training. Consistent with the predictive coding model, more basic, bottom-up sensory processing was relatively stable and less affected by expertise, whereas areas for top-down models of multisensory expectancies were modulated by training.

14.

Background

The Mismatch Negativity (MMN) is an event-related potential (ERP) sensitive to early auditory deviance detection and has been shown to be reduced in schizophrenia patients. Moreover, MMN amplitude reduction to duration deviant tones was found to be related to functional outcomes, particularly to neuropsychological (working memory and verbal domains) and psychosocial measures. While MMN amplitude is thought to be correlated with deficits of early sensory processing, the functional significance of MMN latency remains unclear so far. The present study focused on the investigation of MMN in relation to neuropsychological function in schizophrenia.

Method

Forty schizophrenia patients and 16 healthy controls underwent a passive oddball paradigm (2400 binaural tones; 88% standards [1 kHz, 80 dB, 80 ms], 11% frequency deviants [1.2 kHz], 11% duration deviants [40 ms]) and a neuropsychological test battery. Patients were also assessed with regard to clinical symptoms.

Results

Compared to healthy controls, schizophrenia patients showed diminished MMN amplitude and shorter MMN latency to both deviants, as well as impaired neuropsychological test performance. Severity of positive symptoms was related to decreased MMN amplitude to duration deviants. Furthermore, enhanced verbal memory performance was associated with prolonged MMN latency to frequency deviants in patients.

Conclusion

The present study corroborates previous findings of diminished MMN amplitude and its association with positive symptoms in schizophrenia patients. Both the finding of shorter latencies to duration and frequency deviants and the relationship of the latter with verbal memory in patients emphasize the relevance of the temporal aspect of early auditory discrimination processing in schizophrenia.

15.
Eriksson J, Villa AE. BioSystems 2005, 79(1-3): 207-212
Evoked potentials were recorded from the auditory cortex of both freely moving and anesthetized rats while deviant sounds were presented in a homogeneous series of standard sounds (oddball condition). A component of the evoked response to deviant sounds, the mismatch negativity (MMN), may underlie the ability to discriminate acoustic differences, a fundamental aspect of auditory perception. Whereas most MMN studies in animals have used simple sounds, this study involved a more complex set of sounds (synthesized vowels). The freely moving rats had previously undergone behavioral training in which they learned to respond differentially to these sounds. Although we found little evidence in this preparation for the typical, epidurally recorded MMN response, a significant difference between deviant and standard evoked potentials was noted for the freely moving animals in the 100-200 ms range following stimulus onset. No such difference was found in the anesthetized animals.

16.
Detecting sudden environmental changes is crucial for the survival of humans and animals. In the human auditory system the mismatch negativity (MMN), a component of auditory evoked potentials (AEPs), reflects the violation of predictable stimulus regularities established by the previous auditory sequence. Given the considerable potential of the MMN for clinical applications, establishing valid animal models that allow for detailed investigation of its neurophysiological mechanisms is important. Rodent studies, so far almost exclusively under anesthesia, have not provided decisive evidence as to whether an MMN analogue exists in rats. This may be due to several factors, including the effect of anesthesia. We therefore used epidural recordings in awake black hooded rats, from two auditory cortical areas in both hemispheres, and with bandpass-filtered noise stimuli that were optimized in frequency and duration for eliciting MMN in rats. Using a classical oddball paradigm with frequency deviants, we detected mismatch responses at all four electrodes in primary and secondary auditory cortex, with morphological and functional properties similar to those known in humans, i.e., large-amplitude biphasic differences that increased in amplitude with decreasing deviant probability. These mismatch responses significantly diminished in a control condition that removed the predictive context while controlling for presentation rate of the deviants. While our present study does not allow for disambiguating precisely the relative contribution of adaptation and prediction error processing to the observed mismatch responses, it demonstrates that MMN-like potentials can be obtained in awake and unrestrained rats.

17.
Auditory evoked potentials were recorded to onset and offset of synthesised instrumental tones in 40 normal subjects, 20 right-handed for writing and 20 left-handed. The majority of both groups showed a T-complex which was larger at the right temporal electrode (T4) than the left (T3). In the T4-T3 difference waveforms, the mean potential between latencies of 130 and 165 ms was negative in all right-handed subjects except two for whom the waveforms were marginally positive-going. Amongst the left-handers, however, this converse asymmetry was seen in 7 subjects, 5 of them more than 2 standard deviations from the mean of the right-handed group. The degree of asymmetry was not significantly correlated with the degree of left-handedness according to the Edinburgh Handedness Inventory. Asymmetry of the T-complex to instrumental tones appears to reflect the lateralisation of auditory 'musical' processing in the temporal cortex, confirming evidence from other sources including PET that this is predominantly right-sided in the majority of individuals. The proportion of left-handers showing the converse laterality is roughly in accordance with those likely to be right-hemisphere-dominant for language. If linguistic and 'musical' processes are consistently located in opposite hemispheres, AEPs to complex tones may prove a useful tool in establishing functional lateralisation.

18.

Background

Understanding the time course of how listeners reconstruct a missing fundamental component in an auditory stimulus remains elusive. We report MEG evidence that the missing fundamental component of a complex auditory stimulus is recovered in auditory cortex within 100 ms post stimulus onset.

Methodology

Two outside tones of four-tone complex stimuli were held constant (1200 Hz and 2400 Hz), while two inside tones were systematically modulated (between 1300 Hz and 2300 Hz), such that the restored fundamental (also known as “virtual pitch”) changed from 100 Hz to 600 Hz. Constructing the auditory stimuli in this manner controls for a number of spectral properties known to modulate the neuromagnetic signal. The tone complex stimuli only diverged on the value of the missing fundamental component.

Principal Findings

We compared the M100 latencies of these tone complexes to the M100 latencies elicited by their respective pure tone (spectral pitch) counterparts. The M100 latencies for the tone complexes matched their pure sinusoid counterparts, while also replicating the M100 temporal latency response curve found in previous studies.

Conclusions

Our findings suggest that listeners are reconstructing the inferred pitch by roughly 100 ms after stimulus onset and are consistent with previous electrophysiological research suggesting that the inferential pitch is perceived in early auditory cortex.

19.
Normal maturation and functioning of the central auditory system affects the development of speech perception and oral language capabilities. This study examined maturation of central auditory pathways as reflected by age-related changes in the P1/N1 components of the auditory evoked potential (AEP). A synthesized consonant-vowel syllable (ba) was used to elicit cortical AEPs in 86 normal children ranging in age from 6 to 15 years and ten normal adults. Distinct age-related changes were observed in the morphology of the AEP waveform. The adult response consists of a prominent negativity (N1) at about 100 ms, preceded by a smaller P1 component at about 50 ms. In contrast, the child response is characterized by a large P1 response at about 100 ms. This wave decreases significantly in latency and amplitude up to about 20 years of age. In children, P1 is followed by a broad negativity at about 200 ms which we term N1b. Many subjects (especially older children) also show an earlier negativity (N1a). Both N1a and N1b latencies decrease significantly with age. Amplitudes of N1a and N1b do not show significant age-related changes. All children have the N1b; however, the frequency of occurrence of N1a increases with age. Data indicate that the child P1 develops systematically into the adult response; however, the relationship of N1a and N1b to the adult N1 is unclear. These results indicate that maturational changes in the central auditory system are complex and extend well into the second decade of life.

20.
Selectively attending to task-relevant sounds whilst ignoring background noise is one of the most amazing feats performed by the human brain. Here, we studied the underlying neural mechanisms by recording magnetoencephalographic (MEG) responses of 14 healthy human subjects while they performed a near-threshold auditory discrimination task vs. a visual control task of similar difficulty. The auditory stimuli consisted of notch-filtered continuous noise masker sounds, and of 1020-Hz target tones occasionally replacing 1000-Hz standard tones of 300-ms duration that were embedded at the center of the notches, the widths of which were parametrically varied. As a control for masker effects, tone-evoked responses were additionally recorded without the masker sound. Selective attention to tones significantly increased the amplitude of the onset M100 response at 100 ms to the standard tones in the presence of the masker sounds, especially with notches narrower than the critical band. Further, attention modulated the sustained response most clearly in the 300–400 ms range from sound onset, with narrower notches than in the case of the M100, thus selectively reducing the masker-induced suppression of the tone-evoked response. Our results show evidence of a multiple-stage filtering mechanism of sensory input in the human auditory cortex: 1) one at early (100 ms) latencies bilaterally in posterior parts of the secondary auditory areas, and 2) adaptive filtering of attended sounds from the task-irrelevant background masker at longer latencies (300 ms) in more medial auditory cortical regions, predominantly in the left hemisphere, enhancing processing of near-threshold sounds.
