Similar Articles
20 similar articles found (search time: 31 ms)
1.
Although individuals with autism are known to have significant communication problems, the cellular mechanisms responsible for impaired communication are poorly understood. Valproic acid (VPA) is an anticonvulsant that is a known risk factor for autism in prenatally exposed children. Prenatal VPA exposure in rats causes numerous neural and behavioral abnormalities that mimic autism. We predicted that VPA exposure may lead to auditory processing impairments which may contribute to the deficits in communication observed in individuals with autism. In this study, we document auditory cortex responses in rats prenatally exposed to VPA. We recorded local field potentials and multiunit responses to speech sounds in primary auditory cortex, anterior auditory field, ventral auditory field, and posterior auditory field in VPA-exposed and control rats. Prenatal VPA exposure severely degrades the precise spatiotemporal patterns evoked by speech sounds in secondary, but not primary, auditory cortex. This result parallels findings in humans and suggests that secondary auditory fields may be more sensitive to environmental disturbances and may provide insight into possible mechanisms related to auditory deficits in individuals with autism. © 2014 Wiley Periodicals, Inc. Develop Neurobiol 74: 972–986, 2014

2.
We studied auditory and visual evoked potentials in D.W., a patient with congenital stenosis of the cerebral aqueduct. Head CT scans revealed marked hydrocephalus with expanded ventricles filling more than 80% of the cranium and compressing brain tissue to less than 1 cm in thickness. Despite the striking neuroanatomical abnormalities, however, the patient functioned well in daily life and was attending a local community college at the time of testing. Evoked potentials provided evidence of preserved sensory processing at cortical levels. Pattern reversal visual evoked potentials had normal latencies and amplitudes. Brain-stem auditory evoked potentials (BAEPs) showed normal wave V latencies. Na and Pa components of the middle-latency AEP had normal amplitudes and latencies at the vertex, although amplitudes at lateral electrodes were larger than at the midline. In contrast to the normal sensory responses, long-latency auditory evoked potentials to standard and target tones showed abnormal P3 components. Standard tones (probability 85%) evoked N1 components with normal amplitudes (−3.7 μV) and latencies (103 msec), but also elicited large P3 components (17 μV, latency 305 msec) that were never observed following frequent stimuli in control subjects. Target stimuli (probability 15%) elicited P3s in D.W. and controls, but P3 amplitudes were enhanced in D.W. (to more than 40 μV) and the P3 showed an unusual, frontal distribution. The results are consistent with a subcortical source of the P300. Moreover, they suggest that the substitution of controlled for automatic processes may help high-functioning hydrocephalics compensate for abnormalities in cerebral structure.

3.

Background

Visual cross-modal re-organization is a neurophysiological process that occurs in deafness. The intact sensory modality of vision recruits cortical areas from the deprived sensory modality of audition. Such compensatory plasticity is documented in deaf adults and animals, and is related to deficits in speech perception performance in cochlear-implanted adults. However, it is unclear whether visual cross-modal re-organization takes place in cochlear-implanted children and whether it may be a source of variability contributing to speech and language outcomes. Thus, the aim of this study was to determine if visual cross-modal re-organization occurs in cochlear-implanted children, and whether it is related to deficits in speech perception performance.

Methods

Visual evoked potentials (VEPs) were recorded via high-density EEG in 41 normal-hearing children and 14 cochlear-implanted children, aged 5–15 years, in response to apparent motion and form change. VEP amplitude and latency, as well as source localization results, were compared between the groups to assess evidence of visual cross-modal re-organization. Finally, speech perception in background noise performance was correlated with the visual response in the implanted children.

Results

Distinct VEP morphological patterns were observed in both the normal hearing and cochlear-implanted children. However, the cochlear-implanted children demonstrated larger VEP amplitudes and earlier latency, concurrent with activation of right temporal cortex including auditory regions, suggestive of visual cross-modal re-organization. The VEP N1 latency was negatively related to speech perception in background noise for children with cochlear implants.

Conclusion

Our results are among the first to describe cross-modal re-organization of auditory cortex by the visual modality in deaf children fitted with cochlear implants. Our findings suggest that, as a group, children with cochlear implants show evidence of visual cross-modal recruitment, which may be a contributing source of variability in speech perception outcomes with their implant.

4.
We report 3 children without any brainstem auditory evoked potential (BAEP) neural component who all retained isolated cochlear microphonic potentials as well as click-evoked otoacoustic emissions. Two of them demonstrated only moderately impaired audiometric thresholds. These features correspond to a peculiar pattern of auditory dysfunction recently termed 'auditory neuropathy'. In contrast with previously published cases of auditory neuropathy, which presented as an acquired hearing deficit in children or young adults, all 3 children had a history of major neonatal illness and the auditory neuropathy was already demonstrated in the first months of their lives.

5.

Background

Hearing ability is essential for normal speech development; however, the precise mechanisms linking auditory input and the improvement of speaking ability remain poorly understood. Auditory feedback during speech production is believed to play a critical role by providing the nervous system with information about speech outcomes that is used to learn and subsequently fine-tune speech motor output. Surprisingly, few studies have directly investigated such auditory-motor learning in the speech production of typically developing children.

Methodology/Principal Findings

In the present study, we manipulated auditory feedback during speech production in a group of 9- to 11-year-old children, as well as in adults. Following a period of speech practice under conditions of altered auditory feedback, compensatory changes in speech production and perception were examined. Consistent with prior studies, the adults exhibited compensatory changes in both their speech motor output and their perceptual representations of speech sound categories. The children exhibited compensatory changes in the motor domain, with a change in speech output that was similar in magnitude to that of the adults; however, the children showed no reliable compensatory effect on their perceptual representations.

Conclusions

The results indicate that 9- to 11-year-old children, whose speech motor and perceptual abilities are still not fully developed, are nonetheless capable of auditory-feedback-based sensorimotor adaptation, supporting a role for such learning processes in speech motor development. Auditory feedback may play a more limited role, however, in the fine-tuning of children's perceptual representations of speech sound categories.

6.
A multiscale wavelet-transform method was used to compute the approximate entropy of brainstem auditory evoked potentials (BAEPs), comparing entropy values between children with infantile spasms and normal young children. Approximate entropy was computed and statistically analyzed segment by segment and scale by scale, with segments corresponding to the anatomical generators of the BAEP component waves, in order to explore, from the perspective of neural information transmission, why intellectual development is impaired in children with infantile spasms. BAEPs were collected from 12 normal children and 13 children with infantile spasms, decomposed into 60 wavelet scales, and the approximate entropy of each segment and scale was computed. In the infantile spasms group, the per-scale approximate entropy of the 3–7 ms segment of the BAEP, which reflects brainstem activity, was significantly higher than in the normal group (P < 0.01), particularly at small scales. These results suggest that the brainstem conduction pathway in children with infantile spasms is impaired: increased random components obstruct the transmission of information through the brainstem and in turn affect the development of the cerebral cortex.
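The entropy statistic underlying this method is not spelled out in the abstract; as an illustration only, the following is a minimal sketch of the standard Pincus-style approximate-entropy computation, applied here to a plain 1-D signal rather than to wavelet-decomposed BAEP segments. The embedding dimension and tolerance values are assumptions, not the authors' parameters.

```python
import numpy as np

def approximate_entropy(x, m=2, r_frac=0.2):
    """Approximate entropy (ApEn) of a 1-D signal.

    m      : embedding dimension (length of the template vectors)
    r_frac : match tolerance, as a fraction of the signal's standard deviation
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    r = r_frac * np.std(x)

    def phi(m):
        # All overlapping m-length template vectors of the signal.
        templates = np.array([x[i:i + m] for i in range(n - m + 1)])
        # Chebyshev (max-coordinate) distance between every pair of templates.
        dist = np.max(np.abs(templates[:, None, :] - templates[None, :, :]), axis=2)
        # For each template, the fraction of templates within tolerance r
        # (self-matches included, as in the original definition).
        c = np.mean(dist <= r, axis=1)
        return np.mean(np.log(c))

    return phi(m) - phi(m + 1)

# A random signal should score higher (more irregular) than a smooth one.
rng = np.random.default_rng(0)
regular = np.sin(np.linspace(0, 8 * np.pi, 300))
noisy = rng.standard_normal(300)
print(approximate_entropy(regular) < approximate_entropy(noisy))  # True
```

The comparison at the end mirrors the abstract's interpretation: a signal with more random components yields a higher approximate entropy.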

7.
Although up to 25% of children with autism are non-verbal, there are very few interventions that can reliably produce significant improvements in speech output. Recently, a novel intervention called Auditory-Motor Mapping Training (AMMT) has been developed, which aims to promote speech production directly by training the association between sounds and articulatory actions using intonation and bimanual motor activities. AMMT capitalizes on the inherent musical strengths of children with autism, and offers activities that they intrinsically enjoy. It also engages and potentially stimulates a network of brain regions that may be dysfunctional in autism. Here, we report an initial efficacy study to provide 'proof of concept' for AMMT. Six non-verbal children with autism participated. Prior to treatment, the children had no intelligible words. They each received 40 individual sessions of AMMT 5 times per week, over an 8-week period. Probe assessments were conducted periodically during baseline, therapy, and follow-up sessions. After therapy, all children showed significant improvements in their ability to articulate words and phrases, with generalization to items that were not practiced during therapy sessions. Because these children had no or minimal vocal output prior to treatment, the acquisition of speech sounds and word approximations through AMMT represents a critical step in expressive language development in children with autism.

8.
Perception and discrimination of auditory and speech stimuli were investigated, using the evoked potential technique, in children aged 7–9 years with either the receptive (n=6) or expressive (n=5) type of specific language impairment and in 7 healthy age-matched controls. The measurements were performed with a 32-channel Neuroscan electroencephalographic system. Two types of stimuli were applied: pure tones (1 kHz and 2 kHz) and double syllables consisting of one consonant and one vowel characteristic of the Croatian language. The stimuli were presented in an oddball paradigm requiring a conscious reaction from the subjects. Latencies and amplitudes of the P1, N1, P2, N2, P3, N4, and SW waves were analyzed, as well as reaction time and number of responses. No statistically significant difference was found between children with specific language impairment and the control group in average response time or number of responses to the tone burst or double syllable. Analysis of variance of all variables showed statistically significant differences in P3 and SW wave latencies after double-syllable stimulation, in P3 and N4 wave latencies after target stimulation, in P2 and SW wave amplitudes, and in N1 wave amplitude after pure-tone stimulation. Our study showed that children with speech and language disorders take longer to perceive and discriminate both tonal and speech auditory stimuli than children with typical speech and language development.

9.
Evoked potential audiometry and brain-stem auditory evoked potentials were evaluated in 15 patients with systemic brucellosis in whom brucella meningitis was suspected clinically. In 8 patients cerebrospinal fluid (CSF) was abnormal with high brucella titre, and evoked potentials were abnormal in all of them. In 7 patients the CSF was normal and evoked potentials were also normal. Brain-stem auditory evoked potential abnormalities were categorised into 4 types: (1) abnormal wave I, (2) abnormal wave V, both irreversible, (3) prolonged I–III interpeak latencies, and (4) prolonged I–V interpeak latencies, both reversible. These findings are of important diagnostic value and correlate well with the clinical features, aetiopathogenesis and final outcome.

10.
Wang XD, Gu F, He K, Chen LH, Chen L. PLoS ONE 2012, 7(1): e30027

Background

Extraction of linguistically relevant auditory features is critical for speech comprehension in complex auditory environments, in which the relationships between acoustic stimuli are often abstract and constant while the stimuli per se are varying. These relationships are referred to as the abstract auditory rule in speech and have been investigated for their underlying neural mechanisms at an attentive stage. However, the issue of whether or not there is a sensory intelligence that enables one to automatically encode abstract auditory rules in speech at a preattentive stage has not yet been thoroughly addressed.

Methodology/Principal Findings

We chose Chinese lexical tones for the current study because they help to define word meaning and hence facilitate the construction of an abstract auditory rule in a speech sound stream. We continuously presented native Chinese speakers with Chinese vowels differing in formant, intensity, and level of pitch to construct a complex and varying auditory stream. In this stream, most of the sounds shared flat lexical tones to form an embedded abstract auditory rule. Occasionally the rule was randomly violated by sounds with a rising or falling lexical tone. The results showed that the violation of the abstract auditory rule of lexical tones evoked a robust preattentive auditory response, as revealed by whole-head electrical recordings of the mismatch negativity (MMN), though none of the subjects acquired explicit knowledge of the rule or became aware of the violation.

Conclusions/Significance

Our results demonstrate that there is an auditory sensory intelligence in the perception of Chinese lexical tones. The existence of this intelligence suggests that humans can automatically extract abstract auditory rules in speech at a preattentive stage to ensure speech communication in complex and noisy auditory environments without drawing on conscious resources.

11.
Field potentials have been recorded in the torus semicircularis of the toad, Bufo marinus, in response to brief tones presented in the free field. The amplitude of the potentials varied with the frequency of the stimulus and location of the electrode along the rostro-caudal axis of the torus. All frequencies in the auditory range evoked largest potentials when the stimulus was located in the contralateral auditory field. Potentials evoked by low to mid frequencies were largest when the stimulus was located near the line orthogonal to the long axis of the animal. For progressively higher frequencies, the optimal stimulus position was progressively more anterior in the contralateral field. In animals in which one eighth nerve had been sectioned, field potentials evoked by tones of low to mid frequency were less sensitive to changes in stimulus direction than in normal animals. However, the directional sensitivity of field potentials evoked by mid to high frequencies was similar in monaural and normal animals. These observations suggest that binaural neural integration is important in determining the directional sensitivity of field potentials in the torus evoked by low to mid frequencies but not for potentials evoked by mid to high frequencies.

12.
Normal maturation and functioning of the central auditory system affects the development of speech perception and oral language capabilities. This study examined maturation of central auditory pathways as reflected by age-related changes in the P1/N1 components of the auditory evoked potential (AEP). A synthesized consonant-vowel syllable (ba) was used to elicit cortical AEPs in 86 normal children ranging in age from 6 to 15 years and ten normal adults. Distinct age-related changes were observed in the morphology of the AEP waveform. The adult response consists of a prominent negativity (N1) at about 100 ms, preceded by a smaller P1 component at about 50 ms. In contrast, the child response is characterized by a large P1 response at about 100 ms. This wave decreases significantly in latency and amplitude up to about 20 years of age. In children, P1 is followed by a broad negativity at about 200 ms which we term N1b. Many subjects (especially older children) also show an earlier negativity (N1a). Both N1a and N1b latencies decrease significantly with age. Amplitudes of N1a and N1b do not show significant age-related changes. All children have the N1b; however, the frequency of occurrence of N1a increases with age. Data indicate that the child P1 develops systematically into the adult response; however, the relationship of N1a and N1b to the adult N1 is unclear. These results indicate that maturational changes in the central auditory system are complex and extend well into the second decade of life.

13.
Short-, middle- and long-latency auditory evoked potentials (SAEPs, MAEPs and LAEPs) were examined in 12 subjects with Down's syndrome and in 12 age-matched normal subjects. In comparison with the normal subjects, Down subjects showed shorter latencies for SAEP peaks II, III, IV and V (and correspondingly shorter interpeak intervals I–II and I–III) so long as stimulus intensity was at least 45 dB SL. The MAEP peak Na had a longer latency in Down subjects than in normal subjects, but not the Pa latency. In passive oddball experiments for LAEPs, the latencies of all components from N1 to P3 were progressively longer in Down subjects, and the N2-P3 amplitude increased slightly between the first and fourth blocks of stimuli (whereas in the normal subjects it decreased). These alterations in auditory evoked potentials, which may correlate with cerebral alterations in organization and responsiveness responsible for deficient information processing, may constitute an electrophysiological pattern that is characteristic of Down's syndrome.

14.
We studied auditory short-latency brainstem and long-latency cortical evoked potentials (EPs) in 62 healthy children and 126 children with spastic forms of cerebral palsy (CP): spastic tetraparesis, spastic diplegia, and left- and right-side hemiplegias. An increase in the thresholds of audibility (independently of the CP form) was the most typical disturbance of hearing function revealed by the analysis of EPs recorded in children suffering from CP. Disturbances in the transmission of afferent impulses in the brainstem structures of the auditory system and disorders in the perception of different tones within the speech frequency range were also rather frequent. Modifications of the brainstem and cortical auditory EPs typical of different CP forms, in particular hemiplegias, are described. It is demonstrated that recording and analysis of EPs allow one to diagnose, in children with CP, disorders of the hearing function that in many cases are of a subclinical nature. This technique allows clinicians to examine the youngest children (when verbal contact with the child is difficult or impossible), to study brainstem EPs, and to obtain more objective data; these are significant advantages compared with subjective audiometry. Neirofiziologiya/Neurophysiology, Vol. 36, No. 4, pp. 306–312, July–August, 2004.

15.
Dynamic time warping is a procedure whereby portions of a temporal sequence of values are stretched or shrunk to make it similar to another sequence. This procedure can be used to align the brain-stem auditory evoked potentials recorded from different subjects prior to averaging. The resultant warp-average more closely resembles the wave form of a typical subject than the conventional average. Dynamic time warping can also be used to compare one brain-stem auditory evoked potential to another. This comparison can show the differences that result from changes in a stimulus parameter such as intensity or repetition rate. When a patient's wave form is compared to a normal template, warping can identify the peaks in the patient's wave form that correspond most closely to the peaks in the normal template. Compared to an experienced human interpreter, warping is very accurate in identifying the waves of normal brain-stem auditory evoked potentials (error rate between 0 and 4%) and reasonably accurate in identifying the peaks in abnormal wave forms (error rate between 3 and 18%).
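The stretch-and-shrink alignment described here is conventionally computed with a dynamic-programming recurrence over an accumulated-cost matrix. The sketch below is a minimal, generic version of that recurrence, not the authors' implementation: the absolute-difference local cost and the toy latency-shifted waveforms are illustrative assumptions.

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic-time-warping distance between two 1-D sequences."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    n, m = len(a), len(b)
    # D[i, j] = cost of the best warping path aligning a[:i] with b[:j].
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Extend the cheapest of the three allowed warping steps:
            # stretch a, stretch b, or advance both together.
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# A latency-shifted copy of a waveform is closer under DTW than under
# rigid point-by-point comparison, which is why warping helps align
# evoked potentials whose peak latencies differ across subjects.
t = np.linspace(0, 2 * np.pi, 100)
wave = np.sin(t)
shifted = np.sin(t - 0.5)  # same shape, shifted in time
print(dtw_distance(wave, shifted) < np.abs(wave - shifted).sum())  # True
```

The backtracked path through `D` (not shown) is what identifies which sample in one waveform corresponds to which in the other, i.e. the peak matching the abstract describes.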

16.
The aims of this study were (1) to document the recognition performance of environmental sounds (ESs) in Mandarin-speaking children with cochlear implants (CIs) and to analyze factors possibly associated with ESs recognition; (2) to examine the relationship between perception of ESs and receptive vocabulary level; and (3) to explore the acoustic factors relevant to perceptual outcomes of daily ESs in pediatric CI users. Forty-seven prelingually deafened children between the ages of 4 and 10 years participated in this study. They were divided into a pre-school group (group A: ages 4–6) and a school-age group (group B: ages 7–10). The Sound Effects Recognition Test (SERT) and the Chinese version of the revised Peabody Picture Vocabulary Test (PPVT-R) were used to assess auditory perception ability. The average correct percentage on the SERT was 61.2% in the preschool group and 72.3% in the older group, with no significant difference between the two groups. The ESs recognition performance of children with CIs was poorer than that of their hearing peers (90% on average). No correlation existed between ESs recognition and receptive vocabulary comprehension. Two predictive factors, pre-implantation residual hearing and duration of CI usage, were found to be associated with recognition performance of daily-encountered ESs. Acoustically, sounds with distinct temporal patterning were easier for children with CIs to identify. In conclusion, we have demonstrated that ESs recognition is not easy for children with CIs and that only a low correlation existed between linguistic sounds and ESs recognition in these subjects. If sounds other than speech are given little emphasis in routine verbal/oral habilitation programs, children with CIs can develop ESs recognition only through natural exposure to daily-encountered auditory stimuli. Therefore, task-specific measures beyond speech materials can help capture the full profile of auditory perceptual progress after implantation.

17.
Improving language and literacy is a matter of time
Developmental deficits that affect speech perception increase the risk of language and literacy problems, which can lead to lowered academic and occupational accomplishment. Normal development and disorders of speech perception have both been linked to temporospectral auditory processing speed. Understanding the role of dynamic auditory processing in speech perception and language comprehension has led to the development of neuroplasticity-based intervention strategies aimed at ameliorating language and literacy problems and their sequelae.

18.
Evoked potential extraction usually relies on coherent averaging, which requires many repeated stimuli and therefore long recording sessions. As recording time increases, changes in the subject's physiological state and in the environment alter the normal morphology (waveform, amplitude, and phase) of the evoked potential. By combining independent component analysis and the wavelet transform, and exploiting both temporal and spatial information, we successfully extracted the change in amplitude of the late components of the auditory evoked potential over the course of the experiment, providing a quantitative assessment of how prolonged recording affects the late components. The results show that over a session of about 10 minutes, the amplitude of the late components of the auditory evoked potential decreases by about 40%.
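The ICA-plus-wavelet method itself cannot be reconstructed from the abstract, but the coherent (stimulus-locked) averaging it builds on is simple to sketch. The simulated evoked waveform, trial count, and noise level below are assumptions chosen purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate 200 stimulus-locked trials: a fixed evoked waveform buried in
# noise whose amplitude exceeds the signal's.
n_trials, n_samples = 200, 256
t = np.linspace(0.0, 0.5, n_samples)             # 0-500 ms epoch
evoked = 2.0 * np.exp(-((t - 0.1) / 0.02) ** 2)  # peak near 100 ms
trials = evoked + 3.0 * rng.standard_normal((n_trials, n_samples))

# Coherent averaging: the time-locked evoked response adds linearly
# across trials, while zero-mean noise averages toward zero, improving
# SNR by roughly sqrt(n_trials).
average = trials.mean(axis=0)

single_trial_err = np.abs(trials[0] - evoked).max()
average_err = np.abs(average - evoked).max()
print(average_err < single_trial_err)  # True: noise is strongly suppressed
```

The abstract's point is the limitation of this baseline: because it assumes the evoked waveform is identical on every trial, any drift in amplitude over a long session (the reported ~40% decline) is smeared into the average rather than measured, which motivates trial-resolved methods such as the ICA/wavelet approach.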

19.
It has been proposed that internal simulation of the talking face of visually-known speakers facilitates auditory speech recognition. One prediction of this view is that brain areas involved in auditory-only speech comprehension interact with visual face-movement sensitive areas, even under auditory-only listening conditions. Here, we test this hypothesis using connectivity analyses of functional magnetic resonance imaging (fMRI) data. Participants (17 normal participants, 17 developmental prosopagnosics) first learned six speakers via brief voice-face or voice-occupation training (<2 min/speaker). This was followed by an auditory-only speech recognition task and a control task (voice recognition) involving the learned speakers’ voices in the MRI scanner. As hypothesized, we found that, during speech recognition, familiarity with the speaker’s face increased the functional connectivity between the face-movement sensitive posterior superior temporal sulcus (STS) and an anterior STS region that supports auditory speech intelligibility. There was no difference between normal participants and prosopagnosics. This was expected because previous findings have shown that both groups use the face-movement sensitive STS to optimize auditory-only speech comprehension. Overall, the present findings indicate that learned visual information is integrated into the analysis of auditory-only speech and that this integration results from the interaction of task-relevant face-movement and auditory speech-sensitive areas.

20.
This study examined whether rapid temporal auditory processing, verbal working memory capacity, non-verbal intelligence, executive functioning, musical ability and prior foreign language experience predicted how well native English speakers (N = 120) discriminated Norwegian tonal and vowel contrasts as well as a non-speech analogue of the tonal contrast and a native vowel contrast presented over noise. Results confirmed a male advantage for temporal and tonal processing, and also revealed that temporal processing was associated with both non-verbal intelligence and speech processing. In contrast, effects of musical ability on non-native speech-sound processing and of inhibitory control on vowel discrimination were not mediated by temporal processing. These results suggest that individual differences in non-native speech-sound processing are to some extent determined by temporal auditory processing ability, in which males perform better, but are also determined by a host of other abilities that are deployed flexibly depending on the characteristics of the target sounds.
