Similar articles
 Found 20 similar articles (search time: 31 ms)
1.
Spatio-temporal source modeling (STSM) of event-related potentials was used to estimate the loci and characteristics of cortical activity evoked by acoustic stimulation in normal hearing subjects and by electrical stimulation in cochlear implant (CI) subjects. In both groups of subjects, source solutions obtained for the N1/P2 complex were located in the superior half of the temporal lobe in the head model. Results indicate that it may be possible to determine whether stimulation of different implant channels activates different regions of cochleotopically organized auditory cortex. Auditory system activation can be assessed further by examining the characteristics of the source waveforms. For example, subjects whose cochlear implants provided auditory sensations and normal hearing subjects had similar source activity. In contrast, a subject in whom implant activation evoked eyelid movements exhibited different source waveforms. STSM analysis may provide an electrophysiological technique for guiding rehabilitation programs based on the capabilities of the individual implant user and for disentangling the complex response patterns to electrical stimulation of the brain.

2.
The objective was to determine if one of the neural temporal features, neural adaptation, can account for the across-subject variability in behavioral measures of temporal processing and speech perception performance in cochlear implant (CI) recipients. Neural adaptation is the phenomenon in which neural responses are strongest at the beginning of a stimulus and decline with stimulus repetition (e.g., stimulus trains). It is unclear how this temporal property of neural responses relates to psychophysical measures of temporal processing (e.g., gap detection) or speech perception. Adaptation of the electrically evoked compound action potential (ECAP) was measured using 1000 pulses per second (pps) biphasic pulse trains presented directly to the electrode. Adaptation of the late auditory evoked potential (LAEP) was measured using a sequence of 1-kHz tone bursts presented acoustically through the cochlear implant. Behavioral temporal processing was measured using the Random Gap Detection Test at the most comfortable listening level. Consonant-nucleus-consonant (CNC) words and AzBio sentences were also tested. The results showed that both the ECAP and the LAEP display adaptive patterns, with substantial across-subject variability in the amount of adaptation. No correlations were found between the amount of neural adaptation and gap detection thresholds (GDTs) or speech perception scores. Correlations between the degree of neural adaptation and demographic factors showed that CI users with more LAEP adaptation were likely to have been implanted at a younger age than those with less LAEP adaptation. The results suggest that neural adaptation, at least this feature alone, cannot account for the across-subject variability in temporal processing ability in CI users. However, the finding that the LAEP adaptive pattern was less prominent in the CI group than in the normal hearing group may point to an important role for a normal adaptation pattern at the cortical level in speech perception.

3.
Prelingually deafened children with cochlear implants stand a good chance of developing satisfactory speech performance. Nevertheless, their eventual language performance is highly variable and not fully explainable by the duration of deafness and hearing experience. In this study, two groups of cochlear implant users (CI groups) with very good basic hearing abilities but non-overlapping speech performance (very good or very bad speech performance) were matched according to hearing age and age at implantation. We assessed whether these CI groups differed with regard to their phoneme discrimination ability and auditory sensory memory capacity, as suggested by earlier studies. These functions were measured behaviorally and with the Mismatch Negativity (MMN). Phoneme discrimination ability was comparable in the CI group of good performers and matched healthy controls, which were both better than the bad performers. Source analyses revealed larger MMN activity (155–225 ms) in good than in bad performers, which was generated in the frontal cortex and positively correlated with measures of working memory. For the bad performers, this was followed by an increased activation of left temporal regions from 225 to 250 ms with a focus on the auditory cortex. These results indicate that the two CI groups developed different auditory speech processing strategies and stress the role of phonological functions of auditory sensory memory and the prefrontal cortex in positively developing speech perception and production.

4.
The auditory brainstem implant (ABI) provides auditory sensations, recognition of environmental sounds, and aid in spoken communication to about 300 patients worldwide. It is no longer an investigational device but is widely accepted for the treatment of patients who have lost hearing due to bilateral tumors of the hearing nerve, which transmits acoustic information from the cochlea to the brain. Most implanted patients are completely deaf when the implant is switched off. In contrast to cochlear implant users, only a few ABI recipients achieve open-set speech recognition without the help of visual cues. On average, the ABI improves communicative functions such as speech recognition by about 30% compared to lip-reading alone. The task for the coming years is to further improve ABI outcomes by developing new, less invasive operative approaches as well as new hardware and software for the ABI device.

5.

Background

Visual cross-modal re-organization is a neurophysiological process that occurs in deafness. The intact sensory modality of vision recruits cortical areas from the deprived sensory modality of audition. Such compensatory plasticity is documented in deaf adults and animals, and is related to deficits in speech perception performance in cochlear-implanted adults. However, it is unclear whether visual cross-modal re-organization takes place in cochlear-implanted children and whether it may be a source of variability contributing to speech and language outcomes. Thus, the aim of this study was to determine if visual cross-modal re-organization occurs in cochlear-implanted children, and whether it is related to deficits in speech perception performance.

Methods

Visual evoked potentials (VEPs) were recorded via high-density EEG in 41 normal hearing children and 14 cochlear-implanted children, aged 5–15 years, in response to apparent motion and form change. Comparisons of VEP amplitude and latency, as well as source localization results, were conducted between the groups in order to view evidence of visual cross-modal re-organization. Finally, speech perception in background noise performance was correlated to the visual response in the implanted children.

Results

Distinct VEP morphological patterns were observed in both the normal hearing and cochlear-implanted children. However, the cochlear-implanted children demonstrated larger VEP amplitudes and earlier latency, concurrent with activation of right temporal cortex including auditory regions, suggestive of visual cross-modal re-organization. The VEP N1 latency was negatively related to speech perception in background noise for children with cochlear implants.

Conclusion

Our results are among the first to describe cross-modal re-organization of auditory cortex by the visual modality in deaf children fitted with cochlear implants. Our findings suggest that, as a group, children with cochlear implants show evidence of visual cross-modal recruitment, which may be a contributing source of variability in speech perception outcomes with their implant.

6.
A significant fraction of newly implanted cochlear implant recipients use a hearing aid in their non-implanted ear. SCORE bimodal is a sound processing strategy developed for this configuration, aimed at normalising loudness perception and improving binaural loudness balance. Speech perception performance in quiet and noise and sound localisation ability of six bimodal listeners were measured with and without application of SCORE. Speech perception in quiet was measured either with only acoustic, only electric, or bimodal stimulation, at soft and normal conversational levels. For speech in quiet there was a significant improvement with application of SCORE. Speech perception in noise was measured for either steady-state noise, fluctuating noise, or a competing talker, at conversational levels with bimodal stimulation. For speech in noise there was no significant effect of application of SCORE. Modelling of interaural loudness differences in a long-term-average-speech-spectrum-weighted click train indicated that left-right discrimination of sound sources can improve with application of SCORE. As SCORE was found to leave speech perception unaffected or to improve it, it seems suitable for implementation in clinical devices.

7.
Cortical oscillations are likely candidates for segmentation and coding of continuous speech. Here, we monitored continuous speech processing with magnetoencephalography (MEG) to unravel the principles of speech segmentation and coding. We demonstrate that speech entrains the phase of low-frequency (delta, theta) and the amplitude of high-frequency (gamma) oscillations in the auditory cortex. Phase entrainment is stronger in the right and amplitude entrainment is stronger in the left auditory cortex. Furthermore, edges in the speech envelope phase reset auditory cortex oscillations, thereby enhancing their entrainment to speech. This mechanism adapts to the changing physical features of the speech envelope and enables efficient, stimulus-specific speech sampling. Finally, we show that within the auditory cortex, coupling between delta, theta, and gamma oscillations increases following speech edges. Importantly, all couplings (i.e., brain-speech and also within the cortex) attenuate for backward-presented speech, suggesting top-down control. We conclude that segmentation and coding of speech rely on a nested hierarchy of entrained cortical oscillations.

8.
In many countries, a single cochlear implant is offered as a treatment for a bilateral hearing loss. In cases where there is asymmetry in the amount of sound deprivation between the ears, there is a dilemma in choosing which ear should be implanted. In many clinics, the choice of ear has been guided by an assumption that the reorganisation of the auditory pathways caused by longer duration of deafness in one ear is associated with poorer implantation outcomes for that ear. This assumption, however, is mainly derived from studies of early childhood deafness. This study compared outcomes following implantation of the better or poorer ear in cases of long-term hearing asymmetries. Audiological records of 146 adults with bilateral hearing loss using a single hearing aid were reviewed. The unaided ear had 15 to 72 years of unaided severe to profound hearing loss before unilateral cochlear implantation. Ninety-eight received the implant in their long-term sound-deprived ear. A multiple regression analysis was conducted to assess the relative contribution of potential predictors to speech recognition performance after implantation. Duration of bilateral significant hearing loss and the presence of a prelingual hearing loss explained the majority of variance in speech recognition performance following cochlear implantation. For participants with postlingual hearing loss, similar outcomes were obtained by implanting either ear. With prelingual hearing loss, poorer outcomes were obtained when implanting the long-term sound-deprived ear, but the duration of the sound deprivation in the implanted ear did not reliably predict outcomes. Contrary to an apparent clinical consensus, duration of sound deprivation in one ear has limited value in predicting speech recognition outcomes of cochlear implantation in that ear. Outcomes of cochlear implantation are more closely related to the period of time for which the brain is deprived of auditory stimulation from both ears.

9.
Amedi A, Malach R, Pascual-Leone A. Neuron. 2005;48(5):859-872.
Recent studies emphasize the overlap between the neural substrates of visual perception and visual imagery. However, the subjective experiences of imagining and seeing are clearly different. Here we demonstrate that deactivation of auditory cortex (and to some extent of somatosensory and subcortical visual structures) as measured by BOLD functional magnetic resonance imaging unequivocally differentiates visual imagery from visual perception. During visual imagery, auditory cortex deactivation negatively correlates with activation in visual cortex and with the score in the subjective vividness of visual imagery questionnaire (VVIQ). Perception of the world requires the merging of multisensory information so that, during seeing, information from other sensory systems modifies visual cortical activity and shapes experience. We suggest that pure visual imagery corresponds to the isolated activation of visual cortical areas with concurrent deactivation of "irrelevant" sensory processing that could disrupt the image created by our "mind's eye."

10.
It is known from the literature that (1) sounds with complex spectral composition are assessed by summing the partial outputs of the spectral channels; (2) electrical stimuli used in cochlear implant systems bring about the perception of a frequency band; and (3) removal of different parts of the auditory spectrum significantly affects phrase intelligibility. The level of acoustic pressure (AP) at a comfortable loudness level and the phrase intelligibility after comb filtering of a speech signal were measured in normally hearing subjects. Using a software program for spectral transformation of the speech signal, the phrase spectrum was divided into frequency bands of various width and only the bands with odd numbers were summed. In three series, the width of odd bands was 50, 100, or 150 Hz and the width of even bands was varied. The filter period was equal to the sum of the even and odd bands. With the same period, the acoustic pressure of the output signal should be increased to reach the comfortable loudness level of a speech signal passed via the comb filter; the narrower the width of the test bands, the higher the AP increase. With the same width of the test band, the acoustic pressure of the output signal should be increased to reach the comfortable loudness level; the greater the filter period, the higher the increase should be. The speech signal redundancy with respect to its spectral content can be equal to or even exceed 97.5%.
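The band-splitting procedure this abstract describes (divide the spectrum into alternating bands, keep only the odd-numbered ones) can be sketched as a simple FFT-domain comb filter. The sketch below is an illustrative reconstruction, not the study's actual software; the function and parameter names are assumptions.

```python
import numpy as np

def comb_filter(signal, fs, odd_bw, even_bw):
    """Keep only the odd-numbered spectral bands of a signal.

    The spectrum is split into alternating bands of width odd_bw and
    even_bw (in Hz); even-numbered bands are zeroed. Illustrative
    reconstruction of the comb filtering described in the abstract.
    """
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    period = odd_bw + even_bw            # one comb period in Hz
    offset = np.mod(freqs, period)       # position of each bin in its period
    keep = offset < odd_bw               # True inside an odd-numbered band
    return np.fft.irfft(spectrum * keep, n=len(signal))
```

With 100-Hz odd bands and 300-Hz even bands (a 400-Hz filter period), three quarters of the spectrum is removed, the kind of spectral redundancy the study quantifies.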

11.
Ongoing clinical studies on patients recently implanted with the auditory midbrain implant (AMI) into the inferior colliculus (IC) for hearing restoration have shown that these patients do not achieve performance levels comparable to cochlear implant patients. The AMI consists of a single-shank array (20 electrodes) for stimulation along the tonotopic axis of the IC. Recent findings suggest that one major limitation in AMI performance is the inability to sufficiently activate neurons across the three-dimensional (3-D) IC. Unfortunately, there are no currently available 3-D array technologies that can be used for clinical applications. More recently, there has been a new initiative by the European Commission to fund and develop 3-D chronic electrode arrays for science and clinical applications through the NeuroProbes project that can overcome the bulkiness and limited 3-D configurations of currently available array technologies. As part of the NeuroProbes initiative, we investigated whether their new array technology could be potentially used for future AMI patients. Since the NeuroProbes technology had not yet been tested for electrical stimulation in an in vivo animal preparation, we performed experiments in ketamine-anesthetized guinea pigs in which we inserted and stimulated a NeuroProbes array within the IC and recorded the corresponding neural activation within the auditory cortex. We used 2-D arrays for this initial feasibility study since they were already available and were sufficient to access the IC and also demonstrate effective activation of the central auditory system. Based on these encouraging results and the ability to develop customized 3-D arrays with the NeuroProbes technology, we can further investigate different stimulation patterns across the ICC to improve AMI performance.

12.
Objective: To investigate the outcomes of unilateral cochlear implantation (CI) for auditory and speech rehabilitation in preschool children with deafness, and the factors influencing these outcomes. Methods: Seventy-two preschool children who underwent CI at our hospital between January 2017 and December 2017 were enrolled. Clinical data were collected by questionnaire. Factors potentially affecting auditory and speech rehabilitation were subjected to univariate analysis of binary variables against Categories of Auditory Performance (CAP) and Speech Intelligibility Rating (SIR) outcomes, followed by multivariate logistic regression to evaluate treatment effectiveness and the factors influencing rehabilitation. Results: Age at implantation, preoperative mean residual hearing, duration of preoperative hearing aid use, duration of cochlear implant use, and duration of postoperative speech training were significantly correlated with the fold increase in CAP scores (P<0.05); in addition to these factors, duration of preoperative speech training was correlated with the fold increase in SIR scores after treatment (P<0.05). Age at implantation, preoperative mean residual hearing, and duration of preoperative hearing aid use influenced postoperative CAP recovery (P<0.05); age at implantation, duration of preoperative hearing aid use, and duration of preoperative speech training influenced SIR recovery (P<0.05). Conclusion: Age at implantation, preoperative mean residual hearing, duration of preoperative hearing aid use, and duration of preoperative speech training are the main factors affecting postoperative auditory and speech recovery in preschool children with deafness.

13.

Background

Non-pulsatile tinnitus is considered a subjective auditory phantom phenomenon present in 10 to 15% of the population. Tinnitus as a phantom phenomenon is related to hyperactivity and reorganization of the auditory cortex. Magnetoencephalography studies demonstrate a correlation between gamma band activity in the contralateral auditory cortex and the presence of tinnitus. The present study aims to investigate the relation between objective gamma-band activity in the contralateral auditory cortex and subjective tinnitus loudness scores.

Methods and Findings

In unilateral tinnitus patients (N = 15; 10 right, 5 left) source analysis of resting state electroencephalographic gamma band oscillations shows a strong positive correlation with Visual Analogue Scale loudness scores in the contralateral auditory cortex (max r = 0.73, p<0.05).

Conclusion

Auditory phantom percepts thus show similar sound level dependent activation of the contralateral auditory cortex as observed in normal audition. In view of recent consciousness models and tinnitus network models, these results suggest that tinnitus loudness is coded by gamma band activity in the contralateral auditory cortex but that this activity might not, by itself, be responsible for tinnitus perception.

14.
It is well known that damage to the peripheral auditory system causes deficits in tone detection as well as pitch and loudness perception across a wide range of frequencies. However, the extent to which the auditory cortex plays a critical role in these basic aspects of spectral processing, especially with regard to speech, music, and environmental sound perception, remains unclear. Recent experiments indicate that primary auditory cortex is necessary for the normally-high perceptual acuity exhibited by humans in pure-tone frequency discrimination. The present study assessed whether the auditory cortex plays a similar role in the intensity domain and contrasted its contribution to sensory versus discriminative aspects of intensity processing. We measured intensity thresholds for pure-tone detection and pure-tone loudness discrimination in a population of healthy adults and a middle-aged man with complete or near-complete lesions of the auditory cortex bilaterally. Detection thresholds in his left and right ears were 16 and 7 dB HL, respectively, within clinically-defined normal limits. In contrast, the intensity threshold for monaural loudness discrimination at 1 kHz was 6.5±2.1 dB in the left ear and 6.5±1.9 dB in the right ear at 40 dB sensation level, well above the means of the control population (left ear: 1.6±0.22 dB; right ear: 1.7±0.19 dB). The results indicate that auditory cortex lowers just-noticeable differences for loudness discrimination by approximately 5 dB but is not necessary for tone detection in quiet. Previous human and Old-world monkey experiments employing lesion-effect, neurophysiology, and neuroimaging methods to investigate the role of auditory cortex in intensity processing are reviewed.

15.
Anatomical studies propose that the primate auditory cortex contains more fields than have actually been functionally confirmed or described. Spatially resolved functional magnetic resonance imaging (fMRI) with carefully designed acoustical stimulation could be ideally suited to extend our understanding of the processing within these fields. However, after numerous experiments in humans, many auditory fields remain poorly characterized. Imaging the macaque monkey is of particular interest as this species has a richer set of anatomical and neurophysiological data to clarify the source of the imaged activity. We functionally mapped the auditory cortex of behaving and of anesthetized macaque monkeys with high resolution fMRI. By optimizing our imaging and stimulation procedures, we obtained robust activity throughout auditory cortex using tonal and band-passed noise sounds. Then, by varying the frequency content of the sounds, spatially specific activity patterns were observed over this region. As a result, the activity patterns could be assigned to many auditory cortical fields, including those whose functional properties were previously undescribed. The results provide an extensive functional tessellation of the macaque auditory cortex and suggest that 11 fields contain neurons tuned for the frequency of sounds. This study provides functional support for a model where three fields in primary auditory cortex are surrounded by eight neighboring “belt” fields in non-primary auditory cortex. The findings can now guide neurophysiological recordings in the monkey to expand our understanding of the processing within these fields. Additionally, this work will improve fMRI investigations of the human auditory cortex.

16.
Chronic tinnitus, the continuous perception of a phantom sound, is a highly prevalent audiological symptom. A promising approach for the treatment of tinnitus is repetitive transcranial magnetic stimulation (rTMS), as this directly affects tinnitus-related brain activity. Several studies indeed show tinnitus relief after rTMS; however, effects are moderate and vary strongly across patients. This may be due to a lack of knowledge regarding how rTMS affects oscillatory activity in tinnitus sufferers and which modulations are associated with tinnitus relief. In the present study we examined the effects of five different stimulation protocols (including sham) by measuring tinnitus loudness and tinnitus-related brain activity with magnetoencephalography before and after rTMS. Changes in oscillatory activity were analysed for the stimulated auditory cortex as well as for the entire brain regarding certain frequency bands of interest (delta, theta, alpha, gamma). In line with the literature, the effects of rTMS on tinnitus loudness varied strongly across patients. This variability was also reflected in the rTMS effects on oscillatory activity. Importantly, strong reductions in tinnitus loudness were associated with increases in alpha power in the stimulated auditory cortex, while an unspecific decrease in gamma and alpha power, particularly in left frontal regions, was linked to an increase in tinnitus loudness. The identification of alpha power increase as the main correlate of tinnitus reduction sheds further light on the pathophysiology of tinnitus. This will hopefully stimulate the development of more effective therapy approaches.

17.
When we speak, we provide ourselves with auditory speech input. Efficient monitoring of speech is often hypothesized to depend on matching the predicted sensory consequences from internal motor commands (forward model) with actual sensory feedback. In this paper we tested the forward model hypothesis using functional Magnetic Resonance Imaging. We administered an overt picture naming task in which we parametrically reduced the quality of verbal feedback by noise masking. Presentation of the same auditory input in the absence of overt speech served as a listening control condition. Our results suggest that a match between predicted and actual sensory feedback results in inhibition or cancellation of auditory activity, because speaking with normal unmasked feedback reduced activity in the auditory cortex compared to listening control conditions. Moreover, during self-generated speech, activation in auditory cortex increased as the feedback quality of the self-generated speech decreased. We conclude that during speaking early auditory cortex is involved in matching external signals with an internally generated model or prediction of sensory consequences, the locus of which may reside in auditory or higher order brain areas. Matching at early auditory cortex may provide a very sensitive monitoring mechanism that highlights speech production errors at very early levels of processing and may efficiently determine the self-agency of speech input.

18.
The mechanisms of selective verbal attention were studied under conditions of simultaneous delivery of speech signals via the visual and auditory channels. The investigation was based on the comparison and synthesis of data obtained by two methods: positron emission tomography (PET) and evoked potentials (EPs). A new approach was developed: complementary tasks were constructed in such a way that, despite principal methodological problems, the same phenomenon could be investigated in one paradigm in EP and PET studies. The results obtained by the two methods are in rather good agreement with respect to topography: the secondary and tertiary areas, as well as the associative brain areas, are involved in attention concentration; that is, selection of verbal information occurs at the level of cognitive processes. The combination of the two complementary methods allowed the processing of sensory information and the brain mechanisms of selective attention to be investigated much more completely: the PET studies contributed to understanding where processing occurs, while the EP method provided insight into how this information is processed within the corresponding cortical areas. The finding that the activation of primary areas of the visual cortex is accompanied by the inhibition of visual information deserves attention. This conclusion can be considered highly significant because of the concordance of the two independent methods. How to interpret it is not yet clear. It is possible that, in the case of primary importance of verbal information and priority of the visual channel for the repression from consciousness of artificially irrelevant information, a safety mechanism is activated: the amplified signal enters the brain cortex, where it is retained in the short-term iconic memory. This enables a reaction to this stimulus (if necessary) in the presence of any additional sign involving selective attention.

19.
The processing of continuous and complex auditory signals such as speech relies on the ability to use statistical cues (e.g. transitional probabilities). In this study, participants heard short auditory sequences composed either of Italian syllables or bird songs and completed a regularity-rating task. Behaviorally, participants were better at differentiating between levels of regularity in the syllable sequences than in the bird song sequences. Inter-individual differences in sensitivity to regularity for speech stimuli were correlated with variations in surface-based cortical thickness (CT). These correlations were found in several cortical areas including regions previously associated with statistical structure processing (e.g. bilateral superior temporal sulcus, left precentral sulcus and inferior frontal gyrus), as well as other regions (e.g. left insula, bilateral superior frontal gyrus/sulcus and supramarginal gyrus). In all regions, this correlation was positive, suggesting that thicker cortex is related to higher sensitivity to variations in the statistical structure of auditory sequences. Overall, these results suggest that inter-individual differences in CT within a distributed network of cortical regions involved in statistical structure processing, attention and memory are predictive of the ability to detect statistical structure in auditory speech sequences.

20.

Background

Brain-machine interfaces (BMIs) involving electrodes implanted into the human cerebral cortex have recently been developed in an attempt to restore function to profoundly paralyzed individuals. Current BMIs for restoring communication can provide important capabilities via a typing process, but unfortunately they are only capable of slow communication rates. In the current study we use a novel approach to speech restoration in which we decode continuous auditory parameters for a real-time speech synthesizer from neuronal activity in motor cortex during attempted speech.

Methodology/Principal Findings

Neural signals recorded by a Neurotrophic Electrode implanted in a speech-related region of the left precentral gyrus of a human volunteer suffering from locked-in syndrome, characterized by near-total paralysis with spared cognition, were transmitted wirelessly across the scalp and used to drive a speech synthesizer. A Kalman filter-based decoder translated the neural signals generated during attempted speech into continuous parameters for controlling a synthesizer that provided immediate (within 50 ms) auditory feedback of the decoded sound. Accuracy of the volunteer's vowel productions with the synthesizer improved quickly with practice, with a 25% improvement in average hit rate (from 45% to 70%) and 46% decrease in average endpoint error from the first to the last block of a three-vowel task.

Conclusions/Significance

Our results support the feasibility of neural prostheses that may have the potential to provide near-conversational synthetic speech output for individuals with severely impaired speech motor control. They also provide an initial glimpse into the functional properties of neurons in speech motor cortical areas.
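The Kalman filter-based decoding described in this abstract can be sketched as a standard linear Kalman filter that maps a frame of neural activity to continuous synthesizer parameters. The class below is a minimal textbook sketch, not the study's actual decoder; all matrix names, dimensions, and noise settings are illustrative assumptions.

```python
import numpy as np

class KalmanDecoder:
    """Minimal linear Kalman filter mapping neural observations to a
    continuous state (e.g., formant parameters for a synthesizer).
    Illustrative sketch; not the decoder used in the study."""

    def __init__(self, A, C, Q, R, x0, P0):
        self.A, self.C = A, C    # state dynamics and observation matrices
        self.Q, self.R = Q, R    # process and observation noise covariances
        self.x, self.P = x0, P0  # state estimate and its covariance

    def step(self, y):
        # Predict: propagate the state through the dynamics model
        x_pred = self.A @ self.x
        P_pred = self.A @ self.P @ self.A.T + self.Q
        # Update: correct the prediction with the observed neural activity y
        S = self.C @ P_pred @ self.C.T + self.R
        K = P_pred @ self.C.T @ np.linalg.inv(S)   # Kalman gain
        self.x = x_pred + K @ (y - self.C @ x_pred)
        self.P = (np.eye(len(self.x)) - K @ self.C) @ P_pred
        return self.x
```

In use, `step` would be called once per neural-data frame (e.g., every 50 ms, matching the feedback latency quoted above) with a vector of firing rates, returning updated synthesizer parameters.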
