Similar Documents
Found 20 similar documents (search time: 31 ms)
1.
Background

Auditory neuropathy (AN) is a recently recognized hearing disorder characterized by intact outer hair cell function, disrupted auditory nerve synchronization, and poor speech perception and recognition. Cochlear implants (CIs) are currently the most promising intervention for improving hearing and speech in individuals with AN. Although previous studies have reported encouraging results, the benefit of CIs varies widely among individuals with AN. The data indicate that these children require different evaluation criteria than children with sensorineural hearing loss. We hypothesized that a hierarchic assessment would be more appropriate for evaluating the benefits of cochlear implantation in individuals with AN.

Methods

Eight prelingual children with AN who received unilateral CIs were included in this study. Hearing sensitivity and speech recognition were evaluated pre- and postoperatively within each subject. The efficacy of cochlear implantation was assessed using a stepwise hierarchic evaluation of four levels: (1) effective audibility, (2) improved speech recognition, (3) effective speech, and (4) effective communication.

Results

Postoperative hearing and speech performance varied among the subjects. According to the hierarchic assessment, all eight subjects reached the primary level of effective audibility, with an average implanted hearing threshold of 43.8 ± 10.2 dB HL. Five subjects (62.5%) attained the level of improved speech recognition, one (12.5%) reached the level of effective speech, and none (0.0%) achieved effective communication.

Conclusion

CIs benefit prelingual children with AN to varying extents. A hierarchic evaluation provides a more suitable method for determining the benefits that individuals with AN are likely to receive from cochlear implantation.
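The stepwise evaluation above can be sketched as a cumulative tally: a subject who reaches level k has, by definition, passed every lower level. A minimal sketch follows; the level names come from the abstract, but the per-subject data are illustrative values chosen only to reproduce the reported attainment rates, not the study's actual measurements.

```python
# Hierarchic (stepwise) evaluation of CI benefit, as described in the abstract.
LEVELS = [
    "effective audibility",
    "improved speech recognition",
    "effective speech",
    "effective communication",
]

# Hypothetical highest level reached by each of 8 subjects (0 = none).
# Chosen to match the reported rates: 8 reach level 1, 5 reach level 2,
# 1 reaches level 3, 0 reach level 4.
highest_level = [1, 1, 1, 2, 2, 2, 2, 3]

def attainment(highest, n_levels):
    """Cumulative counts: a subject who reached level k also passed all levels < k."""
    return [sum(1 for h in highest if h >= k) for k in range(1, n_levels + 1)]

counts = attainment(highest_level, len(LEVELS))
for name, c in zip(LEVELS, counts):
    print(f"{name}: {c}/{len(highest_level)} ({100 * c / len(highest_level):.1f}%)")
```

Run on these illustrative data, the tally reproduces the abstract's 100% / 62.5% / 12.5% / 0.0% attainment pattern.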

2.
Localizing sounds in our environment is one of the fundamental perceptual abilities that enable humans to communicate and to remain safe. Because the acoustic cues necessary for computing source locations consist of differences between the two ears in signal intensity and arrival time, sound localization is fairly poor when only a single ear is available. In adults who become deaf and are fitted with cochlear implants (CIs), sound localization is known to improve when bilateral CIs (BiCIs) are used compared to a single CI. The aim of the present study was to investigate the emergence of spatial hearing sensitivity in children who use BiCIs, with a particular focus on the development of behavioral localization patterns when stimuli are presented in free-field horizontal acoustic space. A new analysis was implemented to quantify the patterns children use to map acoustic space onto a spatially relevant perceptual representation. Children with normal hearing distributed their responses in a manner that demonstrated high spatial sensitivity. In contrast, children with BiCIs tended to classify sound source locations simply as left or right; with increased bilateral hearing experience, they developed a perceptual map of space that was better aligned with the acoustic space. The results indicate experience-dependent refinement of spatial hearing skills in children with CIs. Localization strategies appear to undergo a transition from coarse sound-source categorization to more fine-grained location identification. This may provide evidence for neural plasticity, with implications for training spatial hearing ability in CI users.

3.

Objective

To examine the direct and indirect effects of demographical factors on speech perception and vocabulary outcomes of Mandarin-speaking children with cochlear implants (CIs).

Methods

115 participants who were implanted before the age of 5 and had used their CIs for 1 to 3 years were evaluated using a battery of speech perception and vocabulary tests. Structural equation modeling was used to test the proposed hypotheses.

Results

Early implantation contributed significantly to speech perception outcomes, whereas having undergone a hearing aid trial (HAT) before implantation, maternal educational level (MEL), and having undergone universal newborn hearing screening (UNHS) had indirect effects on speech perception outcomes via their effects on age at implantation. In addition, both age at implantation and MEL had direct and indirect effects on vocabulary skills, while UNHS and HAT had indirect effects on vocabulary outcomes via their effects on age at implantation.

Conclusion

A number of factors had direct and indirect effects on speech perception and vocabulary outcomes in Mandarin-speaking children with CIs, and these factors were not necessarily identical to those reported for their English-speaking counterparts.
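In a path model of this kind, an indirect effect is the product of the coefficients along the mediated path, and the total effect is the direct effect plus the sum of all indirect effects. A minimal hand calculation of that decomposition is sketched below; all coefficient values are hypothetical, chosen only to illustrate the arithmetic, and the study itself fitted a full structural equation model rather than computing paths by hand.

```python
# Toy path-model decomposition: HAT -> age_at_implantation -> speech_perception.
# All numbers are hypothetical illustrations, not estimates from the study.
a = -0.40  # path: HAT on age at implantation (a trial precedes earlier implantation)
b = -0.50  # path: age at implantation on speech perception (later = poorer outcome)
direct = 0.0  # in this sketch, HAT has no direct path to speech perception

indirect = a * b          # effect transmitted through the mediator
total = direct + indirect
print(f"indirect = {indirect:.2f}, total = {total:.2f}")  # indirect = 0.20, total = 0.20
```

The sign logic mirrors the abstract: two negative paths combine into a positive indirect effect, i.e., a pre-implantation HAT is associated with better speech perception because it goes together with earlier implantation.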

4.
Temporal summation was estimated by measuring detection thresholds for pulses with durations of 1–50 ms in the presence of noise maskers. The purpose of the study was to examine the effects of the spectral profiles and intensities of noise maskers on temporal summation, to look for signs of peripheral processing of pulses with various frequency-time structures in auditory responses, and to test whether temporal summation measures could be applied to speech recognition. The central frequencies of the pulses and maskers coincided. The maskers had rippled amplitude spectra of two types: in some maskers the central frequency coincided with a spectral hump, whereas in others it coincided with a spectral dip (so-called on- and off-maskers). When the auditory system resolved the masker ripples, the difference between the detection thresholds for stimuli presented with the two types of maskers was nonzero. Assessing temporal summation and the threshold difference between the on- and off-masker conditions allowed us to draw conclusions about auditory sensitivity and about the resolution of the spectral structure of the maskers (i.e., frequency selectivity) for pulses of various durations in local frequency regions. To estimate the effect of the dynamic properties of hearing on sensitivity and frequency selectivity, we varied the masker intensity. Temporal summation was measured with on- and off-maskers of various intensities in two frequency ranges (2 and 4 kHz) in four subjects with normal hearing and one person with age-related hearing impairment who complained of reduced speech recognition in noise.
Pulses shorter than 10 ms were treated as simple models of consonant sounds, whereas tone pulses longer than 10 ms were treated as simple models of vowel sounds. In subjects with normal hearing, at moderate masker intensities we observed enhanced temporal summation for the short (consonant-like) pulses and improved resolution of the rippled masker spectra for both short and tone (consonant- and vowel-like) pulses. We suppose that the enhanced summation is related to the refractoriness of auditory nerve fibers. In the 4 kHz range, the subject with age-related hearing impairment did not resolve the ripple structure of the maskers in the presence of the short (consonant-like) pulses. We suppose that this impairment was caused by abnormal synchronization of the pulse-evoked responses of auditory nerve fibers, resulting in reduced speech recognition.
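The on- and off-maskers described above can be sketched by imposing a sinusoidal ripple on the log-frequency amplitude spectrum of Gaussian noise, with the ripple phase placing either a hump ("on") or a dip ("off") at the probe frequency. The sketch below is a generic rippled-noise generator with illustrative parameter values; it is not the stimulus-generation procedure of the study.

```python
import numpy as np

def rippled_noise(fs=16000, dur=0.5, fc=2000.0, ripple_density=2.0, phase=0.0):
    """Gaussian noise with a sinusoidal ripple on its amplitude spectrum
    (log-frequency axis, centered on the probe frequency fc). Illustrative only."""
    n = int(fs * dur)
    spectrum = np.fft.rfft(np.random.default_rng(0).standard_normal(n))
    f = np.fft.rfftfreq(n, 1.0 / fs)
    logf = np.log2(np.maximum(f, 1.0) / fc)        # octaves relative to fc
    envelope = 1.0 + 0.9 * np.cos(2 * np.pi * ripple_density * logf + phase)
    x = np.fft.irfft(spectrum * envelope, n)
    return x / np.max(np.abs(x))                   # normalize to unit peak

on_masker = rippled_noise(phase=0.0)      # spectral hump at fc ("on-masker")
off_masker = rippled_noise(phase=np.pi)   # spectral dip at fc ("off-masker")
```

At the probe frequency the ripple envelope evaluates to 1.9 for the on-masker and 0.1 for the off-masker, so a probe pulse at fc falls on a hump in one condition and in a dip in the other, which is the contrast the threshold-difference measure exploits.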

5.
We have developed a sparse mathematical representation of speech that minimizes the number of active model neurons needed to represent typical speech sounds. The model learns several well-known acoustic features of speech such as harmonic stacks, formants, onsets and terminations, but we also find more exotic structures in the spectrogram representation of sound such as localized checkerboard patterns and frequency-modulated excitatory subregions flanked by suppressive sidebands. Moreover, several of these novel features resemble neuronal receptive fields reported in the Inferior Colliculus (IC), as well as auditory thalamus and cortex, and our model neurons exhibit the same tradeoff in spectrotemporal resolution as has been observed in IC. To our knowledge, this is the first demonstration that receptive fields of neurons in the ascending mammalian auditory pathway beyond the auditory nerve can be predicted based on coding principles and the statistical properties of recorded sounds.
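The core idea of a sparse representation is that each input is reconstructed from only a few active dictionary elements. A minimal greedy sketch of that idea (matching pursuit) is shown below on synthetic data; the study additionally *learned* its dictionary from speech spectrograms, which this sketch does not attempt.

```python
import numpy as np

def matching_pursuit(x, D, n_active=3):
    """Greedy sparse approximation: repeatedly pick the dictionary atom most
    correlated with the residual and subtract its contribution."""
    residual = x.astype(float).copy()
    coeffs = np.zeros(D.shape[1])
    for _ in range(n_active):
        projections = D.T @ residual
        k = int(np.argmax(np.abs(projections)))
        coeffs[k] += projections[k]
        residual -= projections[k] * D[:, k]
    return coeffs, residual

rng = np.random.default_rng(1)
D = rng.standard_normal((64, 128))
D /= np.linalg.norm(D, axis=0)               # unit-norm atoms
x = 2.0 * D[:, 5] - 1.5 * D[:, 40]           # toy "spectrogram slice": two atoms
coeffs, residual = matching_pursuit(x, D, n_active=5)
print(np.count_nonzero(coeffs), np.linalg.norm(residual))
```

With at most five greedy steps the code touches at most five atoms, and the residual norm shrinks at every step, illustrating how few "active model neurons" can carry most of the signal.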

6.
Differences in auditory perception between species are influenced by phylogenetic origin and by the perceptual challenges imposed by the natural environment, such as detecting prey- or predator-generated sounds and communication signals. Bats are well suited for comparative studies on auditory perception since they rely predominantly on echolocation to perceive the world, while their social calls and most environmental sounds have low frequencies. We tested whether hearing sensitivity and stimulus level coding in bats differ between high- and low-frequency ranges by measuring auditory brainstem responses (ABRs) of 86 bats belonging to 11 species. In most species, auditory sensitivity was equally good in both the high- and low-frequency ranges, while amplitude was more finely coded for the higher frequency ranges. Additionally, we conducted a phylogenetic comparative analysis by combining our ABR data with published data on 27 species. Species-specific peaks in hearing sensitivity correlated with the peak frequencies of echolocation calls and pup isolation calls, suggesting that changes in hearing sensitivity evolved in response to frequency changes in echolocation and social calls. Overall, our study provides the most comprehensive comparative assessment of bat hearing capacities to date and highlights the evolutionary pressures acting on their sensory perception.

7.
The present article outlines the contribution of the mismatch negativity (MMN), and its magnetic equivalent MMNm, to our understanding of the perception of speech sounds in the human brain. MMN data indicate that each sound, both speech and non-speech, develops its neural representation corresponding to the percept of this sound in the neurophysiological substrate of auditory sensory memory. The accuracy of this representation, determining the accuracy of the discrimination between different sounds, can be probed with MMN separately for any auditory feature or stimulus type such as phonemes. Furthermore, MMN data show that the perception of phonemes, and probably also of larger linguistic units (syllables and words), is based on language-specific phonetic traces developed in the posterior part of the left-hemisphere auditory cortex. These traces serve as recognition models for the corresponding speech sounds in listening to speech.

8.
Several acoustic cues contribute to auditory distance estimation. Nonacoustic cues, including familiarity, may also play a role. We tested participants' ability to distinguish the distances of acoustically similar sounds that differed in familiarity. Participants were better able to judge the distances of familiar sounds. Electroencephalographic (EEG) recordings collected while participants performed this auditory distance judgment task revealed that several cortical regions responded in different ways depending on sound familiarity. Surprisingly, these differences were observed in auditory cortical regions as well as other cortical regions distributed throughout both hemispheres. These data suggest that learning about subtle, distance-dependent variations in complex speech sounds involves processing in a broad cortical network that contributes both to speech recognition and to how spatial information is extracted from speech.

9.
Humans can recognize spoken words with unmatched speed and accuracy. Hearing the initial portion of a word such as "formu…" is sufficient for the brain to identify "formula" from the thousands of other words that partially match. Two alternative computational accounts propose that partially matching words (1) inhibit each other until a single word is selected ("formula" inhibits "formal" by lexical competition) or (2) are used to predict upcoming speech sounds more accurately (segment prediction error is minimal after sequences like "formu…"). To distinguish these theories we taught participants novel words (e.g., "formubo") that sound like existing words ("formula") on two successive days. Computational simulations show that knowing "formubo" increases lexical competition when hearing "formu…", but reduces segment prediction error. Conversely, when the sounds in "formula" and "formubo" diverge, the reverse is observed. The time course of magnetoencephalographic brain responses in the superior temporal gyrus (STG) is uniquely consistent with a segment prediction account. We propose a predictive coding model of spoken word recognition in which STG neurons represent the difference between predicted and heard speech sounds. This prediction error signal explains the efficiency of human word recognition and simulates neural responses in auditory regions.
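Segment prediction error can be illustrated as the surprisal (negative log probability) of the next phoneme given the words still consistent with the prefix heard so far. The toy sketch below uses the three words from the abstract's example with a uniform prior over candidates; the study's actual simulations are far richer, so this only conveys the qualitative effect.

```python
import math

def next_segment_surprisal(prefix, lexicon):
    """Surprisal (bits) of each possible next segment, given the words in the
    lexicon that are consistent with the prefix. Uniform prior over candidates."""
    candidates = [w for w in lexicon if w.startswith(prefix) and len(w) > len(prefix)]
    counts = {}
    for w in candidates:
        seg = w[len(prefix)]
        counts[seg] = counts.get(seg, 0) + 1
    return {seg: -math.log2(c / len(candidates)) for seg, c in counts.items()}

# Surprisal of the shared segment 'u' after hearing "form":
before = next_segment_surprisal("form", ["formula", "formal"])
after = next_segment_surprisal("form", ["formula", "formal", "formubo"])
print(before["u"])  # 1.0 bit: 'u' and 'a' are equally likely continuations
print(after["u"])   # ~0.585 bits: learning "formubo" makes 'u' more expected
```

This mirrors the abstract's claim: adding "formubo" to the lexicon lowers the prediction error for the segments it shares with "formula" ("formu…"), even as it adds a lexical competitor, while at the point where the two words diverge the uncertainty instead rises.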

10.
The detection of a change in the modulation pattern of a (target) carrier frequency, fc (for example a change in the depth of amplitude or frequency modulation, AM or FM) can be adversely affected by the presence of other modulated sounds (maskers) at frequencies remote from fc, an effect called modulation discrimination interference (MDI). MDI cannot be explained in terms of interaction of the sounds in the peripheral auditory system. It may result partly from a tendency for sounds which are modulated in a similar way to be perceptually 'grouped', i.e. heard as a single sound. To test this idea, MDI for the detection of a change in AM depth was measured as a function of stimulus variables known to affect perceptual grouping, namely overall duration and onset and offset asynchrony between the masking and target sounds. In parallel experiments, subjects were presented with a series of pairs of sounds, the target alone and the target with maskers, and were asked to rate how clearly the modulation of the target could be heard in the complex mixture. The results suggest that two factors contribute to MDI. One factor is difficulty in hearing a pitch corresponding to the target frequency. This factor appears to be strongly affected by perceptual grouping. Its effects can be reduced or abolished by asynchronous gating of the target and masker. The second factor is a specific difficulty in hearing the modulation of the target, or in distinguishing that modulation from the modulation of other sounds that are present. This factor has effects even under conditions promoting perceptual segregation of the target and masker.
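The AM stimuli in such experiments have a simple form: a carrier at fc whose envelope is (1 + m·sin(2πfm·t)), where m is the modulation depth the listener must detect a change in. A generic sketch with illustrative parameter values follows; it is not the exact stimulus specification of the study.

```python
import numpy as np

def am_tone(fc=1000.0, fm=10.0, m=0.5, fs=16000, dur=0.5):
    """Sinusoidally amplitude-modulated tone: carrier fc, modulator fm, depth m.
    Parameter values are illustrative."""
    t = np.arange(int(fs * dur)) / fs
    return (1.0 + m * np.sin(2 * np.pi * fm * t)) * np.sin(2 * np.pi * fc * t)

target = am_tone(fc=1000.0, m=0.5)     # modulated target at fc
masker = am_tone(fc=4000.0, m=0.5)     # similarly modulated masker remote from fc
mixture = target + masker              # synchronous gating promotes perceptual grouping
```

Gating the masker on and off asynchronously with the target (e.g., starting it earlier) is the manipulation the study used to promote perceptual segregation and thereby reduce part of the MDI.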

11.

Background

Hearing ability is essential for normal speech development; however, the precise mechanisms linking auditory input to the improvement of speaking ability remain poorly understood. Auditory feedback during speech production is believed to play a critical role by providing the nervous system with information about speech outcomes that is used to learn and subsequently fine-tune speech motor output. Surprisingly, few studies have directly investigated such auditory-motor learning in the speech production of typically developing children.

Methodology/Principal Findings

In the present study, we manipulated auditory feedback during speech production in a group of 9- to 11-year-old children, as well as in adults. Following a period of speech practice under altered auditory feedback, compensatory changes in speech production and perception were examined. Consistent with prior studies, the adults exhibited compensatory changes in both their speech motor output and their perceptual representations of speech sound categories. The children exhibited compensatory changes in the motor domain, with a change in speech output similar in magnitude to that of the adults; however, the children showed no reliable compensatory effect on their perceptual representations.

Conclusions

The results indicate that 9- to 11-year-old children, whose speech motor and perceptual abilities are still not fully developed, are nonetheless capable of auditory-feedback-based sensorimotor adaptation, supporting a role for such learning processes in speech motor development. Auditory feedback may play a more limited role, however, in the fine-tuning of children's perceptual representations of speech sound categories.

12.
Listening to speech in the presence of other sounds
Although most research on the perception of speech has been conducted with speech presented without any competing sounds, we almost always listen to speech against a background of other sounds which we are adept at ignoring. Nevertheless, such additional irrelevant sounds can cause severe problems for speech recognition algorithms and for the hard of hearing as well as posing a challenge to theories of speech perception. A variety of different problems are created by the presence of additional sound sources: detection of features that are partially masked, allocation of detected features to the appropriate sound sources and recognition of sounds on the basis of partial information. The separation of sounds is arousing substantial attention in psychoacoustics and in computer science. An effective solution to the problem of separating sounds would have important practical applications.

13.
Prelingually deafened children with cochlear implants stand a good chance of developing satisfactory speech performance. Nevertheless, their eventual language performance is highly variable and not fully explained by the duration of deafness and hearing experience. In this study, two groups of cochlear implant users (CI groups) with very good basic hearing abilities but non-overlapping speech performance (very good or very poor) were matched according to hearing age and age at implantation. We assessed whether these CI groups differed in their phoneme discrimination ability and auditory sensory memory capacity, as suggested by earlier studies. These functions were measured behaviorally and with the mismatch negativity (MMN). Phoneme discrimination ability was comparable between the good performers and matched healthy controls, both of whom outperformed the bad performers. Source analyses revealed larger MMN activity (155–225 ms) in good than in bad performers, which was generated in the frontal cortex and positively correlated with measures of working memory. In the bad performers, this was followed by increased activation of left temporal regions from 225 to 250 ms with a focus on the auditory cortex. These results indicate that the two CI groups developed different auditory speech processing strategies, and they underscore the role of the phonological functions of auditory sensory memory and the prefrontal cortex in the successful development of speech perception and production.

14.
Objective: To investigate the outcomes of unilateral cochlear implantation (CI) for auditory and speech rehabilitation in preschool children with deafness, and the factors that influence them. Methods: 72 preschool children who underwent CI at our hospital between January and December 2017 were enrolled. Clinical data were collected by questionnaire. Factors potentially affecting auditory and speech rehabilitation were analyzed against Categories of Auditory Performance (CAP) and Speech Intelligibility Rating (SIR) scores, first by univariate analysis of dichotomous variables and then by multivariate logistic regression, to evaluate treatment outcomes and identify factors influencing rehabilitation. Results: Age at implantation, mean preoperative residual hearing, duration of preoperative hearing aid use, duration of CI use, and duration of postoperative speech training were significantly correlated with the fold increase in CAP scores (P<0.05); these factors, plus duration of preoperative speech training, were correlated with the fold increase in SIR scores after treatment (P<0.05). Age at implantation, mean preoperative residual hearing, and duration of preoperative hearing aid use influenced postoperative CAP recovery (P<0.05), while age at implantation, duration of preoperative hearing aid use, and duration of preoperative speech training influenced SIR recovery (P<0.05). Conclusion: Age at implantation, mean preoperative residual hearing, duration of preoperative hearing aid use, and duration of preoperative speech training are the main factors affecting postoperative auditory and speech recovery in preschool children with deafness.

15.
It has traditionally been assumed that cochlear implant users de facto perform atypically in audiovisual tasks. However, a recent study that combined an auditory task with visual distractors suggests that only those cochlear implant users who are not proficient at recognizing speech sounds show abnormal audiovisual interactions. The present study aims to reinforce this notion by investigating the audiovisual segregation abilities of cochlear implant users in a visual task with auditory distractors. Speechreading was assessed in two groups of cochlear implant users (proficient and non-proficient at sound recognition), as well as in normal controls. A visual speech recognition task (i.e., speechreading) was administered either in silence or in combination with three types of auditory distractors: (i) noise, (ii) reversed speech, and (iii) unaltered speech. Cochlear implant users proficient at speech recognition performed like normal controls in all conditions, whereas non-proficient users showed significantly different audiovisual segregation patterns in both speech conditions. These results confirm that normal-like audiovisual segregation is possible in highly skilled cochlear implant users and, consequently, that proficient and non-proficient CI users cannot be lumped into a single group. This important feature must be taken into account in further studies of audiovisual interactions in cochlear implant users.

16.
The coding of complex sounds in the early auditory system has a 'standard model' based on the known physiology of the cochlea and the main brainstem pathways. This model accounts for a wide range of perceptual capabilities. It is generally accepted that higher cortical areas encode abstract qualities such as spatial location or speech sound identity. Between the early and late auditory system, the role of primary auditory cortex (A1) is still debated. A1 is clearly much more than a 'whiteboard' of acoustic information: neurons in A1 have complex response properties, showing sensitivity to both low-level and high-level features of sounds.

17.
Maruska KP, Ung US, Fernald RD. PLoS One. 2012;7(5):e37612
Sexual reproduction in all animals depends on effective communication between signalers and receivers. Many fish species, especially the African cichlids, are well known for their bright coloration and the importance of visual signaling during courtship and mate choice, but little is known about what role acoustic communication plays during mating and how it contributes to sexual selection in this phenotypically diverse group of vertebrates. Here we examined acoustic communication during reproduction in the social cichlid fish, Astatotilapia burtoni. We characterized the sounds and associated behaviors produced by dominant males during courtship, tested for differences in hearing ability associated with female reproductive state and male social status, and then tested the hypothesis that female mate preference is influenced by male sound production. We show that dominant males produce intentional courtship sounds in close proximity to females, and that these sounds are spectrally matched to the females' hearing abilities. Females were 2- to 5-fold more sensitive to low-frequency sounds in the spectral range of male courtship sounds when they were sexually receptive compared to during the mouthbrooding parental phase. Hearing thresholds were also negatively correlated with circulating sex-steroid levels in females but positively correlated in males, suggesting a potential role for steroids in reproductive-state auditory plasticity. Behavioral experiments showed that receptive females preferred to affiliate with males that were associated with playback of courtship sounds compared to noise controls, indicating that acoustic information is likely important for female mate choice. These data show for the first time in a Tanganyikan cichlid that acoustic communication is important during reproduction as part of a multimodal signaling repertoire, and that perception of auditory information changes depending on the animal's internal physiological state. Our results highlight the importance of examining non-visual sensory modalities as potential substrates for sexual selection contributing to the incredible phenotypic diversity of African cichlid fishes.

18.
Much of our daily communication occurs in the presence of background noise, compromising our ability to hear. While understanding speech in noise is a challenge for everyone, it becomes increasingly difficult as we age. Although aging is generally accompanied by hearing loss, this perceptual decline cannot fully account for the difficulties experienced by older adults for hearing in noise. Decreased cognitive skills concurrent with reduced perceptual acuity are thought to contribute to the difficulty older adults experience understanding speech in noise. Given that musical experience positively impacts speech perception in noise in young adults (ages 18-30), we asked whether musical experience benefits an older cohort of musicians (ages 45-65), potentially offsetting the age-related decline in speech-in-noise perceptual abilities and associated cognitive function (i.e., working memory). Consistent with performance in young adults, older musicians demonstrated enhanced speech-in-noise perception relative to nonmusicians along with greater auditory, but not visual, working memory capacity. By demonstrating that speech-in-noise perception and related cognitive function are enhanced in older musicians, our results imply that musical training may reduce the impact of age-related auditory decline.

19.
In reverberant rooms with multiple people talking, spatial separation between speech sources improves recognition of attended speech, even though both the head-shadow and interaural-interaction unmasking cues are limited by numerous reflections. It is the perceptual integration between the direct wave and its reflections that bridges the direct-reflection temporal gaps and produces spatial unmasking under reverberant conditions. This study further investigated (1) the temporal dynamics of direct-reflection-integration-based spatial unmasking as a function of reflection delay, and (2) whether these temporal dynamics are correlated with listeners' auditory ability to temporally retain raw acoustic signals (i.e., the fast-decaying primitive auditory memory, PAM). The results showed that recognition of target speech against a speech-masker background is a descending exponential function of the delay of the simulated target reflection. In addition, the temporal extent of PAM is frequency dependent and markedly longer than that for perceptual fusion. More importantly, the temporal dynamics of the speech-recognition function are significantly correlated with the temporal extent of the PAM of low-frequency raw signals. Thus, we propose that a chain process, linking earlier-stage PAM with later-stage correlation computation, perceptual integration, and attention facilitation, plays a role in spatially unmasking target speech under reverberant conditions.
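The descending-exponential relation reported above has the form recognition(t) = a·exp(-t/τ) + c, where t is the reflection delay and τ sets how fast the unmasking benefit decays. The sketch below fits that form to synthetic scores with a crude, dependency-free grid search; all parameter values and "data" are made up for illustration, since the study estimated the function from listeners' actual speech-recognition scores.

```python
import numpy as np

def exp_model(t, a, tau, c):
    """Descending exponential: a * exp(-t / tau) + c."""
    return a * np.exp(-t / tau) + c

# Synthetic "recognition scores" (percent correct) vs. reflection delay (ms).
rng = np.random.default_rng(0)
delays = np.linspace(0, 64, 17)
scores = exp_model(delays, a=30.0, tau=20.0, c=50.0) + rng.normal(0, 0.5, 17)

# Coarse least-squares grid search over (a, tau, c) -- crude but dependency-free.
grid = [(a, tau, c)
        for a in np.arange(20.0, 41.0, 1.0)
        for tau in np.arange(5.0, 41.0, 1.0)
        for c in np.arange(40.0, 61.0, 1.0)]
best = min(grid, key=lambda p: np.sum((scores - exp_model(delays, *p)) ** 2))
print(best)  # close to the generating values (30, 20, 50)
```

The fitted τ is the quantity one would then correlate with the temporal extent of PAM across listeners, as the study did for its measured speech-recognition functions.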

20.
Perception and discrimination of auditory and speech stimuli were investigated, using the evoked potential technique, in children aged 7-9 years with either the receptive (n=6) or the expressive (n=5) type of specific language impairment and in 7 healthy age-matched controls. The measurements were performed with a 32-channel Neuroscan electroencephalographic system. Two types of stimuli were applied: pure tones (1 kHz and 2 kHz) and double syllables, each consisting of one consonant and one vowel characteristic of the Croatian language. The stimuli were presented in an oddball paradigm requiring a conscious reaction from the subjects. Latencies and amplitudes of the P1, N1, P2, N2, P3, N4, and SW waves were analyzed, as well as reaction time and the number of responses. No statistically significant differences were found between children with specific language impairment and the control group in average response time or number of responses to tone bursts or double syllables. Analysis of variance of all variables showed statistically significant differences in P3 and SW wave latencies after double-syllable stimulation, in P3 and N4 wave latencies after target stimulation, in P2 and SW wave amplitudes, and in N1 wave amplitude after pure-tone stimulation. Our study showed that children with speech and language disorders take longer to perceive and discriminate both tonal and speech auditory stimuli than children with typical speech and language development.


Copyright © Beijing Qinyun Technology Development Co., Ltd. 京ICP备09084417号