Similar Articles
20 similar articles found.
1.

Objective

To analyze speech reading through Internet video calls by profoundly hearing-impaired individuals and cochlear implant (CI) users.

Methods

Speech reading skills of 14 deaf adults and 21 CI users were assessed using the Hochmair-Schulz-Moser (HSM) sentence test. We presented video simulations using different video resolutions (1280×720, 640×480, 320×240, 160×120 px), frame rates (30, 20, 10, 7, 5 frames per second (fps)), speaking rates (three different speakers), webcams (Logitech Pro9000, C600 and C500) and image/sound delays (0–500 ms). All video simulations were presented with and without sound and in two screen sizes. Additionally, scores for a live Skype™ video connection and live face-to-face communication were assessed.

Results

Higher frame rates (>7 fps), higher camera resolutions (>640×480 px) and shorter picture/sound delays (<100 ms) were associated with higher speech perception scores. Scores depended strongly on the speaker but were not influenced by the physical properties of the camera optics or by full-screen mode. Across all 21 CI users, there was a significant median gain of +8.5 percentage points (p = 0.009) in speech perception when visual cues were additionally shown. CI users with poor open-set speech perception scores (n = 11) showed the greatest benefit under combined audio-visual presentation (median gain +11.8 percentage points, p = 0.032).

Conclusion

Webcams have the potential to improve telecommunication for hearing-impaired individuals.
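
As a concrete illustration of the thresholds reported above, here is a minimal Python sketch — a hypothetical helper, not part of the study — that checks a video-call configuration against the reported minimums (frame rate above 7 fps, resolution of at least 640×480 px, picture/sound delay under 100 ms):

```python
def meets_speechreading_minimums(width_px, height_px, fps, delay_ms):
    """Check a video-call configuration against the parameter ranges the
    study associated with higher speech perception scores: frame rate
    above 7 fps, resolution of at least 640x480 px, and picture/sound
    delay below 100 ms. Illustrative only."""
    return (fps > 7
            and width_px >= 640 and height_px >= 480
            and delay_ms < 100)

print(meets_speechreading_minimums(1280, 720, 30, 80))  # True
print(meets_speechreading_minimums(320, 240, 5, 150))   # False
```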

2.
For the perception of the timbre of a musical instrument, the attack time is known to carry crucial information. The first 50 to 150 ms of sound onset reflect the excitation mechanism, which generates the sound. Since auditory processing, and music perception in particular, is known to be hampered in cochlear implant (CI) users, we conducted an electroencephalography (EEG) study with an oddball paradigm to evaluate the processing of small differences in musical sound onset. The first 60 ms of a cornet sound were manipulated in order to examine whether these differences are detected by CI users and normal-hearing (NH) controls, as revealed by auditory evoked potentials (AEPs). Our analysis focused on the N1 as an exogenous component known to reflect physical stimulus properties, as well as on the P2 and the Mismatch Negativity (MMN). Our results revealed different N1 latencies as well as P2 amplitudes and latencies for the onset manipulations in both groups. An MMN could be elicited only in the NH control group. Together with additional findings that suggest an impact of musical training on CI users’ AEPs, our findings support the view that impaired timbre perception in CI users is at least partly due to altered sound onset feature detection.
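
For readers unfamiliar with the analysis, the MMN is conventionally quantified as the deviant-minus-standard difference wave of averaged oddball epochs. The following NumPy sketch illustrates this under stated assumptions (synthetic single-electrode data and a typical 100–250 ms search window; this is not the authors' pipeline):

```python
import numpy as np

def mmn_difference_wave(standard_epochs, deviant_epochs, times,
                        window=(0.100, 0.250)):
    """Compute the MMN as the deviant-minus-standard difference wave.

    standard_epochs, deviant_epochs : (n_trials, n_samples) arrays of
        baseline-corrected EEG epochs from one electrode (e.g. Fz).
    times : (n_samples,) epoch time points in seconds.
    Returns the difference wave and the most negative peak (amplitude,
    latency) inside the typical MMN window.
    """
    diff = deviant_epochs.mean(axis=0) - standard_epochs.mean(axis=0)
    mask = (times >= window[0]) & (times <= window[1])
    idx = np.argmin(diff[mask])          # MMN is a negativity
    return diff, diff[mask][idx], times[mask][idx]

# Demo with synthetic data: 64 trials, 1 s epochs at 500 Hz.
rng = np.random.default_rng(0)
times = np.arange(-0.1, 0.9, 1 / 500)
std = rng.normal(0, 1, (64, times.size))
dev = rng.normal(0, 1, (64, times.size))
dev += -1.5 * np.exp(-((times - 0.17) ** 2) / (2 * 0.02 ** 2))  # fake MMN
_, amp, lat = mmn_difference_wave(std, dev, times)
print(f"MMN peak: {amp:.2f} µV at {lat * 1000:.0f} ms")
```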

3.
Objectives

Previous studies investigating speech perception in noise have typically been conducted with static masker positions. The aim of this study was to investigate the effect of spatial separation of source and masker (spatial release from masking, SRM) in a moving masker setup and to evaluate the impact of adaptive beamforming in comparison with fixed directional microphones in cochlear implant (CI) users.

Design

Speech reception thresholds (SRT) were measured in S0N0 and in a moving masker setup (S0Nmove) in 12 normal hearing participants and 14 CI users (7 subjects bilateral, 7 bimodal with a hearing aid in the contralateral ear). Speech processor settings were a moderately directional microphone, a fixed beamformer, or an adaptive beamformer. The moving noise source was generated by means of wave field synthesis and was smoothly moved in the shape of a half-circle from one ear to the contralateral ear. Noise was presented in either of two conditions: continuous or modulated.

Results

SRTs in the S0Nmove setup were significantly improved compared to the S0N0 setup for both the normal hearing control group and the bilateral group in continuous noise, and for the control group in modulated noise. There was no effect of subject group. A significant effect of directional sensitivity was found in the S0Nmove setup. In the bilateral group, the adaptive beamformer achieved lower SRTs than the fixed beamformer setting. Adaptive beamforming substantially improved SRT in both CI user groups, by about 3 dB (bimodal group) and 8 dB (bilateral group) depending on masker type.

Conclusions

CI users showed SRM that was comparable to that of normal hearing subjects. In listening situations of everyday life with spatial separation of source and masker, directional microphones significantly improved speech perception, with individual improvements of up to 15 dB SNR. Users of bilateral speech processors with both directional microphones obtained the highest benefit.

4.

Objective

To investigate the performance of monaural and binaural beamforming technology with an additional noise reduction algorithm, in cochlear implant recipients.

Method

This experimental study was conducted as a single-subject repeated-measures design within a large German cochlear implant centre. Twelve experienced users of an Advanced Bionics HiRes90K or CII implant with a Harmony speech processor were enrolled. The cochlear implant processor of each subject was connected to one of two bilaterally placed state-of-the-art hearing aids (Phonak Ambra) providing three alternative directional processing options: an omnidirectional setting, an adaptive monaural beamformer, and a binaural beamformer. A further noise reduction algorithm (ClearVoice) was applied to the signal on the cochlear implant processor itself. The speech signal was presented from 0°, and speech-shaped noise was presented from loudspeakers placed at ±70°, ±135° and 180°. The Oldenburg sentence test was used to determine the signal-to-noise ratio at which subjects scored 50% correct.

Results

Both the adaptive and the binaural beamformer were significantly better than the omnidirectional condition (improvements of 5.3±1.2 dB and 7.1±1.6 dB, respectively; p<0.001). The best score was achieved with the binaural beamformer in combination with the ClearVoice noise reduction algorithm, with a significant improvement in SRT of 7.9±2.4 dB (p<0.001) over the omnidirectional-alone condition.

Conclusions

The study showed that the binaural beamformer implemented in the Phonak Ambra hearing aid could be used in conjunction with a Harmony speech processor to produce a substantial average SRT improvement of 7.1 dB. The monaural, adaptive beamformer provided an average SRT improvement of 5.3 dB.
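
The beamformers compared here are proprietary, but the principle behind a directional hearing-aid microphone can be illustrated with a textbook first-order differential (cardioid) beamformer. The sketch below is a generic construction with assumed port spacing and sample rate, not Phonak's implementation; adaptive beamformers extend it by continuously adjusting the rear-path delay or weight to steer the null toward the dominant noise source.

```python
import numpy as np

def cardioid_beamformer(front, rear, fs, mic_spacing=0.012, c=343.0):
    """First-order differential (cardioid) beamformer for two mics.

    Subtracts a delayed copy of the rear-port signal from the front-port
    signal. The internal delay equals the acoustic travel time between
    the ports, which places a spatial null at 180 degrees (behind the
    listener). Real devices use fractional delays and equalisation; the
    integer-sample delay here is a simplification.
    """
    n = int(round(mic_spacing / c * fs))      # inter-port delay in samples
    rear_delayed = np.zeros_like(rear)
    rear_delayed[n:] = rear[:len(rear) - n]
    return front - rear_delayed

# Demo: a 200 Hz tone arriving from behind (rear port hears it first)
# is cancelled at the output.
fs = 48000
n = int(round(0.012 / 343.0 * fs))
s = np.sin(2 * np.pi * 200 * np.arange(0, 0.05, 1 / fs))
rear = s
front = np.concatenate([np.zeros(n), s[:-n]])
print(np.max(np.abs(cardioid_beamformer(front, rear, fs))))  # ~0.0
```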

5.

Rationale

Previous cochlear implant (CI) studies have shown that single-channel amplitude modulation frequency discrimination (AMFD) can be improved when coherent modulation is delivered to additional channels. It is unclear whether the multi-channel advantage is due to increased loudness, multiple envelope representations, or component channels with better temporal processing. Measuring envelope interference may shed light on how modulated channels can be combined.

Methods

In this study, multi-channel AMFD was measured in CI subjects using a 3-alternative forced-choice, non-adaptive procedure (“which interval is different?”). For the reference stimulus, the reference AM (100 Hz) was delivered to all 3 channels. For the probe stimulus, the target AM (101, 102, 104, 108, 116, 132, 164, 228, or 256 Hz) was delivered to 1 of 3 channels, and the reference AM (100 Hz) was delivered to the other 2 channels. The spacing between electrodes was varied to be wide or narrow to test different degrees of channel interaction.

Results

Results showed that CI subjects were highly sensitive to interactions between the reference and target envelopes. However, performance was non-monotonic as a function of target AM frequency. For the wide spacing, there was significantly less envelope interaction when the target AM was delivered to the basal channel. For the narrow spacing, there was no effect of target AM channel. The present data were also compared to a related previous study in which the target AM was delivered to a single channel or to all 3 channels. AMFD was much better with multiple than with single channels whether the target AM was delivered to 1 of 3 or to all 3 channels. For very small differences between the reference and target AM frequencies (2–4 Hz), there was often greater sensitivity when the target AM was delivered to 1 of 3 channels versus all 3 channels, especially for narrowly spaced electrodes.

Conclusions

Besides increased loudness, the present results suggest that multiple envelope representations may also contribute to the multi-channel advantage observed in previous AMFD studies. The different patterns of results for the wide and narrow spacing suggest a peripheral contribution to multi-channel temporal processing. Because the effect of target AM frequency was non-monotonic in this study, adaptive procedures may not be suitable for measuring AMFD thresholds with interfering envelopes. Envelope interactions among multiple channels may be quite complex, depending on the envelope information presented to each channel and the relative independence of the stimulated channels.
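
To make the stimulus construction concrete, the sketch below generates modulation envelopes for one reference and one probe stimulus. Only the AM frequencies come from the Methods; the sinusoidal envelope shape, duration and sampling rate are illustrative assumptions, and the electrode-level carrier pulse trains and current mapping are omitted.

```python
import numpy as np

def am_envelope(f_mod, dur=0.5, fs=16000, depth=1.0):
    """Sinusoidal amplitude-modulation envelope, values in 0..1."""
    t = np.arange(int(dur * fs)) / fs
    return 0.5 * (1.0 + depth * np.sin(2 * np.pi * f_mod * t))

def make_probe(target_channel, f_ref=100.0, f_target=104.0, n_channels=3):
    """Probe stimulus: target AM on one channel, reference AM on the rest."""
    return [am_envelope(f_target if ch == target_channel else f_ref)
            for ch in range(n_channels)]

reference = [am_envelope(100.0) for _ in range(3)]    # all channels at 100 Hz
probe = make_probe(target_channel=0, f_target=108.0)  # 108 Hz on 1 of 3
```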

6.
Evidence of visual-auditory cross-modal plasticity in deaf individuals has been widely reported. Superior visual abilities of deaf individuals have been shown to result in enhanced reactivity to visual events and/or enhanced peripheral spatial attention. The goal of this study was to investigate the association between visual-auditory cross-modal plasticity and speech perception in post-lingually deafened adult cochlear implant (CI) users. Post-lingually deafened adults with CIs (N = 14) and a group of normal-hearing adult controls (N = 12) participated in this study. The CI participants were divided into a good performer group (good CI, N = 7) and a poor performer group (poor CI, N = 7) based on word recognition scores. Visual evoked potentials (VEPs) were recorded from the temporal and occipital cortex to assess reactivity. Visual field (VF) testing was used to assess spatial attention, and Goldmann perimetry measures were analyzed to identify group differences in the VF. The amplitude of the P1 VEP response over the right temporal and occipital cortices was compared among the three groups (control, good CI, poor CI), and the association between VF extent for different stimuli and word perception scores was evaluated. The P1 VEP amplitude recorded from the right temporal cortex was larger in the poorly performing CI users than in the good performers, whereas the P1 amplitude recorded from electrodes near the occipital cortex was smaller in the poorly performing group. P1 VEP amplitude over the right temporal lobe was negatively correlated with speech perception outcomes in the CI participants (r = -0.736, P = 0.003), whereas P1 VEP amplitude recorded near the occipital cortex was positively correlated with speech perception outcomes (r = 0.775, P = 0.001). In the VF analysis, CI users showed a narrowed central VF (the VF for low-intensity stimuli), whereas their far peripheral VF (the VF for high-intensity stimuli) did not differ from that of the controls. In addition, the extent of the central VF was positively correlated with speech perception outcome (r = 0.669, P = 0.009). Persistent visual activation in the right temporal cortex, even after implantation, has a negative effect on outcomes in post-lingually deafened adults. We interpret these results to suggest that insufficient intra-modal (visual) compensation by the occipital cortex may negatively affect outcomes. Based on our results, a narrowed central VF could help identify CI users likely to have poor outcomes with their device.

7.
The objective was to determine whether neural adaptation, one temporal feature of the neural response, can account for the across-subject variability in behavioral measures of temporal processing and speech perception performance in cochlear implant (CI) recipients. Neural adaptation is the phenomenon in which neural responses are strongest at the beginning of the stimulus and decline with stimulus repetition (e.g., over stimulus trains). It is unclear how this temporal property of neural responses relates to psychophysical measures of temporal processing (e.g., gap detection) or to speech perception. Adaptation of the electrical compound action potential (ECAP) was obtained using 1000 pulses per second (pps) biphasic pulse trains presented directly to the electrode. Adaptation of the late auditory evoked potential (LAEP) was obtained using a sequence of 1-kHz tone bursts presented acoustically through the cochlear implant. Behavioral temporal processing was measured using the Random Gap Detection Test at the most comfortable listening level. Consonant-nucleus-consonant (CNC) words and AzBio sentences were also tested. The results showed that both the ECAP and the LAEP display adaptive patterns, with substantial across-subject variability in the amount of adaptation. No correlations were found between the amount of neural adaptation and gap detection thresholds (GDTs) or speech perception scores. Correlations between the degree of neural adaptation and demographic factors showed that CI users with more LAEP adaptation tended to have been implanted at a younger age than those with less LAEP adaptation. The results suggest that neural adaptation, at least this feature alone, cannot account for the across-subject variability in temporal processing ability in CI users. However, the finding that the LAEP adaptive pattern was less prominent in the CI group than in the normal-hearing group may point to an important role for a normal adaptation pattern at the cortical level in speech perception.
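
The "amount of adaptation" discussed here is typically expressed as the normalized decline of the response amplitude over the stimulus train. A minimal sketch with invented amplitude values (one plausible index, not necessarily the authors' exact measure):

```python
import numpy as np

def adaptation_index(amplitudes):
    """Percent decline of the response over a stimulus train.

    amplitudes : per-pulse (ECAP) or per-burst (LAEP) response
    amplitudes in order of presentation. 0 % means no adaptation;
    larger values mean stronger adaptation.
    """
    amps = np.asarray(amplitudes, dtype=float)
    return 100.0 * (amps[0] - amps[-1]) / amps[0]

# e.g. ECAP amplitudes (µV) across successive pulses in a 1000 pps train
print(adaptation_index([120.0, 95.0, 82.0, 76.0, 74.0]))  # ~38 %
```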

8.
A significant fraction of newly implanted cochlear implant recipients use a hearing aid in their non-implanted ear. SCORE bimodal is a sound processing strategy developed for this configuration, aimed at normalising loudness perception and improving binaural loudness balance. Speech perception performance in quiet and noise and sound localisation ability of six bimodal listeners were measured with and without application of SCORE. Speech perception in quiet was measured either with only acoustic, only electric, or bimodal stimulation, at soft and normal conversational levels. For speech in quiet there was a significant improvement with application of SCORE. Speech perception in noise was measured for either steady-state noise, fluctuating noise, or a competing talker, at conversational levels with bimodal stimulation. For speech in noise there was no significant effect of application of SCORE. Modelling of interaural loudness differences in a long-term-average-speech-spectrum-weighted click train indicated that left-right discrimination of sound sources can improve with application of SCORE. As SCORE was found to leave speech perception unaffected or to improve it, it seems suitable for implementation in clinical devices.

9.
Nucleus cochlear implant systems incorporate a fast-acting front-end automatic gain control (AGC), sometimes called a compression limiter. The objective of the present study was to determine the effect of replacing the front-end compression limiter with a newly proposed envelope profile limiter. A secondary objective was to investigate the effect of AGC speed on cochlear implant speech intelligibility. The envelope profile limiter was located after the filter bank and reduced the gain when the largest of the filter bank envelopes exceeded the compression threshold. The compression threshold was set equal to the saturation level of the loudness growth function (i.e. the envelope level that mapped to the maximum comfortable current level), ensuring that no envelope clipping occurred. To preserve the spectral profile, the same gain was applied to all channels. Experiment 1 compared sentence recognition with the front-end limiter and with the envelope profile limiter, each with two release times (75 and 625 ms). Six implant recipients were tested in quiet and in four-talker babble noise, at a high presentation level of 89 dB SPL. Overall, release time had a larger effect than the AGC type. With both AGC types, speech intelligibility was lower for the 75 ms release time than for the 625 ms release time. With the shorter release time, the envelope profile limiter provided higher group mean scores than the front-end limiter in quiet, but there was no significant difference in noise. Experiment 2 measured sentence recognition in noise as a function of presentation level, from 55 to 89 dB SPL. The envelope profile limiter with 625 ms release time yielded better scores than the front-end limiter with 75 ms release time. A take-home study showed no clear pattern of preferences. It is concluded that the envelope profile limiter is a feasible alternative to a front-end compression limiter.
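
The description above translates naturally into a per-frame gain loop. The following Python sketch is one plausible reading of the envelope profile limiter — instant attack, exponential release, and a single gain shared across channels so the spectral profile is preserved — not the authors' actual implementation:

```python
import numpy as np

def envelope_profile_limiter(envelopes, fs_frame, threshold,
                             release_ms=625.0):
    """Sketch of an envelope profile limiter (after the filter bank).

    envelopes : (n_frames, n_channels) filter-bank envelope levels.
    threshold : compression threshold (the saturation level of the
        loudness growth function), in the same units as envelopes.
    Gain drops instantly when the *largest* channel envelope exceeds
    the threshold and recovers with the given release time; the same
    gain is applied to every channel, preserving the spectral profile.
    """
    alpha = np.exp(-1.0 / (release_ms * 1e-3 * fs_frame))  # release pole
    gain = 1.0
    out = np.empty_like(envelopes)
    for i, frame in enumerate(envelopes):
        peak = frame.max()
        target = min(1.0, threshold / peak) if peak > 0 else 1.0
        if target < gain:
            gain = target                               # instant attack
        else:
            gain = alpha * gain + (1 - alpha) * target  # slow release
        out[i] = gain * frame            # common gain on all channels
    return out

# e.g. 100 frames x 22 channels at a 1 kHz envelope frame rate
env = np.abs(np.random.default_rng(0).normal(0.0, 1.0, (100, 22)))
limited = envelope_profile_limiter(env, fs_frame=1000.0, threshold=1.5)
assert limited.max() <= 1.5 + 1e-9       # no envelope clipping
```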

10.

Objectives

(1) To evaluate the recognition of words, phonemes and lexical tones in audiovisual (AV) and auditory-only (AO) modes in Mandarin-speaking adults with cochlear implants (CIs); (2) to understand the effect of presentation levels on AV speech perception; (3) to examine the effect of hearing experience on AV speech perception.

Methods

Thirteen deaf adults (age = 29.1±13.5 years; 8 male, 5 female) who had used CIs for >6 months and 10 normal-hearing (NH) adults participated in this study. Seven of the CI users were prelingually deaf, and six postlingually deaf. The Mandarin Monosyllabic Word Recognition Test was used to assess recognition of words, phonemes and lexical tones in AV and AO conditions at 3 presentation levels: speech detection threshold (SDT), speech recognition threshold (SRT) and 10 dB SL (re: SRT).

Results

The prelingual group had better phoneme recognition in the AV mode than in the AO mode at SDT and SRT (both p = 0.016), as did the NH group at SDT (p = 0.004). No mode difference was noted in the postlingual group. None of the groups showed significantly different tone recognition between the 2 modes. The prelingual and postlingual groups had significantly better phoneme and tone recognition than the NH group at SDT in the AO mode (p = 0.016 and p = 0.002 for phonemes; p = 0.001 and p<0.001 for tones) but were outperformed by the NH group at 10 dB SL (re: SRT) in both modes (both p<0.001 for phonemes; p<0.001 and p = 0.002 for tones). Recognition scores were significantly correlated with group after controlling for age and sex (p<0.001).

Conclusions

Visual input may help prelingually deaf implantees to recognize phonemes but may not augment Mandarin tone recognition. The effect of presentation level on CI users' AV perception seems minimal. These findings indicate that special considerations are needed in developing audiological assessment protocols and rehabilitation strategies for implantees who speak tonal languages.

11.
The most common complaint of older hearing-impaired (OHI) listeners is difficulty understanding speech in the presence of noise. However, tests of consonant identification and sentence reception threshold (SeRT) provide different perspectives on the magnitude of impairment. Here we quantified speech perception difficulties in 24 OHI listeners in unaided and aided conditions by analyzing (1) consonant-identification thresholds and consonant confusions for 20 onset and 20 coda consonants in consonant-vowel-consonant (CVC) syllables presented at consonant-specific signal-to-noise (SNR) levels, and (2) SeRTs obtained with the Quick Speech in Noise Test (QSIN) and the Hearing in Noise Test (HINT). Compared to older normal-hearing (ONH) listeners, nearly all unaided OHI listeners showed abnormal consonant-identification thresholds, abnormal consonant confusions, and reduced psychometric function slopes. Average elevations in consonant-identification thresholds exceeded 35 dB, correlated strongly with impairments in mid-frequency hearing, and were greater for hard-to-identify consonants. Advanced digital hearing aids (HAs) improved average consonant-identification thresholds by more than 17 dB, with significant HA benefit seen in 83% of OHI listeners. HAs partially normalized consonant-identification thresholds, reduced abnormal consonant confusions, and increased the slope of psychometric functions. Unaided OHI listeners showed much smaller elevations in SeRTs (mean 6.9 dB) than in consonant-identification thresholds, and SeRTs in unaided listening conditions correlated strongly (r = 0.91) with identification thresholds of easily identified consonants. HAs produced minimal SeRT benefit (2.0 dB), with only 38% of OHI listeners showing significant improvement. HA benefit on SeRTs was accurately predicted (r = 0.86) by HA benefit on easily identified consonants. Consonant-identification tests can accurately predict sentence processing deficits and HA benefit in OHI listeners.
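
Consonant-identification thresholds and psychometric-function slopes of the kind reported here are commonly obtained by fitting a logistic function to proportion-correct data and reading off the 50%-correct point. A minimal SciPy sketch with invented data (real analyses typically also model guessing and lapse rates):

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(snr, threshold, slope):
    """Psychometric function: proportion correct vs. SNR (dB)."""
    return 1.0 / (1.0 + np.exp(-slope * (snr - threshold)))

# Hypothetical proportion-correct data for one consonant
snr = np.array([-10.0, -5.0, 0.0, 5.0, 10.0, 15.0])
p_correct = np.array([0.05, 0.15, 0.45, 0.80, 0.95, 0.99])

(threshold, slope), _ = curve_fit(logistic, snr, p_correct, p0=[0.0, 0.5])
print(f"50%-correct threshold: {threshold:.1f} dB SNR, "
      f"slope: {slope:.2f} /dB")
```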

12.
Graphical virtual environments are currently far from accessible to blind users, as their content is mostly visual. This is especially unfortunate as these environments hold great potential for this population for purposes such as safe orientation, education, and entertainment. Previous tools have increased accessibility, but there is still a long way to go. Visual-to-audio sensory-substitution devices (SSDs) can increase accessibility generically by sonifying on-screen content regardless of the specific environment, and they do so without expensive dedicated peripherals like electrode/vibrator arrays. Using SSDs in virtual environments draws on the same skills as using them in the real world, enabling users to train both on the device and on specific environments before visiting them. This could enable more complex, standardized and autonomous SSD training, and new insights into multisensory interaction and the visually-deprived brain. However, whether congenitally blind users, who have never experienced virtual environments, will be able to use this information for successful perception and interaction within them is currently unclear. We tested this using the EyeMusic SSD, which conveys whole-scene visual information, to perform virtual tasks otherwise impossible without vision. Congenitally blind users had to navigate virtual environments and find doors, differentiate between them based on their features (Experiment 1, task 1) and surroundings (Experiment 1, task 2) and walk through them; these tasks were accomplished with a 95% and 97% success rate, respectively. We further explored the reactions of congenitally blind users during their first interaction with a more complex virtual environment than in the previous tasks: walking down a virtual street, recognizing different features of houses and trees, navigating to crosswalks, etc. Users reacted enthusiastically and reported feeling immersed within the environment. They highlighted the potential usefulness of such environments for understanding what visual scenes are supposed to look like and their potential for complex training, and suggested many future environments they wished to experience.
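
The core idea of a visual-to-audio SSD can be sketched in a few lines: sweep the image column by column, mapping row to pitch and pixel brightness to loudness. This is a generic illustration of the principle, not the EyeMusic algorithm (which, among other differences, uses musical scales and conveys colour through timbre):

```python
import numpy as np

def sonify(image, fs=16000, col_dur=0.05, f_lo=220.0, f_hi=1760.0):
    """Minimal visual-to-audio sensory-substitution sketch.

    Sweeps a grayscale image column by column (left to right = time).
    Each row maps to a sine frequency (top = high pitch) and pixel
    brightness maps to that sine's amplitude.
    image : (n_rows, n_cols) array with values in 0..1.
    """
    n_rows, n_cols = image.shape
    freqs = np.geomspace(f_hi, f_lo, n_rows)        # top row = highest pitch
    t = np.arange(int(col_dur * fs)) / fs
    tones = np.sin(2 * np.pi * freqs[:, None] * t)  # (n_rows, n_samples)
    cols = [image[:, c] @ tones / n_rows for c in range(n_cols)]
    audio = np.concatenate(cols)
    return audio / (np.abs(audio).max() + 1e-12)    # normalise to +/-1

# e.g. a bright diagonal line becomes a falling pitch sweep
img = np.eye(32)
audio = sonify(img)
```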

13.
Outer hair cell (OHC) or prestin-based electromotility is an active cochlear amplifier in the mammalian inner ear that can increase hearing sensitivity and frequency selectivity. In situ, Deiters supporting cells are well-coupled by gap junctions and constrain the OHCs standing on the basilar membrane. Here, we report that both electrical and mechanical stimulation of Deiters cells (DCs) can modulate OHC electromotility. There was no direct electrical conductance between the DCs and the OHCs. However, depolarization of DCs reduced the OHC electromotility-associated nonlinear capacitance (NLC) and distortion products. An increase in the turgor pressure of DCs also shifted OHC NLC in the negative voltage direction. Destruction of the cytoskeleton in DCs, or dissociation of the mechanical coupling between DCs and OHCs, abolished these effects, indicating that the modulation occurs through cytoskeletal activation and DC-OHC mechanical coupling rather than via electric field potentials. We also found that changes in gap junctional coupling between DCs induced large membrane potential and current changes in the DCs and shifted OHC NLC. Uncoupling of gap junctions between DCs shifted NLC in the negative direction. These data indicate that DCs not only provide a physical scaffold to support OHCs but can also directly modulate OHC electromotility through DC-OHC mechanical coupling. Our findings reveal a new mechanism by which cochlear supporting cells and gap junctional coupling modulate OHC electromotility, and eventually hearing sensitivity, in the inner ear.
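
For reference, the electromotility-associated NLC mentioned here is conventionally fitted with the first derivative of a two-state Boltzmann function; the form below is the standard one from the OHC literature, with the usual symbol conventions rather than values from this paper:

```latex
C_m(V) = C_{\mathrm{lin}}
  + \frac{Q_{\max}\,\beta\, e^{-\beta (V - V_{1/2})}}
         {\left(1 + e^{-\beta (V - V_{1/2})}\right)^{2}},
\qquad \beta = \frac{ze}{k_B T}
```

Here C_lin is the linear membrane capacitance, Q_max the maximum transferable charge, V_1/2 the voltage at peak NLC, and z the effective valence of the charge movement. Shifts of NLC "in the negative voltage direction", as reported above, correspond to decreases in V_1/2.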

14.
Oscillatory neuronal synchronization between cortical areas has been suggested to constitute a flexible mechanism to coordinate information flow in the human cerebral cortex. However, it remains unclear whether synchronized neuronal activity merely represents an epiphenomenon or whether it is causally involved in the selective gating of information. Here, we combined bilateral high-density transcranial alternating current stimulation (HD-tACS) at 40 Hz with simultaneous electroencephalographic (EEG) recordings to study immediate electrophysiological effects during the selective entrainment of oscillatory gamma-band signatures. We found that interhemispheric functional connectivity was modulated in a predictable, phase-specific way: In-phase stimulation enhanced synchronization, anti-phase stimulation impaired functional coupling. Perceptual correlates of these connectivity changes were found in an ambiguous motion task, which strongly support the functional relevance of long-range neuronal coupling. Additionally, our results revealed a decrease in oscillatory alpha power in response to the entrainment of gamma band signatures. This finding provides causal evidence for the antagonistic role of alpha and gamma oscillations in the parieto-occipital cortex and confirms that the observed gamma band modulations were physiological in nature. Our results demonstrate that synchronized cortical network activity across several spatiotemporal scales is essential for conscious perception and cognition.
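
Interhemispheric phase coupling of the kind manipulated here is commonly quantified with the phase-locking value (PLV) between narrow-band signals. The sketch below shows the generic computation (not necessarily the exact connectivity metric of this study); note that the PLV indexes the consistency of the phase lag, so an anti-phase pair still yields a high PLV with a lag near pi:

```python
import numpy as np
from scipy.signal import hilbert

def plv(x, y):
    """Phase-locking value between two narrow-band signals.

    x, y : band-pass-filtered (e.g. 35-45 Hz) signals from two
    electrodes. PLV is 1 for a perfectly constant phase lag and
    approaches 0 for independent phases.
    """
    phase_diff = np.angle(hilbert(x)) - np.angle(hilbert(y))
    return np.abs(np.mean(np.exp(1j * phase_diff)))

# Demo: two 40 Hz signals in exact anti-phase still phase-lock.
fs = 1000
t = np.arange(0, 2, 1 / fs)
x = np.sin(2 * np.pi * 40 * t)           # 40 Hz "in-phase" signal
y = np.sin(2 * np.pi * 40 * t + np.pi)   # anti-phase copy
print(f"PLV: {plv(x, y):.2f}")           # ~1.0: constant (pi) phase lag
```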

15.
The operation of the mammalian cochlea relies on a mechanical traveling wave that is actively boosted by electromechanical forces in sensory outer hair cells (OHCs). This active cochlear amplifier produces the impressive sensitivity and frequency resolution of mammalian hearing. The cochlear amplifier has inspired scientists since its discovery in the 1970s, and is still not well understood. To explore cochlear electromechanics at the sensory cell/tissue interface, sound-evoked intracochlear pressure and extracellular voltage were measured using a recently developed dual-sensor with a microelectrode attached to a micro-pressure sensor. The resulting coincident in vivo observations of OHC electrical activity, pressure at the basilar membrane and basilar membrane displacement gave direct evidence for power amplification in the cochlea. Moreover, the results showed a phase shift of voltage relative to mechanical responses at frequencies slightly below the peak, near the onset of amplification. Based on the voltage-force relationship of isolated OHCs, the shift would give rise to effective OHC pumping forces within the traveling wave peak. Thus, the shift activates the cochlear amplifier, serving to localize and thus sharpen the frequency region of amplification. These results are the most concrete evidence for cochlear power amplification to date and support OHC somatic forces as its source.


17.
For deaf individuals with residual low-frequency acoustic hearing, combined use of a cochlear implant (CI) and hearing aid (HA) typically provides better speech understanding than either device alone. Because of their coarse spectral resolution, CIs do not provide the fundamental frequency (F0) information that contributes to understanding of tonal languages such as Mandarin Chinese. The HA can provide a good representation of F0 and, depending on the range of aided acoustic hearing, first and second formant (F1 and F2) information. In this study, Mandarin tone, vowel, and consonant recognition in quiet and noise was measured in 12 adult Mandarin-speaking bimodal listeners with the CI-only and with the CI+HA. Tone recognition was significantly better with the CI+HA in noise, but not in quiet. Vowel recognition was significantly better with the CI+HA in quiet, but not in noise. There was no significant difference in consonant recognition between the CI-only and the CI+HA in quiet or in noise. There was a wide range of bimodal benefit, with improvements greater than 20 percentage points in some tests and conditions. The bimodal benefit was compared to subjects’ HA-aided pure-tone average (PTA) thresholds between 250 and 2000 Hz; subjects were divided into two groups: “better” PTA (<50 dB HL) or “poorer” PTA (>50 dB HL). The bimodal benefit differed significantly between groups only for consonant recognition. The bimodal benefit for tone recognition in quiet was significantly correlated with CI experience, suggesting that bimodal CI users learn to better combine low-frequency spectro-temporal information from acoustic hearing with temporal envelope information from electric hearing. Given the small number of subjects in this study (n = 12), further research with Chinese bimodal listeners may provide more information regarding the contributions of acoustic and electric hearing to tonal language perception.
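
The group split used here is straightforward to reproduce: average the aided thresholds between 250 and 2000 Hz and compare the result against 50 dB HL. A small sketch with an invented aided audiogram (the exact audiometric frequencies entering the authors' PTA are an assumption):

```python
import numpy as np

def aided_pta(thresholds_db_hl):
    """Mean aided threshold (dB HL) across the 250-2000 Hz audiogram."""
    return float(np.mean(thresholds_db_hl))

# Hypothetical aided thresholds at 250, 500, 1000, 2000 Hz
pta = aided_pta([30, 40, 50, 60])
group = "better" if pta < 50 else "poorer"
print(f"PTA = {pta:.1f} dB HL -> '{group}' PTA group")  # 45.0 -> 'better'
```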

18.

Objectives

To investigate speech and language outcomes in children with cochlear implants (CIs) who had mutations in common deafness genes, and to compare their performance with that of implanted children without such mutations.

Study Design

Prospective study.

Methods

Patients who received CIs before 18 years of age and had used CIs for more than 3 years were enrolled in this study. All patients underwent mutation screening of three common deafness genes: GJB2, SLC26A4 and the mitochondrial 12S rRNA gene. The outcomes with CIs were assessed at post-implant years 3 and 5 using the Categories of Auditory Performance (CAP) scale, Speech Intelligibility Rating (SIR) scale, speech perception tests and language skill tests.

Results

Forty-eight patients were found to have confirmed mutations in GJB2 or SLC26A4, and 123 patients without detected mutations were included for comparison. Among children who received CIs before 3.5 years of age, patients with GJB2 or SLC26A4 mutations showed significantly higher CAP/SIR scores than those without mutations at post-implant year 3 (p = 0.001 for CAP; p = 0.004 for SIR) and year 5 (p = 0.035 for CAP; p = 0.038 for SIR). By contrast, among children who received CIs after age 3.5, no significant differences were noted in post-implant outcomes between patients with and without mutations (all p > 0.05).

Conclusion

GJB2 and SLC26A4 mutations are associated with good post-implant outcomes. However, their effects on CI outcomes may be modulated by the age at implantation: the association between mutations and CI outcomes is observed in young recipients who received CIs before age 3.5 years but not in older recipients.

19.
20.
It has been proposed that internal simulation of the talking face of visually-known speakers facilitates auditory speech recognition. One prediction of this view is that brain areas involved in auditory-only speech comprehension interact with visual face-movement sensitive areas, even under auditory-only listening conditions. Here, we test this hypothesis using connectivity analyses of functional magnetic resonance imaging (fMRI) data. Participants (17 normal participants, 17 developmental prosopagnosics) first learned six speakers via brief voice-face or voice-occupation training (<2 min/speaker). This was followed by an auditory-only speech recognition task and a control task (voice recognition) involving the learned speakers’ voices in the MRI scanner. As hypothesized, we found that, during speech recognition, familiarity with the speaker’s face increased the functional connectivity between the face-movement sensitive posterior superior temporal sulcus (STS) and an anterior STS region that supports auditory speech intelligibility. There was no difference between normal participants and prosopagnosics. This was expected because previous findings have shown that both groups use the face-movement sensitive STS to optimize auditory-only speech comprehension. Overall, the present findings indicate that learned visual information is integrated into the analysis of auditory-only speech and that this integration results from the interaction of task-relevant face-movement and auditory speech-sensitive areas.
