20 similar documents found
1.
Cochlear implant (CI) users have difficulty understanding speech in noisy listening conditions and perceiving music. Aided residual acoustic hearing in the contralateral ear can mitigate these limitations. The present study examined the contributions of electric and acoustic hearing to speech understanding in noise and melodic pitch perception. Data were collected with the CI only, the hearing aid (HA) only, and both devices together (CI+HA). Speech reception thresholds (SRTs) were adaptively measured for simple sentences in speech babble. Melodic contour identification (MCI) was measured with and without a masker instrument; the fundamental frequency of the masker was varied to be overlapping or non-overlapping with the target contour. Results showed that the CI contributes primarily to bimodal speech perception and that the HA contributes primarily to bimodal melodic pitch perception. In general, CI+HA performance was slightly improved relative to the better ear alone (CI-only) for SRTs but not for MCI, with some subjects experiencing a decrease in bimodal MCI performance relative to the better ear alone (HA-only). Individual performance was highly variable, and the contribution of either device to bimodal perception was both subject- and task-dependent. The results suggest that individualized mapping of CIs and HAs may further improve bimodal speech and music perception.
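As a rough illustration of how an adaptive SRT measurement of this kind can be run, the following minimal Python sketch implements a generic 1-down/1-up staircase that converges near 50% sentence intelligibility; the starting SNR, step size, reversal rule, and the simulated listener are illustrative assumptions, not the procedure used in this study.

    import math
    import random

    def measure_srt(present_trial, start_snr=10.0, step_db=2.0, n_reversals=8):
        # Generic 1-down/1-up adaptive staircase converging near 50% intelligibility.
        # present_trial(snr_db) returns True if the sentence was repeated correctly.
        snr = start_snr
        last_correct = None
        reversals = []
        while len(reversals) < n_reversals:
            correct = present_trial(snr)
            if last_correct is not None and correct != last_correct:
                reversals.append(snr)                 # track direction changes
            last_correct = correct
            snr += -step_db if correct else step_db   # harder after a hit, easier after a miss
        return sum(reversals) / len(reversals)        # SRT = mean SNR at the reversal points

    def toy_listener(snr_db, true_srt=-2.0, slope=0.5):
        # Simulated listener whose probability of a correct response rises around a -2 dB "true" SRT.
        p = 1.0 / (1.0 + math.exp(-slope * (snr_db - true_srt)))
        return random.random() < p

    print("Estimated SRT: %.1f dB SNR" % measure_srt(toy_listener))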
2.
A significant fraction of newly implanted cochlear implant recipients use a hearing aid in their non-implanted ear. SCORE bimodal is a sound processing strategy developed for this configuration, aimed at normalising loudness perception and improving binaural loudness balance. Speech perception performance in quiet and noise and sound localisation ability of six bimodal listeners were measured with and without application of SCORE. Speech perception in quiet was measured either with only acoustic, only electric, or bimodal stimulation, at soft and normal conversational levels. For speech in quiet there was a significant improvement with application of SCORE. Speech perception in noise was measured for either steady-state noise, fluctuating noise, or a competing talker, at conversational levels with bimodal stimulation. For speech in noise there was no significant effect of application of SCORE. Modelling of interaural loudness differences in a long-term-average-speech-spectrum-weighted click train indicated that left-right discrimination of sound sources can improve with application of SCORE. As SCORE was found to leave speech perception unaffected or to improve it, it seems suitable for implementation in clinical devices.
3.
4.
5.
Individual differences in second language (L2) phoneme perception (within the normal population) have been related to speech perception abilities, also observed in the native language, in studies assessing the electrophysiological mismatch negativity (MMN) response. Here, we investigate the brain oscillatory dynamics in the theta band, the spectral correlate of the MMN, that underpin success in phoneme learning. Using previous data obtained in an MMN paradigm, the dynamics of cortical oscillations while perceiving native and unknown phonemes and nonlinguistic stimuli were studied in two groups of participants classified as good and poor perceivers (GPs and PPs) according to their L2 phoneme discrimination abilities. The results showed that, for GPs as compared to PPs, processing of a native phoneme change produced a significant increase in theta power. Stimulus-time-locked event-related spectral perturbation (ERSP) analysis showed differences in the theta band within the MMN time window (between 70 and 240 ms) for the native deviant phoneme. No other significant difference between the two groups was observed for the other phoneme or the nonlinguistic stimuli. The dynamic patterns in the theta band may reflect early automatic change detection for familiar speech sounds in the brain. The behavioral differences between the two groups may reflect individual variations in activating brain circuits at a perceptual level.
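A hedged sketch of how theta-band event-related spectral perturbation can be computed from epoched EEG (band-pass filter plus Hilbert envelope, expressed in dB relative to a pre-stimulus baseline); the sampling rate, band edges, epoch layout, and baseline window are illustrative assumptions rather than the study's actual pipeline.

    import numpy as np
    from scipy.signal import butter, filtfilt, hilbert

    def theta_ersp(epochs, fs=500.0, band=(4.0, 8.0), baseline=(-0.2, 0.0), t0=0.5):
        # Theta-band ERSP (dB vs. pre-stimulus baseline) for epochs shaped (n_trials, n_samples).
        # t0 is the stimulus-onset time in seconds from the start of each epoch.
        b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
        power = np.abs(hilbert(filtfilt(b, a, epochs, axis=1), axis=1)) ** 2
        power = power.mean(axis=0)                       # average over trials
        times = np.arange(power.size) / fs - t0
        base = power[(times >= baseline[0]) & (times < baseline[1])].mean()
        return times, 10 * np.log10(power / base)        # dB change from baseline

    # Toy data: 40 trials of 1 s at 500 Hz with a brief 6-Hz burst ~150 ms after onset.
    rng = np.random.default_rng(0)
    t = np.arange(500) / 500.0 - 0.5
    epochs = rng.standard_normal((40, 500))
    epochs += 2.0 * np.exp(-((t - 0.15) ** 2) / 0.002) * np.sin(2 * np.pi * 6 * t)
    times, ersp = theta_ersp(epochs)
    print("Peak theta ERSP: %.1f dB at %.0f ms" % (ersp.max(), 1000 * times[ersp.argmax()]))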
6.
S. M. Petrov. Human Physiology, 2003, 29(1): 17–20
It was found that, with passbands 50 Hz wide, 100% speech intelligibility is retained in naive subjects when, on average, 950 Hz is removed from each successive 1000-Hz band. Thus, speech is 95% redundant with respect to its spectral content. The parameters of the comb filter were chosen from measurements of speech intelligibility in experienced subjects, such that no subject with normal hearing taking part in the experiment for the first time exhibited 100% intelligibility. Two methods of learning to perceive spectrally deprived speech signals are compared: (1) aurally only and (2) with visual enhancement. In the latter case, speech intelligibility is significantly higher. The possibility of using a spectrally deprived speech signal to develop and assess the efficiency of auditory rehabilitation of implanted patients is discussed.
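A minimal sketch of the kind of comb filtering described above: only a 50-Hz passband is kept out of every 1000-Hz band, so roughly 95% of the spectrum is discarded. The FFT-masking implementation and the placement of the passbands at the bottom of each band are assumptions made for illustration.

    import numpy as np

    def comb_filter(signal, fs, band_width=1000.0, pass_width=50.0):
        # Keep only a pass_width-Hz slice out of every band_width-Hz band (FFT masking).
        spectrum = np.fft.rfft(signal)
        freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
        keep = (freqs % band_width) < pass_width       # e.g. 0-50, 1000-1050, 2000-2050 Hz ...
        return np.fft.irfft(spectrum * keep, n=len(signal))

    # Toy example: 1 s of noise at 16 kHz, filtered so ~5% of the spectrum remains.
    fs = 16000
    noise = np.random.default_rng(1).standard_normal(fs)
    sparse = comb_filter(noise, fs)
    print("RMS before/after:", round(float(np.std(noise)), 3), round(float(np.std(sparse)), 3))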
7.
8.
Jong Ho Won, Il Joon Moon, Sunhwa Jin, Heesung Park, Jihwan Woo, Yang-Sun Cho, Won-Ho Chung, Sung Hwa Hong. PLoS ONE, 2015, 10(10)
Spectrotemporal modulation (STM) detection performance was examined for cochlear implant (CI) users. The test involved discriminating between an unmodulated steady noise and a modulated stimulus. The modulated stimulus presents frequency modulation patterns that change in frequency over time. To examine STM detection performance for different modulation conditions, two temporal modulation rates (5 and 10 Hz) and three spectral modulation densities (0.5, 1.0, and 2.0 cycles/octave) were employed, producing a total of six STM stimulus conditions. To explore how electric hearing constrains STM sensitivity for CI users differently from acoustic hearing, normal-hearing (NH) and hearing-impaired (HI) listeners were also tested on the same tasks. STM detection performance was best in NH subjects, followed by HI subjects. On average, CI subjects showed the poorest performance, but some CI subjects showed high levels of STM detection performance comparable to acoustic hearing. Significant correlations were found between STM detection performance and speech identification performance in quiet and in noise. To understand the relative contribution of spectral and temporal modulation cues to speech perception abilities for CI users, spectral and temporal modulation detection were measured separately and related to STM detection and speech perception performance. The results suggest that slow spectral modulation, rather than slow temporal modulation, may be important for determining speech perception capabilities for CI users. Lastly, test–retest reliability for STM detection was good, with no learning effect. The present study demonstrates that STM detection may be a useful tool to evaluate the ability of CI sound processing strategies to deliver clinically pertinent acoustic modulation information.
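The following sketch generates a generic spectrotemporal ripple of the kind used in STM detection tests, with a spectral modulation density in cycles/octave and a temporal modulation rate in Hz imposed on a bank of log-spaced tone carriers; the carrier count, frequency range, and modulation depth are illustrative assumptions.

    import numpy as np

    def stm_ripple(dur=1.0, fs=16000, rate_hz=5.0, density_cyc_per_oct=1.0,
                   f_lo=400.0, f_hi=6400.0, n_carriers=200, depth=0.9):
        # Moving spectrotemporal ripple: log-spaced tone carriers whose envelopes
        # drift across frequency over time at rate_hz with density_cyc_per_oct.
        t = np.arange(int(dur * fs)) / fs
        octaves = np.linspace(0.0, np.log2(f_hi / f_lo), n_carriers)   # carrier position in octaves
        freqs = f_lo * 2.0 ** octaves
        rng = np.random.default_rng(2)
        sig = np.zeros_like(t)
        for f, x in zip(freqs, octaves):
            env = 1.0 + depth * np.sin(2 * np.pi * (rate_hz * t + density_cyc_per_oct * x))
            sig += env * np.sin(2 * np.pi * f * t + rng.uniform(0, 2 * np.pi))
        return sig / np.max(np.abs(sig))

    ripple = stm_ripple(rate_hz=10.0, density_cyc_per_oct=2.0)   # one of the six conditions above
    print("Generated", ripple.size, "samples")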
9.
David L. Woods, Tanya Arbogast, Zoe Doss, Masood Younus, Timothy J. Herron, E. William Yund. PLoS ONE, 2015, 10(3)
The most common complaint of older hearing-impaired (OHI) listeners is difficulty understanding speech in the presence of noise. However, tests of consonant identification and sentence reception threshold (SeRT) provide different perspectives on the magnitude of impairment. Here we quantified speech perception difficulties in 24 OHI listeners in unaided and aided conditions by analyzing (1) consonant-identification thresholds and consonant confusions for 20 onset and 20 coda consonants in consonant-vowel-consonant (CVC) syllables presented at consonant-specific signal-to-noise ratios (SNRs), and (2) SeRTs obtained with the Quick Speech in Noise Test (QSIN) and the Hearing in Noise Test (HINT). Compared to older normal-hearing (ONH) listeners, nearly all unaided OHI listeners showed abnormal consonant-identification thresholds, abnormal consonant confusions, and reduced psychometric function slopes. Average elevations in consonant-identification thresholds exceeded 35 dB, correlated strongly with impairments in mid-frequency hearing, and were greater for hard-to-identify consonants. Advanced digital hearing aids (HAs) improved average consonant-identification thresholds by more than 17 dB, with significant HA benefit seen in 83% of OHI listeners. HAs partially normalized consonant-identification thresholds, reduced abnormal consonant confusions, and increased the slope of psychometric functions. Unaided OHI listeners showed much smaller elevations in SeRTs (mean 6.9 dB) than in consonant-identification thresholds, and SeRTs in unaided listening conditions correlated strongly (r = 0.91) with identification thresholds of easily identified consonants. HAs produced minimal SeRT benefit (2.0 dB), with only 38% of OHI listeners showing significant improvement. HA benefit on SeRTs was accurately predicted (r = 0.86) by HA benefit on easily identified consonants. Consonant-identification tests can accurately predict sentence processing deficits and HA benefit in OHI listeners.
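To make the notions of an identification threshold and a psychometric-function slope concrete, here is a generic logistic fit of percent-correct data against SNR; the chance-level floor, the invented data points, and the definition of the threshold as the function's midpoint are assumptions, not the authors' exact fitting procedure.

    import numpy as np
    from scipy.optimize import curve_fit

    def logistic(snr, threshold, slope, chance=0.05):
        # Percent correct vs. SNR: rises from a chance floor toward 1.0 around the threshold.
        return chance + (1.0 - chance) / (1.0 + np.exp(-slope * (snr - threshold)))

    # Illustrative consonant-identification data: proportion correct at each SNR (dB).
    snr = np.array([-10.0, -5.0, 0.0, 5.0, 10.0, 15.0])
    p_correct = np.array([0.06, 0.12, 0.35, 0.71, 0.90, 0.97])

    (threshold, slope), _ = curve_fit(logistic, snr, p_correct, p0=[0.0, 0.5])
    print("Identification threshold: %.1f dB SNR, slope: %.2f /dB" % (threshold, slope))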
10.
11.
It has been proposed that internal simulation of the talking face of visually-known speakers facilitates auditory speech recognition. One prediction of this view is that brain areas involved in auditory-only speech comprehension interact with visual face-movement sensitive areas, even under auditory-only listening conditions. Here, we test this hypothesis using connectivity analyses of functional magnetic resonance imaging (fMRI) data. Participants (17 normal participants, 17 developmental prosopagnosics) first learned six speakers via brief voice-face or voice-occupation training (<2 min/speaker). This was followed by an auditory-only speech recognition task and a control task (voice recognition) involving the learned speakers’ voices in the MRI scanner. As hypothesized, we found that, during speech recognition, familiarity with the speaker’s face increased the functional connectivity between the face-movement sensitive posterior superior temporal sulcus (STS) and an anterior STS region that supports auditory speech intelligibility. There was no difference between normal participants and prosopagnosics. This was expected because previous findings have shown that both groups use the face-movement sensitive STS to optimize auditory-only speech comprehension. Overall, the present findings indicate that learned visual information is integrated into the analysis of auditory-only speech and that this integration results from the interaction of task-relevant face-movement and auditory speech-sensitive areas.
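As a hedged, generic illustration of what increased functional connectivity between two regions means at the signal level, the sketch below correlates ROI-averaged BOLD time series and compares the correlation across conditions; this is a plain correlation sketch, not the specific connectivity model (for example, a psychophysiological interaction) that the authors may have used.

    import numpy as np

    def roi_connectivity(bold_roi_a, bold_roi_b):
        # Functional connectivity as the Pearson correlation of two ROI time series.
        return float(np.corrcoef(bold_roi_a, bold_roi_b)[0, 1])

    # Toy example: posterior-STS and anterior-STS time series that share more signal
    # in a "face-learned" condition than in an "occupation-learned" condition.
    rng = np.random.default_rng(3)
    shared = rng.standard_normal(200)
    psts_face = shared + 0.6 * rng.standard_normal(200)
    asts_face = shared + 0.6 * rng.standard_normal(200)
    psts_occ = rng.standard_normal(200)
    asts_occ = rng.standard_normal(200)
    print("face-learned r =", round(roi_connectivity(psts_face, asts_face), 2))
    print("occupation-learned r =", round(roi_connectivity(psts_occ, asts_occ), 2))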
12.
Georgios Mantokoudis, Claudia Dähler, Patrick Dubach, Martin Kompis, Marco D. Caversaccio, Pascal Senn. PLoS ONE, 2013, 8(1)
Objective
To analyze speech reading through Internet video calls by profoundly hearing-impaired individuals and cochlear implant (CI) users.
Methods
Speech reading skills of 14 deaf adults and 21 CI users were assessed using the Hochmair-Schulz-Moser (HSM) sentence test. We presented video simulations using different video resolutions (1280×720, 640×480, 320×240, 160×120 px), frame rates (30, 20, 10, 7, 5 frames per second (fps)), speech velocities (three different speakers), web cameras (Logitech Pro9000, C600 and C500) and image/sound delays (0–500 ms). All video simulations were presented with and without sound and in two screen sizes. Additionally, scores for a live Skype™ video connection and live face-to-face communication were assessed.
Results
Higher frame rate (>7 fps), higher camera resolution (>640×480 px) and shorter picture/sound delay (<100 ms) were associated with increased speech perception scores. Scores were strongly dependent on the speaker but were not influenced by the physical properties of the camera optics or by full-screen mode. There was a significant median gain of +8.5 percentage points (p = 0.009) in speech perception across all 21 CI users when visual cues were additionally shown. CI users with poor open-set speech perception scores (n = 11) showed the greatest benefit under combined audio-visual presentation (median gain +11.8 percentage points, p = 0.032).
Conclusion
Web cameras have the potential to improve telecommunication of hearing-impaired individuals.
13.
14.
The processing of continuous and complex auditory signals such as speech relies on the ability to use statistical cues (e.g. transitional probabilities). In this study, participants heard short auditory sequences composed either of Italian syllables or of bird songs and completed a regularity-rating task. Behaviorally, participants were better at differentiating between levels of regularity in the syllable sequences than in the bird song sequences. Inter-individual differences in sensitivity to regularity for speech stimuli were correlated with variations in surface-based cortical thickness (CT). These correlations were found in several cortical areas, including regions previously associated with statistical structure processing (e.g. bilateral superior temporal sulcus, left precentral sulcus and inferior frontal gyrus), as well as other regions (e.g. left insula, bilateral superior frontal gyrus/sulcus and supramarginal gyrus). In all regions, the correlation was positive, suggesting that thicker cortex is related to higher sensitivity to variations in the statistical structure of auditory sequences. Overall, these results suggest that inter-individual differences in CT within a distributed network of cortical regions involved in statistical structure processing, attention and memory are predictive of the ability to detect statistical structure in auditory speech sequences.
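A schematic of the brain-behavior analysis described above: per-subject regularity-sensitivity scores are correlated with cortical thickness at each vertex of a surface mesh. The data shapes and the uncorrected significance threshold are assumptions for illustration; a real analysis would correct for multiple comparisons.

    import numpy as np
    from scipy.stats import pearsonr

    def ct_behavior_correlation(thickness, scores, alpha=0.001):
        # Vertex-wise Pearson correlation between cortical thickness (n_subjects x n_vertices)
        # and a behavioral sensitivity score (n_subjects,).
        n_vertices = thickness.shape[1]
        r = np.empty(n_vertices)
        p = np.empty(n_vertices)
        for v in range(n_vertices):
            r[v], p[v] = pearsonr(thickness[:, v], scores)
        return r, p < alpha          # correlation map and an (uncorrected) significance mask

    # Toy data: 30 subjects, 1000 vertices, with a true positive association at vertex 0.
    rng = np.random.default_rng(4)
    scores = rng.standard_normal(30)
    thickness = 2.5 + 0.1 * rng.standard_normal((30, 1000))
    thickness[:, 0] += 0.1 * scores
    r, sig = ct_behavior_correlation(thickness, scores)
    print("r at vertex 0: %.2f, significant vertices: %d" % (r[0], sig.sum()))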
15.
Understanding foreign speech is difficult, in part because of unusual mappings between sounds and words. It is known that listeners in their native language can use lexical knowledge (about how words ought to sound) to learn how to interpret unusual speech sounds. We therefore investigated whether subtitles, which provide lexical information, support perceptual learning about foreign speech. Dutch participants, unfamiliar with Scottish and Australian regional accents of English, watched Scottish or Australian English videos with Dutch, English or no subtitles, and then repeated audio fragments of both accents. Repetition of novel fragments was worse after Dutch-subtitle exposure but better after English-subtitle exposure. Native-language subtitles appear to create lexical interference, but foreign-language subtitles assist speech learning by indicating which words (and hence sounds) are being spoken.
16.
Di Chen, Yubing Sun, Madhu S. R. Gudur, Yi-Sing Hsiao, Ziqi Wu, Jianping Fu, Cheri X. Deng. Biophysical Journal, 2015, 108(1): 32–42
The study of mechanotransduction relies on tools that are capable of applying mechanical forces to elicit and assess cellular responses. Here we report a new (to our knowledge) technique, called two-bubble acoustic tweezing cytometry (TB-ATC), for generating spatiotemporally controlled subcellular mechanical forces on live cells by acoustic actuation of paired microbubbles targeted to the cell adhesion receptor integrin. By measuring the ultrasound-induced activities of cell-bound microbubbles and the actin cytoskeleton contractile force responses, we determine that TB-ATC elicits mechanoresponsive cellular changes via cyclic, paired displacements of integrin-bound microbubbles driven by the attractive secondary acoustic radiation force (sARF) between the bubbles in an ultrasound field. We demonstrate the feasibility of dual-mode TB-ATC for both subcellular probing and mechanical stimulation. By exploiting the robust and unique interaction of ultrasound with microbubbles, TB-ATC provides distinct advantages for experimentation and quantification of applied forces and cellular responses for biomechanical probing and stimulation of cells.
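For orientation, the attractive secondary acoustic radiation force between two pulsating bubbles invoked above is classically written as the textbook secondary Bjerknes force; this standard expression is quoted for context and is not taken from the abstract:

    F_{12} = -\frac{\rho}{4\pi d^{2}} \left\langle \dot{V}_{1}(t)\,\dot{V}_{2}(t) \right\rangle

where \rho is the fluid density, d the separation between the bubble centres, V_i(t) the instantaneous bubble volumes, and \langle\cdot\rangle a time average over the acoustic cycle; the force is attractive when the two bubbles pulsate in phase.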
17.
Nonnative speech poses a challenge to speech perception, especially in challenging listening environments. Audiovisual (AV) cues are known to improve native speech perception in noise. The extent to which AV cues benefit nonnative speech perception in noise, however, is much less well understood. Here, we examined native American English-speaking and native Korean-speaking listeners' perception of English sentences produced by a native American English speaker and a native Korean speaker across a range of signal-to-noise ratios (SNRs; −4 to −20 dB) in audio-only and audiovisual conditions. We employed psychometric function analyses to characterize the pattern of AV benefit across SNRs. For native English speech, the largest AV benefit occurred at an intermediate SNR (i.e. −12 dB), but for nonnative English speech, the largest AV benefit occurred at a higher SNR (−4 dB). The psychometric function analyses demonstrated that the AV benefit patterns differed between native and nonnative English speech. The nativeness of the listener exerted negligible effects on the AV benefit across SNRs. However, the nonnative listeners' ability to gain AV benefit in native English speech was related to their proficiency in English. These findings suggest that the native language backgrounds of both the speaker and the listener clearly modulate the optimal use of AV cues in speech recognition.
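A small sketch of how an audiovisual benefit curve can be read off two fitted psychometric functions (audio-only versus audiovisual) to locate the SNR of maximum benefit; the logistic parameters below are invented for illustration and are not the study's fitted values.

    import numpy as np

    def logistic(snr, threshold, slope):
        # Proportion of keywords correct as a function of SNR (dB).
        return 1.0 / (1.0 + np.exp(-slope * (snr - threshold)))

    snrs = np.linspace(-20, -4, 161)
    p_audio = logistic(snrs, threshold=-8.0, slope=0.35)    # audio-only condition
    p_av = logistic(snrs, threshold=-13.0, slope=0.35)      # audiovisual condition
    benefit = p_av - p_audio                                # AV benefit at each SNR
    best = snrs[np.argmax(benefit)]
    print("Largest AV benefit of %.2f at %.0f dB SNR" % (benefit.max(), best))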
18.
Journal of Russian & East European Psychology, 2013, 51(2): 11–17
Two questions remain virtually unexplored in the problem of the significance of speech for perception: the significance of speech for the perception and reproduction of individual aspects of a complex entity (the number of elements it comprises, their color and arrangement), and the features of the connection between words and these elements. The latter question requires some explanation. There are objects whose names we employ very frequently in conversation (table, chair, etc.). There is a particularly close relationship between the visual image of such objects and the words. But at the same time, there are quite a number of objects (certain types of uncommon colors, birds, details of instruments, etc.) whose names many people do not know. Further, certain details have no special names at all (for example, particular details of ornaments). A. G. Ivanov-Smolenskii, in his article "The Interaction of the First and Second Signal Systems Under Certain Physiological and Pathological Conditions" [O vzaimodeistvii pervoi i vtoroi signal'nykh sistem pri nekotorykh fiziologicheskikh i patologicheskikh usloviiakh], The Physiological Journal, USSR Academy of Sciences [Fiziologicheskii zhurnal AN SSSR], 1949, No. 5, wrote: "Some individually distinct part of experience is always found, for a while, to be untransmitted to the second signal system, and not yet subject to verbal interpretation and verbal formulation ('unverbalized')."
19.
One of the putative functions of the medial olivocochlear (MOC) system is to enhance signal detection in noise. The objective of this study was to elucidate the role of the MOC system in speech perception in noise. In normal-hearing human listeners, we examined (1) the association between the magnitude of MOC inhibition and speech-in-noise performance, and (2) the association between MOC inhibition and the amount of contralateral acoustic stimulation (CAS)-induced shift in speech-in-noise acuity. MOC reflex measurements in this study addressed critical measurement issues overlooked in past work by recording relatively low-level, linear click-evoked otoacoustic emissions (CEOAEs), adopting 6 dB signal-to-noise ratio (SNR) criteria, and computing normalized CEOAE differences. We found the normalized index to be a stable measure of MOC inhibition (mean = 17.21%). MOC inhibition was not related to speech-in-noise performance measured without CAS. However, CAS in a speech-in-noise task produced an SNR enhancement (mean = 2.45 dB), and this improvement in speech-in-noise acuity was directly related to the listeners' MOC reflex assayed by CEOAEs. Individuals do not necessarily use the available MOC unmasking characteristic while listening to speech in noise, or do not utilize unmasking to the extent that can be shown by artificial MOC activation. It may be the case that the MOC is not actually used under natural listening conditions and that the higher auditory centers recruit MOC-mediated mechanisms only in specific listening conditions; those conditions remain to be investigated.
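A small sketch of the kind of normalized CEOAE difference used above as an index of MOC inhibition: the percentage drop in CEOAE amplitude with contralateral acoustic stimulation relative to without it. The RMS-amplitude definition and the toy waveforms are assumptions for illustration.

    import numpy as np

    def moc_inhibition_percent(ceoae_without_cas, ceoae_with_cas):
        # Normalized CEOAE difference: percent reduction in CEOAE amplitude with
        # contralateral acoustic stimulation (CAS) relative to the no-CAS baseline.
        amp_without = np.sqrt(np.mean(np.square(ceoae_without_cas)))
        amp_with = np.sqrt(np.mean(np.square(ceoae_with_cas)))
        return 100.0 * (amp_without - amp_with) / amp_without

    # Toy CEOAE waveforms: the CAS condition is attenuated by ~17% in amplitude.
    rng = np.random.default_rng(5)
    baseline = rng.standard_normal(512)
    with_cas = 0.83 * baseline
    print("MOC inhibition: %.1f%%" % moc_inhibition_percent(baseline, with_cas))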
20.
Fawen Zhang, Chelsea Benson, Dora Murphy, Melissa Boian, Michael Scott, Robert Keith, Jing Xiang, Paul Abbas. PLoS ONE, 2013, 8(12)
The objective was to determine whether one of the neural temporal features, neural adaptation, can account for the across-subject variability in behavioral measures of temporal processing and speech perception performance in cochlear implant (CI) recipients. Neural adaptation is the phenomenon in which neural responses are strongest at the beginning of the stimulus and decline with stimulus repetition (e.g., in stimulus trains). It is unclear how this temporal property of neural responses relates to psychophysical measures of temporal processing (e.g., gap detection) or to speech perception. The adaptation of the electrical compound action potential (ECAP) was obtained using 1000 pulses per second (pps) biphasic pulse trains presented directly to the electrode. The adaptation of the late auditory evoked potential (LAEP) was obtained using a sequence of 1-kHz tone bursts presented acoustically, through the cochlear implant. Behavioral temporal processing was measured using the Random Gap Detection Test at the most comfortable listening level. Consonant-nucleus-consonant (CNC) words and AzBio sentences were also tested. The results showed that both the ECAP and the LAEP display adaptive patterns, with substantial across-subject variability in the amount of adaptation. No correlations were found between the amount of neural adaptation and gap detection thresholds (GDTs) or speech perception scores. Correlations between the degree of neural adaptation and demographic factors showed that CI users with more LAEP adaptation were likely to have been implanted at a younger age than CI users with less LAEP adaptation. The results suggested that neural adaptation, at least this feature alone, cannot account for the across-subject variability in temporal processing ability in CI users. However, the finding that the LAEP adaptive pattern was less prominent in the CI group than in the normal-hearing group may suggest an important role of a normal adaptation pattern at the cortical level in speech perception.
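To make the "amount of adaptation" concrete, the sketch below computes a simple adaptation index from per-pulse (or per-burst) response amplitudes in a stimulus train: the percentage decline of the later responses relative to the first. This particular normalization is an illustrative assumption, not necessarily the index used in the study.

    import numpy as np

    def adaptation_index(amplitudes, n_final=3):
        # Percent decline of the mean of the last n_final responses relative to the first
        # response in a stimulus train (ECAP per pulse or LAEP per tone burst).
        amplitudes = np.asarray(amplitudes, dtype=float)
        return 100.0 * (amplitudes[0] - amplitudes[-n_final:].mean()) / amplitudes[0]

    # Toy ECAP amplitude series across a 1000-pps pulse train showing adaptation.
    ecap = [1.00, 0.80, 0.72, 0.68, 0.66, 0.65, 0.64, 0.64]
    print("Adaptation: %.0f%%" % adaptation_index(ecap))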