Similar documents
20 similar documents found (search time: 15 ms)
1.
Cochlear implant (CI) users have difficulty understanding speech in noisy listening conditions and perceiving music. Aided residual acoustic hearing in the contralateral ear can mitigate these limitations. The present study examined contributions of electric and acoustic hearing to speech understanding in noise and melodic pitch perception. Data were collected with the CI only, the hearing aid (HA) only, and both devices together (CI+HA). Speech reception thresholds (SRTs) were adaptively measured for simple sentences in speech babble. Melodic contour identification (MCI) was measured with and without a masker instrument; the fundamental frequency of the masker was varied to be overlapping or non-overlapping with the target contour. Results showed that the CI contributes primarily to bimodal speech perception and that the HA contributes primarily to bimodal melodic pitch perception. In general, CI+HA performance was slightly improved relative to the better ear alone (CI-only) for SRTs but not for MCI, with some subjects experiencing a decrease in bimodal MCI performance relative to the better ear alone (HA-only). Individual performance was highly variable, and the contribution of either device to bimodal perception was both subject- and task-dependent. The results suggest that individualized mapping of CIs and HAs may further improve bimodal speech and music perception.
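Adaptive SRT measurement of the kind mentioned above is typically an up-down staircase on signal-to-noise ratio. The sketch below shows a generic 1-up/1-down track, not the study's actual procedure; the step size, reversal count, and the `trial_correct` callback are illustrative assumptions:

```python
def measure_srt(trial_correct, start_snr=10.0, step=2.0, n_reversals=8):
    """Estimate a speech reception threshold (SRT) with a 1-up/1-down
    adaptive track: lower the SNR after a correct trial, raise it after
    an incorrect one, and average the SNR at the reversal points.

    `trial_correct(snr)` is a hypothetical callback that runs one trial
    at the given SNR (dB) and returns True if the sentence was repeated
    correctly.
    """
    snr = start_snr
    last_direction = None          # +1 = SNR went up, -1 = SNR went down
    reversals = []
    while len(reversals) < n_reversals:
        direction = -1 if trial_correct(snr) else +1
        if last_direction is not None and direction != last_direction:
            reversals.append(snr)  # track changed direction: record a reversal
        last_direction = direction
        snr += direction * step
    return sum(reversals) / len(reversals)
```

Clinical procedures usually vary the step size and score keywords rather than whole sentences; this sketch only illustrates the converging-track idea behind an adaptively measured SRT.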

2.
Some hearing-impaired persons with hearing aids complain of listening difficulty under reverberation. No method, however, is currently available for hearing aid fitting that permits evaluation of the hearing difficulty caused by reverberation. In this study, we produced speech materials with a reverberation time of 2.02 s that mimicked a reverberant environment (a classroom). Speech materials with reverberation times of 0 and 1.01 s were also made. Listening tests were performed with these materials in hearing-impaired subjects and normal-hearing subjects in a soundproof booth. Listening tests were also done in a classroom. Our results showed that the speech material with a reverberation time of 2.02 s yielded decreased listening-test scores in hearing-impaired subjects with both monaural and binaural hearing aids. Similar results were obtained in the reverberant environment. Our findings suggest the validity of using speech materials with different reverberation times to predict the listening performance under reverberation of hearing-impaired persons with hearing aids.

3.
Whereas the use of discrete pitch intervals is characteristic of most musical traditions, the size of the intervals and the way in which they are used is culturally specific. Here we examine the hypothesis that these differences arise because of a link between the tonal characteristics of a culture's music and its speech. We tested this idea by comparing pitch intervals in the traditional music of three tone language cultures (Chinese, Thai and Vietnamese) and three non-tone language cultures (American, French and German) with pitch intervals between voiced speech segments. Changes in pitch direction occur more frequently and pitch intervals are larger in the music of tone compared to non-tone language cultures. More frequent changes in pitch direction and larger pitch intervals are also apparent in the speech of tone compared to non-tone language cultures. These observations suggest that the different tonal preferences apparent in music across cultures are closely related to the differences in the tonal characteristics of voiced speech.

4.
Over the last three years of hearing aid dispensing, it was observed that among 74 subjects fitted with a linear octave frequency transposition (LOFT) hearing aid, 60 reported partial or complete tinnitus suppression during day and night, an effect still lasting after several months or years of daily use. We report in more detail on 38 subjects from whom we obtained quantified measures of tinnitus suppression through visual analog scaling and several additional psychoacoustic and audiometric measures. The long-term suppression seems independent of subject age and of the duration and subjective localization of tinnitus. A small but significant correlation was found with audiogram losses but not with the high-frequency loss slope. Long-term tinnitus suppression was observed for different etiologies, but with a low success rate for sudden deafness. It should be noted that a majority of subjects (23) had a history of noise exposure. Tinnitus suppression started after a few days of LOFT hearing aid use and reached a maximum after a few weeks of daily use. For nine subjects, different amounts of frequency shifting were tried and found more or less successful for long-term tinnitus suppression; no correlation was found with tinnitus pitch. When use of the LOFT hearing aid was stopped, tinnitus reappeared within a day, and after re-using the LOFT aid it disappeared again within a day. For about one third of the 38 subjects, classical amplification or a nonlinear frequency compression aid was also tried, and no such tinnitus suppression was observed. Besides improvements in audiometric sensitivity to high frequencies and in speech discrimination scores, LOFT can be considered a remarkable opportunity to suppress tinnitus over a long time scale. From a pathophysiological viewpoint, these observations seem consistent with a re-attribution of activity to previously deprived cerebral areas corresponding to high-frequency coding.

5.
This study tested the hypothesis that the previously reported advantage of musicians over non-musicians in understanding speech in noise arises from more efficient or robust coding of periodic voiced speech, particularly in fluctuating backgrounds. Speech intelligibility was measured in listeners with extensive musical training, and in those with very little musical training or experience, using normal (voiced) or whispered (unvoiced) grammatically correct nonsense sentences in noise that was spectrally shaped to match the long-term spectrum of the speech, and was either continuous or gated with a 16-Hz square wave. Performance was also measured in clinical speech-in-noise tests and in pitch discrimination. Musicians exhibited enhanced pitch discrimination, as expected. However, no systematic or statistically significant advantage for musicians over non-musicians was found in understanding either voiced or whispered sentences in either continuous or gated noise. Musicians also showed no statistically significant advantage in the clinical speech-in-noise tests. Overall, the results provide no evidence for a significant difference between young adult musicians and non-musicians in their ability to understand speech in noise.

6.
The performance of objective speech and audio quality measures for the prediction of the perceived quality of frequency-compressed speech in hearing aids is investigated in this paper. A number of existing quality measures have been applied to speech signals processed by a hearing aid, which compresses speech spectra along frequency in order to make information contained in higher frequencies audible for listeners with severe high-frequency hearing loss. Quality measures were compared with subjective ratings obtained from normal hearing and hearing impaired children and adults in an earlier study. High correlations were achieved with quality measures computed by quality models that are based on the auditory model of Dau et al., namely, the measure PSM, computed by the quality model PEMO-Q; the measure qc, computed by the quality model proposed by Hansen and Kollmeier; and the linear subcomponent of the HASQI. For the prediction of quality ratings by hearing impaired listeners, extensions of some models incorporating hearing loss were implemented and shown to achieve improved prediction accuracy. Results indicate that these objective quality measures can potentially serve as tools for assisting in initial setting of frequency compression parameters.

7.
Neural encoding of temporal speech features is a key component of acoustic and phonetic analyses. We examined the temporal encoding of the syllables /da/ and /ta/, which differ along the temporally based, phonetic parameter of voice onset time (VOT), in primary auditory cortex (A1) of awake monkeys using concurrent multilaminar recordings of auditory evoked potentials (AEP), the derived current source density, and multiunit activity. A general sequence of A1 activation consisting of a lamina-specific profile of parallel and sequential excitatory and inhibitory processes is described. VOT is encoded in the temporal response patterns of phase-locked activity to the periodic speech segments and by “on” responses to stimulus and voicing onset. A transformation occurs between responses in the thalamocortical (TC) fiber input and A1 cells. TC fibers are more likely to encode VOT with “on” responses to stimulus onset followed by phase-locked responses during the voiced segment, whereas A1 responses are more likely to exhibit transient responses both to stimulus and voicing onset. Relevance to subcortical speech processing, the human AEP and speech psychoacoustics are discussed. A mechanism for categorical differentiation of voiced and unvoiced consonants is proposed.

8.
It was found that, with a passband width of 50 Hz, naive subjects retain 100% speech intelligibility when, on average, 950 Hz is removed from each successive 1000-Hz band. Thus, speech is 95% redundant with respect to its spectral content. The parameters of the comb filter were chosen from measurements of speech intelligibility in experienced subjects, such that no subject with normal hearing taking part in the experiment for the first time exhibited 100% intelligibility. Two methods of learning to perceive spectrally deprived speech signals are compared: (1) aurally only and (2) with visual enhancement. In the latter case, speech intelligibility is significantly higher. The possibility of using a spectrally deprived speech signal to develop and assess the efficiency of auditory rehabilitation of implanted patients is discussed.
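The comb filtering described above — keeping roughly a 50-Hz slice out of every 1000-Hz band — can be sketched with a spectral mask. This is an illustrative reconstruction, not the authors' filter; in particular, placing the surviving slice at the bottom of each band is an assumption:

```python
import numpy as np

def spectrally_deprive(signal, fs, band_hz=50.0, period_hz=1000.0):
    """Comb-filter a signal so that only a `band_hz`-wide slice of every
    `period_hz` interval survives, zeroing the remaining ~95% of the
    spectrum (here the surviving slice sits at the bottom of each band).
    """
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    keep = (freqs % period_hz) < band_hz   # e.g. pass 0-50 Hz of each 1000-Hz band
    return np.fft.irfft(spectrum * keep, n=len(signal))
```

A component at 30 Hz (inside a passband) survives unchanged, while one at 500 Hz (inside a stopband) is removed entirely, which is the sense in which the stimulus is "spectrally deprived."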

9.
Extensive research shows that inter-talker variability (i.e., changing the talker) affects recognition memory for speech signals. However, relatively little is known about the consequences of intra-talker variability (i.e., changes in speaking style within a talker) for the encoding of speech signals in memory. It is well established that speakers can modulate the characteristics of their own speech and produce a listener-oriented, intelligibility-enhancing speaking style in response to communication demands (e.g., when speaking to listeners with hearing impairment or non-native speakers of the language). Here we conducted two experiments to examine the role of speaking style variation in spoken language processing. First, we examined the extent to which clear speech provides benefits in challenging listening environments (i.e., speech in noise). Second, we compared recognition memory for sentences produced in conversational and clear speaking styles. In both experiments, semantically normal and anomalous sentences were included to investigate the role of higher-level linguistic information in the processing of speaking style variability. The results show that the acoustic-phonetic modifications implemented in listener-oriented speech lead to improved speech recognition in challenging listening conditions and, crucially, to a substantial enhancement in recognition memory for sentences.

10.
In this study, we propose a novel estimate of listening effort using electroencephalographic data. This method translates our past findings, gained from evoked electroencephalographic activity, to oscillatory EEG activity. To test this technique, electroencephalographic data were recorded from experienced hearing aid users with moderate hearing loss while they wore their hearing aids. The investigated hearing aid settings were: a directional microphone combined with a noise reduction algorithm in a medium and a strong setting, the noise reduction setting turned off, and a setting using omnidirectional microphones without any noise reduction. The results suggest that the electroencephalographic estimate of listening effort is a useful tool for mapping the effort exerted by the participants. In addition, the results indicate that a directional processing mode can reduce listening effort in multitalker listening situations.

11.
Objective: To investigate the outcomes of unilateral cochlear implantation (CI) for auditory and speech rehabilitation in deaf preschool children, and the factors influencing those outcomes. Methods: Seventy-two preschool children who underwent CI at our hospital between January 2017 and December 2017 were enrolled. Relevant clinical data were collected by questionnaire. Factors potentially affecting auditory and speech rehabilitation were analyzed with univariate analyses of dichotomous variables against Categories of Auditory Performance (CAP) and Speech Intelligibility Rating (SIR) outcomes, followed by multivariate logistic regression to evaluate treatment effects and the factors influencing rehabilitation. Results: Age at implantation, preoperative mean residual hearing, preoperative duration of hearing aid use, duration of CI use, and duration of postoperative speech training were significantly correlated with the fold increase in CAP scores (P < 0.05); in addition to these factors, preoperative speech training duration was also correlated with the fold increase in SIR scores after treatment (P < 0.05). Age at implantation, preoperative mean residual hearing, and preoperative duration of hearing aid use affected postoperative CAP recovery (P < 0.05); age at implantation, preoperative duration of hearing aid use, and preoperative speech training duration affected SIR recovery (P < 0.05). Conclusion: Age at implantation, preoperative mean residual hearing, preoperative duration of hearing aid use, and preoperative speech training duration are the main factors affecting postoperative auditory and speech recovery in deaf preschool children.

12.
Nowadays, great attention is devoted to minimizing the discomfort caused by connecting patients to sensors for long-term monitoring of physiological parameters. Hence, the need for contact-less monitoring systems is increasingly recognized in clinical investigation. To this aim, audio signals recorded by ambient microphones are an appealing and growing field of research: in the biomedical field, applications of long-duration contact-less audio recording may concern obstructive apnoea syndrome, preterm newborns in Intensive Care Units, daily monitoring in occupational dysphonia, speech therapy, Parkinson and Alzheimer disease, monitoring of psychiatric and autistic subjects, etc. However, a significant amount of ambient noise is inevitably included in the recordings.

Especially in the case of recordings that take a long time, manual extraction of clinically useful information from a whole record is a time-consuming, operator-dependent task, the length of a whole recording (even several hours) being prohibitive both for perceptual analysis made by listening to it and for visual inspection of signal patterns. Moreover, objective measures of signal characteristics may serve clinicians as a common ground for diagnosis. Hence, automatic methods are needed to speed up and objectify the analysis task.

The present work describes a new, automatic, fast and reliable method for extracting "voiced candidates" from audio recordings of long duration for both clinical and home applications. To demonstrate its effectiveness, the method is compared, using synthetic signals, to existing software tools commonly used in biomedical applications.

13.
As we talk, we unconsciously adjust our speech to ensure it sounds the way we intend it to sound. However, because speech production involves complex motor planning and execution, no two utterances of the same sound will be exactly the same. Here, we show that auditory cortex is sensitive to natural variations in self-produced speech from utterance to utterance. We recorded event-related potentials (ERPs) from ninety-nine subjects while they uttered “ah” and while they listened to those speech sounds played back. Subjects' utterances were sorted based on their formant deviations from the previous utterance. Typically, the N1 ERP component is suppressed during talking compared to listening. By comparing ERPs to the least and most variable utterances, we found that N1 was less suppressed to utterances that differed greatly from their preceding neighbors. In contrast, an utterance's difference from the median formant values did not affect N1. Trial-to-trial pitch (f0) deviation and pitch difference from the median similarly did not affect N1. We discuss mechanisms that may underlie the change in N1 suppression resulting from trial-to-trial formant change. Deviant utterances require additional auditory cortical processing, suggesting that speaking-induced suppression mechanisms are optimally tuned for a specific production.

14.
Objectives

Previous studies investigating speech perception in noise have typically been conducted with static masker positions. The aim of this study was to investigate the effect of spatial separation of source and masker (spatial release from masking, SRM) in a moving masker setup and to evaluate the impact of adaptive beamforming in comparison with fixed directional microphones in cochlear implant (CI) users.

Design

Speech reception thresholds (SRT) were measured in S0N0 and in a moving masker setup (S0Nmove) in 12 normal hearing participants and 14 CI users (7 subjects bilateral, 7 bimodal with a hearing aid in the contralateral ear). Speech processor settings were a moderately directional microphone, a fixed beamformer, or an adaptive beamformer. The moving noise source was generated by means of wave field synthesis and was smoothly moved in the shape of a half-circle from one ear to the contralateral ear. Noise was presented in either of two conditions: continuous or modulated.

Results

SRTs in the S0Nmove setup were significantly improved compared to the S0N0 setup for both the normal hearing control group and the bilateral group in continuous noise, and for the control group in modulated noise. There was no effect of subject group. A significant effect of directional sensitivity was found in the S0Nmove setup. In the bilateral group, the adaptive beamformer achieved lower SRTs than the fixed beamformer setting. Adaptive beamforming substantially improved SRT in both CI user groups, by about 3 dB (bimodal group) and 8 dB (bilateral group) depending on masker type.

Conclusions

CI users showed SRM that was comparable to normal hearing subjects. In listening situations of everyday life with spatial separation of source and masker, directional microphones significantly improved speech perception, with individual improvements of up to 15 dB SNR. Users of bilateral speech processors with both directional microphones obtained the highest benefit.
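Spatial release from masking, as studied above, is simply the SRT difference between the co-located (S0N0) and spatially separated (S0Nmove) conditions. A minimal helper, with illustrative numbers rather than the study's data:

```python
def spatial_release(srt_colocated_db, srt_separated_db):
    """Spatial release from masking (SRM), in dB: the SRT improvement
    when target and masker are spatially separated. Positive values
    mean the separated condition was easier (a lower, better SRT)."""
    return srt_colocated_db - srt_separated_db

# Hypothetical example: a listener whose SRT drops from +2 dB SNR
# (masker co-located) to -4 dB SNR (masker separated) shows 6 dB of SRM.
print(spatial_release(2.0, -4.0))  # → 6.0
```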

15.
Background

Auditory neuropathy (AN) is a recently recognized hearing disorder characterized by intact outer hair cell function, disrupted auditory nerve synchronization, and poor speech perception and recognition. Cochlear implants (CIs) are currently the most promising intervention for improving hearing and speech in individuals with AN. Although previous studies have shown optimistic results, there was large variability in the benefits of CIs among individuals with AN. The data indicate that different criteria are needed to evaluate the benefit of CIs in these children compared to those with sensorineural hearing loss. We hypothesized that a hierarchic assessment would be more appropriate to evaluate the benefits of cochlear implantation in AN individuals.

Methods

Eight prelingual children with AN who received unilateral CIs were included in this study. Hearing sensitivity and speech recognition were evaluated pre- and postoperatively within each subject. The efficacy of cochlear implantation was assessed using a stepwise hierarchic evaluation for achieving: (1) effective audibility, (2) improved speech recognition, (3) effective speech, and (4) effective communication.

Results

The postoperative hearing and speech performance varied among the subjects. According to the hierarchic assessment, all eight subjects reached the primary level of effective audibility, with an average implanted hearing threshold of 43.8 ± 10.2 dB HL. Five subjects (62.5%) attained the level of improved speech recognition, one (12.5%) reached the level of effective speech, and none of the subjects (0.0%) achieved effective communication.

Conclusion

CIs benefit prelingual children with AN to varying extents. A hierarchic evaluation provides a more suitable method to determine the benefits that AN individuals will likely receive from cochlear implantation.

16.
The aim of the investigation was to study whether dysfunctions associated with the cochlea or its regulatory system can be found, and possibly explain hearing problems, in subjects with normal or near-normal audiograms. The design was a prospective study of subjects recruited from the general population. The included subjects were persons with auditory problems who had normal, or near-normal, pure tone hearing thresholds, and who could be included in one of three subgroups: teachers (Education); people working with music (Music); and people with moderate or negligible noise exposure (Other). A fourth group included people with poorer pure tone hearing thresholds and a history of severe occupational noise (Industry). Total N = 193. The following hearing tests were used:

- pure tone audiometry with Békésy technique;
- transient evoked otoacoustic emissions and distortion product otoacoustic emissions, without and with contralateral noise;
- psychoacoustical modulation transfer function;
- forward masking;
- speech recognition in noise;
- tinnitus matching.

A questionnaire about occupations, noise exposure, stress/anxiety, muscular problems, medication, and heredity was addressed to the participants. Forward masking results were significantly worse for Education and Industry than for the other groups, possibly associated with the inner hair cell area. Forward masking results were significantly correlated with louder matched tinnitus. For many subjects, speech recognition in noise in the left ear did not increase in a normal way when the listening level was increased. Subjects hypersensitive to loud sound had significantly better speech recognition in noise at the lower test level than subjects who were not hypersensitive. Self-reported stress/anxiety was similar for all groups. In conclusion, hearing dysfunctions were found in subjects with tinnitus and other auditory problems, combined with normal or near-normal pure tone thresholds. The teachers, mostly regarded as a group exposed to noise below risk levels, had dysfunctions almost identical to those of the more exposed Industry group.

17.

Background

Recent research has addressed the suppression of cortical sensory responses to altered auditory feedback occurring at the onset of speech utterances. However, there is reason to assume that the mechanisms underlying sensorimotor processing at mid-utterance differ from those involved in sensorimotor control at utterance onset. The present study examined the dynamics of event-related potentials (ERPs) to different acoustic versions of auditory feedback at mid-utterance.

Methodology/Principal findings

Subjects produced a vowel sound while hearing their pitch-shifted voice (100 cents), a sum of their vocalization and pure tones, or a sum of their vocalization and white noise at mid-utterance via headphones. Subjects also passively listened to playback of what they heard during active vocalization. Cortical ERPs were recorded in response to different acoustic versions of feedback changes during both active vocalization and passive listening. The results showed that, relative to passive listening, active vocalization yielded enhanced P2 responses to the 100 cents pitch shifts, whereas suppression effects of P2 responses were observed when voice auditory feedback was distorted by pure tones or white noise.

Conclusion/Significance

The present findings, for the first time, demonstrate a dynamic modulation of cortical activity as a function of the quality of acoustic feedback at mid-utterance, suggesting that auditory cortical responses can be enhanced or suppressed to distinguish self-produced speech from externally-produced sounds.
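For reference, the 100-cent pitch shift used above corresponds to one equal-tempered semitone, i.e. a frequency ratio of 2^(100/1200) ≈ 1.0595. A small conversion sketch (the function names are ours, not from the study):

```python
def cents_to_ratio(cents):
    """Frequency ratio corresponding to a pitch shift in cents.
    100 cents = one equal-tempered semitone; 1200 cents = one octave."""
    return 2.0 ** (cents / 1200.0)

def shift_f0(f0_hz, cents):
    """Apply a pitch shift, given in cents, to a fundamental frequency."""
    return f0_hz * cents_to_ratio(cents)
```

So a 100-cent upward shift of a 200 Hz voice raises its fundamental to about 211.9 Hz, a change that is clearly audible yet small enough to be plausibly self-produced.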

18.
McDermott HJ. PLoS ONE 2011; 6(7): e22358

Background

Recently two major manufacturers of hearing aids introduced two distinct frequency-lowering techniques that were designed to compensate in part for the perceptual effects of high-frequency hearing impairments. The Widex “Audibility Extender” is a linear frequency transposition scheme, whereas the Phonak “SoundRecover” scheme employs nonlinear frequency compression. Although these schemes process sound signals in very different ways, studies investigating their use by both adults and children with hearing impairment have reported significant perceptual benefits. However, the modifications that these innovative schemes apply to sound signals have not previously been described or compared in detail.

Methods

The main aim of the present study was to analyze these schemes' technical performance by measuring outputs from each type of hearing aid with the frequency-lowering functions enabled and disabled. The input signals included sinusoids, flute sounds, and speech material. Spectral analyses were carried out on the output signals produced by the hearing aids in each condition.

Conclusions

The results of the analyses confirmed that each scheme was effective at lowering certain high-frequency acoustic signals, although both techniques also distorted some signals. Most importantly, the application of either frequency-lowering scheme would be expected to improve the audibility of many sounds having salient high-frequency components. Nevertheless, considerably different perceptual effects would be expected from these schemes, even when each hearing aid is fitted in accordance with the same audiometric configuration of hearing impairment. In general, these findings reinforce the need for appropriate selection and fitting of sound-processing schemes in modern hearing aids to suit the characteristics and preferences of individual listeners.
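The two frequency-lowering approaches compared above can be caricatured by their input-output frequency maps: linear transposition shifts components above a cutoff down by a fixed amount, while nonlinear compression divides frequency distances above the cutoff by a ratio. The sketch below illustrates only this general principle; the parameter values are ours, not the devices' actual settings:

```python
def transpose_linear(f_in, shift_hz, cutoff_hz):
    """Linear frequency transposition (the general idea behind schemes
    such as the Widex Audibility Extender): components above the cutoff
    are moved down by a fixed shift; those below pass unchanged."""
    return f_in - shift_hz if f_in > cutoff_hz else f_in

def compress_nonlinear(f_in, cutoff_hz, ratio):
    """Nonlinear frequency compression (the general idea behind schemes
    such as Phonak SoundRecover): above the cutoff, distances from the
    cutoff are divided by the compression ratio; below, frequencies pass
    unchanged."""
    return cutoff_hz + (f_in - cutoff_hz) / ratio if f_in > cutoff_hz else f_in

# The two maps treat the same 6 kHz component quite differently:
print(transpose_linear(6000.0, shift_hz=2000.0, cutoff_hz=4000.0))   # → 4000.0
print(compress_nonlinear(6000.0, cutoff_hz=1500.0, ratio=2.0))       # → 3750.0
```

Note how transposition preserves the spacing between lowered components while compression squeezes it, which is one reason the two schemes can sound so different even at matched audibility.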

19.
The activation of the listener's motor system during speech processing was first demonstrated by the enhancement of electromyographic tongue potentials as evoked by single-pulse transcranial magnetic stimulation (TMS) over tongue motor cortex. This technique is, however, technically challenging and enables only a rather coarse measurement of this motor mirroring. Here, we applied TMS to listeners’ tongue motor area in association with ultrasound tissue Doppler imaging to describe fine-grained tongue kinematic synergies evoked by passive listening to speech. Subjects listened to syllables requiring different patterns of dorso-ventral and antero-posterior movements (/ki/, /ko/, /ti/, /to/). Results show that passive listening to speech sounds evokes a pattern of motor synergies mirroring those occurring during speech production. Moreover, mirror motor synergies were more evident in those subjects showing good performances in discriminating speech in noise, demonstrating a role of the speech-related mirror system in feed-forward processing of the speaker's ongoing motor plan.

20.
Perception of complex sound is a process carried out in everyday life situations and contributes to the way one perceives reality. Attempting to explain sound perception and how it affects human beings is complicated. The physics of a simple sound can be described as a function of frequency, amplitude and phase. The psychology of sound, also termed psychoacoustics, has its own distinct elements of pitch, intensity and timbre. An interconnection exists between the physics and psychology of hearing.

Music, being a complex sound, contributes to communication and conveys information with semantic and emotional elements. These elements indicate the involvement of the central nervous system through processes of integration and interpretation together with peripheral auditory processing.

The effects of sound and music on human psychology and physiology are complicated. Psychological influences of listening to different types of music are based on the different characteristics of basic musical sounds. Attempting to explain music perception can be simpler if music is broken down into its basic auditory signals. Perception of auditory signals is analyzed by the science of psychoacoustics. Differences in complex sound perception have been found between normal subjects and psychiatric patients and between different types of psychopathologies.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号