Similar Documents
A total of 20 similar documents were retrieved (search time: 15 ms).
1.
Spectrotemporal modulation (STM) detection performance was examined for cochlear implant (CI) users. The test involved discriminating between an unmodulated steady noise and a modulated stimulus in which the modulation pattern changed in frequency over time. To examine STM detection performance under different modulation conditions, two temporal modulation rates (5 and 10 Hz) and three spectral modulation densities (0.5, 1.0, and 2.0 cycles/octave) were employed, producing a total of six STM stimulus conditions. To explore how electric hearing constrains STM sensitivity for CI users differently from acoustic hearing, normal-hearing (NH) and hearing-impaired (HI) listeners were also tested on the same tasks. STM detection performance was best in NH subjects, followed by HI subjects. On average, CI subjects showed the poorest performance, but some CI subjects reached levels of STM detection performance comparable to acoustic hearing. Significant correlations were found between STM detection performance and speech identification performance in quiet and in noise. To understand the relative contributions of spectral and temporal modulation cues to speech perception abilities in CI users, spectral and temporal modulation detection were also measured separately and related to STM detection and speech perception performance. The results suggest that slow spectral modulation, rather than slow temporal modulation, may be important for determining speech perception capabilities for CI users. Lastly, test–retest reliability for STM detection was good, with no evidence of learning effects. The present study demonstrates that STM detection may be a useful tool to evaluate the ability of CI sound processing strategies to deliver clinically pertinent acoustic modulation information.
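As an illustration of how a spectrotemporally modulated (ripple) stimulus of the kind described above can be constructed, the sketch below sums random-phase tones whose levels are modulated sinusoidally along time (rate in Hz) and log-frequency (density in cycles/octave). The tone count, frequency range and modulation depth are illustrative assumptions, not the parameters used in the study:

```python
import numpy as np

def stm_ripple(dur_s=1.0, fs=44100, f_lo=400.0, f_hi=6400.0, n_tones=200,
               rate_hz=5.0, density_cyc_oct=1.0, depth_db=20.0, seed=0):
    """Spectrotemporally modulated (ripple) noise built from random-phase tones.

    rate_hz          temporal modulation rate (cycles per second)
    density_cyc_oct  spectral modulation density (cycles per octave)
    depth_db         peak-to-trough level modulation depth
    """
    rng = np.random.default_rng(seed)
    t = np.arange(int(dur_s * fs)) / fs
    freqs = np.logspace(np.log2(f_lo), np.log2(f_hi), n_tones, base=2.0)
    sig = np.zeros_like(t)
    for f in freqs:
        x_oct = np.log2(f / f_lo)              # tone position in octaves
        phase = rng.uniform(0.0, 2.0 * np.pi)  # random carrier phase
        # sinusoidal level modulation along time and log-frequency
        level_db = 0.5 * depth_db * np.sin(
            2.0 * np.pi * (rate_hz * t + density_cyc_oct * x_oct))
        sig += 10.0 ** (level_db / 20.0) * np.sin(2.0 * np.pi * f * t + phase)
    return sig / np.max(np.abs(sig))           # normalise to +/- 1

# e.g. one of the study's conditions: 5 Hz rate, 1.0 cycle/octave density
stimulus = stm_ripple(rate_hz=5.0, density_cyc_oct=1.0)
```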

2.
For the perception of timbre of a musical instrument, the attack time is known to hold crucial information. The first 50 to 150 ms of sound onset reflect the excitation mechanism, which generates the sound. Since auditory processing, and music perception in particular, is known to be hampered in cochlear implant (CI) users, we conducted an electroencephalography (EEG) study with an oddball paradigm to evaluate the processing of small differences in musical sound onset. The first 60 ms of a cornet sound were manipulated in order to examine whether these differences are detected by CI users and normal-hearing controls (NH controls), as revealed by auditory evoked potentials (AEPs). Our analysis focused on the N1 as an exogenous component known to reflect physical stimulus properties, as well as on the P2 and the Mismatch Negativity (MMN). Our results revealed different N1 latencies as well as P2 amplitudes and latencies for the onset manipulations in both groups. An MMN could be elicited only in the NH control group. Together with additional findings that suggest an impact of musical training on CI users’ AEPs, our findings support the view that impaired timbre perception in CI users is at least partly due to altered sound onset feature detection.

3.
Objectives

Previous studies investigating speech perception in noise have typically been conducted with static masker positions. The aim of this study was to investigate the effect of spatial separation of source and masker (spatial release from masking, SRM) in a moving-masker setup and to evaluate the impact of adaptive beamforming in comparison with fixed directional microphones in cochlear implant (CI) users.

Design

Speech reception thresholds (SRT) were measured in S0N0 and in a moving-masker setup (S0Nmove) in 12 normal-hearing participants and 14 CI users (7 subjects bilateral, 7 bimodal with a hearing aid in the contralateral ear). Speech processor settings were a moderately directional microphone, a fixed beamformer, or an adaptive beamformer. The moving noise source was generated by means of wave field synthesis and was smoothly moved along a half-circle from one ear to the contralateral ear. Noise was presented in one of two conditions: continuous or modulated.

Results

SRTs in the S0Nmove setup were significantly improved compared to the S0N0 setup for both the normal-hearing control group and the bilateral group in continuous noise, and for the control group in modulated noise. There was no effect of subject group. A significant effect of directional sensitivity was found in the S0Nmove setup. In the bilateral group, the adaptive beamformer achieved lower SRTs than the fixed beamformer setting. Adaptive beamforming substantially improved the SRT in both CI user groups, by about 3 dB (bimodal group) and 8 dB (bilateral group) depending on masker type.

Conclusions

CI users showed SRM that was comparable to normal-hearing subjects. In everyday listening situations with spatial separation of source and masker, directional microphones significantly improved speech perception, with individual improvements of up to 15 dB SNR. Users of bilateral speech processors with both directional microphones obtained the highest benefit.

4.

Objective

To analyze speech reading through Internet video calls by profoundly hearing-impaired individuals and cochlear implant (CI) users.

Methods

Speech reading skills of 14 deaf adults and 21 CI users were assessed using the Hochmair Schulz Moser (HSM) sentence test. We presented video simulations using different video resolutions (1280×720, 640×480, 320×240, 160×120 px), frame rates (30, 20, 10, 7, 5 frames per second (fps)), speaking rates (three different speakers), webcams (Logitech Pro9000, C600 and C500) and image/sound delays (0–500 ms). All video simulations were presented with and without sound and in two screen sizes. Additionally, scores for a live Skype™ video connection and live face-to-face communication were assessed.

Results

Higher frame rates (>7 fps), higher camera resolutions (>640×480 px) and shorter picture/sound delays (<100 ms) were associated with increased speech perception scores. Scores were strongly dependent on the speaker but were not influenced by the physical properties of the camera optics or by full-screen mode. There was a significant median gain of +8.5 percentage points (p = 0.009) in speech perception across all 21 CI users when visual cues were shown in addition to sound. CI users with poor open-set speech perception scores (n = 11) showed the greatest benefit under combined audio-visual presentation (median gain in speech perception +11.8 percentage points, p = 0.032).

Conclusion

Webcams have the potential to improve telecommunication for hearing-impaired individuals.

5.

Objective

To investigate the performance of monaural and binaural beamforming technology, combined with an additional noise reduction algorithm, in cochlear implant recipients.

Method

This experimental study was conducted as a single-subject, repeated-measures design within a large German cochlear implant centre. Twelve experienced users of an Advanced Bionics HiRes90K or CII implant with a Harmony speech processor were enrolled. The cochlear implant processor of each subject was connected to one of two bilaterally placed state-of-the-art hearing aids (Phonak Ambra) providing three alternative directional processing options: an omnidirectional setting, an adaptive monaural beamformer, and a binaural beamformer. A further noise reduction algorithm (ClearVoice) was applied to the signal on the cochlear implant processor itself. The speech signal was presented from 0° and speech-shaped noise was presented from loudspeakers placed at ±70°, ±135° and 180°. The Oldenburg sentence test was used to determine the signal-to-noise ratio at which subjects scored 50% correct.

Results

Both the adaptive and the binaural beamformer were significantly better than the omnidirectional condition, improving the SRT by 5.3±1.2 dB and 7.1±1.6 dB, respectively (p<0.001). The best score was achieved with the binaural beamformer in combination with the ClearVoice noise reduction algorithm, yielding a significant improvement in SRT of 7.9±2.4 dB (p<0.001) over the omnidirectional-alone condition.

Conclusions

The study showed that the binaural beamformer implemented in the Phonak Ambra hearing aid could be used in conjunction with a Harmony speech processor to produce a substantial average improvement in SRT of 7.1 dB. The monaural, adaptive beamformer provided an average SRT improvement of 5.3 dB.
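The Oldenburg sentence test SRT reported above is typically obtained with an adaptive SNR track that converges on the 50%-correct point. A minimal, generic sketch of such a track follows (a simple 1-up/1-down rule with a toy listener model; the actual OLSA procedure uses word scoring and adaptive step sizes, so the details here are illustrative assumptions only):

```python
import random

def measure_srt(present_sentence, n_trials=30, start_snr_db=0.0, step_db=2.0):
    """Generic 1-up/1-down adaptive SNR track converging on ~50% correct.

    present_sentence(snr_db) -> bool  True if the sentence was repeated correctly.
    Returns the mean SNR of the last few reversals as the SRT estimate.
    """
    snr = start_snr_db
    reversals = []
    last_correct = None
    for _ in range(n_trials):
        correct = present_sentence(snr)
        if last_correct is not None and correct != last_correct:
            reversals.append(snr)                 # direction of the track changed
        last_correct = correct
        snr += -step_db if correct else step_db   # harder after a hit, easier after a miss
    tail = reversals[-6:] if reversals else [snr]
    return sum(tail) / len(tail)

# toy listener whose true SRT is -4 dB SNR (illustrative only)
toy_listener = lambda snr: random.random() < 1.0 / (1.0 + 10.0 ** (-(snr + 4.0)))
print(round(measure_srt(toy_listener), 1))
```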

6.
The objective was to determine whether one of the neural temporal features, neural adaptation, can account for the across-subject variability in behavioral measures of temporal processing and speech perception performance in cochlear implant (CI) recipients. Neural adaptation is the phenomenon in which neural responses are strongest at the beginning of the stimulus and decline with stimulus repetition (e.g., stimulus trains). It is unclear how this temporal property of neural responses relates to psychophysical measures of temporal processing (e.g., gap detection) or speech perception. Adaptation of the electrical compound action potential (ECAP) was obtained using 1000 pulses per second (pps) biphasic pulse trains presented directly to the electrode. Adaptation of the late auditory evoked potential (LAEP) was obtained using a sequence of 1-kHz tone bursts presented acoustically through the cochlear implant. Behavioral temporal processing was measured using the Random Gap Detection Test at the most comfortable listening level. Consonant-nucleus-consonant (CNC) words and AzBio sentences were also tested. The results showed that both the ECAP and the LAEP display adaptive patterns, with substantial across-subject variability in the amount of adaptation. No correlations were found between the amount of neural adaptation and gap detection thresholds (GDTs) or speech perception scores. Correlations between the degree of neural adaptation and demographic factors showed that CI users with more LAEP adaptation were likely to have been implanted at a younger age than CI users with less LAEP adaptation. The results suggested that neural adaptation, at least this feature alone, cannot account for the across-subject variability in temporal processing ability in CI users. However, the finding that the LAEP adaptive pattern was less prominent in the CI group than in the normal-hearing group may suggest an important role of a normal adaptation pattern at the cortical level in speech perception.
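The abstract does not state how the amount of adaptation was quantified; one common choice is a normalized index that compares the first response of the train with the quasi-steady responses at its end. A minimal sketch under that assumption (the example amplitudes and the use of the last three responses are invented for illustration, not the study's method):

```python
import numpy as np

def adaptation_index(amplitudes):
    """Normalised amplitude decline across a response train.

    amplitudes: per-pulse (ECAP) or per-burst (LAEP) response amplitudes in
    presentation order. Returns 0 for no adaptation and 1 for complete
    adaptation of the late responses relative to the first response.
    """
    amplitudes = np.asarray(amplitudes, dtype=float)
    first = amplitudes[0]
    steady = amplitudes[-3:].mean()     # average of the last few responses
    return (first - steady) / first

# e.g. hypothetical ECAP amplitudes (arbitrary units) over a 1000-pps pulse train
print(adaptation_index([1.00, 0.82, 0.70, 0.64, 0.61, 0.60]))  # about 0.38
```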

7.
Prelingually deafened children with cochlear implants stand a good chance of developing satisfactory speech performance. Nevertheless, their eventual language performance is highly variable and not fully explained by the duration of deafness and hearing experience. In this study, two groups of cochlear implant users (CI groups) with very good basic hearing abilities but non-overlapping speech performance (very good or very poor) were matched according to hearing age and age at implantation. We assessed whether these CI groups differed with regard to their phoneme discrimination ability and auditory sensory memory capacity, as suggested by earlier studies. These functions were measured behaviorally and with the Mismatch Negativity (MMN). Phoneme discrimination ability was comparable between the good-performing CI group and matched healthy controls, and both groups performed better than the poor performers. Source analyses revealed larger MMN activity (155–225 ms) in good than in poor performers, which was generated in the frontal cortex and positively correlated with measures of working memory. In the poor performers, this was followed by increased activation of left temporal regions from 225 to 250 ms, with a focus on the auditory cortex. These results indicate that the two CI groups developed different auditory speech processing strategies, and they stress the role of phonological functions of auditory sensory memory and the prefrontal cortex in the positive development of speech perception and production.

8.
A significant fraction of newly implanted cochlear implant recipients use a hearing aid in their non-implanted ear. SCORE bimodal is a sound processing strategy developed for this configuration, aimed at normalising loudness perception and improving binaural loudness balance. Speech perception performance in quiet and noise and sound localisation ability of six bimodal listeners were measured with and without application of SCORE. Speech perception in quiet was measured either with only acoustic, only electric, or bimodal stimulation, at soft and normal conversational levels. For speech in quiet there was a significant improvement with application of SCORE. Speech perception in noise was measured for either steady-state noise, fluctuating noise, or a competing talker, at conversational levels with bimodal stimulation. For speech in noise there was no significant effect of application of SCORE. Modelling of interaural loudness differences in a long-term-average-speech-spectrum-weighted click train indicated that left-right discrimination of sound sources can improve with application of SCORE. As SCORE was found to leave speech perception unaffected or to improve it, it seems suitable for implementation in clinical devices.

9.
Nucleus cochlear implant systems incorporate a fast-acting front-end automatic gain control (AGC), sometimes called a compression limiter. The objective of the present study was to determine the effect of replacing the front-end compression limiter with a newly proposed envelope profile limiter. A secondary objective was to investigate the effect of AGC speed on cochlear implant speech intelligibility. The envelope profile limiter was located after the filter bank and reduced the gain when the largest of the filter bank envelopes exceeded the compression threshold. The compression threshold was set equal to the saturation level of the loudness growth function (i.e. the envelope level that mapped to the maximum comfortable current level), ensuring that no envelope clipping occurred. To preserve the spectral profile, the same gain was applied to all channels. Experiment 1 compared sentence recognition with the front-end limiter and with the envelope profile limiter, each with two release times (75 and 625 ms). Six implant recipients were tested in quiet and in four-talker babble noise, at a high presentation level of 89 dB SPL. Overall, release time had a larger effect than the AGC type. With both AGC types, speech intelligibility was lower for the 75 ms release time than for the 625 ms release time. With the shorter release time, the envelope profile limiter provided higher group mean scores than the front-end limiter in quiet, but there was no significant difference in noise. Experiment 2 measured sentence recognition in noise as a function of presentation level, from 55 to 89 dB SPL. The envelope profile limiter with 625 ms release time yielded better scores than the front-end limiter with 75 ms release time. A take-home study showed no clear pattern of preferences. It is concluded that the envelope profile limiter is a feasible alternative to a front-end compression limiter.
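The envelope profile limiter described above amounts to a single gain, shared by all channels, that is reduced whenever the largest filter-bank envelope exceeds the compression threshold. A minimal per-frame sketch of that gain rule follows; the instantaneous attack, exponential release and frame rate are illustrative assumptions rather than the Nucleus implementation:

```python
import numpy as np

def envelope_profile_limiter(envelopes, env_rate_hz, comp_thresh, release_ms=625.0):
    """Apply one shared gain to all filter-bank envelope channels so that the
    largest envelope does not exceed comp_thresh.

    envelopes    array (n_channels, n_frames) of filter-bank envelope values
    env_rate_hz  frame rate of the envelopes in Hz
    """
    n_frames = envelopes.shape[1]
    out = np.empty_like(envelopes, dtype=float)
    release_coef = np.exp(-1.0 / (env_rate_hz * release_ms / 1000.0))
    gain = 1.0
    for k in range(n_frames):
        peak = envelopes[:, k].max()
        target = comp_thresh / peak if peak > comp_thresh else 1.0
        if target < gain:
            gain = target                                   # instantaneous attack
        else:
            gain = target + (gain - target) * release_coef  # slow exponential release
        out[:, k] = gain * envelopes[:, k]                  # same gain on every channel
    return out
```

Because the identical gain is applied to every channel, peaks are limited without altering the across-channel envelope profile, which is the spectral-shape-preserving property the abstract emphasises.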

10.
Two questions remain virtually unexplored in the problem of the significance of speech for perception: the significance of speech for perception and reproduction of individual aspects of a complex entity (the number of elements of which it is comprised, their color and disposition), and the features of the connection between words and these elements. The latter question requires some explanation. There are objects whose names we employ very frequently in conversation (table, chair, etc.). There is a particularly close relationship between the visual image of such objects and the words. But at the same time, there are quite a number of objects (certain types of uncommon colors, birds, details of instruments, etc.) the names of which many people do not know. Further, certain details have no special names at all (for example, particular details of ornaments). A. G. Ivanov-Smolenskii, in his article "The Interaction of the First and Second Signal Systems Under Certain Physiological and Pathological Conditions" [O vzaimodeistvii pervoi i vtoroi signal'nykh sistem pri nekotorykh fiziologicheskikh i patologicheskikh usloviiakh], The Physiological Journal, USSR Academy of Sciences [Fiziologicheskii zhurnal AN SSSR], 1949, No. 5, wrote: "Some individually distinct part of experience is always found — for a while — to be untransmitted to the second signal system, and not yet subject to verbal interpretation and verbal formulation ('unverbalized')."

11.
Contralateral masking is the phenomenon where a masker presented to one ear affects the ability to detect a signal in the opposite ear. For normal hearing listeners, contralateral masking results in masking patterns that are both sharper and dramatically smaller in magnitude than ipsilateral masking. The goal of this study was to investigate whether medial olivocochlear (MOC) efferents are needed for the sharpness and relatively small magnitude of the contralateral masking function. To do this, bilateral cochlear implant patients were tested because, by directly stimulating the auditory nerve, cochlear implants circumvent the effects of the MOC efferents. The results indicated that, as with normal hearing listeners, the contralateral masking function was sharper than the ipsilateral masking function. However, although there was a reduction in the magnitude of the contralateral masking function compared to the ipsilateral masking function, it was relatively modest. This is in sharp contrast to the results of normal hearing listeners where the magnitude of the contralateral masking function is greatly reduced. These results suggest that MOC function may not play a large role in the sharpness of the contralateral masking function but may play a considerable role in the magnitude of the contralateral masking function.

12.
It has often been argued that, when observing a talking face, visual speech to the left and right of fixation may produce differences in performance due to divided projections to the two cerebral hemispheres. However, while it seems likely that such a division in hemispheric projections exists for areas away from fixation, the nature and existence of a functional division in visual speech perception at the foveal midline remains to be determined. We investigated this issue by presenting visual speech in matched hemiface displays to the left and right of a central fixation point, either exactly abutting the foveal midline or else located away from the midline in extrafoveal vision. The location of displays relative to the foveal midline was controlled precisely using an automated, gaze-contingent eye-tracking procedure. Visual speech perception showed a clear right hemifield advantage when presented in extrafoveal locations but no hemifield advantage (left or right) when presented abutting the foveal midline. Thus, while visual speech observed in extrafoveal vision appears to benefit from unilateral projections to left-hemisphere processes, no evidence was obtained to indicate that a functional division exists when visual speech is observed around the point of fixation. Implications of these findings for understanding visual speech perception and the nature of functional divisions in hemispheric projection are discussed.

13.
Localization of objects and events in the environment is critical for survival, as many perceptual and motor tasks rely on estimation of spatial location. Therefore, it seems reasonable to assume that spatial localizations should generally be accurate. Curiously, some previous studies have reported biases in visual and auditory localizations, but these studies have used small sample sizes and the results have been mixed. Therefore, it is not clear (1) if the reported biases in localization responses are real (or due to outliers, sampling bias, or other factors), and (2) whether these putative biases reflect a bias in sensory representations of space or a priori expectations (which may be due to the experimental setup, instructions, or distribution of stimuli). Here, to address these questions, a dataset of unprecedented size (obtained from 384 observers) was analyzed to examine presence, direction, and magnitude of sensory biases, and quantitative computational modeling was used to probe the underlying mechanism(s) driving these effects. Data revealed that, on average, observers were biased towards the center when localizing visual stimuli, and biased towards the periphery when localizing auditory stimuli. Moreover, quantitative analysis using a Bayesian Causal Inference framework suggests that while pre-existing spatial biases for central locations exert some influence, biases in the sensory representations of both visual and auditory space are necessary to fully explain the behavioral data. How are these opposing visual and auditory biases reconciled in conditions in which both auditory and visual stimuli are produced by a single event? Potentially, the bias in one modality could dominate, or the biases could interact/cancel out. The data revealed that when integration occurred in these conditions, the visual bias dominated, but the magnitude of this bias was reduced compared to unisensory conditions. Therefore, multisensory integration not only improves the precision of perceptual estimates, but also the accuracy.
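The Bayesian Causal Inference framework referred to above combines the two sensory measurements either as one source or as two independent sources, weighted by the posterior probability of a common cause. A minimal sketch of the commonly used form of this model is given below; the prior parameters, the Gaussian assumptions and the focus on the visual estimate are illustrative choices, not the authors' fitted model:

```python
import numpy as np

def bci_visual_estimate(x_v, x_a, sigma_v, sigma_a,
                        mu_p=0.0, sigma_p=10.0, p_common=0.5):
    """Model-averaged visual location estimate under causal inference.

    x_v, x_a          noisy visual / auditory measurements (deg)
    sigma_v, sigma_a  sensory noise standard deviations (deg)
    mu_p, sigma_p     Gaussian spatial prior (centre bias)
    p_common          prior probability of a single common cause
    """
    vv, va, vp = sigma_v ** 2, sigma_a ** 2, sigma_p ** 2

    # likelihood of (x_v, x_a) if both arise from one source (C = 1)
    denom1 = vv * va + vv * vp + va * vp
    q1 = ((x_v - x_a) ** 2 * vp + (x_v - mu_p) ** 2 * va
          + (x_a - mu_p) ** 2 * vv) / denom1
    like_c1 = np.exp(-0.5 * q1) / (2.0 * np.pi * np.sqrt(denom1))

    # likelihood if they arise from two independent sources (C = 2)
    like_c2 = (np.exp(-0.5 * (x_v - mu_p) ** 2 / (vv + vp))
               / np.sqrt(2.0 * np.pi * (vv + vp))
               * np.exp(-0.5 * (x_a - mu_p) ** 2 / (va + vp))
               / np.sqrt(2.0 * np.pi * (va + vp)))

    # posterior probability of a common cause
    post_c1 = like_c1 * p_common / (like_c1 * p_common + like_c2 * (1.0 - p_common))

    # optimal estimates under each causal structure, then model averaging
    s_c1 = (x_v / vv + x_a / va + mu_p / vp) / (1 / vv + 1 / va + 1 / vp)
    s_c2 = (x_v / vv + mu_p / vp) / (1 / vv + 1 / vp)
    return post_c1 * s_c1 + (1.0 - post_c1) * s_c2
```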

14.
Artificial wombs are already in development that have the potential to radically alter how we perceive the developing fetus and the role of pregnancy in society. That this technology would allow greater visibility of gestation than ever before also highlights the risk that artificial wombs will be used to further restrict women’s reproductive liberty and access to abortion. This article uses Paul Lauritzen’s theory of “visual bioethics” to explore the ethical significance of images of the developing fetus and how artificial wombs might best be visually designed and integrated into society.

15.
Individual differences in second language (L2) phoneme perception (within the normal population) have been related to speech perception abilities, also observed in the native language, in studies assessing the electrophysiological response mismatch negativity (MMN). Here, we investigate the brain oscillatory dynamics in the theta band, the spectral correlate of the MMN, that underpin success in phoneme learning. Using previous data obtained in an MMN paradigm, the dynamics of cortical oscillations while perceiving native and unknown phonemes and nonlinguistic stimuli were studied in two groups of participants classified as good and poor perceivers (GPs and PPs), according to their L2 phoneme discrimination abilities. The results showed that for GPs, as compared to PPs, processing of a native phoneme change produced a significant increase in theta power. A stimulus time-locked event-related spectral perturbation (ERSP) analysis showed differences in the theta band within the MMN time window (between 70 and 240 ms) for the native deviant phoneme. No other significant difference between the two groups was observed for the other phoneme or nonlinguistic stimuli. The dynamic patterns in the theta band may reflect early automatic change detection for familiar speech sounds in the brain. The behavioral differences between the two groups may reflect individual variations in activating brain circuits at a perceptual level.
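An event-related spectral perturbation of the kind reported above expresses post-stimulus spectral power as a dB change from a pre-stimulus baseline. A minimal single-channel sketch for the theta band is shown below; the spectrogram parameters, baseline window and epoch layout are illustrative assumptions, not the authors' analysis pipeline:

```python
import numpy as np
from scipy.signal import spectrogram

def theta_ersp(epochs, fs, baseline_s=0.3, f_band=(4.0, 8.0)):
    """Event-related spectral perturbation in the theta band.

    epochs : array (n_trials, n_samples) of single-channel EEG epochs that
             start at the beginning of a pre-stimulus baseline of at least
             baseline_s seconds.
    Returns (mean dB change from baseline over time, time axis in s).
    """
    ersp_trials = []
    for ep in epochs:
        f, t, sxx = spectrogram(ep, fs=fs, nperseg=int(0.25 * fs),
                                noverlap=int(0.2 * fs))
        band = (f >= f_band[0]) & (f <= f_band[1])
        theta = sxx[band].mean(axis=0)            # theta power over time
        base = theta[t < baseline_s].mean()       # pre-stimulus baseline power
        ersp_trials.append(10.0 * np.log10(theta / base))
    return np.mean(ersp_trials, axis=0), t
```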

16.
17.
The problem of visual and esthetic training has many facets, among which the psychological are obviously not the least important. The results of long years of studying the psychology of the child (2) lead to the conclusion that:

Human consciousness and the mechanisms of conscious control of human actions represent a hierarchical system, like a building constructed of many superimposed psychophysiological levels, of many floors rising above one another. Somewhere in the basement are the prepsychological instinctive processes of receiving and using unconditioned reflex signals; the lower floors form the elementary sensations and implement the elementary types of individually discovered sensorimotor coordinations; the next floor is where the synthetic perceptions of space and time are formed along with the mechanisms for controlling locomotion and object manipulation; above them is the floor housing visual thinking and the processes for regulating play and productive activities; and, finally, above all these is the floor of symbolic, abstract, logical operations and the most complex instances of control exerted by the most complex information-seeking and labor activities. It should be emphasized that in its developed form this hierarchical system operates as a single entity, and the control of complex operations requires agreement between the operations of the psychophysiological mechanisms located at all these levels or floors. Each age level in the child's development erects the next floor of the overall psychophysiological building, and our task is mainly to build it in the best possible way, without senseless haste and taking heed not to erect the next floor before the one below is finished. (4)

18.
19.
Tinnitus is the perception of sound in the absence of an external stimulus. Currently, the pathophysiology of tinnitus is not fully understood, but recent studies indicate that alterations in the brain involve non-auditory areas, including the prefrontal cortex. In experiment 1, we used a go/no-go paradigm to evaluate the target detection speed and the inhibitory control in tinnitus participants (TP) and control subjects (CS), both in unimodal and bimodal conditions in the auditory and visual modalities. We also tested whether the sound frequency used for targets and distractors affected the performance. We observed that TP were slower and made more false alarms than CS in all unimodal auditory conditions. TP were also slower than CS in the bimodal conditions. In addition, when comparing the response times in bimodal and auditory unimodal conditions, the expected gain in bimodal conditions was present in CS, but not in TP when tinnitus-matched frequency sounds were used as targets. In experiment 2, we tested the sensitivity to cross-modal interference in TP during auditory and visual go/no-go tasks where each stimulus was preceded by an irrelevant pre-stimulus in the untested modality (e.g. a high-frequency auditory pre-stimulus in the visual go/no-go condition). We observed that TP had longer response times than CS and made more false alarms in all conditions. In addition, the highest false alarm rate occurred in TP when tinnitus-matched/high-frequency sounds were used as pre-stimulus. We conclude that inhibitory control is altered in TP and that TP are abnormally sensitive to cross-modal interference, reflecting difficulty in ignoring irrelevant stimuli. The fact that the strongest interference effect was caused by tinnitus-like auditory stimulation is consistent with the hypothesis that such stimulation generates emotional responses that affect cognitive processing in TP. We postulate that executive function deficits play a key role in the perception and maintenance of tinnitus.

20.

Purpose

To evaluate the clinical features and investigate their relationship with visual function in Japanese patients with acute zonal occult outer retinopathy (AZOOR).

Methods

Fifty-two eyes of 38 Japanese AZOOR patients (31 female and 7 male patients; mean age at first visit, 35.0 years; median follow-up duration, 31 months) were retrospectively collected: 31 untreated eyes with good visual acuity and 21 systemic corticosteroid-treated eyes with progressive visual acuity loss. Variables affecting the logMAR values of best-corrected visual acuity (BCVA) and the mean deviation (MD) on Humphrey perimetry at initial and final visits were examined using multiple stepwise linear regression analysis.

Results

In untreated eyes, the mean MD at the final visit was significantly higher than that at the initial visit (P = 0.00002). In corticosteroid-treated eyes, the logMAR BCVA and MD at the final visit were significantly better than the initial values (P = 0.007 and P = 0.02, respectively). The final logMAR BCVA was 0.0 or less in 85% of patients. Variables affecting initial visual function were moderate anterior vitreous cells, myopia severity, and a-wave amplitudes on electroretinography; factors affecting final visual function were the initial MD values, female sex, moderate anterior vitreous cells, and retinal atrophy.

Conclusions

Our data indicated that visual function in the enrolled patients improved significantly, either spontaneously or after systemic corticosteroid therapy, suggesting that Japanese patients with AZOOR have good visual outcomes during the follow-up period of this study. Furthermore, initial visual field defects, gender, anterior vitreous cells, and retinal atrophy affected final visual function in these patients.
