Similar Articles
20 similar articles found.
1.
Klinge A  Beutelmann R  Klump GM 《PloS one》2011,6(10):e26124
The amount of masking of sounds from one source (signals) by sounds from a competing source (maskers) heavily depends on the sound characteristics of the masker and the signal and on their relative spatial location. Numerous studies investigated the ability to detect a signal in a speech or a noise masker or the effect of spatial separation of signal and masker on the amount of masking, but there is a lack of studies investigating the combined effects of many cues on the masking as is typical for natural listening situations. The current study using free-field listening systematically evaluates the combined effects of harmonicity and inharmonicity cues in multi-tone maskers and cues resulting from spatial separation of target signal and masker on the detection of a pure tone in a multi-tone or a noise masker. A linear binaural processing model was implemented to predict the masked thresholds in order to estimate whether the observed thresholds can be accounted for by energetic masking in the auditory periphery or whether other effects are involved. Thresholds were determined for combinations of two target frequencies (1 and 8 kHz), two spatial configurations (masker and target either co-located or spatially separated by 90 degrees azimuth), and five different masker types (four complex multi-tone stimuli, one noise masker). A spatial separation of target and masker resulted in a release from masking for all masker types. The amount of masking significantly depended on the masker type and frequency range. The various harmonic and inharmonic relations between target and masker or between components of the masker resulted in a complex pattern of increased or decreased masked thresholds in comparison to the predicted energetic masking. The results indicate that harmonicity cues affect the detectability of a tonal target in a complex masker.
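The energetic-masking baseline that the study's linear model predicts is, at its core, a power-spectrum calculation: sum the masker power admitted by an auditory filter centered on the target. A minimal numpy sketch, assuming the standard roex filter shape and ERB formula from the psychoacoustics literature (the study's actual binaural model is more elaborate; all levels and frequencies below are illustrative):

```python
import numpy as np

def erb_bandwidth(fc_hz):
    # Glasberg & Moore (1990) equivalent rectangular bandwidth (ERB)
    return 24.7 * (4.37 * fc_hz / 1000.0 + 1.0)

def roex_weight(f_hz, fc_hz):
    # Rounded-exponential (roex) auditory-filter weight at frequency f
    p = 4.0 * fc_hz / erb_bandwidth(fc_hz)
    g = np.abs(np.asarray(f_hz, float) - fc_hz) / fc_hz
    return (1.0 + p * g) * np.exp(-p * g)

def energetic_masking_db(target_hz, masker_freqs_hz, masker_levels_db):
    """Predicted masked threshold (dB): total masker power admitted by
    the auditory filter centered on the target frequency."""
    powers = 10.0 ** (np.asarray(masker_levels_db, float) / 10.0)
    weights = roex_weight(masker_freqs_hz, target_hz)
    return 10.0 * np.log10(np.sum(weights * powers))

# Five-component multi-tone masker around a 1-kHz target (illustrative levels)
v = energetic_masking_db(1000.0, [800.0, 900.0, 1000.0, 1100.0, 1200.0], [60.0] * 5)
```

Deviations of measured thresholds from such a prediction are what the abstract attributes to harmonicity cues rather than peripheral energetic masking.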

2.
The goal of the study was to extend knowledge of how the auditory system discriminates complex sound signals in masking noise. To this end, the influence of masking noise on the detection of a shift of a rippled spectrum was studied in normal listeners. The signal was a shift of ripple phase within a 0.5-oct-wide rippled spectrum centered at 2 kHz. The ripples were frequency-proportional (throughout the band, ripple spacing was a constant proportion of the ripple center frequency). The simultaneous masker was a 0.5-oct noise band below, on, or above the signal band. Both the low-frequency (center frequency 1 kHz) and on-frequency (same center frequency as the signal) maskers increased the thresholds for detecting the ripple phase shift. However, the dependence of threshold on masker level differed between the two maskers. For the on-frequency masker, the masking effect depended primarily on the masker/signal ratio: the threshold increased steeply at a ratio of 5 dB, and no shift was detectable at a ratio of 10 dB. For the low-frequency masker, the masking effect depended primarily on the masker level: the threshold increased at a masker level of 80 dB SPL, and no shift was detectable at a masker level of 90 dB (for a signal level of 50 dB) or 100 dB (for a signal level of 80 dB). The high-frequency masker had little effect. The data were successfully simulated using an excitation-pattern model. In this model, the effect of the on-frequency masker appeared to be primarily due to a decrease of ripple depth, and the effect of the low-frequency masker to widening of the auditory filters at high sound levels.
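The excitation-pattern account can be illustrated in a few lines: smoothing a frequency-proportional rippled spectrum by something of auditory-filter width reduces the ripple depth on which phase-shift detection relies. A hedged sketch with made-up ripple density and smoothing width (not the study's parameters):

```python
import numpy as np

def rippled_spectrum(freqs_hz, center_hz, density, phase=0.0):
    # Frequency-proportional ripples: sinusoidal on a log-frequency axis,
    # so ripple spacing is a constant proportion of the ripple frequency
    return 1.0 + np.cos(2.0 * np.pi * density * np.log2(freqs_hz / center_hz) + phase)

def ripple_depth(spectrum):
    return spectrum.max() - spectrum.min()

# 0.5-oct band around 2 kHz, log-spaced samples
f = 2000.0 * 2.0 ** np.linspace(-0.25, 0.25, 512)
spec = rippled_spectrum(f, 2000.0, density=8.0)

# Crude "excitation pattern": a moving average over ~1/16 oct stands in
# for smearing by the auditory filters
kernel = np.ones(64) / 64.0
excitation = np.convolve(spec, kernel, mode="valid")
```

Widening the smoothing window (as the model does for the auditory filters at high levels) flattens the ripples further, which is the mechanism invoked for the low-frequency masker.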

3.
The detection of a change in the modulation pattern of a (target) carrier frequency, fc (for example a change in the depth of amplitude or frequency modulation, AM or FM) can be adversely affected by the presence of other modulated sounds (maskers) at frequencies remote from fc, an effect called modulation discrimination interference (MDI). MDI cannot be explained in terms of interaction of the sounds in the peripheral auditory system. It may result partly from a tendency for sounds which are modulated in a similar way to be perceptually 'grouped', i.e. heard as a single sound. To test this idea, MDI for the detection of a change in AM depth was measured as a function of stimulus variables known to affect perceptual grouping, namely overall duration and onset and offset asynchrony between the masking and target sounds. In parallel experiments, subjects were presented with a series of pairs of sounds, the target alone and the target with maskers, and were asked to rate how clearly the modulation of the target could be heard in the complex mixture. The results suggest that two factors contribute to MDI. One factor is difficulty in hearing a pitch corresponding to the target frequency. This factor appears to be strongly affected by perceptual grouping. Its effects can be reduced or abolished by asynchronous gating of the target and masker. The second factor is a specific difficulty in hearing the modulation of the target, or in distinguishing that modulation from the modulation of other sounds that are present. This factor has effects even under conditions promoting perceptual segregation of the target and masker.

4.
For a gleaning bat hunting prey from the ground, rustling sounds generated by prey movements are essential to invoke a hunting behaviour. The detection of prey-generated rustling sounds may depend heavily on the time structure of the prey-generated and the masking sounds due to their spectral similarity. Here, we systematically investigate the effect of the temporal structure on psychophysical rustling-sound detection in the gleaning bat, Megaderma lyra. A recorded rustling sound serves as the signal; the maskers are either Gaussian noise or broadband noise with various degrees of envelope fluctuations. Exploratory experiments indicate that the selective manipulation of the temporal structure of the rustling sound does not influence its detection in a Gaussian-noise masker. The results of the main experiment show, however, that the temporal structure of the masker has a strong and systematic effect on rustling-sound detection: When the width of irregularly spaced gaps in the masker exceeded about 0.3 ms, rustling-sound detection improved monotonically with increasing gap duration. Computer simulations of this experiment reveal that a combined detection strategy of spectral and temporal analysis underlies rustling-sound detection with fluctuating masking sounds.

5.
This paper reviews the basic aspects of auditory processing that play a role in the perception of speech. The frequency selectivity of the auditory system, as measured using masking experiments, is described and used to derive the internal representation of the spectrum (the excitation pattern) of speech sounds. The perception of timbre and distinctions in quality between vowels are related to both static and dynamic aspects of the spectra of sounds. The perception of pitch and its role in speech perception are described. Measures of the temporal resolution of the auditory system are described and a model of temporal resolution based on a sliding temporal integrator is outlined. The combined effects of frequency and temporal resolution can be modelled by calculation of the spectro-temporal excitation pattern, which gives good insight into the internal representation of speech sounds. For speech presented in quiet, the resolution of the auditory system in frequency and time usually markedly exceeds the resolution necessary for the identification or discrimination of speech sounds, which partly accounts for the robust nature of speech perception. However, for people with impaired hearing, speech perception is often much less robust.

6.
Temporal cues are important for some forms of auditory processing, such as echolocation. Among odontocetes (toothed whales, dolphins, and porpoises), it has been suggested that porpoises may have temporal processing abilities which differ from other odontocetes because of their relatively narrow auditory filters and longer duration echolocation signals. This study examined auditory temporal resolution in two Yangtze finless porpoises (Neophocaena phocaenoides asiaeorientalis) using auditory evoked potentials (AEPs) to measure: (a) rate following responses and modulation rate transfer function for 100 kHz centered pulse sounds and (b) hearing thresholds and response amplitudes generated by individual pulses of different durations. The animals followed pulses well at modulation rates up to 1,250 Hz, after which response amplitudes declined until extinguished beyond 2,500 Hz. The subjects had significantly better hearing thresholds for longer, narrower-band pulses similar to porpoise echolocation signals compared to brief, broadband sounds resembling dolphin clicks. Results indicate that the Yangtze finless porpoise follows individual acoustic signals at rates similar to other odontocetes tested. Relatively good sensitivity for longer duration, narrow-band signals suggests that finless porpoise hearing is well suited to detect their unique echolocation signals.
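The shape of a modulation rate transfer function reported here, good following at low pulse rates and a decline toward extinction at high rates, can be mimicked with a single low-pass stage: pass pulse trains of increasing rate through a one-pole filter and measure the output component at the pulse rate relative to the input. A toy numpy sketch (the 1250-Hz corner merely echoes the rate reported above; nothing here models porpoise physiology):

```python
import numpy as np

def rate_following_gain(rate_hz, fs=100_000, dur=0.1, corner_hz=1250.0):
    """Amplitude at the pulse rate after a one-pole low-pass, relative
    to the same component in the input -- a crude modulation transfer."""
    n = int(fs * dur)
    x = np.zeros(n)
    x[::int(fs / rate_hz)] = 1.0          # impulse train at the pulse rate
    a = np.exp(-2.0 * np.pi * corner_hz / fs)
    y = np.zeros(n)
    acc = 0.0
    for i in range(n):                    # one-pole low-pass: y = a*y + (1-a)*x
        acc = a * acc + (1.0 - a) * x[i]
        y[i] = acc
    k = int(rate_hz * dur)                # FFT bin at the pulse rate
    return np.abs(np.fft.rfft(y)[k]) / np.abs(np.fft.rfft(x)[k])

gains = {r: rate_following_gain(r) for r in (500, 1000, 2500)}
```

The gain falls monotonically with pulse rate, qualitatively reproducing the decline beyond the corner frequency.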

7.
Vélez A  Bee MA 《Animal behaviour》2011,(6):1319-1327
Dip listening refers to our ability to catch brief "acoustic glimpses" of speech and other sounds when fluctuating background noise levels momentarily decrease. Exploiting dips in natural fluctuations of noise contributes to our ability to overcome the "cocktail party problem" of understanding speech in multi-talker social environments. We presently know little about how nonhuman animals solve analogous communication problems. Here, we asked whether female grey treefrogs (Hyla chrysoscelis) might benefit from dip listening in selecting a mate in the noisy social setting of a breeding chorus. Consistent with a dip listening hypothesis, subjects recognized conspecific calls at lower thresholds when the dips in a chorus-like noise masker were long enough to allow glimpses of nine or more consecutive pulses. No benefits of dip listening were observed when dips were shorter and included five or fewer pulses. Recognition thresholds were higher when the noise fluctuated at a rate similar to the pulse rate of the call. In a second experiment, advertisement calls comprising six to nine pulses were necessary to elicit responses under quiet conditions. Together, these results suggest that in frogs, the benefits of dip listening are constrained by neural mechanisms underlying temporal pattern recognition. These constraints have important implications for the evolution of male signalling strategies in noisy social environments.

8.
The auditory system creates a neuronal representation of the acoustic world based on spectral and temporal cues present at the listener's ears, including cues that potentially signal the locations of sounds. Discrimination of concurrent sounds from multiple sources is especially challenging. The current study is part of an effort to better understand the neuronal mechanisms governing this process, which has been termed "auditory scene analysis". In particular, we are interested in spatial release from masking by which spatial cues can segregate signals from other competing sounds, thereby overcoming the tendency of overlapping spectra and/or common temporal envelopes to fuse signals with maskers. We studied detection of pulsed tones in free-field conditions in the presence of concurrent multi-tone non-speech maskers. In "energetic" masking conditions, in which the frequencies of maskers fell within the ±1/3-octave band containing the signal, spatial release from masking at low frequencies (∼600 Hz) was found to be about 10 dB. In contrast, negligible spatial release from energetic masking was seen at high frequencies (∼4000 Hz). We observed robust spatial release from masking in broadband "informational" masking conditions, in which listeners could confuse signal with masker even though there was no spectral overlap. Substantial spatial release was observed in conditions in which the onsets of the signal and all masker components were synchronized, and spatial release was even greater under asynchronous conditions. Spatial cues limited to high frequencies (>1500 Hz), which could have included interaural level differences and the better-ear effect, produced only limited improvement in signal detection. Substantially greater improvement was seen for low-frequency sounds, for which interaural time differences are the dominant spatial cue.
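The claim that interaural time differences (ITDs) dominate at low frequencies can be made concrete with the classic spherical-head (Woodworth) approximation, ITD = (a/c)(θ + sin θ): for a human-sized head, the ITD at 90° azimuth is roughly 0.65 ms, so only waveforms with periods comfortably longer than that, i.e. low frequencies, carry unambiguous ITD phase cues. A sketch assuming an 8.75-cm head radius (a textbook value, not a parameter from this study):

```python
import math

def woodworth_itd(azimuth_deg, head_radius_m=0.0875, c=343.0):
    # Spherical-head (Woodworth) ITD model: (a/c) * (theta + sin(theta))
    theta = math.radians(azimuth_deg)
    return (head_radius_m / c) * (theta + math.sin(theta))

itd_90 = woodworth_itd(90.0)                # seconds, at 90 degrees azimuth
ambiguous_above_hz = 1.0 / (2.0 * itd_90)   # half-period equals the ITD
```

Above roughly this frequency the interaural phase becomes ambiguous, which is consistent with the abstract's finding that high-frequency spatial cues (level differences, better-ear listening) yielded less release from masking.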

9.
Selectively attending to task-relevant sounds whilst ignoring background noise is one of the most amazing feats performed by the human brain. Here, we studied the underlying neural mechanisms by recording magnetoencephalographic (MEG) responses of 14 healthy human subjects while they performed a near-threshold auditory discrimination task vs. a visual control task of similar difficulty. The auditory stimuli consisted of notch-filtered continuous noise masker sounds, and of 1020-Hz target tones occasionally replacing 1000-Hz standard tones of 300-ms duration that were embedded at the center of the notches, the widths of which were parametrically varied. As a control for masker effects, tone-evoked responses were additionally recorded without masker sound. Selective attention to tones significantly increased the amplitude of the onset M100 response at 100 ms to the standard tones during presence of the masker sounds, especially with notches narrower than the critical band. Further, attention modulated the sustained response most clearly in the 300–400 ms time range from sound onset, with narrower notches than in the case of the M100, thus selectively reducing the masker-induced suppression of the tone-evoked response. Our results show evidence of a multiple-stage filtering mechanism of sensory input in the human auditory cortex: 1) one at early (100 ms) latencies bilaterally in posterior parts of the secondary auditory areas, and 2) adaptive filtering of attended sounds from task-irrelevant background masker at longer latency (300 ms) in more medial auditory cortical regions, predominantly in the left hemisphere, enhancing processing of near-threshold sounds.

10.
Auditory evoked potentials (AEP) were used to measure the hearing range and auditory sensitivity of the American sand lance Ammodytes americanus. Responses to amplitude-modulated tone pips indicated that the hearing range extended from 50 to 400 Hz. Sound pressure thresholds were lowest between 200 and 400 Hz. Particle acceleration thresholds showed an improved sensitivity notch at 200 Hz but no substantial differences between frequencies and only a slight improvement in hearing abilities at lower frequencies. The hearing range was similar to that of the Pacific sand lance Ammodytes personatus, and variations between species may be due to differences in threshold evaluation methods. AEPs were also recorded in response to pulsed sounds simulating humpback whale Megaptera novaeangliae foraging vocalizations termed megapclicks. Responses were generated with pulses containing significant energy below 400 Hz. No responses were recorded using pulses with peak energy above 400 Hz. These results show that A. americanus can detect the particle motion component of low-frequency tones and pulse sounds, including those similar to the low-frequency components of megapclicks. Ammodytes americanus hearing may be used to detect environmental cues and the pulsed signals of mysticete predators.

11.
In utero RNAi of the dyslexia-associated gene Kiaa0319 in rats (KIA-) degrades cortical responses to speech sounds and increases trial-by-trial variability in onset latency. We tested the hypothesis that KIA- rats would be impaired at speech sound discrimination. KIA- rats needed twice as much training in quiet conditions to perform at control levels and remained impaired at several speech tasks. Focused training using truncated speech sounds was able to normalize speech discrimination in quiet and background noise conditions. Training also normalized trial-by-trial neural variability and temporal phase locking. Cortical activity from speech trained KIA- rats was sufficient to accurately discriminate between similar consonant sounds. These results provide the first direct evidence that assumed reduced expression of the dyslexia-associated gene KIAA0319 can cause phoneme processing impairments similar to those seen in dyslexia and that intensive behavioral therapy can eliminate these impairments.

12.
The most common complaint of older hearing impaired (OHI) listeners is difficulty understanding speech in the presence of noise. However, tests of consonant-identification and sentence reception threshold (SeRT) provide different perspectives on the magnitude of impairment. Here we quantified speech perception difficulties in 24 OHI listeners in unaided and aided conditions by analyzing (1) consonant-identification thresholds and consonant confusions for 20 onset and 20 coda consonants in consonant-vowel-consonant (CVC) syllables presented at consonant-specific signal-to-noise (SNR) levels, and (2) SeRTs obtained with the Quick Speech in Noise Test (QSIN) and the Hearing in Noise Test (HINT). Compared to older normal hearing (ONH) listeners, nearly all unaided OHI listeners showed abnormal consonant-identification thresholds, abnormal consonant confusions, and reduced psychometric function slopes. Average elevations in consonant-identification thresholds exceeded 35 dB, correlated strongly with impairments in mid-frequency hearing, and were greater for hard-to-identify consonants. Advanced digital hearing aids (HAs) improved average consonant-identification thresholds by more than 17 dB, with significant HA benefit seen in 83% of OHI listeners. HAs partially normalized consonant-identification thresholds, reduced abnormal consonant confusions, and increased the slope of psychometric functions. Unaided OHI listeners showed much smaller elevations in SeRTs (mean 6.9 dB) than in consonant-identification thresholds, and SeRTs in unaided listening conditions correlated strongly (r = 0.91) with identification thresholds of easily identified consonants. HAs produced minimal SeRT benefit (2.0 dB), with only 38% of OHI listeners showing significant improvement. HA benefit on SeRTs was accurately predicted (r = 0.86) by HA benefit on easily identified consonants. Consonant-identification tests can accurately predict sentence processing deficits and HA benefit in OHI listeners.
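The identification thresholds and psychometric-function slopes reported above presuppose fitting a function to proportion correct versus SNR. A minimal logistic fit by grid search, on synthetic data (the 50%-correct point stands in for the threshold; real analyses typically use maximum likelihood and a chance-rate floor):

```python
import numpy as np

def psychometric(snr_db, threshold_db, slope):
    # Logistic psychometric function: proportion correct vs. SNR (dB)
    return 1.0 / (1.0 + np.exp(-slope * (snr_db - threshold_db)))

def fit_threshold(snrs, p_correct,
                  slopes=np.arange(0.2, 2.01, 0.05),
                  thresholds=np.arange(-10.0, 20.1, 0.1)):
    # Least-squares grid search for the 50%-correct threshold and slope
    best_t, best_s, best_err = None, None, np.inf
    for t in thresholds:
        for s in slopes:
            err = np.sum((psychometric(snrs, t, s) - p_correct) ** 2)
            if err < best_err:
                best_t, best_s, best_err = t, s, err
    return best_t, best_s

snrs = np.array([-6.0, -3.0, 0.0, 3.0, 6.0, 9.0])
p = psychometric(snrs, 3.0, 0.8)      # synthetic data: true threshold 3 dB SNR
t_hat, s_hat = fit_threshold(snrs, p)
```

A shallower recovered slope on real data is the "reduced psychometric function slope" described for the unaided OHI listeners.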

13.
The aims of this study were (1) to document the recognition performance of environmental sounds (ESs) in Mandarin-speaking children with cochlear implants (CIs) and to analyze the factors possibly associated with ESs recognition; (2) to examine the relationship between perception of ESs and receptive vocabulary level; and (3) to explore the acoustic factors relevant to perceptual outcomes of daily ESs in pediatric CI users. Forty-seven prelingually deafened children between 4 and 10 years of age participated in this study. They were divided into pre-school (group A: age 4–6) and school-age (group B: age 7–10) groups. The Sound Effects Recognition Test (SERT) and the Chinese version of the revised Peabody Picture Vocabulary Test (PPVT-R) were used to assess auditory perception ability. The average correct percentage on the SERT was 61.2% in the preschool group and 72.3% in the older group. There was no significant difference between the two groups. The ESs recognition performance of children with CIs was poorer than that of their hearing peers (90% on average). No correlation existed between ESs recognition and receptive vocabulary comprehension. Two predictive factors, pre-implantation residual hearing and duration of CI use, were found to be associated with recognition performance of daily-encountered ESs. Acoustically, sounds with distinct temporal patterning were easier for children with CIs to identify. In conclusion, we have demonstrated that ESs recognition is not easy for children with CIs and that a low correlation existed between linguistic sounds and ESs recognition in these subjects. Recognition of ESs in children with CIs can only be achieved through natural exposure to daily-encountered auditory stimuli when sounds other than speech are less emphasized in routine verbal/oral habilitation programs. Therefore, task-specific measures other than speech materials can be helpful to capture the full profile of auditory perceptual progress after implantation.

14.
For deaf individuals with residual low-frequency acoustic hearing, combined use of a cochlear implant (CI) and hearing aid (HA) typically provides better speech understanding than with either device alone. Because of coarse spectral resolution, CIs do not provide fundamental frequency (F0) information that contributes to understanding of tonal languages such as Mandarin Chinese. The HA can provide good representation of F0 and, depending on the range of aided acoustic hearing, first and second formant (F1 and F2) information. In this study, Mandarin tone, vowel, and consonant recognition in quiet and noise was measured in 12 adult Mandarin-speaking bimodal listeners with the CI-only and with the CI+HA. Tone recognition was significantly better with the CI+HA in noise, but not in quiet. Vowel recognition was significantly better with the CI+HA in quiet, but not in noise. There was no significant difference in consonant recognition between the CI-only and the CI+HA in quiet or in noise. There was a wide range in bimodal benefit, with improvements often greater than 20 percentage points in some tests and conditions. The bimodal benefit was compared to CI subjects' HA-aided pure-tone average (PTA) thresholds between 250 and 2000 Hz; subjects were divided into two groups: "better" PTA (<50 dB HL) or "poorer" PTA (>50 dB HL). The bimodal benefit differed significantly between groups only for consonant recognition. The bimodal benefit for tone recognition in quiet was significantly correlated with CI experience, suggesting that bimodal CI users learn to better combine low-frequency spectro-temporal information from acoustic hearing with temporal envelope information from electric hearing. Given the small number of subjects in this study (n = 12), further research with Chinese bimodal listeners may provide more information regarding the contribution of acoustic and electric hearing to tonal language perception.

15.
The parasitoid tachinid fly Homotrixa alleni detects its hosts by their acoustic signals. The tympanal organ of the fly is located at the prothorax and contains scolopidial sensory units of different size and orientation. The tympanal membrane vibrates in the frequency range of approximately 4–35 kHz, which is also reflected in the hearing threshold measured at the neck connective. The auditory organ is not tuned to the peak frequency (5 kHz) of the main host, the bush cricket Sciarasaga quadrata. Auditory afferents project in the three thoracic neuromeres. Most of the ascending interneurons branch in all thoracic neuromeres and terminate in the deutocerebrum of the brain. The interneurons do not differ considerably in frequency tuning, but in their sensitivity with lowest thresholds around 30 dB SPL. Suprathreshold responses of most neurons depend on frequency and intensity, indicating inhibitory influence at higher intensities. Some neurons respond particularly well at low frequency sounds (around 5 kHz) and high intensities (80–90 dB SPL), and thus may be involved in detection of the primary host, S. quadrata. The auditory system of H. alleni contains auditory interneurons reacting in a wide range of temporal patterns from strictly phasic to tonic and with clear differences in frequency responses.

16.
The objective was to determine if one of the neural temporal features, neural adaptation, can account for the across-subject variability in behavioral measures of temporal processing and speech perception performance in cochlear implant (CI) recipients. Neural adaptation is the phenomenon in which neural responses are the strongest at the beginning of the stimulus and decline following stimulus repetition (e.g., stimulus trains). It is unclear how this temporal property of neural responses relates to psychophysical measures of temporal processing (e.g., gap detection) or speech perception. The adaptation of the electrical compound action potential (ECAP) was obtained using 1000 pulses per second (pps) biphasic pulse trains presented directly to the electrode. The adaptation of the late auditory evoked potential (LAEP) was obtained using a sequence of 1-kHz tone bursts presented acoustically, through the cochlear implant. Behavioral temporal processing was measured using the Random Gap Detection Test at the most comfortable listening level. Consonant-nucleus-consonant (CNC) words and AzBio sentences were also tested. The results showed that both ECAP and LAEP display adaptive patterns, with a substantial across-subject variability in the amount of adaptation. No correlations between the amount of neural adaptation and gap detection thresholds (GDTs) or speech perception scores were found. The correlations between the degree of neural adaptation and demographic factors showed that CI users having more LAEP adaptation were likely to be those implanted at a younger age than CI users with less LAEP adaptation. The results suggested that neural adaptation, at least this feature alone, cannot account for the across-subject variability in temporal processing ability in CI users. However, the finding that the LAEP adaptive pattern was less prominent in the CI group compared to the normal hearing group may suggest the important role of a normal adaptation pattern at the cortical level in speech perception.
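The "amount of adaptation" such studies quantify is commonly summarized as the drop from the first response in a train to the steady state, normalized by the first response. A hedged sketch of that summary statistic on synthetic amplitudes (real ECAP/LAEP analyses differ in detail):

```python
import numpy as np

def adaptation_index(amplitudes):
    """Normalized adaptation: drop from the first response to the
    steady state (mean of the last three responses), as a fraction
    of the first response. 0 = no adaptation, 1 = full extinction."""
    a = np.asarray(amplitudes, float)
    steady = a[-3:].mean()
    return (a[0] - steady) / a[0]

# Synthetic per-pulse amplitudes: exponential decay toward a plateau
n = np.arange(10)
amps = 0.4 + 0.6 * np.exp(-n / 2.0)
idx = adaptation_index(amps)
```

Comparing this index across subjects is what exposes the across-subject variability the abstract describes.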

17.
Sounds were produced by the topmouth minnow Pseudorasbora parva, a common Eurasian cyprinid, during feeding but not during intraspecific interactions. Feeding sounds were short broadband pulses with main energies between 100 and 800 Hz. They varied in their characteristics (number of single sounds per feeding sequence, sound duration and period, and sound pressure level) depending on the food type (chironomid larvae, Tubifex worms and flake food). The loudest sounds were emitted when food was taken up at the water surface, most probably reflecting 'suctorial' feeding. Auditory sensitivities were determined between 100 and 4000 Hz utilizing the auditory evoked potentials recording technique. Under laboratory conditions and in the presence of natural ambient noise recorded in Lake Neusiedl in eastern Austria, best hearing sensitivities were between 300 and 800 Hz (57 dB re 1 μPa vs. 72 dB in the presence of ambient noise). Threshold-to-noise ratios were positively correlated with sound frequency. The correlation between sound spectra and auditory thresholds revealed that P. parva can detect conspecific sounds at distances up to 40 cm under ambient noise conditions. Thus, feeding sounds could serve as an auditory cue for the presence of food during foraging.
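A detection-distance estimate like the 40 cm above follows from comparing source level, masked threshold, and transmission loss. Under spherical spreading (TL = 20·log10(r/r0)), the maximum range is r = r0·10^((SL − threshold)/20). A sketch with an assumed source level (the 72 dB masked threshold echoes the abstract; the 84 dB source level at 10 cm is purely illustrative):

```python
def detection_range_m(source_level_db, threshold_db, ref_dist_m=0.1):
    """Maximum detection distance assuming spherical spreading:
    received level = SL - 20 * log10(r / r0)."""
    return ref_dist_m * 10.0 ** ((source_level_db - threshold_db) / 20.0)

# Illustrative: a feeding sound 12 dB above the masked threshold at 10 cm
r = detection_range_m(source_level_db=84.0, threshold_db=72.0)
```

With these made-up numbers the range works out to about 0.4 m; real estimates also depend on the measured spreading loss in shallow water.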

18.
Accurate pitch perception of harmonic complex tones is widely believed to rely on temporal fine structure information conveyed by the precise phase-locked responses of auditory-nerve fibers. However, accurate pitch perception remains possible even when spectrally resolved harmonics are presented at frequencies beyond the putative limits of neural phase locking, and it is unclear whether residual temporal information, or a coarser rate-place code, underlies this ability. We addressed this question by measuring human pitch discrimination at low and high frequencies for harmonic complex tones, presented either in isolation or in the presence of concurrent complex-tone maskers. We found that concurrent complex-tone maskers impaired performance at both low and high frequencies, although the impairment introduced by adding maskers at high frequencies relative to low frequencies differed between the tested masker types. We then combined simulated auditory-nerve responses to our stimuli with ideal-observer analysis to quantify the extent to which performance was limited by peripheral factors. We found that the worsening of both frequency discrimination and F0 discrimination at high frequencies could be well accounted for (in relative terms) by optimal decoding of all available information at the level of the auditory nerve. A Python package is provided to reproduce these results, and to simulate responses to acoustic stimuli from the three previously published models of the human auditory nerve used in our analyses.
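The ideal-observer step can be illustrated with the Cramér–Rao bound for a Poisson rate-place code: Fisher information J(f) = T·Σᵢ rᵢ′(f)²/rᵢ(f), and the best achievable frequency-discrimination threshold scales as 1/√J. A toy sketch with Gaussian tuning curves (an assumed channel layout; nothing here reproduces the paper's auditory-nerve models):

```python
import numpy as np

def fisher_information(f, centers, bw, peak_rate=100.0, duration=0.1):
    # Poisson rate-place code: J(f) = T * sum_i r_i'(f)^2 / r_i(f)
    r = peak_rate * np.exp(-0.5 * ((f - centers) / bw) ** 2)
    dr = r * (centers - f) / bw ** 2            # d r_i / d f
    return duration * np.sum(dr ** 2 / np.maximum(r, 1e-12))

def crb_threshold_hz(f, centers, bw, duration=0.1):
    # Cramér–Rao lower bound on the frequency-discrimination threshold
    return 1.0 / np.sqrt(fisher_information(f, centers, bw, duration=duration))

centers = np.arange(200.0, 4000.0, 25.0)        # hypothetical channel CFs
t1 = crb_threshold_hz(1000.0, centers, bw=100.0)
t2 = crb_threshold_hz(1000.0, centers, bw=100.0, duration=0.4)
```

Quadrupling the observation time halves the bound, the kind of relative comparison an ideal-observer analysis uses to ask whether peripheral information alone predicts behavior.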

19.
Babushina ES 《Biofizika》1999,44(6):1101-1108
The interaction of complex sounds with the body tissues of the Black Sea dolphin (Tursiops truncatus) was studied by the method of instrumental conditioned reflexes with food reinforcement. The thresholds of detecting underwater acoustic signals of different frequencies for the dolphin and the northern fur seal (Callorhinus ursinus) were measured as a function of pulse duration under conditions of full and partial (head above water) submergence of the animals in water. It was found that sound conduction through dolphin tissues was more effective than that in the northern fur seal over a wide frequency range. Presumably, the process of sound propagation in the dolphin is accompanied by changes in the amplitude-frequency structure of broad-band sounds. Temporal summation in dolphin hearing was observed at all frequencies under conditions of full and partial submergence, whereas in the northern fur seal it was nearly absent at a frequency of 5 kHz under the conditions of head lifting above water.

20.
When two tones are presented in a short time interval, the response to the second tone is suppressed. This phenomenon is referred to as forward suppression. To address the effect of masker laterality on forward suppression, magnetoencephalographic responses were investigated for eight subjects with normal hearing when the preceding maskers were presented ipsilaterally, contralaterally, and binaurally. We employed three masker intensity conditions: the ipsilateral-strong, left-right-balanced, and contralateral-strong conditions. Regarding the responses to the maskers without signal, the N1m amplitude evoked by the left and binaural maskers was significantly larger than that evoked by the right masker for the left-strong and left-right-balanced conditions. No significant difference was observed for the right-strong condition. The subsequent N1m amplitudes were attenuated by the presence of the left, binaural, and right maskers for all conditions. For the left- and right-strong conditions, the subsequent N1m amplitude in the presence of the left masker was smaller than those of the binaural and right maskers. No difference was observed between the binaural and right masker presentations. For the left-right-balanced condition, the subsequent N1m amplitude decreased in the presence of the right, binaural, and left maskers in that order. If the preceding activity reflected the ability to suppress the subsequent activity, the forward suppression by the left masker would be superior to that by the right masker for the left-strong and left-right-balanced conditions. Furthermore, the forward suppression by the binaural masker would be expected to be superior to that by the left masker owing to additional afferent activity from the right ear. Thus, the current results suggest that forward suppression by ipsilateral maskers is superior to that by contralateral maskers, although both maskers evoked N1m amplitudes to the same degree. An additional masker at the contralateral ear can attenuate the forward suppression by the ipsilateral masker.

