Similar Literature
A total of 20 similar records were retrieved.
1.
Temporal summation was estimated by measuring detection thresholds for pulses of 1–50 ms duration in the presence of noise maskers. The purpose of the study was to examine how the spectral profiles and intensities of noise maskers affect temporal summation, to look for signs of peripheral processing of pulses with various frequency-time structures in auditory responses, and to test whether temporal summation can be used as a measure relevant to speech recognition. The center frequencies of the pulses and maskers were matched. The maskers had amplitude spectra with ripple structures of two types: in some maskers the center frequency coincided with a spectral peak, whereas in others it coincided with a spectral dip (so-called on- and off-maskers). When the auditory system resolved the masker ripples, the difference between detection thresholds for stimuli presented with each of the two masker types was nonzero. Estimates of temporal summation and of the threshold difference between the on- and off-masker conditions allowed conclusions about auditory sensitivity and about the resolution of the masker spectral structure (frequency selectivity) for pulses of various durations within local frequency regions. To estimate the effect of the dynamic properties of hearing on sensitivity and frequency selectivity, we varied the masker intensity. Temporal summation was measured with on- and off-maskers of various intensities in two frequency ranges (2 and 4 kHz) in four subjects with normal hearing and one subject with age-related hearing impairment who complained of reduced speech recognition in noise. Pulses shorter than 10 ms were treated as simple models of consonant sounds, whereas tone pulses longer than 10 ms were treated as simple models of vowel sounds. In subjects with normal hearing, at moderate masker intensities we observed enhanced temporal summation for the short pulses (consonant models) and improved resolution of the rippled masker spectra for both short and tone pulses (consonant and vowel models). We suppose that the enhanced summation is related to refractoriness of auditory-nerve fibers. In the 4-kHz range, the subject with age-related hearing impairment did not resolve the ripple structure of the maskers in the presence of the short pulses (consonant models). We suppose that this impairment was caused by abnormal synchronization of the auditory-nerve fiber responses evoked by the pulses, which results in reduced speech recognition.

2.
Klinge A, Beutelmann R, Klump GM. PLoS ONE 2011, 6(10): e26124
The amount of masking of sounds from one source (signals) by sounds from a competing source (maskers) depends heavily on the sound characteristics of the masker and the signal and on their relative spatial location. Numerous studies have investigated the ability to detect a signal in a speech or a noise masker, or the effect of spatial separation of signal and masker on the amount of masking, but few studies have examined the combined effects of multiple cues on masking, as is typical of natural listening situations. The current study, using free-field listening, systematically evaluates the combined effects of harmonicity and inharmonicity cues in multi-tone maskers and cues resulting from spatial separation of target signal and masker on the detection of a pure tone in a multi-tone or a noise masker. A linear binaural processing model was implemented to predict the masked thresholds in order to estimate whether the observed thresholds can be accounted for by energetic masking in the auditory periphery or whether other effects are involved. Thresholds were determined for combinations of two target frequencies (1 and 8 kHz), two spatial configurations (masker and target either co-located or spatially separated by 90 degrees azimuth), and five different masker types (four complex multi-tone stimuli, one noise masker). A spatial separation of target and masker resulted in a release from masking for all masker types. The amount of masking depended significantly on the masker type and frequency range. The various harmonic and inharmonic relations between target and masker, or between components of the masker, resulted in a complex pattern of increased or decreased masked thresholds in comparison to the predicted energetic masking. The results indicate that harmonicity cues affect the detectability of a tonal target in a complex masker.

3.
When two tones are presented in a short time interval, the response to the second tone is suppressed. This phenomenon is referred to as forward suppression. To address the effect of masker laterality on forward suppression, magnetoencephalographic responses were investigated in eight subjects with normal hearing when the preceding maskers were presented ipsilaterally, contralaterally, and binaurally. We employed three masker intensity conditions: the ipsilateral-strong, left-right-balanced, and contralateral-strong conditions. Regarding the responses to the maskers without the signal, the N1m amplitude evoked by the left and binaural maskers was significantly larger than that evoked by the right masker for the left-strong and left-right-balanced conditions. No significant difference was observed for the right-strong condition. The subsequent N1m amplitudes were attenuated by the presence of the left, binaural, and right maskers in all conditions. For the left- and right-strong conditions, the subsequent N1m amplitude in the presence of the left masker was smaller than those for the binaural and right maskers. No difference was observed between the binaural and right masker presentations. For the left-right-balanced condition, the subsequent N1m amplitude decreased in the presence of the right, binaural, and left maskers, in that order. If the preceding activity reflected the ability to suppress the subsequent activity, the forward suppression by the left masker would be superior to that by the right masker for the left-strong and left-right-balanced conditions. Furthermore, the forward suppression by the binaural masker would be expected to be superior to that by the left masker owing to additional afferent activity from the right ear. Thus, the current results suggest that forward suppression by ipsilateral maskers is superior to that by contralateral maskers, although both maskers evoked N1m responses of the same magnitude. An additional masker at the contralateral ear can attenuate the forward suppression produced by the ipsilateral masker.

4.
The presence of non-simultaneous maskers can result in strong impairment in auditory intensity resolution relative to a condition without maskers, and causes a complex pattern of effects that is difficult to explain on the basis of peripheral processing. We suggest that the failure of selective attention to the target tones is a useful framework for understanding these effects. Two experiments tested the hypothesis that the sequential grouping of the targets and the maskers into separate auditory objects facilitates selective attention and therefore reduces the masker-induced impairment in intensity resolution. In Experiment 1, a condition favoring the processing of the maskers and the targets as two separate auditory objects due to grouping by temporal proximity was contrasted with the usual forward masking setting where the masker and the target presented within each observation interval of the two-interval task can be expected to be grouped together. As expected, the former condition resulted in a significantly smaller masker-induced elevation of the intensity difference limens (DLs). In Experiment 2, embedding the targets in an isochronous sequence of maskers led to a significantly smaller DL-elevation than control conditions not favoring the perception of the maskers as a separate auditory stream. The observed effects of grouping are compatible with the assumption that a precise representation of target intensity is available at the decision stage, but that this information is used only in a suboptimal fashion due to limitations of selective attention. The data can be explained within a framework of object-based attention. The results impose constraints on physiological models of intensity discrimination. We discuss candidate structures for physiological correlates of the psychophysical data.

5.
Selectively attending to task-relevant sounds whilst ignoring background noise is one of the most amazing feats performed by the human brain. Here, we studied the underlying neural mechanisms by recording magnetoencephalographic (MEG) responses of 14 healthy human subjects while they performed a near-threshold auditory discrimination task vs. a visual control task of similar difficulty. The auditory stimuli consisted of notch-filtered continuous noise masker sounds and of 1020-Hz target tones that occasionally replaced 300-ms, 1000-Hz standard tones embedded at the center of the notches, the widths of which were parametrically varied. As a control for masker effects, tone-evoked responses were additionally recorded without the masker sound. Selective attention to tones significantly increased the amplitude of the onset M100 response at 100 ms to the standard tones in the presence of the masker sounds, especially with notches narrower than the critical band. Further, attention modulated the sustained response most clearly in the 300–400 ms range from sound onset, with narrower notches than for the M100, thus selectively reducing the masker-induced suppression of the tone-evoked response. Our results show evidence of a multiple-stage filtering mechanism of sensory input in the human auditory cortex: 1) one at early (100 ms) latencies bilaterally in posterior parts of the secondary auditory areas, and 2) adaptive filtering of attended sounds from the task-irrelevant background masker at a longer latency (300 ms) in more medial auditory cortical regions, predominantly in the left hemisphere, enhancing processing of near-threshold sounds.
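For readers unfamiliar with notched-noise paradigms, the following is a minimal sketch of how such a stimulus could be built: a broadband noise masker with a spectral notch around the tone frequency, plus a tone embedded at the notch center. All parameters (sample rate, notch width, tone level) are illustrative assumptions, not the values used in the study.

```python
# Sketch: a notch-filtered noise masker with a tone at the notch center.
# All parameters (sample rate, notch width, levels) are illustrative assumptions.
import numpy as np

fs = 44100                      # sample rate (Hz)
dur = 0.3                       # segment duration (s)
f_tone = 1000.0                 # standard-tone frequency (Hz)
notch_octaves = 0.5             # total notch width in octaves (assumed)

# Broadband Gaussian noise, filtered in the frequency domain.
n = int(fs * dur)
noise = np.random.randn(n)
spec = np.fft.rfft(noise)
freqs = np.fft.rfftfreq(n, 1.0 / fs)

# Zero out a notch of +/- notch_octaves/2 around the tone frequency.
lo = f_tone * 2.0 ** (-notch_octaves / 2)
hi = f_tone * 2.0 ** (+notch_octaves / 2)
spec[(freqs >= lo) & (freqs <= hi)] = 0.0
masker = np.fft.irfft(spec, n)

# Embed the tone at the center of the notch.
t = np.arange(n) / fs
tone = 0.1 * np.sin(2 * np.pi * f_tone * t)
stimulus = masker / np.max(np.abs(masker)) + tone
```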

6.
Goense JB, Feng AS. PLoS ONE 2012, 7(2): e31589
Natural auditory scenes such as frog choruses consist of multiple sound sources (i.e., individual vocalizing males) producing sounds that overlap extensively in time and spectrum, often in the presence of other biotic and abiotic background noise. Detection of a signal in such environments is challenging, but it is facilitated when the noise shares common amplitude modulations across a wide frequency range, due to a phenomenon called comodulation masking release (CMR). Here, we examined how properties of the background noise, such as its bandwidth and amplitude modulation, influence the detection threshold of a target sound (pulsed amplitude-modulated tones) by single neurons in the frog auditory midbrain (torus semicircularis, TS). We found that for both modulated and unmodulated masking noise, masking was generally stronger with increasing bandwidth, but it was weakened for the widest bandwidths. Masking was weaker for modulated noise than for unmodulated noise at all bandwidths. However, responses were heterogeneous, and only for a subpopulation of neurons was detection of the probe facilitated when the bandwidth of the modulated masker was increased beyond a certain value; such neurons might contribute to CMR. We observed evidence suggesting that TS neurons exploit the dips in the noise amplitude, with strong responses to target signals occurring during such dips. However, the interactions between the probe and masker responses were nonlinear, and other mechanisms, e.g., selective suppression of the response to the noise, may also be involved in the masking release.
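The masker manipulation behind comodulation masking release can be sketched as follows: several narrow noise bands either share a single slow envelope (comodulated) or each carry an independent envelope. The band layout, bandwidths, and envelope rate below are assumptions chosen for illustration, not the stimuli used in this study.

```python
# Sketch of comodulated vs. independently modulated multi-band noise maskers.
# Band centers, bandwidths, and the 10-Hz envelope rate are assumed values.
import numpy as np

fs = 44100
dur = 1.0
n = int(fs * dur)

def narrowband_noise(fc, bw):
    """Narrowband Gaussian noise centered at fc with bandwidth bw (FFT method)."""
    spec = np.fft.rfft(np.random.randn(n))
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    spec[(freqs < fc - bw / 2) | (freqs > fc + bw / 2)] = 0.0
    return np.fft.irfft(spec, n)

def lowpass_envelope(cutoff):
    """Positive low-pass noise used as a slow amplitude envelope."""
    spec = np.fft.rfft(np.random.randn(n))
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    spec[freqs > cutoff] = 0.0
    env = np.fft.irfft(spec, n)
    return 1.0 + 0.9 * env / np.max(np.abs(env))   # keep the envelope positive

centers = [500, 1000, 2000, 4000]                  # masker bands (Hz), assumed
common_env = lowpass_envelope(10.0)                # one shared slow envelope

# Comodulated: every band carries the same envelope.
comodulated = sum(narrowband_noise(fc, 200) * common_env for fc in centers)
# Uncorrelated: each band carries its own, independent envelope.
independent = sum(narrowband_noise(fc, 200) * lowpass_envelope(10.0)
                  for fc in centers)
```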

7.
For a gleaning bat hunting prey from the ground, rustling sounds generated by prey movements are essential for evoking hunting behaviour. The detection of prey-generated rustling sounds may depend heavily on the time structure of the prey-generated and the masking sounds because of their spectral similarity. Here, we systematically investigate the effect of temporal structure on psychophysical rustling-sound detection in the gleaning bat, Megaderma lyra. A recorded rustling sound serves as the signal; the maskers are either Gaussian noise or broadband noise with various degrees of envelope fluctuation. Exploratory experiments indicate that selective manipulation of the temporal structure of the rustling sound does not influence its detection in a Gaussian-noise masker. The results of the main experiment show, however, that the temporal structure of the masker has a strong and systematic effect on rustling-sound detection: when the width of irregularly spaced gaps in the masker exceeded about 0.3 ms, rustling-sound detection improved monotonically with increasing gap duration. Computer simulations of this experiment reveal that a combined detection strategy of spectral and temporal analysis underlies rustling-sound detection in fluctuating masking sounds.

8.
The goal of the study was to extend knowledge of how the auditory system discriminates complex sound signals in masking noise. To that end, the influence of masking noise on the detection of a shift of a rippled spectrum was studied in normal listeners. The signal was a shift of ripple phase within a 0.5-oct wide rippled spectrum centered at 2 kHz. The ripples were frequency-proportional (throughout the band, ripple spacing was a constant proportion of the ripple center frequency). The simultaneous masker was a 0.5-oct noise band below, on, or above the signal band. Both the low-frequency (center frequency 1 kHz) and on-frequency (the same center frequency as the signal) maskers increased the thresholds for detecting the ripple phase shift. However, the dependence of threshold on masker level differed between these two maskers. For the on-frequency masker, the masking effect depended primarily on the masker/signal ratio: the threshold increased steeply at a ratio of 5 dB, and no shift was detectable at a ratio of 10 dB. For the low-frequency masker, the masking effect depended primarily on the masker level: the threshold increased at a masker level of 80 dB SPL, and no shift was detectable at a masker level of 90 dB (for a signal level of 50 dB) or 100 dB (for a signal level of 80 dB). The high-frequency masker had little effect. The data were successfully simulated using an excitation-pattern model. In this model, the effect of the on-frequency masker appeared to be primarily due to a decrease of ripple depth, whereas the effect of the low-frequency masker appeared to be due to widening of the auditory filters at high sound levels.
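A rippled spectrum of the kind described here can be sketched as band-limited noise whose spectral envelope is sinusoidal on a log-frequency axis, so that ripple spacing stays a constant proportion of frequency; the "signal" is then a copy with the ripple phase shifted. Sample rate and ripple density below are illustrative assumptions, not the study's parameters.

```python
# Sketch: a 0.5-oct noise band with frequency-proportional (log-frequency)
# spectral ripples, and a version with the ripple phase shifted.
# Sample rate and ripple density are assumed values for illustration.
import numpy as np

fs = 44100
dur = 0.5
n = int(fs * dur)
fc = 2000.0                           # band center (Hz)
band_oct = 0.5                        # band width (octaves)
density = 3.5                         # ripples per octave (assumed)

freqs = np.fft.rfftfreq(n, 1.0 / fs)
noise_spec = np.fft.rfft(np.random.randn(n))      # one noise token, reused

def rippled_band(phase):
    """Impose a log-frequency sinusoidal ripple on the 0.5-oct band around fc."""
    lo, hi = fc * 2 ** (-band_oct / 2), fc * 2 ** (band_oct / 2)
    gain = np.zeros_like(freqs)
    inband = (freqs >= lo) & (freqs <= hi)
    gain[inband] = 0.5 * (1 + np.cos(2 * np.pi * density *
                                     np.log2(freqs[inband] / fc) + phase))
    return np.fft.irfft(noise_spec * gain, n)

reference = rippled_band(0.0)         # original ripple phase
shifted = rippled_band(np.pi)         # ripple pattern shifted by half a period
```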

9.
Hearing sensitivity and psychophysical tuning curves were determined for the mormyrid Gnathonemus petersii. Pure-tone hearing thresholds were determined from 100 Hz to 2,500 Hz, with best sensitivity being about –31 dB (re: 1 dyne/cm2) from 300 Hz to 1,000 Hz. In order to determine frequency tuning of the auditory system, psychophysical tuning curves (PTCs) were measured with the masker presented simultaneously with, or just ahead of, the 500-Hz test signal. The sound levels at different frequencies needed to just mask the test tone were determined from 100 to 800 Hz. Maximum masking occurred in both forward and simultaneous conditions when the masker and the test tone were at the same frequency. As the masker was moved in frequency away from 500 Hz, higher masker sound levels were needed to mask the test tone. The data were similar in simultaneous and forward masking, with the Q10dB, a measure of sharpness of tuning, being about 5 in both cases. Data were compared with other species for which behavioral thresholds and PTCs are available. Gnathonemus hears about as wide a range of frequencies as the goldfish, Carassius auratus, although the PTCs for the two species are strikingly different. The PTCs for Gnathonemus resemble those determined in a forward-masking paradigm for the clown knife fish, Notopterus chitala, even though Gnathonemus has a wider hearing bandwidth. Abbreviations: AM, amplitude modulated; EOD, electric organ discharge; PTC, psychophysical tuning curve

10.
Due to its extended low-frequency hearing, the Mongolian gerbil (Meriones unguiculatus) has become a well-established animal model for human auditory processing. Here, two experiments are presented which quantify the gerbil's sensitivity to amplitude modulation (AM) and carrier periodicity (CP) in broad-band stimuli. Two additional experiments investigate a possible interaction of the two types of periodicity. The results show that overall sensitivity to AM and CP is considerably less than in humans (by at least 10 dB). The gerbil's amplitude-modulation sensitivity is almost independent of modulation frequency up to a modulation frequency of 1 kHz; above this, amplitude-modulation sensitivity deteriorates dramatically. In individual animals, carrier-periodicity detection either improved with increasing fundamental frequency up to about 500 Hz or was independent of fundamental frequency. Amplitude-modulation thresholds are consistent with the hypothesis that intensity difference limens in the gerbil may be considerably worse than in humans, leading to the relative insensitivity at low modulation frequencies. Unlike in humans, inner-ear filtering appears not to limit amplitude-modulation sensitivity in the gerbil. Carrier-periodicity sensitivity changes with fundamental frequency similarly to humans. Unlike in humans, there is no systematic interaction between AM and CP in the gerbil. This points to relatively independent processing of the perceptual cues associated with AM and CP.
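A sinusoidally amplitude-modulated (SAM) noise and a harmonic (periodic-carrier) stimulus of the kind contrasted above can be sketched as follows; the sample rate, modulation depth, and fundamental frequency are illustrative assumptions rather than the study's parameters.

```python
# Sketch of the two stimulus classes: amplitude modulation (AM) of a noise
# carrier, and carrier periodicity (CP) from a broadband harmonic complex.
# All values here (fs, depth, rates) are assumed for illustration only.
import numpy as np

fs = 44100
dur = 1.0
t = np.arange(int(fs * dur)) / fs

# Amplitude modulation: broadband noise carrier with a 100-Hz envelope.
m = 0.25                                    # modulation depth (linear)
fm = 100.0                                  # modulation frequency (Hz)
am_noise = (1 + m * np.sin(2 * np.pi * fm * t)) * np.random.randn(t.size)
print(f"modulation depth: {20 * np.log10(m):.1f} dB")   # about -12 dB

# Carrier periodicity: a broadband harmonic complex with f0 = 100 Hz
# (one simple way to create a periodic carrier).
f0 = 100.0
harmonics = np.arange(f0, fs / 2, f0)
cp_tone = sum(np.sin(2 * np.pi * f * t) for f in harmonics)
```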

11.
The auditory system creates a neuronal representation of the acoustic world based on spectral and temporal cues present at the listener's ears, including cues that potentially signal the locations of sounds. Discrimination of concurrent sounds from multiple sources is especially challenging. The current study is part of an effort to better understand the neuronal mechanisms governing this process, which has been termed “auditory scene analysis”. In particular, we are interested in spatial release from masking, by which spatial cues can segregate signals from other competing sounds, thereby overcoming the tendency of overlapping spectra and/or common temporal envelopes to fuse signals with maskers. We studied detection of pulsed tones in free-field conditions in the presence of concurrent multi-tone non-speech maskers. In “energetic” masking conditions, in which the frequencies of maskers fell within the ±1/3-octave band containing the signal, spatial release from masking at low frequencies (∼600 Hz) was found to be about 10 dB. In contrast, negligible spatial release from energetic masking was seen at high frequencies (∼4000 Hz). We observed robust spatial release from masking in broadband “informational” masking conditions, in which listeners could confuse signal with masker even though there was no spectral overlap. Substantial spatial release was observed in conditions in which the onsets of the signal and all masker components were synchronized, and spatial release was even greater under asynchronous conditions. Spatial cues limited to high frequencies (>1500 Hz), which could have included interaural level differences and the better-ear effect, produced only limited improvement in signal detection. Substantially greater improvement was seen for low-frequency sounds, for which interaural time differences are the dominant spatial cue.

12.
Echolocating bats behave as though they perceive the crosscorrelation functions between their sonar transmissions and echoes as images of targets, at least with respect to perception of target range, horizontal direction, and shape. These data imply that bats use a multi-dimensional acoustic imaging system for echolocation with broadband, usually frequency-modulated signals. The perceptual structure of the echolocation signals used by different species of bats was investigated using the crosscorrelation functions between emitted signals and returning echoes as indices of perceptual acuity. The bandwidth and average period of echolocation signals are identified as the principal acoustic features of broadband sonar waveforms that determine the quality of target perceptions. The multiple-harmonic structure of echolocation sounds, which is characteristic of the broadband signals of the majority of species of bats, yields a lower average period (separation of peaks in the crosscorrelation function) than would be expected from the average frequency of the signal as a whole, sharpening target localization. The frequency modulation of the harmonics in the sonar sounds of bats reduces the heights of side-peaks in the crosscorrelation functions of the signals, promoting sharp, unambiguous determination of target position, and leads to the well-known coupling of perception of range and velocity for moving targets. The shapes of the frequency sweeps and bandwidths of frequency modulation contribute to reducing this range-velocity coupling. Harmonic organization nearly eliminates range-velocity coupling. The use of multiple harmonics and fairly broad frequency modulation in sonar signals yields especially sharp resolution of target position to reject clutter interference. Such signals are commonly used by bats in cluttered environments. Very broad frequency sweeps with fewer harmonics may accomplish the same effect, but the low signal periodicity contributed by harmonic structure is an important factor in banishing side-peaks in the crosscorrelation function from perception. Abbreviations: ACR, autocorrelation function; AMB, ambiguity diagram; CF, constant frequency; FM, frequency modulated; LFM, linear frequency sweep; LPM, linear period sweep; XCR, crosscorrelation function
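The core idea, that the crosscorrelation of a broadband FM call with its echo yields a sharp peak at the echo delay whose width depends on the signal bandwidth, can be sketched numerically. The chirp parameters, echo delay, and attenuation below are illustrative assumptions, not measured bat signals.

```python
# Sketch: cross-correlation of a downward FM chirp (a crude stand-in for a
# broadband bat call) with a delayed, attenuated echo. The peak location
# estimates the echo delay; its sharpness reflects the signal bandwidth.
# All signal parameters are assumptions for illustration only.
import numpy as np

fs = 250_000                         # sample rate (Hz)
dur = 0.002                          # 2-ms call
t = np.arange(int(fs * dur)) / fs
f_hi, f_lo = 80_000.0, 40_000.0      # sweep from 80 kHz down to 40 kHz

# Linear FM sweep (instantaneous frequency falls from f_hi to f_lo).
call = np.sin(2 * np.pi * (f_hi * t + 0.5 * (f_lo - f_hi) / dur * t ** 2))

# Echo: the call delayed by 1 ms and attenuated, padded into a longer trace.
delay_s = 0.001
trace = np.zeros(int(fs * 0.005))
start = int(delay_s * fs)
trace[start:start + call.size] += 0.3 * call

xcr = np.correlate(trace, call, mode="valid")     # crosscorrelation function
est_delay = np.argmax(np.abs(xcr)) / fs
print(f"estimated echo delay: {est_delay * 1000:.3f} ms")
```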

13.
In humans, auditory perception reaches maturity over a broad age range, extending through adolescence. Despite this slow maturation, children are considered to be outstanding learners, suggesting that immature perceptual skills might actually be advantageous to improvement on an acoustic task as a result of training (perceptual learning). Previous non-human studies have not employed an identical task when comparing perceptual performance of young and mature subjects, making it difficult to assess learning. Here, we used an identical procedure on juvenile and adult gerbils to examine the perception of amplitude modulation (AM), a stimulus feature that is an important component of most natural sounds. On average, Adult animals could detect smaller fluctuations in amplitude (i.e., smaller modulation depths) than Juveniles, indicating immature perceptual skills in Juveniles. However, the population variance was much greater for Juveniles, a few animals displaying adult-like AM detection. To determine whether immature perceptual skills facilitated learning, we compared naïve performance on the AM detection task with the amount of improvement following additional training. The amount of improvement in Adults correlated with naïve performance: those with the poorest naïve performance improved the most. In contrast, the naïve performance of Juveniles did not predict the amount of learning. Those Juveniles with immature AM detection thresholds did not display greater learning than Adults. Furthermore, for several of the Juveniles with adult-like thresholds, AM detection deteriorated with repeated testing. Thus, immature perceptual skills in young animals were not associated with greater learning. © 2010 Wiley Periodicals, Inc. Develop Neurobiol 70: 636–648, 2010

14.
Cochlear implant (CI) users have difficulty understanding speech in noisy listening conditions and perceiving music. Aided residual acoustic hearing in the contralateral ear can mitigate these limitations. The present study examined contributions of electric and acoustic hearing to speech understanding in noise and melodic pitch perception. Data were collected with the CI only, the hearing aid (HA) only, and both devices together (CI+HA). Speech reception thresholds (SRTs) were adaptively measured for simple sentences in speech babble. Melodic contour identification (MCI) was measured with and without a masker instrument; the fundamental frequency of the masker was varied to be overlapping or non-overlapping with the target contour. Results showed that the CI contributes primarily to bimodal speech perception and that the HA contributes primarily to bimodal melodic pitch perception. In general, CI+HA performance was slightly improved relative to the better ear alone (CI-only) for SRTs but not for MCI, with some subjects experiencing a decrease in bimodal MCI performance relative to the better ear alone (HA-only). Individual performance was highly variable, and the contribution of either device to bimodal perception was both subject- and task-dependent. The results suggest that individualized mapping of CIs and HAs may further improve bimodal speech and music perception.
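Adaptive threshold measurement of the kind used for the SRTs can be illustrated with a simple one-up/one-down staircase that lowers the signal-to-noise ratio after a correct response and raises it after an error. The step sizes, starting SNR, simulated listener, and stopping rule below are assumptions for illustration and not the procedure used in the study.

```python
# Sketch of a 1-up/1-down adaptive track for a speech reception threshold (SRT):
# SNR drops after a correct response and rises after an error, converging near
# the 50%-correct point. All parameters and the simulated listener are assumed.
import random

def simulate_trial(snr_db, true_srt_db=-6.0):
    """Pretend listener: more likely correct when SNR is above its true SRT."""
    p_correct = 1.0 / (1.0 + 10 ** (-(snr_db - true_srt_db) / 3.0))
    return random.random() < p_correct

snr = 10.0            # starting SNR (dB)
step = 4.0            # initial step size (dB)
reversals = []
last_dir = None

while len(reversals) < 8:
    correct = simulate_trial(snr)
    direction = -1 if correct else +1          # down if correct, up if wrong
    if last_dir is not None and direction != last_dir:
        reversals.append(snr)                  # track direction changes
        step = max(step / 2.0, 2.0)            # shrink step after reversals
    last_dir = direction
    snr += direction * step

srt_estimate = sum(reversals[-4:]) / 4.0       # average the last reversals
print(f"estimated SRT: {srt_estimate:.1f} dB SNR")
```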

15.
Periodic envelope or amplitude modulations (AM) with periodicities up to several thousand Hertz are characteristic of many natural sounds. Throughout the auditory pathway, signal periodicity is evident in neuronal discharges phase-locked to the envelope. In contrast to lower levels of the auditory pathway, cortical neurons do not phase-lock to periodicities above about 100 Hz. Therefore, we investigated alternative coding strategies for high envelope periodicities at the cortical level. Neuronal responses in the primary auditory cortex (AI) of gerbils to tones and AM were analysed. Two groups of stimuli were tested: (1) AM with a carrier frequency set to the unit's best frequency evoked phase-locked responses which were confined to low modulation frequencies (fms) up to about 100 Hz, and (2) AM with a spectrum completely outside the unit's frequency-response range evoked completely different responses that never showed phase-locking but rather rate tuning to high fms (50 to about 3000 Hz). In contrast to the phase-locked responses, the best fms determined from these latter responses appeared to be topographically distributed, reflecting a periodotopic organization in the AI. Implications of these results for the cortical representation of the perceptual qualities rhythm, roughness, and pitch are discussed.
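Phase-locking of spike trains to the stimulus envelope, as discussed above, is commonly quantified by vector strength; a minimal sketch with synthetic spike times (the modulation frequency, spike counts, and jitter are assumed values) is shown below.

```python
# Sketch: vector strength, a standard measure of how well spike times
# phase-lock to the envelope period of an AM stimulus. Spike times here are
# synthetic; modulation frequency, spike count, and jitter are assumed values.
import numpy as np

fm = 50.0                                    # modulation frequency (Hz)
period = 1.0 / fm
rng = np.random.default_rng(0)

# Synthetic phase-locked spikes: cycle index plus a phase jittered around 0.25.
cycles = rng.integers(0, 50, size=200)
phases = (0.25 + 0.05 * rng.standard_normal(200)) % 1.0
spike_times = (cycles + phases) * period

# Vector strength: length of the mean resultant of spike phases (0 to 1).
angles = 2 * np.pi * ((spike_times % period) / period)
vs = np.abs(np.mean(np.exp(1j * angles)))
print(f"vector strength at {fm:.0f} Hz: {vs:.2f}")
```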

16.
Bilateral cochlear implants aim to provide hearing to both ears for children who are deaf and to promote binaural/spatial hearing. Benefits are limited by mismatched devices and unilaterally-driven development, which could compromise the normal integration of left and right ear input. We thus asked whether children hear a fused image (i.e., one vs. two sounds) from their bilateral implants and whether this “binaural fusion” reduces listening effort. Binaural fusion was assessed by asking 25 deaf children with cochlear implants and 24 peers with normal hearing whether they heard one or two sounds when listening to bilaterally presented acoustic click-trains/electric pulses (250 Hz trains of 36 ms presented at 1 Hz). Reaction times and pupillary changes were recorded simultaneously to measure listening effort. Bilaterally implanted children heard one image of bilateral input less frequently than normal-hearing peers, particularly when intensity levels on each side were balanced. Binaural fusion declined as brainstem asymmetries increased and age at implantation decreased. Children implanted later had access to acoustic input prior to implantation due to progressive deterioration of hearing. Increases in both pupil diameter and reaction time occurred as perception of binaural fusion decreased. Results indicate that, without binaural level cues, children have difficulty fusing input from their bilateral implants to perceive one sound, which costs them increased listening effort. Brainstem asymmetries exacerbate this issue. By contrast, later implantation, reflecting longer access to bilateral acoustic hearing, may have supported development of auditory pathways underlying binaural fusion. Improved integration of bilateral cochlear implant signals for children is required to improve their binaural hearing.

17.
McDermott HJ. PLoS ONE 2011, 6(7): e22358

Background

Recently two major manufacturers of hearing aids introduced two distinct frequency-lowering techniques that were designed to compensate in part for the perceptual effects of high-frequency hearing impairments. The Widex “Audibility Extender” is a linear frequency transposition scheme, whereas the Phonak “SoundRecover” scheme employs nonlinear frequency compression. Although these schemes process sound signals in very different ways, studies investigating their use by both adults and children with hearing impairment have reported significant perceptual benefits. However, the modifications that these innovative schemes apply to sound signals have not previously been described or compared in detail.

Methods

The main aim of the present study was to analyze these schemes' technical performance by measuring outputs from each type of hearing aid with the frequency-lowering functions enabled and disabled. The input signals included sinusoids, flute sounds, and speech material. Spectral analyses were carried out on the output signals produced by the hearing aids in each condition.

Conclusions

The results of the analyses confirmed that each scheme was effective at lowering certain high-frequency acoustic signals, although both techniques also distorted some signals. Most importantly, the application of either frequency-lowering scheme would be expected to improve the audibility of many sounds having salient high-frequency components. Nevertheless, considerably different perceptual effects would be expected from these schemes, even when each hearing aid is fitted in accordance with the same audiometric configuration of hearing impairment. In general, these findings reinforce the need for appropriate selection and fitting of sound-processing schemes in modern hearing aids to suit the characteristics and preferences of individual listeners.
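The two frequency-lowering ideas compared in this study can be sketched as simple input-to-output frequency mappings: nonlinear frequency compression squeezes frequencies above a cutoff toward it, whereas linear transposition shifts a high-frequency band downward by a fixed amount. The cutoff, compression ratio, and shift below are illustrative assumptions, not the settings of any commercial device.

```python
# Sketch of the two frequency-lowering ideas as input->output frequency maps.
# Cutoff, ratio, source band, and shift are assumed values for illustration.
import numpy as np

def nonlinear_compression(f, cutoff=2000.0, ratio=2.0):
    """Above the cutoff, compress the distance from it (in octaves) by `ratio`."""
    f = np.asarray(f, dtype=float)
    out = f.copy()
    above = f > cutoff
    out[above] = cutoff * 2.0 ** (np.log2(f[above] / cutoff) / ratio)
    return out

def linear_transposition(f, src_lo=4000.0, shift=2000.0):
    """Shift components in the source band down by a fixed number of Hz."""
    f = np.asarray(f, dtype=float)
    out = f.copy()
    in_band = f >= src_lo
    out[in_band] = f[in_band] - shift
    return out

freqs = np.array([1000.0, 3000.0, 5000.0, 8000.0])
print("compressed: ", nonlinear_compression(freqs))
print("transposed: ", linear_transposition(freqs))
```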

18.
Response characteristics of 130 single neurons in the superior olivary nucleus of the northern leopard frog (Rana pipiens pipiens) were examined to determine their selectivity to various behaviorally relevant temporal parameters [rise-fall time, duration, and amplitude modulation (AM) rate] of acoustic signals. Response functions were constructed with respect to each of these variables. Neurons with different temporal firing patterns, such as tonic, phasic, or phasic-burst firing patterns, participated in time-domain analysis in specific manners. Phasic neurons manifested preferences for signals with short rise-fall times, thus possessing low-pass response functions with respect to this stimulus parameter; conversely, tonic and phasic-burst units were non-selective and possessed all-pass response functions. A distinction between temporal firing patterns was also observed for duration coding. Whereas phasic units showed no change in the mean spike count with a change in stimulus duration (i.e., all-pass duration response functions), tonic and phasic-burst units gave higher mean spike counts with an increase in stimulus duration (i.e., primary-like high-pass response functions). Phasic units manifested greater response selectivity for AM rate than did tonic or phasic-burst units, and many phasic units were tuned to a narrow range of modulation rates (i.e., band-pass). The results suggest that SON neurons play an important role in the processing of complex acoustic patterns; they perform extensive computations on AM rate as well as other temporal parameters of complex sounds. Moreover, the response selectivities for rise-fall time, duration, and AM rate could often be shown to contribute to the differential responses to complex synthetic and natural sounds. Abbreviations: SON, superior olivary nucleus; DMN, dorsal medullary nucleus; TS, torus semicircularis; FTC, frequency threshold curve; BF, best excitatory frequency; PAM, pulsatile amplitude modulation; SAM, sinusoidal amplitude modulation; SQAM, square-wave amplitude modulation; MTF, modulation transfer function; PSTH, peri-stimulus time histogram

19.
The audibility of a target tone in a multitone background masker is enhanced by the presentation of a precursor sound consisting of the masker alone. There is evidence that precursor-induced neural adaptation plays a role in this perceptual enhancement. However, the precursor may also be strategically used by listeners as a spectral template of the following masker to better segregate it from the target. In the present study, we tested this hypothesis by measuring the audibility of a target tone in a multitone masker after the presentation of precursors which, in some conditions, were made dissimilar to the masker by gating their components asynchronously. The precursor and the following sound were presented either to the same ear or to opposite ears. In either case, we found no significant difference in the amount of enhancement produced by synchronous and asynchronous precursors. In a second experiment, listeners had to judge whether a synchronous multitone complex contained exactly the same tones as a preceding precursor complex or had one tone less. In this experiment, listeners performed significantly better with synchronous than with asynchronous precursors, showing that asynchronous precursors were poorer perceptual templates of the synchronous multitone complexes. Overall, our findings indicate that precursor-induced auditory enhancement cannot be fully explained by the strategic use of the precursor as a template of the following masker. Our results are consistent with an explanation of enhancement based on selective neural adaptation taking place at a central locus of the auditory system.

20.
Several mass strandings of beaked whales have recently been correlated with military exercises involving mid-frequency sonar, highlighting unknowns regarding hearing sensitivity in these species. We report the hearing abilities of a stranded juvenile beaked whale (Mesoplodon europaeus) measured with auditory evoked potentials. The beaked whale's modulation rate transfer function (MRTF), measured with a 40-kHz carrier, showed responses up to a 1,800-Hz amplitude modulation (AM) rate. The MRTF was strongest at the 1,000 and 1,200 Hz AM rates. The envelope following response (EFR) input–output functions were non-linear. The beaked whale was most sensitive to high-frequency signals between 40 and 80 kHz, but produced smaller evoked potentials to 5 kHz, the lowest frequency tested. The beaked whale's hearing range and sensitivity are similar to those of other odontocetes that have been measured.

