Similar Articles
20 similar articles found (search time: 625 ms)
1.

Background

When sound arrives at the eardrum it has already been filtered by the body, head, and outer ear. This process is mathematically described by the head-related transfer functions (HRTFs), which are characteristic of the spatial position of a sound source and of the individual ear. HRTFs in the barn owl (Tyto alba) are also shaped by the facial ruff, a specialization that alters interaural time differences (ITD), interaural level differences (ILD), and the frequency spectrum of the incoming sound to improve sound localization. Here we created novel stimuli to simulate the removal of the barn owl's ruff in a virtual acoustic environment, thus creating a situation similar to passive listening in other animals, and used these stimuli in behavioral tests.

Methodology/Principal Findings

HRTFs were recorded from an owl before and after removal of the ruff feathers. Normal and ruff-removed conditions were created by filtering broadband noise with the HRTFs. Under normal virtual conditions, no differences in azimuthal head-turning behavior between individualized and non-individualized HRTFs were observed. The owls were able to respond differently to stimuli from the back than to stimuli from the front having the same ITD. By contrast, such discrimination was not possible after the virtual removal of the ruff. Elevational head-turn angles were slightly smaller with non-individualized than with individualized HRTFs. The removal of the ruff resulted in a large decrease in elevational head-turning amplitudes.

Conclusions/Significance

The facial ruff (a) improves azimuthal sound localization by increasing the ITD range and (b) improves elevational sound localization in the frontal field by introducing a shift of iso-ILD lines out of the midsagittal plane, which causes ILDs to increase with increasing stimulus elevation. The changes at the behavioral level could be related to the changes in the binaural physical parameters that occurred after the virtual removal of the ruff. These data provide new insights into the function of external hearing structures and open up the possibility of applying the results to autonomous agents, to the creation of virtual auditory environments for humans, or to hearing aids.
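The virtual-stimulus construction described in this abstract (filtering broadband noise through HRTFs) can be sketched as follows. This is a minimal illustration, not the authors' method: the impulse responses are toy stand-ins (a pure delay plus attenuation) rather than measured owl HRTFs, and the cross-correlation step merely confirms that the imposed ITD is recoverable from the two ear signals.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 48_000                                  # sample rate (Hz)
noise = rng.standard_normal(int(0.1 * fs))   # 100-ms broadband noise burst

def toy_hrir(delay_samples, gain, length=64):
    """Toy head-related impulse response: a pure delay plus attenuation
    stands in for a measured HRTF."""
    h = np.zeros(length)
    h[delay_samples] = gain
    return h

hrir_left = toy_hrir(delay_samples=0, gain=1.0)   # near ear
hrir_right = toy_hrir(delay_samples=8, gain=0.6)  # far ear: later and quieter

# Virtual auditory stimulus: the same noise filtered through each ear's HRIR.
left = np.convolve(noise, hrir_left)
right = np.convolve(noise, hrir_right)

# The imposed ITD (8 samples at 48 kHz, about 167 microseconds) can be
# recovered by cross-correlating the two ear signals.
n = len(noise)
lags = np.arange(-32, 33)
xcorr = [np.dot(left[:n], np.roll(right[:n], -k)) for k in lags]
best_lag = int(lags[int(np.argmax(xcorr))])
```

Filtering the same source with each ear's individualized (or ruff-removed) impulse response is what lets headphone stimuli carry the full set of spatial cues, while a single cue such as ITD can be fixed or shifted independently of the others.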

2.
The effect of binaural decorrelation on the processing of interaural level difference cues in the barn owl (Tyto alba) was examined behaviorally and electrophysiologically. The electrophysiology experiment measured the effect of variations in binaural correlation on the first stage of interaural level difference encoding in the central nervous system. The responses of single neurons in the posterior part of the ventral nucleus of the lateral lemniscus were recorded to stimulation with binaurally correlated and binaurally uncorrelated noise. No significant differences in interaural level difference sensitivity were found between conditions. Neurons in the posterior part of the ventral nucleus of the lateral lemniscus encode the interaural level difference of binaurally correlated and binaurally uncorrelated noise with equal accuracy and precision. This nucleus therefore supplies higher auditory centers with an undegraded interaural level difference signal for sound stimuli that lack a coherent interaural time difference. The behavioral experiment measured auditory saccades in response to interaural level differences presented in binaurally correlated and binaurally uncorrelated noise. The precision and accuracy of sound localization based on interaural level difference was reduced but not eliminated for binaurally uncorrelated signals. The observation that barn owls continue to vary auditory saccades with the interaural level difference of binaurally uncorrelated stimuli suggests that neurons that drive head saccades can be activated by incomplete auditory spatial information.
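The two stimulus conditions in this experiment can be sketched numerically: binaurally correlated noise uses the identical waveform at both ears, binaurally uncorrelated noise uses independent waveforms, and in both cases the same ILD is imposed by scaling one ear. The 6-dB ILD and the sample count are illustrative values, not those of the study.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
ild_db = 6.0                   # imposed interaural level difference (assumed)
gain = 10 ** (ild_db / 20)     # amplitude ratio corresponding to 6 dB

base = rng.standard_normal(n)

# Binaurally correlated condition: identical waveforms, scaled at one ear.
left_corr, right_corr = base * gain, base.copy()

# Binaurally uncorrelated condition: independent waveforms, same ILD.
left_unc = rng.standard_normal(n) * gain
right_unc = rng.standard_normal(n)

def ild(l, r):
    """ILD in dB computed from RMS levels."""
    return float(20 * np.log10(np.std(l) / np.std(r)))

def correlation(l, r):
    """Normalized interaural correlation."""
    return float(np.corrcoef(l, r)[0, 1])
```

The point of the manipulation is visible in these statistics: decorrelation destroys the interaural correlation (and with it any coherent ITD) while leaving the long-term ILD intact, so any residual localization must rest on the level cue alone.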

3.
Traditionally, the medial superior olive, a mammalian auditory brainstem structure, is considered to encode interaural time differences, the main cue for localizing low-frequency sounds. Detection of binaural excitatory and inhibitory inputs is considered the underlying mechanism. Most small mammals, however, hear high frequencies well beyond 50 kHz and have small interaural distances. They therefore cannot use interaural time differences for sound localization, and yet they possess a medial superior olive. Physiological studies in bats revealed that medial superior olive cells show interaural time difference coding similar to that in larger mammals tuned to low-frequency hearing. Their interaural time difference sensitivity, however, is far too coarse to serve in sound localization. Thus, interaural time difference sensitivity in the medial superior olive of small mammals is an epiphenomenon. We propose that the original function of the medial superior olive is a binaural cooperation causing facilitation due to binaural excitation. Lagging inhibitory inputs, however, suppress reverberations and echoes from the acoustic background. Thereby, the generation of antagonistically organized temporal fields is the basic and original function of the mammalian medial superior olive. Only later in evolution, with the advent of larger mammals, did interaural distances, and hence interaural time differences, become large enough to be used as cues for sound localization of low-frequency stimuli. Accepted: 28 February 2000

4.
Small songbirds face a difficult analysis problem: their head is small compared to the wavelengths of the sounds used for communication, providing only small interaural time and level differences. Klump and Larsen (1992) measured the physical binaural cues in the European starling (Sturnus vulgaris), which allows a comparison of acoustical cues and perception. We determined the starling's minimum audible angle (MAA) in an operant Go/NoGo procedure for different spectral and temporal stimulus conditions. The MAA for broadband noise with closed-loop localization reached 17°, while the starling's MAA for open-loop localization of broadband noise reached 29°. No substantial difference between open-loop and closed-loop localization was found for 2 kHz pure tones. The closed-loop MAA improved from 26° to 19° with an increase in pure-tone frequency from 1 to 4 kHz. This finding is in line with the physical cues available: while starlings can only make use of interaural time difference cues at lower frequencies (e.g., 1 and 2 kHz), additional interaural level difference cues become available at higher frequencies (e.g., 4 kHz or higher; Klump and Larsen 1992). An improvement of the starling's MAA with an increasing number of standard stimulus presentations prior to the test stimulus has important implications for determining relative (MAA) localization thresholds.

5.
Standard electrophysiology and virtual auditory stimuli were used to investigate the influence of interaural time difference on the azimuthal tuning of neurons in the core and the lateral shell of the central nucleus of the inferior colliculus of the barn owl. The responses of the neurons to virtual azimuthal stimuli depended in a periodic way on azimuth. Fixing the interaural time difference, while leaving all other spatial cues unchanged, caused a loss of periodicity and a broadening of azimuthal tuning. This effect was studied in more detail in neurons of the core. The azimuthal range tested and the frequency selectivity of the neurons were additional parameters influencing the changes induced by fixing the interaural time difference. The addition of an interaural time difference to the virtual stimuli resulted in a shift of the tuning curves that correlated with the interaural time difference added. In this condition, tuning strength did not change. These results suggest that interaural time difference is an important determinant of azimuthal tuning in all neurons of the core and lateral shell of the central nucleus of the inferior colliculus, and is the only determinant in many of the neurons of the core.

6.
In this work we study the influence and interplay of five different acoustical cues in the human sound localisation process: interaural time delay, interaural level difference, interaural spectrum, monaural spectrum, and band-edge spectral contrast. Of particular interest was the synthesis and integration of the different cues to produce a coherent and robust percept of spatial location. The relative weighting and role of the different cues were investigated using band-pass filtered white noise with frequency ranges (in kHz) of 0.3-5, 0.3-7, 0.3-10, 0.3-14, 3-8, 4-9, and 7-14. These stimuli provided varying amounts of spectral information and physiologically detectable temporal information, thus probing the localisation process under varying sound conditions. Three subjects with normal hearing in both ears performed five trials of 76 test positions for each of these stimuli in an anechoic room. All subjects showed systematic mislocalisation of most of these stimuli. The locations to which stimuli were mislocalised varied among subjects, but in a systematic manner related to the five acoustical cues. These cues were correlated with each subject's localisation responses on an individual basis, with the results suggesting that the internal weighting of the spectral cues may vary with the sound condition.

7.
Two potential sensory cues for sound location are the interaural differences in response strength (firing rate and/or spike count) and in response latency of auditory receptor neurons. Previous experiments showed that these two cues are affected differently by intense prior stimulation; the difference in response strength declines and may even reverse in sign, but the difference in latency is unaffected. Here, I use an intense, constant tone to disrupt localization cues generated by a subsequent train of sound pulses. Recordings from the auditory nerve confirm that tone stimulation reduces, and sometimes reverses, the interaural difference in response strength to subsequent sound pulses, but that it enhances the interaural latency difference. If sound location is determined mainly from latency comparison, then behavioral responses to a pulse train following tone stimulation should be normal, but if the main cue for sound location is interaural difference in response strength, then post-tone behavioral responses should sometimes be misdirected. Initial phonotactic responses to the post-tone pulse train were frequently directed away from, rather than towards, the sound source, indicating that the dominant sensory cue for sound location is interaural difference in response strength.

8.
Auditory receptors of the locust (Locusta migratoria) were investigated with respect to the directionality cues present in their spiking responses, with special emphasis on how directional cues are influenced by the rise time of sound signals. Intensity differences between the ears influence two possible cues in the receptor responses, spike count and response latency. Variation in the rise time of sound pulses had little effect on the overall spike count; however, it had a substantial effect on the temporal distribution of the receptor's spiking response, especially on the latencies of first spikes. In particular, with ramp-like stimuli the slope of the latency-vs.-intensity curves was steeper than with stimuli with steep onsets (Fig. 3). Stimuli with flat, ramp-like onsets led to an increase in the latency differences of discharges between left and right tympanic receptors. This type of ramp-like stimulus could thus facilitate directional hearing. This hypothesis was corroborated by a Monte Carlo simulation in which the probability of incorrect directional decisions was determined on the basis of the receptor latencies and spike counts. Slowly rising ramps significantly improved the decisions based on response latency, as compared to stimuli with sudden onsets (Fig. 4). These results are compared to behavioural results obtained with the grasshopper Ch. biguttulus. The stridulation signals of the females of this species consist of ramp-like pulses, which could be an adaptation to facilitate directional hearing in phonotactically approaching males.
Abbreviations: HFR high-frequency receptor; ILD interaural level difference; LFR low-frequency receptor; SPL sound pressure level; WN white noise
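The logic of the Monte Carlo test in this abstract can be sketched in a few lines: a given interaural level difference is converted into a mean interaural latency difference via the latency-vs.-intensity slope, trial-to-trial latency jitter is added, and a decision is scored correct when the near-side receptor fires first. The slopes, jitter, and ILD below are invented for illustration; steeper slopes (ramp-like onsets) separate the latency distributions and reduce directional errors.

```python
import numpy as np

rng = np.random.default_rng(2)
trials = 20_000
jitter_ms = 0.4   # trial-to-trial latency jitter per receptor (assumed)
ild_db = 2.0      # interaural level difference for a lateral source (assumed)

# Assumed latency-vs-intensity slopes (ms per dB): shallow for stimuli with
# sudden onsets, steep for slowly rising ramps.
p_correct = {}
for slope in (0.1, 0.5):
    dt_ms = slope * ild_db                       # mean interaural latency difference
    lat_near = rng.normal(0.0, jitter_ms, trials)
    lat_far = rng.normal(dt_ms, jitter_ms, trials)
    # Decision rule: the side whose receptor fires first is chosen.
    p_correct[slope] = float(np.mean(lat_near < lat_far))
```

With these toy numbers the steep-slope (ramp) condition yields a markedly higher fraction of correct decisions than the shallow-slope (sudden-onset) condition, mirroring the effect reported for Fig. 4.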

9.
10.
Summary. The directionality of cochlear microphonic potentials in the azimuthal plane was investigated in the pigeon (Columba livia), using acoustic free-field stimulation (pure tones of 0.25–6 kHz). At high frequencies in the pigeon's hearing range (4–6 kHz), changing azimuth resulted in a maximum change of the cochlear microphonic amplitude of about 20 dB (SPL). The directionality decreased clearly with decreasing frequency. Acoustic blocking of the contralateral ear canal could reduce the directional sensitivity of the ipsilateral ear by at most 8 dB. This indicates significant sound transmission through the bird's interaural pathways. However, the magnitude of these effects compared to those obtained by sound diffraction (maximum > 15 dB) suggests that pressure gradients at the tympanic membrane are only of subordinate importance for the generation of directional cues. The comparison of interaural intensity differences with previous behavioral results confirms the hypothesis that interaural intensity difference is the primary directional cue for azimuthal sound localization in the high-frequency range (2–6 kHz).
Abbreviations: CM cochlear microphonic potential; IID interaural intensity difference; IID-MRA minimum resolvable angle calculated from interaural intensity difference; MRA minimum resolvable angle; OTD interaural ongoing time difference; RMS root mean square; SPL sound pressure level

11.
Integration of multiple sensory cues can improve performance in detection and estimation tasks. It is an open theoretical question under which conditions linear or nonlinear cue combination is Bayes-optimal. We demonstrate that a neural population decoded by a population vector requires nonlinear cue combination to approximate Bayesian inference. Specifically, if cues are conditionally independent, multiplicative cue combination is optimal for the population vector. The model was tested on neural and behavioral responses in the barn owl's sound localization system, where space-specific neurons owe their selectivity to multiplicative tuning to the sound localization cues of interaural phase difference (IPD) and interaural level difference (ILD). We found that IPD and ILD cues are approximately conditionally independent. As a result, the multiplicative selectivity of midbrain space-specific neurons to IPD and ILD permits a population vector to perform Bayesian cue combination. We further show that this model describes the owl's localization behavior in azimuth and elevation. This work provides theoretical justification and experimental evidence supporting the optimality of nonlinear cue combination.
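The core idea of this abstract — multiplicative combination of two cue tunings read out by a population vector — can be sketched with a hypothetical grid of space-specific neurons. The Gaussian tuning shapes, widths, and grid spacing are assumptions for illustration only; preferred azimuth stands in for IPD tuning and preferred elevation for ILD tuning.

```python
import numpy as np

# Hypothetical grid of space-specific neurons, each with a preferred
# azimuth (set by its IPD tuning) and elevation (set by its ILD tuning).
az_pref = np.linspace(-90, 90, 37)
el_pref = np.linspace(-60, 60, 25)
AZ, EL = np.meshgrid(az_pref, el_pref)

def gauss(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2)

def population_response(az, el, sigma_az=20.0, sigma_el=20.0):
    # Multiplicative combination of the two cue tunings, the optimal
    # rule when the cues are conditionally independent.
    return gauss(AZ, az, sigma_az) * gauss(EL, el, sigma_el)

def population_vector(rates):
    # Rate-weighted average of the neurons' preferred directions.
    w = rates / rates.sum()
    return float((w * AZ).sum()), float((w * EL).sum())

az_hat, el_hat = population_vector(population_response(az=30.0, el=-15.0))
```

Because each neuron's rate is the product of its two cue tunings, the population vector recovers both coordinates of the source at once; with additive combination the readout would not factor this way.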

12.
Barn owls use interaural intensity differences to localize sounds in the vertical plane. At a given elevation, the magnitude of the interaural intensity difference cue varies with frequency, creating an interaural intensity difference spectrum of cues that is characteristic of that direction. To test whether space-specific cells are sensitive to spectral interaural intensity difference cues, pure-tone interaural intensity difference tuning curves were taken at multiple frequencies for single neurons in the external nucleus of the inferior colliculus. For a given neuron, the interaural intensity differences eliciting the maximum response (the best interaural intensity differences) changed with the frequency of the stimulus by an average maximal difference of 9.4±6.2 dB. The resulting spectral patterns of these neurally preferred interaural intensity differences exhibited a high degree of similarity to the acoustic interaural intensity difference spectra characteristic of restricted regions in space. Compared to stimuli whose interaural intensity difference spectra matched the preferred spectra, stimuli with inverted spectra elicited a smaller response, showing that space-specific neurons are sensitive to the shape of the spectrum. The underlying mechanism is an inhibition at frequency-specific interaural intensity differences that differ from the preferred spectral pattern. Collectively, these data show that space-specific neurons are sensitive to spectral interaural intensity difference cues and support the idea that behaving barn owls use such cues to precisely localize sounds.
Abbreviations: ABI average binaural intensity; HRTF head-related transfer function; ICx external nucleus of the inferior colliculus; IID interaural intensity difference; ITD interaural time difference; OT optic tectum; RMS root mean square; VLVp nucleus ventralis lemnisci lateralis, pars posterior
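The matched-versus-inverted-spectrum comparison in this abstract can be illustrated with a toy template-matching neuron. The preferred IID spectrum and the tuning width below are invented; the multiplicative combination across bands is one simple way to model the reported frequency-specific inhibition for IIDs that deviate from the preferred pattern.

```python
import numpy as np

freqs_khz = np.array([4.0, 5.0, 6.0, 7.0, 8.0])
# Hypothetical preferred IID spectrum of one space-specific neuron (dB).
preferred_iid = np.array([5.0, 8.0, 12.0, 9.0, 6.0])

def response(stim_iid, width_db=4.0):
    """Gaussian tuning in each frequency band; IIDs deviating from the
    preferred spectral pattern act like frequency-specific inhibition,
    so the band responses combine multiplicatively."""
    per_band = np.exp(-0.5 * ((stim_iid - preferred_iid) / width_db) ** 2)
    return float(np.prod(per_band))

r_matched = response(preferred_iid)                                   # matched spectrum
r_flat = response(np.full_like(preferred_iid, preferred_iid.mean()))  # flat spectrum
r_inverted = response(-preferred_iid)                                 # inverted spectrum
```

In this sketch the matched spectrum drives the strongest response, a flat spectrum an intermediate one, and the inverted spectrum the weakest, qualitatively reproducing the ordering reported for ICx neurons.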

13.
There are significant challenges to restoring binaural hearing to children who have been deaf from an early age. The uncoordinated and poor temporal information available from cochlear implants distorts perception of the interaural timing differences normally important for sound localization and listening in noise. Moreover, binaural development can be compromised by bilateral and unilateral auditory deprivation. Here, we studied perception of both interaural level and timing differences in 79 children/adolescents using bilateral cochlear implants and 16 peers with normal hearing. They were asked on which side of their head they heard unilaterally or bilaterally presented click or electrical pulse trains. Interaural level cues were identified by most participants, including adolescents with long periods of unilateral cochlear implant use and little bilateral implant experience. Interaural timing cues were not detected by new bilateral adolescent users, consistent with previous evidence. Evidence of binaural timing detection was, for the first time, found in children who had much longer implant experience, but it was marked by poorer than normal sensitivity and an abnormally strong dependence on current level differences between implants. In addition, children with prior unilateral implant use showed a higher proportion of responses to their first implanted sides than children implanted simultaneously. These data indicate that there are functional repercussions of developing binaural hearing through bilateral cochlear implants, particularly when provided sequentially; nonetheless, children have an opportunity to use these devices to hear better in noise and gain spatial hearing.

14.
Normal sound localization requires precise comparisons of sound timing and pressure levels between the two ears. The primary localization cues are interaural time differences (ITD) and interaural level differences (ILD). Voltage-gated potassium channels, including Kv3.3, are highly expressed in the auditory brainstem and are thought to underlie the exquisite temporal precision and rapid spike rates that characterize brainstem binaural pathways. An autosomal dominant mutation in the gene encoding Kv3.3 has been demonstrated in a large Filipino kindred, manifesting as spinocerebellar ataxia type 13 (SCA13). This kindred provides a rare opportunity to test in vivo the importance of a specific channel subunit for human hearing. Here, we demonstrate psychophysically that individuals with the mutant allele exhibit profound deficits in both ITD and ILD sensitivity, despite showing no obvious impairment in pure-tone sensitivity with either ear. Surprisingly, several individuals exhibited the auditory deficits even though they were pre-symptomatic for SCA13. We would expect impairments of binaural processing as great as those observed in this family to result in prominent deficits in localization of sound sources and in loss of the "spatial release from masking" that aids in understanding speech in the presence of competing sounds.

15.
Accurate sound source localization in three-dimensional space is essential for an animal's orientation and survival. While the horizontal position can be determined by interaural time and intensity differences, localization in elevation was thought to require external structures that modify sound before it reaches the tympanum. Here we show that in birds, even without external structures like pinnae or feather ruffs, the simple shape of the head induces sound modifications that depend on the elevation of the source. Based on a model of localization errors, we show that these cues are sufficient to locate sounds in the vertical plane. These results suggest that the head of all birds induces acoustic cues for sound localization in the vertical plane, even in the absence of external ears.

16.
Bi-coordinate sound localization by the barn owl
1. Binaurally time-shifted and intensity-imbalanced noise, delivered through earphones, induced owls to respond with a head-orienting behavior similar to that elicited by free-field auditory stimuli. 2. Owls derived the azimuthal and elevational coordinates of a sound from a combination of interaural time difference (ITD) and interaural intensity difference (IID). 3. IID and ITD each contained information about both the azimuth and the elevation of the signal; thus, IID and ITD formed a coordinate system in which the axes were non-orthogonal. 4. ITD was a strong determinant of the azimuth, and IID a strong determinant of the elevation, of the elicited head turn.
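The non-orthogonal coordinate system described in point 3 can be sketched as a locally linear cue model: each cue depends on both direction coordinates, so recovering azimuth and elevation amounts to inverting a 2×2 mapping. The slope values below are purely illustrative, not measured owl data.

```python
import numpy as np

# Hypothetical locally linear model near the frontal direction: ITD and IID
# each depend on both azimuth and elevation, so the two cues form a
# non-orthogonal coordinate system (illustrative slopes).
A = np.array([[2.5, 0.3],    # ITD row: µs per degree of azimuth, elevation
              [0.2, 0.6]])   # IID row: dB per degree of azimuth, elevation

def cues_from_direction(az, el):
    """Forward model: direction -> (ITD, IID) cue pair."""
    return A @ np.array([az, el])

def direction_from_cues(itd, iid):
    """Invert the cue mapping to recover azimuth and elevation."""
    return np.linalg.solve(A, np.array([itd, iid]))

itd, iid = cues_from_direction(az=20.0, el=-10.0)
az_hat, el_hat = direction_from_cues(itd, iid)
```

Because the off-diagonal terms are nonzero, neither cue alone specifies either coordinate; only the joint (ITD, IID) pair does, which is the sense in which the two cues form skewed axes of a bi-coordinate system.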

17.
Spatial release from masking refers to a benefit for speech understanding that occurs when a target talker and a masker talker are spatially separated: speech intelligibility for the target is typically higher than when both talkers are at the same location. In cochlear implant listeners, spatial release from masking is much reduced or absent compared with normal-hearing listeners, perhaps because cochlear implant listeners cannot effectively attend to spatial cues. Three experiments examined factors that may interfere with deploying spatial attention to a target talker masked by another talker. To simulate cochlear implant listening, stimuli were vocoded with two unique features. First, we used 50-Hz low-pass filtered speech envelopes and noise carriers, strongly reducing the possibility of temporal pitch cues; second, co-modulation was imposed on target and masker utterances to enhance perceptual fusion between the two sources. Stimuli were presented over headphones. Experiments 1 and 2 presented high-fidelity spatial cues with unprocessed and vocoded speech. Experiment 3 maintained faithful long-term average interaural level differences but presented scrambled interaural time differences with vocoded speech. Results show a robust spatial release from masking in Experiments 1 and 2, and a greatly reduced spatial release in Experiment 3. Faithful long-term average interaural level differences were thus insufficient for producing spatial release from masking, suggesting that appropriate interaural time differences are necessary for restoring it, at least in a situation where there are few viable alternative segregation cues.

18.
Stereo vision and sound localization are behaviors that help organisms orient in space. Stereo vision provides information about the distance of an object; sound localization mainly analyzes directional signals. Two specific cues, the binocular disparities underlying stereo vision and the interaural time difference, which represents azimuth in sound localization, share many computational problems, although they are generated in two different modalities. The extraction of both cues requires a comparison of signals that arise from two independent sensors, called two-sensor comparison here. This two-sensor comparison is achieved by algorithms similar to summation, half-wave rectification, and multiplication. Since the underlying neurons are band-pass filters, the two-sensor comparison results in ambiguities. These are removed in a hierarchical way in several computational steps, involving squaring, inhibition, and across-frequency integration.
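The band-pass ambiguity and its removal by across-frequency integration can be demonstrated numerically: within one frequency channel, a multiplicative two-sensor comparison (a cross-correlation) is periodic in delay and therefore phase-ambiguous, but summing channels with different periods reinforces only the true delay. The brick-wall FFT filter and the band choices below are crude stand-ins for cochlear filtering, not a model from the paper.

```python
import numpy as np

rng = np.random.default_rng(5)
fs = 40_000
n = int(0.05 * fs)          # 50-ms noise token
true_delay = 12             # samples (~0.3 ms interaural time difference)

src = rng.standard_normal(n + true_delay)
left = src[true_delay:]     # leading ear
right = src[:n]             # lagging ear: right[k + true_delay] == left[k]

def bandpass(x, f_lo, f_hi):
    """Crude FFT brick-wall band-pass, standing in for cochlear filtering."""
    spec = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    spec[(freqs < f_lo) | (freqs > f_hi)] = 0
    return np.fft.irfft(spec, len(x))

lags = np.arange(-40, 41)
summed = np.zeros(len(lags))
for f_lo, f_hi in [(2000, 3000), (4000, 5000), (6000, 7000)]:
    l_b, r_b = bandpass(left, f_lo, f_hi), bandpass(right, f_lo, f_hi)
    # Two-sensor comparison by multiplication: a running cross-correlation.
    # Within one band this function is quasi-periodic, hence ambiguous.
    cc = np.array([np.dot(l_b, np.roll(r_b, -k)) for k in lags])
    summed += cc / np.abs(cc).max()   # across-frequency integration
best_lag = int(lags[int(np.argmax(summed))])
```

Each band's side peaks fall at different lags (set by that band's center frequency), so only the peak at the true delay survives the sum, which is the across-frequency disambiguation step the abstract describes.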

19.
In recent years, a great deal of research in the field of sound localization has been aimed at finding the acoustic cues that human listeners use to localize sounds and understanding the mechanisms by which they process these cues. In this paper, we propose a complementary approach by constructing an ideal-observer model, by which we mean a model that performs optimal information processing within a Bayesian context. The model considers all available spatial information contained within the acoustic signals encoded by each ear. Parameters for the optimal Bayesian model are determined based on psychoacoustic discrimination experiments on interaural time difference and sound intensity. Without regard to how the human auditory system actually processes information, we examine the best possible localization performance that could be achieved based only on analysis of the input information, given the constraints of the normal auditory system. We show that the model performance is generally in good agreement with actual human localization performance, as assessed in a meta-analysis of many localization experiments (Best et al. in Principles and Applications of Spatial Hearing, pp 14-23, World Scientific Publishing, Singapore, 2011). We believe this approach can shed new light on the optimality (or otherwise) of human sound localization, especially with regard to the level of uncertainty in the input information. Moreover, the proposed model allows one to study the relative importance of various (combinations of) acoustic cues for spatial localization and enables a prediction of which cues are most informative and therefore likely to be used by humans in various circumstances.
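A minimal ideal-observer of this kind can be sketched as a Bayesian azimuth estimator: each cue is modeled as a noisy function of azimuth, the likelihoods of the observed ITD and ILD are multiplied under a flat prior, and the posterior peak gives the optimal estimate. The cue-generation functions and noise levels below are assumptions for illustration, not the fitted parameters of the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
az_grid = np.linspace(-90, 90, 361)   # candidate azimuths (degrees)

# Assumed cue-generation model: each cue is a noisy function of azimuth.
def itd_of(az):   # microseconds, roughly sinusoidal with azimuth
    return 250 * np.sin(np.deg2rad(az))

def ild_of(az):   # dB, shallower dependence on azimuth
    return 10 * np.sin(np.deg2rad(az))

# Cue noise, as would be set from discrimination experiments (assumed values).
sigma_itd, sigma_ild = 20.0, 1.5

def posterior(itd_obs, ild_obs):
    """Flat prior; independent Gaussian likelihoods for the two cues."""
    ll = (-0.5 * ((itd_obs - itd_of(az_grid)) / sigma_itd) ** 2
          - 0.5 * ((ild_obs - ild_of(az_grid)) / sigma_ild) ** 2)
    p = np.exp(ll - ll.max())
    return p / p.sum()

true_az = 25.0
itd_obs = itd_of(true_az) + rng.normal(0, sigma_itd)
ild_obs = ild_of(true_az) + rng.normal(0, sigma_ild)
az_map = float(az_grid[np.argmax(posterior(itd_obs, ild_obs))])
```

The spread of the posterior directly quantifies the uncertainty in the input information, and zeroing one cue's likelihood term shows how much each cue contributes, which is the kind of cue-importance analysis the model enables.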

20.
Sound localization relies on minute differences in the timing and intensity of sound arriving at both ears. Neurons of the lateral superior olive (LSO) in the brainstem process these interaural disparities by precisely detecting excitatory and inhibitory synaptic inputs. Aging generally induces selective loss of inhibitory synaptic transmission along the entire auditory pathways, including the reduction of inhibitory afferents to the LSO. Electrophysiological recordings in animals, however, reported only minor functional changes in the aged LSO. The perplexing discrepancy between anatomical and physiological observations suggests a role for activity-dependent plasticity that would help neurons retain their binaural tuning function despite loss of inhibitory inputs. To explore this hypothesis, we use a computational model of the LSO to investigate mechanisms underlying the observed functional robustness against age-related loss of inhibitory inputs. The LSO model is an integrate-and-fire type enhanced with a small amount of low-voltage-activated potassium conductance and driven by (in)homogeneous Poissonian inputs. Without synaptic input loss, model spike rates varied smoothly with interaural time and level differences, replicating empirical tuning properties of the LSO. By reducing the number of inhibitory afferents to mimic age-related loss of inhibition, overall spike rates increased, which negatively impacted binaural tuning performance, measured as modulation depth and neuronal discriminability. To simulate a recovery process compensating for the loss of inhibitory fibers, the strength of the remaining inhibitory inputs was increased. With this modification, the effects of inhibition loss on binaural tuning were considerably weakened, leading to an improvement of functional performance. These neuron-level observations were further confirmed by population modeling, in which the binaural tuning properties of multiple LSO neurons were varied according to empirical measurements.
These results demonstrate the plausibility that homeostatic plasticity could effectively counteract the known age-dependent loss of inhibitory fibers in the LSO and suggest that behavioral degradation of sound localization might originate from changes occurring more centrally.
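The fiber-loss-and-compensation logic of this model can be sketched with a bare leaky integrate-and-fire neuron driven by Poissonian excitation and inhibition. This is a much-simplified stand-in for the published model (no low-voltage-activated potassium conductance, and all parameter values are invented): halving the inhibitory fiber count raises the spike rate, and doubling the strength of the remaining fibers restores it.

```python
import numpy as np

def lso_rate(n_inh, g_inh, t_sim=2.0, dt=1e-4, seed=4):
    """Spike rate of a leaky integrate-and-fire LSO-like neuron driven by
    Poissonian excitatory fibers and n_inh inhibitory fibers of strength
    g_inh (all parameter values illustrative, not fitted)."""
    rng = np.random.default_rng(seed)
    tau = 0.005                               # membrane time constant (s)
    v_thresh, v_reset = 1.0, 0.0
    n_exc, w_exc, exc_rate = 8, 0.08, 600.0   # excitatory drive (assumed)
    inh_rate = 200.0                          # spikes/s per inhibitory fiber
    steps = int(t_sim / dt)
    ne = rng.poisson(n_exc * exc_rate * dt, steps)   # input spike counts
    ni = rng.poisson(n_inh * inh_rate * dt, steps)
    v, spikes = 0.0, 0
    for k in range(steps):
        v += -v * dt / tau + ne[k] * w_exc - ni[k] * g_inh
        if v >= v_thresh:                      # threshold crossing -> spike
            spikes += 1
            v = v_reset
    return spikes / t_sim

rate_normal = lso_rate(n_inh=8, g_inh=0.08)   # intact inhibition
rate_lost = lso_rate(n_inh=4, g_inh=0.08)     # half the inhibitory fibers lost
rate_comp = lso_rate(n_inh=4, g_inh=0.16)     # remaining fibers strengthened
```

Strengthening the surviving inputs restores the mean inhibitory conductance (4 × 0.16 = 8 × 0.08), which is the homeostatic compensation the paper proposes; residual differences come from the coarser, higher-variance inhibitory input statistics.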
