Similar Articles
 20 similar articles found (search time: 31 ms)
1.
In nature, sounds from objects of interest arrive at the ears accompanied by sound waves from other actively emitting objects and by reflections off nearby surfaces. Although all of these waveforms sum at the eardrums, humans with normal hearing effortlessly segregate one sound source from another. Our laboratory is investigating the neural basis of this perceptual feat, often called the "cocktail party effect", using the barn owl as an animal model. The barn owl, renowned for its ability to localize sounds and for its spatiotopic representation of auditory space, is an established model for spatial hearing. Here, we briefly review the neural basis of the localization of a single sound source in an anechoic environment and then generalize the ideas developed therein to cases in which there are multiple, concomitant sound sources and acoustic reflections.

2.
The effect of binaural decorrelation on the processing of interaural level difference cues in the barn owl (Tyto alba) was examined behaviorally and electrophysiologically. The electrophysiology experiment measured the effect of variations in binaural correlation on the first stage of interaural level difference encoding in the central nervous system. The responses of single neurons in the posterior part of the ventral nucleus of the lateral lemniscus were recorded in response to binaurally correlated and binaurally uncorrelated noise. No significant differences in interaural level difference sensitivity were found between conditions. Neurons in the posterior part of the ventral nucleus of the lateral lemniscus encode the interaural level difference of binaurally correlated and binaurally uncorrelated noise with equal accuracy and precision. This nucleus therefore supplies higher auditory centers with an undegraded interaural level difference signal for sound stimuli that lack a coherent interaural time difference. The behavioral experiment measured auditory saccades in response to interaural level differences presented in binaurally correlated and binaurally uncorrelated noise. The precision and accuracy of sound localization based on interaural level difference were reduced but not eliminated for binaurally uncorrelated signals. The observation that barn owls continue to vary auditory saccades with the interaural level difference of binaurally uncorrelated stimuli suggests that the neurons that drive head saccades can be activated by incomplete auditory spatial information.

3.
The human brain has accumulated many useful building blocks over its evolutionary history, and the best knowledge of these has often derived from experiments performed in animal species that display finely honed abilities. In this article we review a model system at the forefront of investigation into the neural bases of information processing, plasticity, and learning: the barn owl auditory localization pathway. In addition to the broadly applicable principles gleaned from three decades of work in this system, there are good reasons to believe that continued exploration of the owl brain will be invaluable for further advances in understanding how neuronal networks give rise to behavior.

4.
The barn owl (Tyto alba) possesses several specializations for auditory processing. The most conspicuous features are the directionally sensitive facial ruff and the asymmetrically arranged ears. The frequency-specific influence of these features on sound has consequences for sound localization that may differ between low and high frequencies. Whereas the high-frequency range (>3 kHz) is well investigated, less is known about the characteristics of head-related transfer functions for frequencies below 3 kHz. In the present study, we compared 1/3-octave band-filtered transfer functions of barn owls with center frequencies ranging from 0.5 to 9 kHz. The range of interaural time differences was 600 μs at frequencies above 4 kHz, decreased to 505 μs at 3 kHz, and increased again to about 615 μs at lower frequencies. The ranges for very low (0.5–1 kHz) and high frequencies (5–9 kHz) were not statistically different. Interaural level differences and monaural gains increased monotonically with increasing frequency. No systematic influence of body temperature on the measured localization cues was observed. These data have implications for the mechanism underlying sound localization, and we suggest that the barn owl's ears work as pressure receivers in both the high- and low-frequency ranges.

5.

Background

Barn owls integrate spatial information across frequency channels to localize sounds in space.

Methodology/Principal Findings

We presented barn owls with synchronous sounds that contained different bands of frequencies (3–5 kHz and 7–9 kHz) from different locations in space. When the owls were confronted with the conflicting localization cues from two synchronous sounds of equal level, their orienting responses were dominated by one of the sounds: they oriented toward the location of the low frequency sound when the sources were separated in azimuth; in contrast, they oriented toward the location of the high frequency sound when the sources were separated in elevation. We identified neural correlates of this behavioral effect in the optic tectum (OT, superior colliculus in mammals), which contains a map of auditory space and is involved in generating orienting movements to sounds. We found that low frequency cues dominate the representation of sound azimuth in the OT space map, whereas high frequency cues dominate the representation of sound elevation.

Conclusions/Significance

We argue that the dominance hierarchy of localization cues reflects several factors: 1) the relative amplitude of the sound providing the cue, 2) the resolution with which the auditory system measures the value of a cue, and 3) the spatial ambiguity in interpreting the cue. These same factors may contribute to the relative weighting of sound localization cues in other species, including humans.

6.
Neural processing: the logic of multiplication in single neurons
Theory indicates that neural networks can derive considerable computational power from a simple multiplication of their inputs, but the extent to which real neurons do this is unclear. A recent study of the auditory localization pathway of the barn owl has shed new light on this important question.

7.
Absolute thresholds and critical masking ratios were determined behaviorally for the European barn owl (Tyto alba guttata). The species shows excellent sensitivity throughout its hearing range, with a minimum threshold of −14.2 dB sound pressure level at 6.3 kHz, similar to the sensitivity found in the American barn owl (Tyto alba pratincola) and some other owls. Both the European and the American barn owl have a high upper-frequency limit of hearing, exceeding that of other bird species. Critical masking ratios, which can provide an estimate of the frequency selectivity of the barn owl's hearing system, were determined with noise of about 0 dB spectrum level. They increased from 19.1 dB at 2 kHz to 29.2 dB at 8 kHz, at a rate of 5.1 dB per octave. The corresponding critical-ratio bandwidths were 81, 218, 562 and 831 Hz for test-tone frequencies of 2, 4, 6.3 and 8 kHz, respectively. Contrary to expectations based on the spatial representation of frequencies on the basilar papilla, these values indicate increasing bandwidths of auditory filters in the region of the barn owl's auditory fovea. This increase, however, correlates with the increase in the bandwidths of tuning curves in the barn owl's auditory fovea. Accepted: 27 November 1997

8.
We studied the influence of frequency on sound localization in free-flying barn owls by quantifying aspects of their target-approaching behavior toward a distant sound source during ongoing auditory stimulation. In the baseline condition, with a stimulus covering most of the owls' hearing range (1–10 kHz), all owls landed within a radius of 20 cm from the loudspeaker in more than 80% of the cases, and localization in azimuth was more accurate than localization in elevation. When the stimulus contained only high frequencies (>5 kHz), no changes in striking behavior were observed. But when only frequencies from 1 to 5 kHz were presented, localization accuracy and precision decreased. In a second step, we tested whether a further border exists at 2.5 kHz, as suggested by optimality models. When we compared striking behavior for a stimulus having energy from 2.5 to 5 kHz with that for a stimulus having energy between 1 and 2.5 kHz, no consistent differences were observed. We further found that pre-takeoff latency was longer for the latter stimulus than for baseline, and that center frequency was a better predictor of landing precision than stimulus bandwidth. These data fit well with what is known from head-turning studies and from neurophysiology.

9.
Integration of multiple sensory cues can improve performance in detection and estimation tasks. It is an open theoretical question under which conditions linear or nonlinear cue combination is Bayes-optimal. We demonstrate that a neural population decoded by a population vector requires nonlinear cue combination to approximate Bayesian inference. Specifically, if cues are conditionally independent, multiplicative cue combination is optimal for the population vector. The model was tested on neural and behavioral responses in the barn owl's sound localization system, where space-specific neurons owe their selectivity to multiplicative tuning to the sound localization cues interaural phase difference (IPD) and interaural level difference (ILD). We found that IPD and ILD cues are approximately conditionally independent. As a result, the multiplicative selectivity of midbrain space-specific neurons to IPD and ILD permits a population vector to perform Bayesian cue combination. We further show that this model describes the owl's localization behavior in azimuth and elevation. This work provides theoretical justification and experimental evidence supporting the optimality of nonlinear cue combination.
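The two ingredients of this account, multiplicative IPD × ILD tuning and population-vector readout, can be illustrated with a minimal toy model. Everything here is an assumption for illustration: the von Mises-like and Gaussian tuning shapes, the parameter values, and the linear mapping from azimuth to each cue are not taken from the paper.

```python
import numpy as np

def space_specific_response(ipd, ild, pref_ipd, pref_ild, k_ipd=2.0, sigma_ild=4.0):
    """Multiplicative tuning: the response factorizes into an IPD term
    (circular, von Mises-like) and an ILD term (Gaussian)."""
    f_ipd = np.exp(k_ipd * (np.cos(ipd - pref_ipd) - 1.0))
    f_ild = np.exp(-0.5 * ((ild - pref_ild) / sigma_ild) ** 2)
    return f_ipd * f_ild

def population_vector(responses, pref_azimuths):
    """Decode azimuth as the angle of the response-weighted vector sum."""
    return np.angle(np.sum(responses * np.exp(1j * pref_azimuths)))

# Toy frontal population: preferred azimuths tile +/-90 deg, and each cue
# is (for illustration only) a linear function of azimuth.
prefs = np.linspace(-np.pi / 2, np.pi / 2, 181)
ipd_of = lambda az: az           # IPD in radians
ild_of = lambda az: 12.0 * az    # ILD in dB
true_az = 0.3                    # radians
responses = space_specific_response(ipd_of(true_az), ild_of(true_az),
                                    ipd_of(prefs), ild_of(prefs))
decoded = population_vector(responses, prefs)  # close to 0.3 rad
```

Because the multiplicative tuning is symmetric about the preferred azimuth, the population vector recovers the source direction without bias in this toy setting.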

10.

Background

When sound arrives at the eardrum, it has already been filtered by the body, head, and outer ear. This process is mathematically described by the head-related transfer functions (HRTFs), which are characteristic of the spatial position of a sound source and of the individual ear. HRTFs in the barn owl (Tyto alba) are also shaped by the facial ruff, a specialization that alters interaural time differences (ITD), interaural level differences (ILD), and the frequency spectrum of the incoming sound to improve sound localization. Here we created novel stimuli to simulate the removal of the barn owl's ruff in a virtual acoustic environment, thus creating a situation similar to passive listening in other animals, and used these stimuli in behavioral tests.

Methodology/Principal Findings

HRTFs were recorded from an owl before and after removal of the ruff feathers. Normal and ruff-removed conditions were created by filtering broadband noise with the HRTFs. Under normal virtual conditions, no differences in azimuthal head-turning behavior between individualized and non-individualized HRTFs were observed. The owls were able to respond differently to stimuli from the back than to stimuli from the front that had the same ITD. By contrast, such a discrimination was not possible after the virtual removal of the ruff. Elevational head-turn angles were slightly smaller with non-individualized than with individualized HRTFs. The removal of the ruff resulted in a large decrease in elevational head-turning amplitudes.
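Virtual-space stimuli of the kind described here are typically rendered by convolving a source signal with each ear's head-related impulse response (HRIR, the time-domain counterpart of the HRTF). A minimal sketch, with toy impulse responses standing in for measured owl HRIRs (the function name and all values are illustrative):

```python
import numpy as np

def virtual_stimulus(source, hrir_left, hrir_right):
    """Render a free-field source over headphones by filtering the same
    signal with each ear's head-related impulse response."""
    return (np.convolve(source, hrir_left),
            np.convolve(source, hrir_right))

# Toy HRIRs: identical except for a 3-sample delay at the right ear,
# so the rendered binaural pair carries a pure ITD of 3 samples.
rng = np.random.default_rng(1)
noise = rng.standard_normal(1024)
left, right = virtual_stimulus(noise, np.array([1.0]),
                               np.array([0.0, 0.0, 0.0, 1.0]))
```

Measured HRIRs additionally impose frequency-dependent gains and phases, which is what makes manipulations such as virtual ruff removal possible: one simply swaps in the impulse responses recorded after the feathers were removed.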

Conclusions/Significance

The facial ruff a) improves azimuthal sound localization by increasing the ITD range and b) improves elevational sound localization in the frontal field by introducing a shift of iso-ILD lines out of the midsagittal plane, which causes ILDs to increase with increasing stimulus elevation. The changes at the behavioral level could be related to the changes in the binaural physical parameters that occurred after the virtual removal of the ruff. These data provide new insights into the function of external hearing structures and open up the possibility of applying the results to autonomous agents, to the creation of virtual auditory environments for humans, or to hearing aids.

11.
Sound localization is a computational process that requires the central nervous system to measure various auditory cues and then associate particular cue values with appropriate locations in space. Behavioral experiments show that barn owls learn to associate values of cues with locations in space based on experience. The capacity for experience-driven changes in sound localization behavior is particularly great during a sensitive period that lasts until the approach of adulthood. Neurophysiological techniques have been used to determine underlying sites of plasticity in the auditory space-processing pathway. The external nucleus of the inferior colliculus (ICX), where a map of auditory space is synthesized, is a major site of plasticity. Experience during the sensitive period can cause large-scale, adaptive changes in the tuning of ICX neurons for sound localization cues. Large-scale physiological changes are accompanied by anatomical remodeling of afferent axons to the ICX. Changes in the tuning of ICX neurons for cue values involve two stages: (1) the instructed acquisition of neuronal responses to novel cue values and (2) the elimination of responses to inappropriate cue values. Newly acquired neuronal responses depend differentially on NMDA receptor currents for their expression. A model is presented that can account for this adaptive plasticity in terms of plausible cellular mechanisms. Accepted: 17 April 1999

12.
The relationship between neuronal acuity and behavioral performance was assessed in the barn owl (Tyto alba), a nocturnal raptor renowned for its ability to localize sounds and for the topographic representation of auditory space found in the midbrain. We measured discrimination of sound-source separation using a newly developed procedure involving the habituation and recovery of the pupillary dilation response (PDR). The smallest discriminable change of source location was found to be about two times finer in azimuth than in elevation. Recordings from neurons in its midbrain space map revealed that their spatial tuning, like the spatial discrimination behavior, was also better in azimuth than in elevation by a factor of about two. Because the PDR behavioral assay is mediated by the same circuitry whether discrimination is assessed in azimuth or in elevation, this difference in vertical and horizontal acuity is likely to reflect a true difference in sensory resolution, without the additional confounding effects of differences in motor performance in the two dimensions. Our results, therefore, are consistent with the hypothesis that the acuity of the midbrain space map determines auditory spatial discrimination.

13.
The natural acoustical environment contains many reflective surfaces that give rise to echoes, complicating the task of sound localization and identification. The barn owl (Tyto alba), as a nocturnal predator, relies heavily on its auditory system for tracking and capturing prey in this highly echoic environment. The external nucleus of the owl's inferior colliculus (ICx) contains a retina-like map of space composed of space-specific auditory neurons that have spatially limited receptive fields. We recorded extracellularly from individual space-specific neurons in an attempt to understand the pattern of activity across the ICx in response to a brief direct sound and a simulated echo. Space-specific neurons responded strongly to the direct sound, but their response to a simulated echo was suppressed, typically, if the echo arrived within 5 ms or less of the direct sound. Thus we expect there to be little or no representation within the ICx of echoes arriving within such short delays. Behavioral tests using the owl's natural tendency to turn its head toward a sound source suggested that owls, like their space-specific neurons, localize only the first of two brief sounds. Naive, untrained owls were presented with a pair of sounds in rapid succession from two horizontally separated speakers. With interstimulus delays of less than 10 ms, the owls consistently turned their heads toward the leading speaker. Longer delays elicited head turns to either speaker with approximately equal frequency, and in some cases to both speakers sequentially.
Abbreviations: IC, inferior colliculus; ICx, external nucleus of the inferior colliculus; ITD, interaural time difference; ISI, interstimulus interval; LS, left speaker; RS, right speaker; CS, centering speaker; RF, receptive field
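The suppression pattern reported above can be caricatured in a few lines: a response to one onset suppresses responses to later onsets that fall within a short window. This is only a behavioral-level sketch (the function name is hypothetical, and the ~5 ms window is the value quoted in the abstract, not a fitted parameter):

```python
def responded_onsets(onsets_ms, suppression_ms=5.0):
    """Return the sound onsets that evoke a response, assuming each
    response suppresses further responses for `suppression_ms`."""
    responded, last = [], None
    for t in sorted(onsets_ms):
        if last is None or t - last >= suppression_ms:
            responded.append(t)
            last = t
    return responded

print(responded_onsets([0.0, 3.0]))   # [0.0]        -> echo within 5 ms suppressed
print(responded_onsets([0.0, 12.0]))  # [0.0, 12.0]  -> both sounds represented
```

The sketch reproduces the qualitative result: only the leading sound is represented at short delays, while both sounds are represented at longer delays.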

14.
SYNOPSIS. The detection of interaural time differences underlies azimuthal sound localization in the barn owl. Sensitivity to these time differences arises in the brainstem nucleus laminaris. Auditory information reaches the nucleus laminaris via bilateral projections from the cochlear nucleus magnocellularis. The magnocellular inputs to the nucleus laminaris act as delay lines to create maps of interaural time differences. These delay lines are tapped by postsynaptic coincidence detectors that encode interaural time differences. The entire circuit, from the auditory nerve to the nucleus magnocellularis to the nucleus laminaris, is specialized for the encoding and preservation of temporal information. A mathematical model of this circuit (Grun et al., 1990) provides useful predictions.
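The delay-line and coincidence-detector circuit summarized above is often illustrated as a cross-correlation: each detector sums the product of the two ears' signals at one internal delay, and the best-driven detector marks the ITD. A minimal sketch of that idea (function name and parameters are illustrative, not from the paper; the ±300 µs search range loosely matches the owl ITD ranges quoted elsewhere on this page):

```python
import numpy as np

def jeffress_itd_estimate(left, right, fs, max_delay_s=300e-6):
    """Estimate ITD as the internal delay maximizing coincidence
    (summed product) between the two ears' signals. Positive output
    means the left-ear signal leads."""
    max_lag = int(round(max_delay_s * fs))
    lags = np.arange(-max_lag, max_lag + 1)
    # One "coincidence detector" per internal delay k: sum of left[n] * right[n + k].
    coincidence = [np.sum(left[max(0, -k):len(left) - max(0, k)] *
                          right[max(0, k):len(right) - max(0, -k)])
                   for k in lags]
    return lags[int(np.argmax(coincidence))] / fs

# Toy check: broadband noise in which the left ear leads by 5 samples
# (50 microseconds at fs = 100 kHz).
rng = np.random.default_rng(0)
fs = 100_000
base = rng.standard_normal(2048)
left, right = base[5:], base[:-5]
itd = jeffress_itd_estimate(left, right, fs)
```

Real nucleus laminaris neurons operate per frequency channel on phase-locked spike trains rather than on raw waveforms, but the correlation picture captures the computational core of the circuit.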

15.
The mechanisms underlying the capacity to localize a sound source in the horizontal plane were studied. The results are discussed in relation to existing ideas about how neuronal activity in the auditory centres supports sound localization under natural stimulation. The data are also considered from the viewpoint of a possible considerable improvement in the spatial orientation of deaf people with the aid of bilaterally implanted cochlear implants.

16.
Summary. Female treefrogs (Hyla cinerea and H. gratiosa) can accurately localize a sound source (playback of male mating calls) if both ears are intact. When the sensitivity of one eardrum is attenuated by coating it with a thin layer of silicone grease, females can no longer locate the sound source. This study demonstrates that female anurans rely on interaural cues for localization of a calling male. The neural basis for an anuran's sound localization ability presumably involves binaural convergence on single cells in the central auditory nervous system. This work was supported by research grants from the National Science Foundation and the U.S. Public Health Service. The assistance of Anne J. M. Moffat in measuring the directional characteristics of the loudspeaker is gratefully appreciated.

17.
Zimmer U, Macaluso E. Neuron. 2005;47(6):893-905.
Our brain continuously receives complex combinations of sounds originating from different sources and relating to different events in the external world. Timing differences between the two ears can be used to localize sounds in space, but only when the inputs to the two ears have similar spectrotemporal profiles (high binaural coherence). We used fMRI to investigate the modulation of auditory responses by binaural coherence. We assessed how processing of these cues depends on whether spatial information is task relevant, and whether brain activity correlates with subjects' localization performance. We found that activity in Heschl's gyrus increased with increasing coherence, irrespective of whether localization was task relevant. Posterior auditory regions also showed increased activity for high coherence, primarily when sound localization was required and subjects successfully localized sounds. We conclude that binaural coherence cues are processed throughout the auditory cortex and that these cues are used in posterior regions for successful auditory localization.

18.
Frightening sound stimulation induced alarm and alertness, which weakened attention to a novel environment and increased the orienting response to the source of the frightening sound. The defensive motivation arising under these conditions did not change with increasing sound loudness. Tranquilizers (diazepam, chlordiazepoxide, benactyzine), antidepressants (amitriptyline, imipramine) and some neuroleptics (trifluoperazine, haloperidol) at low doses prevented these disturbances. High doses of pentobarbital and chlorpromazine, as well as of trifluoperazine and haloperidol, did not prevent the aforementioned consequences of emotional excitation.

19.
In recent years, a great deal of research within the field of sound localization has been aimed at finding the acoustic cues that human listeners use to localize sounds and understanding the mechanisms by which they process these cues. In this paper, we propose a complementary approach by constructing an ideal-observer model, by which we mean a model that performs optimal information processing within a Bayesian context. The model considers all available spatial information contained within the acoustic signals encoded by each ear. Parameters for the optimal Bayesian model are determined based on psychoacoustic discrimination experiments on interaural time difference and sound intensity. Without regard to how the human auditory system actually processes information, we examine the best possible localization performance that could be achieved based only on analysis of the input information, given the constraints of the normal auditory system. We show that the model performance is generally in good agreement with actual human localization performance, as assessed in a meta-analysis of many localization experiments (Best et al. in Principles and applications of spatial hearing, pp 14–23. World Scientific Publishing, Singapore, 2011). We believe this approach can shed new light on the optimality (or otherwise) of human sound localization, especially with regard to the level of uncertainty in the input information. Moreover, the proposed model allows one to study the relative importance of various (combinations of) acoustic cues for spatial localization and enables a prediction of which cues are most informative and therefore likely to be used by humans in various circumstances.
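An ideal observer in this sense multiplies per-cue likelihoods and reads out the peak of the resulting posterior. A toy one-dimensional sketch with Gaussian cue noise, a flat prior, conditionally independent cues, and made-up linear cue maps (none of the numbers below come from the paper):

```python
import numpy as np

def map_azimuth(az_grid, itd_obs, ild_obs, itd_of, ild_of, s_itd, s_ild):
    """Maximum a posteriori azimuth from two conditionally independent
    Gaussian cues, evaluated on a grid of candidate azimuths."""
    log_post = (-0.5 * ((itd_obs - itd_of(az_grid)) / s_itd) ** 2
                - 0.5 * ((ild_obs - ild_of(az_grid)) / s_ild) ** 2)
    return az_grid[np.argmax(log_post)]

# Illustrative linear cue maps (not measured values).
itd_of = lambda az: 2.5 * az      # microseconds per degree
ild_of = lambda az: 0.5 * az      # dB per degree
az = np.arange(-40.0, 40.0, 0.1)  # candidate azimuths, degrees

# Conflicting observations: the ITD points to 10 deg, the ILD to 18 deg.
# Reliability weights: w_itd = (2.5/2.5)^2 = 1, w_ild = (0.5/1)^2 = 0.25,
# so the posterior peaks near (1*10 + 0.25*18) / 1.25 = 11.6 deg.
best = map_azimuth(az, itd_obs=25.0, ild_obs=9.0, itd_of=itd_of,
                   ild_of=ild_of, s_itd=2.5, s_ild=1.0)
```

The weighted-average behavior under cue conflict, with weights set by each cue's reliability in azimuth units, is the signature prediction such ideal-observer models are tested against.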

20.
A multiplicative combination of tuning to interaural time difference (ITD) and interaural level difference (ILD) contributes to the generation of spatially selective auditory neurons in the owl's midbrain. Previous analyses of multiplicative responses in the owl have not taken into consideration the frequency-dependence of ITD and ILD cues that occur under natural listening conditions. Here, we present a model for the responses of ITD- and ILD-sensitive neurons in the barn owl's inferior colliculus which satisfies constraints raised by experimental data on frequency convergence, multiplicative interaction of ITD and ILD, and response properties of afferent neurons. We propose that multiplication between ITD- and ILD-dependent signals occurs only within frequency channels and that frequency integration occurs using a linear-threshold mechanism. The model reproduces the experimentally observed nonlinear responses to ITD and ILD in the inferior colliculus, with greater accuracy than previous models. We show that linear-threshold frequency integration allows the system to represent multiple sound sources with natural sound localization cues, whereas multiplicative frequency integration does not. Nonlinear responses in the owl's inferior colliculus can thus be generated using a combination of cellular and network mechanisms, showing that multiple elements of previous theories can be combined in a single system.
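The contrast drawn above, within-channel multiplication followed by linear-threshold integration across frequency versus multiplication across frequency, can be shown with a toy two-source example. All signal values, the channel count, and the threshold are illustrative assumptions, not model parameters from the paper:

```python
import numpy as np

def icx_output(itd_match, ild_match, threshold=0.5):
    """Within each frequency channel the ITD- and ILD-dependent signals
    multiply; channels are then summed linearly and thresholded."""
    per_channel = itd_match * ild_match              # multiplication inside channels
    return max(float(np.sum(per_channel)) - threshold, 0.0)

# A neuron whose preferred location matches one source in channels 0 and 2,
# while channel 1 is dominated by a second source (no cue match -> 0).
itd_match = np.array([1.0, 0.0, 0.8])
ild_match = np.array([0.9, 0.0, 0.7])

linear_threshold = icx_output(itd_match, ild_match)          # source still represented
across_freq_product = float(np.prod(itd_match * ild_match))  # 0.0: one empty channel kills it
```

The example makes the paper's point concrete: with multiple natural sources, some frequency channels inevitably carry no evidence for a given location, so a product across channels collapses to zero while linear-threshold summation preserves the representation.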

