Similar Articles
20 similar articles found (search time: 15 ms)
1.
Sound localization is a computational process that requires the central nervous system to measure various auditory cues and then associate particular cue values with appropriate locations in space. Behavioral experiments show that barn owls learn to associate values of cues with locations in space based on experience. The capacity for experience-driven changes in sound localization behavior is particularly great during a sensitive period that lasts until the approach of adulthood. Neurophysiological techniques have been used to determine underlying sites of plasticity in the auditory space-processing pathway. The external nucleus of the inferior colliculus (ICX), where a map of auditory space is synthesized, is a major site of plasticity. Experience during the sensitive period can cause large-scale, adaptive changes in the tuning of ICX neurons for sound localization cues. Large-scale physiological changes are accompanied by anatomical remodeling of afferent axons to the ICX. Changes in the tuning of ICX neurons for cue values involve two stages: (1) the instructed acquisition of neuronal responses to novel cue values and (2) the elimination of responses to inappropriate cue values. Newly acquired neuronal responses depend differentially on NMDA receptor currents for their expression. A model is presented that can account for this adaptive plasticity in terms of plausible cellular mechanisms. Accepted: 17 April 1999

2.
The visual and auditory systems frequently work together to facilitate the identification and localization of objects and events in the external world. Experience plays a critical role in establishing and maintaining congruent visual-auditory associations, so that the different sensory cues associated with targets that can be both seen and heard are synthesized appropriately. For stimulus location, visual information is normally more accurate and reliable and provides a reference for calibrating the perception of auditory space. During development, vision plays a key role in aligning neural representations of space in the brain, as revealed by the dramatic changes produced in auditory responses when visual inputs are altered, and is used throughout life to resolve short-term spatial conflicts between these modalities. However, accurate, and even supra-normal, auditory localization abilities can be achieved in the absence of vision, and the capacity of the mature brain to relearn to localize sound in the presence of substantially altered auditory spatial cues does not require visuomotor feedback. Thus, while vision is normally used to coordinate information across the senses, the neural circuits responsible for spatial hearing can be recalibrated in a vision-independent fashion. Nevertheless, early multisensory experience appears to be crucial for the emergence of an ability to match signals from different sensory modalities and therefore for the outcome of audiovisual-based rehabilitation of deaf patients in whom hearing has been restored by cochlear implantation.

3.
Accurate auditory localization relies on neural computations based on spatial cues present in the sound waves at each ear. The values of these cues depend on the size, shape, and separation of the two ears and can therefore vary from one individual to another. As with other perceptual skills, the neural circuits involved in spatial hearing are shaped by experience during development and retain some capacity for plasticity in later life. However, the factors that enable and promote plasticity of auditory localization in the adult brain are unknown. Here we show that mature ferrets can rapidly relearn to localize sounds after having their spatial cues altered by reversibly occluding one ear, but only if they are trained to use these cues in a behaviorally relevant task, with greater and more rapid improvement occurring with more frequent training. We also found that auditory adaptation is possible in the absence of vision or error feedback. Finally, we show that this process involves a shift in sensitivity away from the abnormal auditory spatial cues to other cues that are less affected by the earplug. The mature auditory system is therefore capable of adapting to abnormal spatial information by reweighting different localization cues. These results suggest that training should facilitate acclimatization to hearing aids in the hearing impaired.

4.
Blind individuals often demonstrate enhanced nonvisual perceptual abilities. However, the neural substrate that underlies this improved performance remains to be fully understood. An earlier behavioral study demonstrated that some early-blind people localize sounds more accurately than sighted controls using monaural cues. In order to investigate the neural basis of these behavioral differences in humans, we carried out functional imaging studies using positron emission tomography and a speaker array that permitted pseudo-free-field presentations within the scanner. During binaural sound localization, a sighted control group showed decreased cerebral blood flow in the occipital lobe, which was not seen in early-blind individuals. During monaural sound localization (one ear plugged), the subgroup of early-blind subjects who were behaviorally superior at sound localization displayed two activation foci in the occipital cortex. This effect was not seen in blind persons who did not have superior monaural sound localization abilities, nor in sighted individuals. The degree of activation of one of these foci was strongly correlated with sound localization accuracy across the entire group of blind subjects. The results show that those blind persons who perform better than sighted persons recruit occipital areas to carry out auditory localization under monaural conditions. We therefore conclude that computations carried out in the occipital cortex specifically underlie the enhanced capacity to use monaural cues. Our findings shed light not only on intermodal compensatory mechanisms, but also on individual differences in these mechanisms and on inhibitory patterns that differ between sighted individuals and those deprived of vision early in life.

5.
Two potential sensory cues for sound location are interaural difference in response strength (firing rate and/or spike count) and in response latency of auditory receptor neurons. Previous experiments showed that these two cues are affected differently by intense prior stimulation; the difference in response strength declines and may even reverse in sign, but the difference in latency is unaffected. Here, I use an intense, constant tone to disrupt localization cues generated by a subsequent train of sound pulses. Recordings from the auditory nerve confirm that tone stimulation reduces, and sometimes reverses, the interaural difference in response strength to subsequent sound pulses, but that it enhances the interaural latency difference. If sound location is determined mainly from latency comparison, then behavioral responses to a pulse train following tone stimulation should be normal, but if the main cue for sound location is interaural difference in response strength, then post-tone behavioral responses should sometimes be misdirected. Initial phonotactic responses to the post-tone pulse train were frequently directed away from, rather than towards, the sound source, indicating that the dominant sensory cue for sound location is interaural difference in response strength.

6.
Sound localization behavior is of great importance for an animal's survival. To localize a sound, animals have to detect a sound source and assign a location to it. In this review we discuss recent results on the underlying mechanisms and on modulatory influences in the barn owl, an auditory specialist with very well developed capabilities to localize sound. Information processing in the barn owl auditory pathway underlying the computations of detection and localization is well understood. This analysis of the sensory information largely determines the subsequent orienting behavior towards the sound source. However, orienting behavior may be modulated by cognitive (top-down) influences such as attention. We show how advanced stimulation techniques can be used to determine the importance of different cues for sound localization in quasi-realistic stimulation situations, how attentional influences can improve the response to behaviorally relevant stimuli, and how attention can modulate related neural responses. Taken together, these data indicate how sound localization might function in the usually complex natural environment.

7.
Interaural level differences play an important role for elevational sound localization in barn owls. The changes of this cue with sound location are complex and frequency dependent. We exploited the opportunities offered by the virtual space technique to investigate the behavioral relevance of the overall interaural level difference by fixing this parameter in virtual stimuli to a constant value or introducing additional broadband level differences to normal virtual stimuli. Frequency-specific monaural cues in the stimuli were not manipulated. We observed an influence of the broadband interaural level differences on elevational, but not on azimuthal sound localization. Since results obtained with our manipulations explained only part of the variance in elevational turning angle, we conclude that frequency-specific cues are also important. The behavioral consequences of changes of the overall interaural level difference in a virtual sound depended on the combined interaural time difference contained in the stimulus, indicating an indirect influence of temporal cues on elevational sound localization as well. Thus, elevational sound localization is influenced by a combination of many spatial cues including frequency-dependent and temporal features.

8.
Accurate sound source localization in three-dimensional space is essential for an animal’s orientation and survival. While the horizontal position can be determined by interaural time and intensity differences, localization in elevation was thought to require external structures that modify sound before it reaches the tympanum. Here we show that in birds even without external structures like pinnae or feather ruffs, the simple shape of their head induces sound modifications that depend on the elevation of the source. Based on a model of localization errors, we show that these cues are sufficient to locate sounds in the vertical plane. These results suggest that the head of all birds induces acoustic cues for sound localization in the vertical plane, even in the absence of external ears.
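The interaural time difference mentioned above can be illustrated with the textbook path-length-difference model. This is a toy sketch, not taken from the cited study; the head width used is a hypothetical value for a small bird.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air

def itd_seconds(azimuth_deg, head_width_m=0.02):
    """Interaural time difference for a source at the given azimuth,
    using the simple path-length-difference model d*sin(theta)/c.
    head_width_m (hypothetical here) is the distance between the ears."""
    return head_width_m * math.sin(math.radians(azimuth_deg)) / SPEED_OF_SOUND

# A source straight ahead produces no ITD; a fully lateral source the maximum
# (about 58 microseconds for a 2 cm head), which is why small heads make
# ITD a weak cue and leave nothing for elevation without additional cues.
```

The monotonic growth of the ITD with azimuth is what makes it usable as a horizontal cue; note that it carries no elevation information at all, which is the gap the head-induced spectral cues described above would fill.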

9.
The directionally sensitive acoustics of the pinnae enable humans to perceive the up–down and front–back direction of sound. This mechanism complements another, independent mechanism that derives sound-source azimuth from interaural difference cues. The pinnae effectively add direction-dependent spectral notches and peaks to the incoming sound, and it has been shown that such features are used to code sound direction in the median plane. However, it is still unclear which of the pinna-induced features play a role in sound localization. This study presents a method for the reconstruction of the spatially relevant features in the spectral domain. Broadband sounds with random spectral shapes were presented in rapid succession as subjects made saccadic eye movements toward the perceived stimulus locations. The analysis, which is based on Bayesian statistics, indicates that specific spectral features could be associated with perceived spatial locations. Spectral features that were determined by this psychophysical method resemble the main characteristics of the pinna transfer functions obtained from acoustic measurements in the ear canal. Despite current experimental limitations, the approach may prove useful in the study of perceptually relevant spectral cues underlying human sound localization. Received: 2 December 2000 / Accepted in revised form: 23 October 2001

10.
Zimmer U, Macaluso E. Neuron 2005, 47(6): 893-905
Our brain continuously receives complex combinations of sounds originating from different sources and relating to different events in the external world. Timing differences between the two ears can be used to localize sounds in space, but only when the inputs to the two ears have similar spectrotemporal profiles (high binaural coherence). We used fMRI to investigate any modulation of auditory responses by binaural coherence. We assessed how processing of these cues depends on whether spatial information is task relevant and whether brain activity correlates with subjects' localization performance. We found that activity in Heschl's gyrus increased with increasing coherence, irrespective of whether localization was task relevant. Posterior auditory regions also showed increased activity for high coherence, primarily when sound localization was required and subjects successfully localized sounds. We conclude that binaural coherence cues are processed throughout the auditory cortex and that these cues are used in posterior regions for successful auditory localization.

11.
Stereo or ‘3D’ vision is an important but costly process seen in several evolutionarily distinct lineages including primates, birds and insects. Many selective advantages could have led to the evolution of stereo vision, including range finding, camouflage breaking and estimation of object size. In this paper, we investigate the possibility that stereo vision enables praying mantises to estimate the size of prey by using a combination of disparity cues and angular size cues. We used a recently developed insect 3D cinema paradigm to present mantises with virtual prey having differing disparity and angular size cues. We predicted that if they were able to use these cues to gauge the absolute size of objects, we should see evidence for size constancy where they would strike preferentially at prey of a particular physical size, across a range of simulated distances. We found that mantises struck most often when disparity cues implied a prey distance of 2.5 cm; increasing the implied distance caused a significant reduction in the number of strikes. We, however, found no evidence for size constancy. There was a significant interaction effect of the simulated distance and angular size on the number of strikes made by the mantis but this was not in the direction predicted by size constancy. This indicates that mantises do not use their stereo vision to estimate object size. We conclude that other selective advantages, not size constancy, have driven the evolution of stereo vision in the praying mantis. This article is part of the themed issue ‘Vision in our three-dimensional world’.

12.
Locating sounds in realistic scenes is challenging because of distracting echoes and coarse spatial acoustic estimates. Fortunately, listeners can improve performance through several compensatory mechanisms. For instance, their brains perceptually suppress short latency (1-10 ms) echoes by constructing a representation of the acoustic environment in a process called the precedence effect. This remarkable ability depends on the spatial and spectral relationship between the first or precedent sound wave and subsequent echoes. In addition to using acoustics alone, the brain also improves sound localization by incorporating spatially precise visual information. Specifically, vision refines auditory spatial receptive fields and can capture auditory perception such that sound is localized toward a coincident visual stimulus. Although visual cues and the precedence effect are each known to improve performance independently, it is not clear whether these mechanisms can cooperate or interfere with each other. Here we demonstrate that echo suppression is enhanced when visual information spatially and temporally coincides with the precedent wave. Conversely, echo suppression is inhibited when vision coincides with the echo. These data show that echo suppression is a fundamentally multisensory process in everyday environments, where vision modulates even this largely automatic auditory mechanism to organize a coherent spatial experience.

13.

Background

Barn owls integrate spatial information across frequency channels to localize sounds in space.

Methodology/Principal Findings

We presented barn owls with synchronous sounds that contained different bands of frequencies (3–5 kHz and 7–9 kHz) from different locations in space. When the owls were confronted with the conflicting localization cues from two synchronous sounds of equal level, their orienting responses were dominated by one of the sounds: they oriented toward the location of the low frequency sound when the sources were separated in azimuth; in contrast, they oriented toward the location of the high frequency sound when the sources were separated in elevation. We identified neural correlates of this behavioral effect in the optic tectum (OT, superior colliculus in mammals), which contains a map of auditory space and is involved in generating orienting movements to sounds. We found that low frequency cues dominate the representation of sound azimuth in the OT space map, whereas high frequency cues dominate the representation of sound elevation.

Conclusions/Significance

We argue that the dominance hierarchy of localization cues reflects several factors: 1) the relative amplitude of the sound providing the cue, 2) the resolution with which the auditory system measures the value of a cue, and 3) the spatial ambiguity in interpreting the cue. These same factors may contribute to the relative weighting of sound localization cues in other species, including humans.

14.
Integration of multiple sensory cues can improve performance in detection and estimation tasks. There is an open theoretical question of the conditions under which linear or nonlinear cue combination is Bayes-optimal. We demonstrate that a neural population decoded by a population vector requires nonlinear cue combination to approximate Bayesian inference. Specifically, if cues are conditionally independent, multiplicative cue combination is optimal for the population vector. The model was tested on neural and behavioral responses in the barn owl’s sound localization system where space-specific neurons owe their selectivity to multiplicative tuning to sound localization cues interaural phase (IPD) and level (ILD) differences. We found that IPD and ILD cues are approximately conditionally independent. As a result, the multiplicative combination selectivity to IPD and ILD of midbrain space-specific neurons permits a population vector to perform Bayesian cue combination. We further show that this model describes the owl’s localization behavior in azimuth and elevation. This work provides theoretical justification and experimental evidence supporting the optimality of nonlinear cue combination.
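The multiplicative tuning and population-vector readout described in this abstract can be sketched as follows. This is a minimal illustration under assumed Gaussian tuning curves and hypothetical cue ranges, not the authors' actual model.

```python
import numpy as np

def gauss(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2)

# Hypothetical population of space-specific neurons, each with a preferred
# IPD (mapped to azimuth) and a preferred ILD (mapped to elevation).
pref_ipd = np.linspace(-90, 90, 37)   # degrees of interaural phase (assumed range)
pref_ild = np.linspace(-20, 20, 21)   # dB of interaural level (assumed range)
IPD, ILD = np.meshgrid(pref_ipd, pref_ild)

def population_response(ipd, ild, sigma_ipd=20.0, sigma_ild=6.0):
    # Multiplicative cue combination: each neuron's response is the
    # product of its IPD tuning and its ILD tuning.
    return gauss(ipd, IPD, sigma_ipd) * gauss(ild, ILD, sigma_ild)

def population_vector(resp):
    # Response-weighted average of preferred cue values.
    w = resp / resp.sum()
    return float((w * IPD).sum()), float((w * ILD).sum())

# Decode a stimulus with IPD = 30 deg and ILD = -5 dB.
est_ipd, est_ild = population_vector(population_response(30.0, -5.0))
```

With conditionally independent cues, this product rule makes the population vector approximate the Bayesian combined estimate, which is the paper's central claim.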

15.
In recent years, a great deal of research within the field of sound localization has been aimed at finding the acoustic cues that human listeners use to localize sounds and understanding the mechanisms by which they process these cues. In this paper, we propose a complementary approach by constructing an ideal-observer model, by which we mean a model that performs optimal information processing within a Bayesian context. The model considers all available spatial information contained within the acoustic signals encoded by each ear. Parameters for the optimal Bayesian model are determined based on psychoacoustic discrimination experiments on interaural time difference and sound intensity. Without regard as to how the human auditory system actually processes information, we examine the best possible localization performance that could be achieved based only on analysis of the input information, given the constraints of the normal auditory system. We show that the model performance is generally in good agreement with the actual human localization performance, as assessed in a meta-analysis of many localization experiments (Best et al. in Principles and applications of spatial hearing, pp 14–23. World Scientific Publishing, Singapore, 2011). We believe this approach can shed new light on the optimality (or otherwise) of human sound localization, especially with regard to the level of uncertainty in the input information. Moreover, the proposed model allows one to study the relative importance of various (combinations of) acoustic cues for spatial localization and enables a prediction of which cues are most informative and therefore likely to be used by humans in various circumstances.

16.
The ventriloquist effect results from near-optimal bimodal integration
Ventriloquism is the ancient art of making one's voice appear to come from elsewhere, an art exploited by the Greek and Roman oracles, and possibly earlier. We regularly experience the effect when watching television and movies, where the voices seem to emanate from the actors' lips rather than from the actual sound source. Originally, ventriloquism was explained by performers projecting sound to their puppets by special techniques, but more recently it is assumed that ventriloquism results from vision "capturing" sound. In this study we investigate spatial localization of audio-visual stimuli. When visual localization is good, vision does indeed dominate and capture sound. However, for severely blurred visual stimuli (that are poorly localized), the reverse holds: sound captures vision. For less blurred stimuli, neither sense dominates and perception follows the mean position. Precision of bimodal localization is usually better than either the visual or the auditory unimodal presentation. All the results are well explained not by one sense capturing the other, but by a simple model of optimal combination of visual and auditory information.
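The "simple model of optimal combination" referred to here is standard inverse-variance (maximum-likelihood) weighting. A minimal sketch, with hypothetical stimulus positions and noise variances chosen only to illustrate the two regimes:

```python
def combine(x_v, var_v, x_a, var_a):
    """Maximum-likelihood combination of a visual and an auditory
    location estimate, assuming independent Gaussian noise."""
    w_v = (1 / var_v) / (1 / var_v + 1 / var_a)
    x_hat = w_v * x_v + (1 - w_v) * x_a
    var_hat = 1 / (1 / var_v + 1 / var_a)  # always <= min(var_v, var_a)
    return x_hat, var_hat

# Sharp visual stimulus: the visual weight dominates ("vision captures sound").
sharp = combine(0.0, 1.0, 10.0, 25.0)
# Severely blurred visual stimulus: the auditory weight dominates instead.
blurred = combine(0.0, 25.0, 10.0, 1.0)
```

The combined variance being smaller than either unimodal variance is exactly the "precision of bimodal localization is usually better" result, and the weight reversal under visual blur reproduces sound capturing vision.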

17.
Rhinolophidae, or horseshoe bats, emit long, narrowband calls. Fluttering insect prey generates echoes in which amplitude and frequency shifts, i.e. glints, are present. These glints are reliable cues about the presence of prey and also encode certain properties of the prey. In this paper, we propose that these glints, in particular the dominant glints, are also reliable signals upon which to base prey localization. In contrast to the spectral cues used by many other bats, the localization cues in Rhinolophidae are most likely provided by self-induced amplitude modulations generated by pinnae movement. Amplitude variations in the echo not introduced by the moving pinnae can be considered as noise interfering with the localization process. The amplitude of the dominant glints is very stable; therefore, these parts of the echoes contain very little noise. However, using only the dominant glints potentially comes at a cost. Depending on the flutter rate of the insect, a limited number of dominant glints will be present in each echo, giving the bat a limited number of sample points on which to base localization. We evaluate the feasibility of a strategy under which Rhinolophidae use only dominant glints. We use a computational model of the echolocation task faced by Rhinolophidae. Our model includes the spatial filtering of the echoes by the morphology of the sonar apparatus of Rhinolophus rouxii as well as the amplitude modulations introduced by pinnae movements. Using this model, we evaluate whether the dominant glints provide Rhinolophidae with enough information to perform localization. Our simulations show that Rhinolophidae can use dominant glints in the echoes as carriers for self-induced amplitude modulations serving as localization cues. In particular, it is shown that the reduction in noise achieved by using only the dominant glints outweighs the information loss that occurs by sampling the echo.

18.
Traditionally, the medial superior olive, a mammalian auditory brainstem structure, is considered to encode interaural time differences, the main cue for localizing low-frequency sounds. Detection of binaural excitatory and inhibitory inputs is considered as an underlying mechanism. Most small mammals, however, hear high frequencies well beyond 50 kHz and have small interaural distances. Therefore, they cannot use interaural time differences for sound localization and yet possess a medial superior olive. Physiological studies in bats revealed that medial superior olive cells show similar interaural time difference coding as in larger mammals tuned to low-frequency hearing. Their interaural time difference sensitivity, however, is far too coarse to serve in sound localization. Thus, interaural time difference sensitivity in the medial superior olive of small mammals is an epiphenomenon. We propose that the original function of the medial superior olive is a binaural cooperation causing facilitation due to binaural excitation. Lagging inhibitory inputs, however, suppress reverberations and echoes from the acoustic background. Thereby, generation of antagonistically organized temporal fields is the basic and original function of the mammalian medial superior olive. Only later in evolution, with the advent of larger mammals, did interaural distances, and hence interaural time differences, become large enough to be used as cues for sound localization of low-frequency stimuli. Accepted: 28 February 2000

19.
When correlation implies causation in multisensory integration
Inferring which signals have a common underlying cause, and hence should be integrated, represents a primary challenge for a perceptual system dealing with multiple sensory inputs [1-3]. This challenge is often referred to as the correspondence problem or causal inference. Previous research has demonstrated that spatiotemporal cues, along with prior knowledge, are exploited by the human brain to solve this problem [4-9]. Here we explore the role of correlation between the fine temporal structure of auditory and visual signals in causal inference. Specifically, we investigated whether correlated signals are inferred to originate from the same distal event and hence are integrated optimally [10]. In a localization task with visual, auditory, and combined audiovisual targets, the improvement in precision for combined relative to unimodal targets was statistically optimal only when audiovisual signals were correlated. This result demonstrates that humans use the similarity in the temporal structure of multisensory signals to solve the correspondence problem, hence inferring causation from correlation.
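The causal-inference idea above can be caricatured as a threshold on the correlation between the two signals' temporal fine structure. This toy rule (threshold value and signals are assumptions, not the authors' model) shows why a common cause yields high correlation while independent events do not:

```python
import numpy as np

rng = np.random.default_rng(1)

def infer_common_cause(sig_a, sig_v, threshold=0.5):
    """Toy causal-inference rule: treat the auditory and visual signals
    as sharing a cause (and integrate them) only if their temporal
    structures are sufficiently correlated."""
    r = np.corrcoef(sig_a, sig_v)[0, 1]
    return r > threshold

# One distal event drives both modalities -> strongly correlated signals.
event = rng.standard_normal(500)
aud = event + 0.3 * rng.standard_normal(500)
vis = event + 0.3 * rng.standard_normal(500)

# An unrelated event -> near-zero correlation with the auditory signal.
other = rng.standard_normal(500)

integrate = infer_common_cause(aud, vis)       # common cause inferred
segregate = infer_common_cause(aud, other)     # no common cause inferred
```

In the study itself the inference is probabilistic rather than a hard threshold, but the principle is the same: temporal correlation is evidence for a common cause, licensing optimal integration.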

20.
We studied the perception of signals modeling directed movement of a sound source in three groups of patients: (1) those with temporal epilepsy, (2) those with epileptic foci in the frontal region, and (3) those with epileptic syndrome due to local organic lesions in the temporal or frontal lobes. It was established that the features and degree of spatial (binaural) hearing disorders in temporal epilepsy were determined not only by the localization and extent of the lesion in the temporal lobe, but also by the areas beyond it that were involved in the epileptic process. Patients with organic lesions (tumors, cysts) involving the temporal lobe cortex may show more severe spatial hearing disorders than temporal epilepsy patients with the same localization of the foci of convulsive activity. A relatively isolated lesion of the frontal cortex did not influence the assessment of the parameters of the moving sound signals used. Possible neurophysiological mechanisms underlying these spatial hearing disorders are considered, as well as the potential use of the results in differential diagnosis.
