Similar Documents
20 similar documents found (search time: 25 ms)
1.
Localizing sounds in our environment is one of the fundamental perceptual abilities that enable humans to communicate and to remain safe. Because the acoustic cues necessary for computing source locations consist of differences between the two ears in signal intensity and arrival time, sound localization is fairly poor when only a single ear is available. In adults who become deaf and are fitted with cochlear implants (CIs), sound localization is known to improve when bilateral CIs (BiCIs) are used compared to a single CI. The aim of the present study was to investigate the emergence of spatial hearing sensitivity in children who use BiCIs, with a particular focus on the development of behavioral localization patterns when stimuli are presented in free-field horizontal acoustic space. A new analysis was implemented to quantify the patterns by which children map acoustic space to a spatially relevant perceptual representation. Children with normal hearing distributed their responses in a manner that demonstrated high spatial sensitivity. In contrast, children with BiCIs tended to classify sound source locations simply as left or right; with increased bilateral hearing experience, they developed a perceptual map of space that was better aligned with the acoustic space. The results indicate experience-dependent refinement of spatial hearing skills in children with CIs. Localization strategies appear to undergo a transition from coarse sound-source categorization to more fine-grained location identification. This may provide evidence for neural plasticity, with implications for training of spatial hearing ability in CI users.

2.
Locating sounds in realistic scenes is challenging because of distracting echoes and coarse spatial acoustic estimates. Fortunately, listeners can improve performance through several compensatory mechanisms. For instance, their brains perceptually suppress short-latency (1-10 ms) echoes by constructing a representation of the acoustic environment in a process called the precedence effect. This remarkable ability depends on the spatial and spectral relationship between the first, or precedent, sound wave and subsequent echoes. In addition to using acoustics alone, the brain also improves sound localization by incorporating spatially precise visual information. Specifically, vision refines auditory spatial receptive fields and can capture auditory perception such that sound is localized toward a coincident visual stimulus. Although visual cues and the precedence effect are each known to improve performance independently, it is not clear whether these mechanisms cooperate or interfere with each other. Here we demonstrate that echo suppression is enhanced when visual information spatially and temporally coincides with the precedent wave. Conversely, echo suppression is inhibited when vision coincides with the echo. These data show that echo suppression is a fundamentally multisensory process in everyday environments, where vision modulates even this largely automatic auditory mechanism to organize a coherent spatial experience.
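The lead-lag stimuli typically used to probe the precedence effect can be sketched numerically: a precedent click followed by a single attenuated copy within the 1-10 ms suppression window. The delay and attenuation values below are illustrative assumptions, not parameters from this study.

```python
import numpy as np

fs = 44100                    # sample rate (Hz); illustrative
echo_delay_ms = 4.0           # within the 1-10 ms suppression window
echo_attenuation = 0.7        # echoes are usually weaker than the lead (assumption)

# A click plus one delayed, attenuated copy models the precedent
# wave and a single reflection off a nearby surface.
signal = np.zeros(int(0.02 * fs))
signal[0] = 1.0
lag = int(fs * echo_delay_ms / 1000)
signal[lag] += echo_attenuation
```

Presenting such pairs while varying the lead-lag delay is the standard way to map out where suppression begins and breaks down.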

3.
Using frequency-modulated echolocation sounds, bats can capture moving targets in real three-dimensional (3-D) space. It is impossible to locate multiple targets in 3-D space using only the delay between an emission and the resulting echoes received at two points (i.e., the two ears); locating multiple targets in 3-D space also requires directional information for each target. The spectrum of the echoes from nearly equidistant targets includes spectral components of both the interference between the echoes and the interference resulting from the physical process of reception at the external ear. The frequency of the spectral notch, which is the frequency corresponding to the minimum of the external ear's transfer function (EEDNF), provides a crucial cue for directional localization. We present a computational model that discriminates multiple close targets in 3-D space from the echoes evoked by a single emission, by distinguishing the interference between the echoes from each object and the EEDNF corresponding to each target.
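The echo-interference component of this cue can be illustrated with a short numerical sketch: two overlapping echoes separated by a delay dt carve notches into the summed spectrum at odd multiples of 1/(2*dt). The sample rate and delay below are illustrative assumptions, not values from the paper.

```python
import numpy as np

fs = 250_000                  # sample rate (Hz); illustrative assumption
dt = 40e-6                    # 40 microsecond gap between two echo fronts
n = int(round(dt * fs))       # gap in samples

# Model each echo as a broadband click; two nearly equidistant targets
# return the same click twice, dt apart.
mix = np.zeros(4096)
mix[0] = 1.0
mix[n] += 1.0

spectrum = np.abs(np.fft.rfft(mix))
freqs = np.fft.rfftfreq(len(mix), 1 / fs)

# Interference predicts the first notch at 1 / (2 * dt) = 12.5 kHz.
predicted = 1 / (2 * dt)
band = (freqs > 8e3) & (freqs < 17e3)
measured = freqs[band][np.argmin(spectrum[band])]
```

Reading off the notch frequency thus recovers the arrival-time gap between the two echo fronts, which is the inverse problem the model exploits.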

4.
Experiments examined differential coding of acoustic particle motion axis in the auditory midbrain of goldfish. Animals were exposed to vibratory stimuli varying in axis orientation while action potentials were recorded from single units in the central neuropil of nucleus centralis in the torus semicircularis. Response magnitudes as a function of stimulation axis were visualized in three-dimensional plots called directional response profiles. These are generally comparable to the directional responses observed among primary saccular afferents in having substantially vertical orientations. Distortions in shape from the peripheral patterns indicate neural information processing. A three-dimensional model was used to evaluate the hypothesis that responses in the auditory midbrain reflect the convergence of excitatory and inhibitory primary afferent-like responses. Model afferent inputs were generated and combined arithmetically. This analysis gives insight into the mechanisms of information processing that appear to occur in brainstem nuclei. The lack of diversity in best-axis directions suggests that this mechanism alone cannot account for directional hearing abilities in this species. The roles that this directional representation and processing may play in directional hearing and sound source localization are not yet clear. Implications of these data for current models of fish directional hearing are discussed.

5.
In recent years, a great deal of research within the field of sound localization has been aimed at finding the acoustic cues that human listeners use to localize sounds and understanding the mechanisms by which they process these cues. In this paper, we propose a complementary approach by constructing an ideal-observer model, by which we mean a model that performs optimal information processing within a Bayesian context. The model considers all available spatial information contained within the acoustic signals encoded by each ear. Parameters for the optimal Bayesian model are determined based on psychoacoustic discrimination experiments on interaural time difference and sound intensity. Without regard to how the human auditory system actually processes information, we examine the best possible localization performance that could be achieved based only on analysis of the input information, given the constraints of the normal auditory system. We show that the model performance is generally in good agreement with actual human localization performance, as assessed in a meta-analysis of many localization experiments (Best et al. in Principles and applications of spatial hearing, pp 14–23. World Scientific Publishing, Singapore, 2011). We believe this approach can shed new light on the optimality (or otherwise) of human sound localization, especially with regard to the level of uncertainty in the input information. Moreover, the proposed model allows one to study the relative importance of various (combinations of) acoustic cues for spatial localization and enables a prediction of which cues are most informative and therefore likely to be used by humans in various circumstances.
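A minimal single-cue version of such an ideal observer can be sketched as follows: with Gaussian cue noise and a uniform prior over azimuth, the MAP estimate is found by scanning a grid of candidate directions. The spherical-head ITD model and noise level here are illustrative assumptions, not the parameters fitted in the paper.

```python
import numpy as np

# Woodworth-style spherical-head ITD model; parameter values are
# illustrative assumptions, not those of the study.
HEAD_RADIUS = 0.09      # m
SPEED_OF_SOUND = 343.0  # m/s

def itd_model(azimuth_rad):
    # ITD for a spherical head: (r / c) * (theta + sin(theta)).
    return HEAD_RADIUS / SPEED_OF_SOUND * (azimuth_rad + np.sin(azimuth_rad))

def map_azimuth(observed_itd, itd_sigma=20e-6):
    """MAP estimate of source azimuth from one noisy ITD observation,
    assuming Gaussian cue noise and a uniform prior over azimuth
    (uniform prior makes MAP coincide with maximum likelihood)."""
    grid = np.linspace(-np.pi / 2, np.pi / 2, 1801)   # 0.1 degree steps
    log_post = -0.5 * ((observed_itd - itd_model(grid)) / itd_sigma) ** 2
    return grid[np.argmax(log_post)]

# A source at +30 degrees produces this noiseless ITD:
true_az = np.deg2rad(30.0)
estimate = map_azimuth(itd_model(true_az))
```

Extending the log-posterior with additional Gaussian terms for other cues (ILD, spectral features) is how such a model weighs combinations of cues against one another.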

6.
In nature, sounds from objects of interest arrive at the ears accompanied by sound waves from other actively emitting objects and by reflections off nearby surfaces. Despite the fact that all of these waveforms sum at the eardrums, humans with normal hearing effortlessly segregate one sound source from another. Our laboratory is investigating the neural basis of this perceptual feat, often called the "cocktail party effect", using the barn owl as an animal model. The barn owl, renowned for its ability to localize sounds and its spatiotopic representation of auditory space, is an established model for spatial hearing. Here, we briefly review the neural basis of sound localization of a single sound source in an anechoic environment and then generalize the ideas developed therein to cases in which there are multiple, concomitant sound sources and acoustic reflection.

7.
Several studies have shown that blind humans can gather spatial information through echolocation. However, when localizing sound sources, the precedence effect suppresses spatial information of echoes, and thereby conflicts with effective echolocation. This study investigates the interaction of echolocation and echo suppression in terms of discrimination suppression in virtual acoustic space. In the 'Listening' experiment, sighted subjects discriminated between positions of a single sound source, the leading of two sources, or the lagging of two sources, respectively. In the 'Echolocation' experiment, the sources were replaced by reflectors. Here, the same subjects evaluated echoes generated in real time from self-produced vocalizations and thereby discriminated between positions of a single reflector, the leading of two reflectors, or the lagging of two reflectors, respectively. Two key results were observed. First, sighted subjects can learn to discriminate positions of reflective surfaces echo-acoustically with accuracy comparable to sound source discrimination. Second, in the Listening experiment, the presence of the leading source affected discrimination of lagging sources much more than vice versa. In the Echolocation experiment, however, the presence of both the lead and the lag strongly affected discrimination. These data show that the classically described asymmetry in the perception of leading and lagging sounds is strongly diminished in an echolocation task. Additional control experiments showed that the effect is owing both to the direct sound of the vocalization that precedes the echoes and to the fact that the subjects actively vocalize in the echolocation task.

8.
Human psychoacoustical studies have been the main source of information from which the brain mechanisms of sound localization are inferred. The value of animal models would be limited if humans and the animals did not share the same perceptual experience and the neural mechanisms for it. Barn owls and humans use the same method of computing interaural time differences for localization in the horizontal plane. The behavioral performance of owls and its neural bases are consistent with some of the theories developed for human sound localization. Neural theories of sound localization largely owe their origin to the study of sound localization by humans, even though little is known about the physiological properties of the human auditory system. One of these ideas is binaural cross-correlation, which assumes that the human brain performs a process similar to mathematical cross-correlation to measure the interaural time difference for localization in the horizontal plane. The most complete set of neural evidence for this theory comes from the study of sound localization and its brain mechanisms in barn owls, although partial support is also available from studies on laboratory mammals. Animal models of human sensory perception make two implicit assumptions: that animals and humans experience the same percept, and that the same neural mechanism underlies the creation of that percept. These assumptions are hard to prove for obvious reasons. This article reviews several lines of evidence that similar neural mechanisms must underlie the perception of sound location in humans and owls.
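The binaural cross-correlation idea can be made concrete in a few lines: the ITD is the lag at which the two ear signals are maximally correlated. A minimal sketch with an assumed 5-sample interaural delay (the sample rate and delay are illustrative, not from this article):

```python
import numpy as np

def estimate_itd(left, right, fs):
    """Estimate the interaural time difference via the peak of the
    binaural cross-correlation; positive ITD means the sound reached
    the left ear first."""
    corr = np.correlate(right, left, mode="full")
    lags = np.arange(-(len(left) - 1), len(right))
    return lags[np.argmax(corr)] / fs

# Demo: broadband noise arriving at the left ear 5 samples early.
fs = 44100
rng = np.random.default_rng(0)
src = rng.standard_normal(1000)
delay = 5
left = src
right = np.concatenate([np.zeros(delay), src[:-delay]])
itd = estimate_itd(left, right, fs)
```

The neural analogue (the Jeffress-style delay-line model supported by the owl data) evaluates this same correlation in parallel, one coincidence detector per candidate lag, rather than as an explicit argmax.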

9.
Animals such as bats and dolphins exhibit impressive echolocation abilities in terms of ranging, resolution, and imaging, and therefore represent a valuable model for the study of spatial hearing and sound source localization, leading to a better understanding of the hearing mechanism and further improvement of existing localization strategies. This study aims to examine and understand the directional characteristics of a sonar receiver modeled upon the bat auditory system via measurements of the head-related transfer function (HRTF) in the horizontal plane. Four different models of the bat head were considered and used to evaluate the acoustic spectral characteristics of the sound received by the bat's ears: a sphere model, a sphere model with a pinna attached (two pinnae of different size were used in this study), and a bat-head cast. The HRTF measurements of the bat-head models were then analyzed and compared to identify monaural spectral localization cues in the horizontal plane defined by the shape and size of the bat's head and pinna. Our study suggests that the acoustical characteristics of a bio-inspired sonar head, measured and specified in advance, can potentially improve the performance of a receiver. Moreover, the generated auditory models may hold clues for the design of receiver characteristics in ultrasound imaging and navigation systems.

10.
The occipital cortex (OC) of early-blind humans is activated during various nonvisual perceptual and cognitive tasks, but little is known about its modular organization. Using functional MRI we tested whether processing of auditory versus tactile and spatial versus nonspatial information was dissociated in the OC of the early blind. No modality-specific OC activation was observed. However, the right middle occipital gyrus (MOG) showed a preference for spatial over nonspatial processing of both auditory and tactile stimuli. Furthermore, MOG activity was correlated with accuracy of individual sound localization performance. In sighted controls, most of extrastriate OC, including the MOG, was deactivated during auditory and tactile conditions, but the right MOG was more activated during spatial than nonspatial visual tasks. Thus, although the sensory modalities driving the neurons in the reorganized OC of blind individuals are altered, the functional specialization of extrastriate cortex is retained regardless of visual experience.

11.
Rapid integration of biologically relevant information is crucial for the survival of an organism. Most prominently, humans should be biased to attend and respond to looming stimuli that signal approaching danger (e.g. a predator) and hence require rapid action. This psychophysics study used binocular rivalry to investigate the perceptual advantage of looming (relative to receding) visual signals (i.e. the looming bias) and how this bias can be influenced by concurrent auditory looming/receding stimuli and the statistical structure of the auditory and visual signals. Subjects were dichoptically presented with looming/receding visual stimuli that were paired with looming or receding sounds. The visual signals conformed to two different statistical structures: (1) a 'simple' random-dot kinematogram showing a starfield and (2) a 'naturalistic' visual Shepard stimulus. Likewise, the looming/receding sound was (1) a simple amplitude- and frequency-modulated (AM-FM) tone or (2) a complex Shepard tone. Our results show that the perceptual looming bias (i.e. the increase in dominance times for looming versus receding percepts) is amplified by looming sounds, yet reduced and even converted into a receding bias by receding sounds. Moreover, the influence of looming/receding sounds on the visual looming bias depends on the statistical structure of both the visual and auditory signals: it is enhanced when the audiovisual signals are Shepard stimuli. In conclusion, visual perception prioritizes the processing of biologically significant looming stimuli, especially when paired with looming auditory signals. Critically, these audiovisual interactions are amplified for statistically complex signals that are more naturalistic and known to engage neural processing at multiple levels of the cortical hierarchy.

12.

Background

Navigation based on chemosensory information is one of the most important skills in the animal kingdom. Studies on odor localization suggest that humans have lost this ability. However, the experimental approaches used so far were limited to explicit judgements, which might ignore a residual ability for directional smelling on an implicit level without conscious appraisal.

Methods

A novel cueing paradigm was developed in order to determine whether an implicit ability for directional smelling exists. Participants performed a visual two-alternative forced choice task in which the target was preceded either by a side-congruent or a side-incongruent olfactory spatial cue. An explicit odor localization task was implemented in a second experiment.

Results

No effect of cue congruency on mean reaction times could be found. However, a time by condition interaction emerged, with significantly slower responses to congruently compared to incongruently cued targets at the beginning of the experiment. This cueing effect gradually disappeared throughout the course of the experiment. In addition, participants performed at chance level in the explicit odor localization task, thus confirming the results of previous research.

Conclusion

The implicit cueing task suggests the existence of spatial information processing in the olfactory system. Response slowing after a side-congruent olfactory cue is interpreted as a cross-modal attentional interference effect. In addition, habituation might have led to a gradual disappearance of the cueing effect. It is concluded that under immobile conditions with passive monorhinal stimulation, humans are unable to explicitly determine the location of a pure odorant. Implicitly, however, odor localization seems to exert an influence on human behaviour. To our knowledge, these data are the first to show implicit effects of odor localization on overt human behaviour and thus support the hypothesis of residual directional smelling in humans.

13.
Bats typically emit multiharmonic calls. Their head morphology shapes the emission and hearing sound fields as a function of frequency; the sound fields are therefore markedly different for the various harmonics. As the sound field provides bats with all the cues necessary to locate objects in space, different harmonics might provide them with variable amounts of information about the location of objects, and the ability to locate objects in different parts of the frontal hemisphere might vary across harmonics. This paper evaluates this hypothesis in R. rouxi using an information-theoretic framework. We estimate the reflector-position information transfer in the echolocation system of R. rouxi as a function of frequency. This analysis shows that localization performance reaches a global minimum and a global maximum at the two most energetic frequency components of the R. rouxi call, indicating tuning of morphology and harmonic structure. Using the fundamental, the bat is able to locate objects in a large portion of the frontal hemisphere. In contrast, using the first overtone, it can locate objects only in a small portion of the frontal hemisphere, albeit with slightly higher accuracy, by reducing sensitivity to echoes from outside this region of interest. Hence, different harmonic components provide the bat with either a wide view or a focused view of its environment. We propose that these findings can be interpreted in the context of the foraging behaviour of R. rouxi, i.e., hunting in cluttered environments. Indeed, the focused view provided by the first overtone suggests that at this frequency the morphology is tuned for clutter rejection and accurate localization in a small region of interest, while the finding that overall localization performance is best at the fundamental indicates that the morphology is simultaneously tuned to optimize overall localization performance at that frequency.
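The information-transfer quantity at the heart of such an analysis is a mutual information between reflector position and the received acoustic cue. A discrete sketch of the calculation (the toy joint distributions are assumptions for illustration, not the paper's measured data):

```python
import numpy as np

def mutual_information(joint):
    """Mutual information (bits) of a discrete joint distribution
    p(position, received cue). Any unnormalized count table works."""
    joint = joint / joint.sum()
    px = joint.sum(axis=1, keepdims=True)   # marginal over positions
    py = joint.sum(axis=0, keepdims=True)   # marginal over cues
    nz = joint > 0
    return float((joint[nz] * np.log2(joint[nz] / (px @ py)[nz])).sum())

# Toy comparison: a receiver that resolves 4 positions perfectly
# versus one whose cues partially overlap across positions.
perfect = np.eye(4)                    # each position gives a unique cue
blurred = np.eye(4) * 0.5 + 0.125      # cues smeared across positions
i_perfect = mutual_information(perfect)   # 2 bits: log2(4) positions resolved
i_blurred = mutual_information(blurred)
```

In the paper's framework the same quantity is evaluated per frequency, so a "focused" receiver trades total information over the hemisphere for higher information density inside its region of interest.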

14.
The dorsal division of the cochlear nucleus (DCN) is the most complex of its subdivisions in terms of both anatomical organization and physiological response types. Hypotheses about the functional role of the DCN in hearing are as yet primitive, in part because the organizational complexity of the DCN has made development of a comprehensive and predictive model of its input-output processing difficult. The responses of DCN cells to complex stimuli, especially filtered noise, are interesting because they demonstrate properties that cannot be predicted, without further assumptions, from responses to narrow band stimuli, such as tones. In this paper, we discuss the functional organization of the DCN, i.e. the morphological organization of synaptic connections within the nucleus and the nature of synaptic interactions between its cells. We then discuss the responses of DCN principal cells to filtered noise stimuli that model the spectral sound localization cues produced by the pinna. These data imply that the DCN plays a role in interpreting sound localization cues; supporting evidence for such a role is discussed.

15.
Accurate auditory localization relies on neural computations based on spatial cues present in the sound waves at each ear. The values of these cues depend on the size, shape, and separation of the two ears and can therefore vary from one individual to another. As with other perceptual skills, the neural circuits involved in spatial hearing are shaped by experience during development and retain some capacity for plasticity in later life. However, the factors that enable and promote plasticity of auditory localization in the adult brain are unknown. Here we show that mature ferrets can rapidly relearn to localize sounds after having their spatial cues altered by reversibly occluding one ear, but only if they are trained to use these cues in a behaviorally relevant task, with greater and more rapid improvement occurring with more frequent training. We also found that auditory adaptation is possible in the absence of vision or error feedback. Finally, we show that this process involves a shift in sensitivity away from the abnormal auditory spatial cues to other cues that are less affected by the earplug. The mature auditory system is therefore capable of adapting to abnormal spatial information by reweighting different localization cues. These results suggest that training should facilitate acclimatization to hearing aids in the hearing impaired.
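One standard way to formalize cue reweighting is reliability-weighted (inverse-variance) combination, in which each cue's weight scales with its precision. This particular scheme is an assumption for illustration; the paper does not specify the combination rule.

```python
import numpy as np

def combine_cues(estimates, sigmas):
    """Inverse-variance weighted combination of independent localization
    cues: weights are proportional to 1 / sigma^2 and sum to one."""
    w = 1.0 / np.asarray(sigmas, dtype=float) ** 2
    w = w / w.sum()
    return float(w @ np.asarray(estimates, dtype=float))

# Illustration: an earplug degrades the binaural cue (sigma 20 deg,
# biased toward 0 deg) while a spectral cue stays reliable (sigma 5 deg,
# pointing at 12 deg); the combined estimate leans on the reliable cue.
combined = combine_cues([0.0, 12.0], [20.0, 5.0])
```

Under this view, "reweighting" after occlusion corresponds to the degraded cue's effective sigma growing, so its weight shrinks toward zero.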

16.
In humans and animals alike, the localization of sound constitutes a fundamental processing task of the auditory system. Directional hearing relies on acoustic cues such as interaural amplitude and time differences and, sometimes, the spectral composition of the signal. In small animals, such as insects, the auditory receptors are necessarily set close together, a design constraint imposing very short interaural distances. Because of the physics of sound propagation, the close proximity of the sound receivers results in vanishingly small amplitude and time cues. Yet, because of their directionality, small auditory systems embody original and innovative solutions that can inspire answers to acute problems of technological miniaturization. Such ears are found in a parasitoid fly that acoustically locates its singing cricket host. Anatomically rather unconventional, the fly's auditory system is endowed with a directional sensitivity that is based on the mechanical coupling between its two hemilateral tympanal membranes. The functional principle permitting this directionality may be of particular relevance for technological applications requiring sensors that are low cost, low weight, and low energy. Based on silicon-etching technology, early prototypes of sub-millimeter acoustic sensors provide evidence for directional mechanical responses. Further developments hold the promise of applications in hearing-aid technology, vibration sensors, and miniature video-acoustic surveillance systems.

17.
Tettigoniids use hearing for mate finding and the avoidance of predators (mainly bats). Using intracellular recordings, we studied the response properties of auditory receptor cells of Neoconocephalus bivocatus to different sound frequencies, with a special focus on the frequency ranges representative of male calls and bat cries. We found several response properties that may represent adaptations for hearing in both contexts. Receptor cells with characteristic frequencies close to the dominant frequency of the communication signal were more broadly tuned, thus extending their range of high sensitivity. This increases the number of cells responding to the dominant frequency of the male call at low signal amplitudes, which should improve long distance call localization. Many cells tuned to audio frequencies had intermediate thresholds for ultrasound. As a consequence, a large number of receptors should be recruited at intermediate amplitudes of bat cries. This collective response of many receptors may function to emphasize predator information in the sensory system, and correlates with the amplitude range at which ultrasound elicits evasive behavior in tettigoniids. We compare our results with spectral processing in crickets, and discuss that both groups evolved different adaptations for the perceptual tasks of mate and predator detection.

18.
Auditory cortex: comparative aspects of maps and plasticity.
Much recent work in the field of auditory cortex analysis consists of an intensified search for complex sound representation and sound localization mechanisms using tonotopic maps as a frame of reference. Mammalian species rely on parallel processing in multiple tonotopic and non-tonotopic maps but show different degrees of unit complexity and orderly representation of acoustic dimensions in such maps, depending on the predictability of sounds in their environment. Birds appear to rely chiefly on one tonotopic map which harbours multidimensional complex representations. During development and after partial hearing loss, tonotopic organization changes in a predictable manner. Learning also modifies the spatial representation of sounds and even modifies tonotopic organization, but the spatial rules involved in this process have not yet emerged.

19.
Due to its extended low-frequency hearing, the Mongolian gerbil (Meriones unguiculatus) has become a well-established animal model for human auditory processing. Here, two experiments are presented which quantify the gerbil's sensitivity to amplitude modulation (AM) and carrier periodicity (CP) in broad-band stimuli. Two additional experiments investigate a possible interaction of the two types of periodicity. The results show that overall sensitivity to AM and CP is considerably less than in humans (by at least 10 dB). The gerbil's amplitude-modulation sensitivity is almost independent of modulation frequency up to a modulation frequency of 1 kHz; above that, it deteriorates dramatically. In individual animals, carrier-periodicity detection may improve with increasing fundamental frequency up to about 500 Hz or may be independent of fundamental frequency. Amplitude-modulation thresholds are consistent with the hypothesis that intensity difference limens in the gerbil may be considerably worse than in humans, leading to the relative insensitivity at low modulation frequencies. Unlike in humans, inner-ear filtering appears not to limit amplitude-modulation sensitivity in the gerbil. Carrier-periodicity sensitivity changes with fundamental frequency in a manner similar to humans. Unlike in humans, there is no systematic interaction between AM and CP in the gerbil. This points to relatively independent processing of the perceptual cues associated with AM and CP.
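AM sensitivity in such experiments is commonly stated as a modulation depth in dB, i.e. 20*log10(m) for modulation index m. A sketch of generating a sinusoidally amplitude-modulated noise stimulus; all parameter values are illustrative assumptions, not those of the study.

```python
import numpy as np

fs = 48000                   # sample rate (Hz); illustrative
fm = 100.0                   # modulation frequency (Hz)
depth_db = -10.0             # modulation depth, 20 * log10(m)
m = 10 ** (depth_db / 20)    # linear modulation index, ~0.316

t = np.arange(int(0.5 * fs)) / fs
rng = np.random.default_rng(1)
carrier = rng.standard_normal(t.size)              # broadband noise carrier
stimulus = carrier * (1.0 + m * np.sin(2 * np.pi * fm * t))
```

A "10 dB worse" threshold then simply means the animal needs a modulation index roughly 3.16 times larger than a human listener to detect the same modulation.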

20.
Hearing protection devices (HPDs) such as earplugs are intended to mitigate noise exposure and reduce the incidence of hearing loss among persons frequently exposed to intense sound. However, distortions of spatial acoustic information and reduced audibility of low-intensity sounds caused by many existing HPDs can make their use untenable in high-risk (e.g., military or law enforcement) environments where auditory situational awareness is imperative. Here we assessed (1) sound source localization accuracy using a head-turning paradigm, (2) speech-in-noise recognition using a modified version of the QuickSIN test, and (3) tone detection thresholds using a two-alternative forced-choice task. Subjects were 10 young normal-hearing males. Four different HPDs were tested (two active, two passive), including two new and previously untested devices. Relative to unoccluded (control) performance, all tested HPDs significantly degraded performance across tasks, although one active HPD slightly improved high-frequency tone detection thresholds and did not degrade speech recognition. Behavioral data were examined with respect to head-related transfer functions measured using a binaural manikin with and without the tested HPDs in place. The data reinforce previous reports that HPDs significantly compromise a variety of auditory perceptual faculties, particularly sound localization, due to distortions of the high-frequency spectral cues that are important for the avoidance of front-back confusions.
