Similar Documents
1.
Even in the absence of external stimulation, the cochleas of most humans emit very faint sounds below the threshold of hearing, known as spontaneous otoacoustic emissions. They are a signature of the active amplification mechanism in the cochlea. Emissions occur at frequencies that are unique to an individual and change little over time. The statistics of a population of ears exhibit characteristic features such as a preferred relative frequency distance between emissions (interemission intervals). We propose a simplified cochlea model comprising an array of active nonlinear oscillators coupled both hydrodynamically and viscoelastically. The oscillators are subject to a weak spatial disorder that lends individuality to the simulated cochlea. Our model captures basic statistical features of the emissions: the distributions of (1) emission frequencies, (2) the number of emissions per ear, and (3) interemission intervals. In addition, the model reproduces systematic changes of the interemission intervals with frequency. We show that the mechanism for the preferred interemission interval in our model is the occurrence of synchronized clusters of oscillators.
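The coupled-oscillator mechanism described above can be sketched numerically. The following is a minimal illustration only, not the authors' actual model: a chain of Stuart-Landau (supercritical-Hopf) oscillators with a tonotopic frequency gradient, weak random disorder, and nearest-neighbour elastic coupling. All parameter values are arbitrary assumptions.

```python
import numpy as np

def simulate_chain(n=20, mu=0.1, coupling=0.05, disorder=0.02,
                   dt=0.01, steps=50_000, seed=1):
    """Chain of Stuart-Landau oscillators with elastic nearest-neighbour
    coupling and weak frequency disorder (illustrative parameters)."""
    rng = np.random.default_rng(seed)
    # Tonotopic gradient of characteristic frequencies, plus the small
    # random disorder that gives each simulated "cochlea" its identity.
    omega = np.linspace(2.0, 1.0, n) * (1 + disorder * rng.standard_normal(n))
    z = 1e-3 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
    trace = np.empty(steps)
    for t in range(steps):
        lap = np.zeros_like(z)
        lap[1:-1] = z[:-2] - 2 * z[1:-1] + z[2:]   # elastic (spring-like) coupling
        dz = (mu + 1j * omega) * z - np.abs(z)**2 * z + coupling * lap
        z = z + dt * dz
        trace[t] = z.real.sum()                    # stand-in for ear-canal pressure
    return z, trace
```

Starting from near-silence, every oscillator settles onto a self-sustained limit cycle; the spectrum of `trace` would then show discrete peaks whose positions depend on the disorder seed, mimicking the individuality of emission spectra.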

3.
Vilfan A, Duke T. Biophysical Journal 2008, 95(10):4622-4630.
Spontaneous otoacoustic emissions (SOAEs) are indicators of an active process in the inner ear that enhances the sensitivity and frequency selectivity of hearing. They are particularly regular and robust in certain lizards, so these animals are good model organisms for studying how SOAEs are generated. We show that the published properties of SOAEs in the bobtail lizard are wholly consistent with a mathematical model in which active oscillators, with exponentially varying characteristic frequencies, are coupled together in a chain by visco-elastic elements. Physically, each oscillator corresponds to a small group of hair cells, covered by a tectorial sallet, so our theoretical analysis directly links SOAEs to the micromechanics of active hair bundles.

4.
Bilateral cochlear implants aim to provide hearing in both ears for children who are deaf and to promote binaural/spatial hearing. Benefits are limited by mismatched devices and unilaterally driven development, which could compromise the normal integration of left- and right-ear input. We thus asked whether children hear a fused image (i.e., one vs. two sounds) from their bilateral implants and whether this “binaural fusion” reduces listening effort. Binaural fusion was assessed by asking 25 deaf children with cochlear implants and 24 peers with normal hearing whether they heard one or two sounds when listening to bilaterally presented acoustic click trains/electric pulses (250 Hz trains of 36 ms presented at 1 Hz). Reaction times and pupillary changes were recorded simultaneously to measure listening effort. Bilaterally implanted children heard one image of bilateral input less frequently than normal-hearing peers, particularly when intensity levels on each side were balanced. Binaural fusion declined as brainstem asymmetries increased and age at implantation decreased. Children implanted later had access to acoustic input prior to implantation due to progressive deterioration of hearing. Both pupil diameter and reaction time increased as perception of binaural fusion decreased. Results indicate that, without binaural level cues, children have difficulty fusing input from their bilateral implants to perceive one sound, which costs them increased listening effort. Brainstem asymmetries exacerbate this issue. By contrast, later implantation, reflecting longer access to bilateral acoustic hearing, may have supported development of the auditory pathways underlying binaural fusion. Improved integration of bilateral cochlear implant signals is required to improve children's binaural hearing.

5.
The physiological roots of music perception are a matter of long-standing debate. Recently, light has been shed on this problem by the study of otoacoustic emissions (OAEs), which are weak sounds generated by the inner ear following acoustic stimulation and, sometimes, even spontaneously. In the present study, a high-resolution time-frequency method called matching pursuit was applied to the OAEs recorded from the ears of 45 normal volunteers so that the component frequencies, amplitudes, latencies, and time spans could be accurately determined. The method allowed us to find that, for each ear, the OAEs consisted of characteristic frequency patterns that we call resonant modes. Here we demonstrate that, on average, the frequency ratios of the resonant modes from all the cochleas studied clustered around small-integer values. The ratios are the same as those found by Pythagoras to be most musically pleasant and which form the basis of the Just tuning system. The statistical significance of the results was verified against a random distribution of ratios. As an explanatory model, there are attractive features in a recent theory that represents the cochlea as a surface acoustic wave resonator; in this situation the spacing between the rows of hearing receptors can create resonant cavities of defined lengths. By adjusting the geometry and the lengths of the resonant cavities, it is possible to generate the preferred frequency ratios we have found here. We conclude that musical perception might be related to specific geometrical and physiological properties of the cochlea.
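As a toy illustration of the ratio analysis (not the authors' matching-pursuit pipeline), one can ask whether a measured frequency ratio between two resonant modes lies near a small-integer ratio of the kind used in just intonation. The maximum denominator below is an arbitrary choice.

```python
from fractions import Fraction

def nearest_simple_ratio(ratio, max_denominator=6):
    """Return the closest small-integer fraction to a measured
    frequency ratio, together with the approximation error."""
    approx = Fraction(ratio).limit_denominator(max_denominator)
    return approx, abs(float(approx) - ratio)
```

For example, a measured ratio of 1.51 between two resonant-mode frequencies maps to 3/2, the just-intonation perfect fifth, with an error of 0.01; 1.26 maps to 5/4, the just major third.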

6.
The tectorial membrane (TM) is widely believed to play an important role in determining the ear's ability to detect and resolve incoming acoustic information. While it is still unclear precisely what that role is, the TM has been hypothesized to help overcome viscous forces and thereby sharpen mechanical tuning of the sensory cells. Lizards present a unique opportunity to further study the role of the TM given the diverse inner-ear morphological differences across species. Furthermore, stimulus-frequency otoacoustic emissions (SFOAEs), sounds emitted by the ear in response to a tone, noninvasively probe the frequency selectivity of the ear. We report estimates of auditory tuning derived from SFOAEs for 12 different species of lizards with widely varying TM morphology. Despite gross anatomical differences across the species examined herein, low-level SFOAEs were readily measurable in all ears tested, even in non-TM species whose basilar papilla contained as few as 50-60 hair cells. Our measurements generally support theoretical predictions: longer delays/sharper tuning features are found in species with a TM relative to those without. However, SFOAEs from at least one non-TM species (Anolis) with long delays suggest there are likely additional micromechanical factors at play that can directly affect tuning. Additionally, in the one species examined with a continuous TM (Aspidoscelis) where cell-to-cell coupling is presumably relatively stronger, delays were intermediate. This observation appears consistent with recent reports that suggest the TM may play a more complex macromechanical role in the mammalian cochlea via longitudinal energy distribution (and thereby affect tuning). Although significant differences exist between reptilian and mammalian auditory biophysics, understanding lizard OAE generation mechanisms yields significant insight into fundamental principles at work in all vertebrate ears.

7.
What did Morganucodon hear?
The structure of the middle and inner ear of Morganucodon, one of the oldest known mammals, is reviewed and compared to the structure of the ears of extant mammals, reptiles and birds with known auditory capabilities. Specifically, allometric relationships between ear dimensions (basilar-membrane length, tympanic-membrane area and stapes-footplate area) and specific features of the audiogram are defined in extant ears. These relationships are then used to make several predictions of auditory function in Morganucodon. The results point out that the ear structures of Morganucodon are similar in dimensions to ear structures in both extant small mammals, with predominantly high-frequency (10 kHz) auditory capabilities, and reptiles and birds, with better low- and middle-frequency hearing (< 5 kHz). Although the allometric analysis cannot by itself determine whether Morganucodon heard more like present-day small mammals, or birds and reptiles, the apparent stiffness of the Morganucodon middle ear is both more consistent with the high-frequency mammalian middle ear and would act to decrease the sensitivity of a bird-reptile middle ear to low-frequency sound. Several likely hearing scenarios for Morganucodon are defined, including a scenario in which these animals had ears like those of modern small mammals that are selectively sensitive to high-frequency sounds, and a second scenario in which the Morganucodon ear was moderately sensitive to sounds of a narrow middle-frequency range (5–7 kHz) and relatively insensitive to sounds of higher or lower frequency. The evidence needed to substantiate either scenario includes some objective measure of the stiffness of the Morganucodon ossicular system, while a key datum needed to distinguish between the two hypotheses is confirmation of the presence or absence of a cochlear lamina in the Morganucodon inner ear.

8.
Xu L, Lü JZ. Acta Physiologica Sinica (生理学报) 1991, 43(3):306-310.
Using bone-conducted tone bursts of different frequencies, evoked otoacoustic emissions (EOAEs) were recorded simultaneously from both ears of 7 normal-hearing subjects (14 ears). This method halves the recording time compared with testing each ear in turn. The results show that the EOAE is a narrow-band sound whose center frequency increases with the stimulus frequency, suggesting that the EOAE is generated near the cochlear place corresponding to the stimulus frequency. EOAE latency showed no clear dependence on stimulus intensity, but tended to shorten with increasing stimulus frequency, possibly because the distance between the tympanic membrane and the basilar-membrane site where the EOAE is generated differs for stimuli of different frequencies. With 1.0, 2.0, 3.0 and 4.0 kHz tone bursts, EOAEs were recorded in all 14 ears (except one ear at 4.0 kHz); at 0.5 kHz and 6.0 kHz, EOAEs were recorded in 10 and 7 ears, respectively. On the emission cochleogram obtained by connecting the mean EOAE thresholds for 0.5-6.0 kHz tone bursts, the threshold was lowest at 1.0 kHz; since the mean middle-ear resonance frequency measured in these subjects was 1100±230 Hz, we infer that the lowest EOAE threshold at 1.0 kHz is related to the middle-ear transfer function. The method described here, recording bone-conducted EOAEs from both ears simultaneously and plotting the emission cochleogram, can be used for objective clinical hearing assessment.

9.

Background

The hearing of tetrapods including humans is enhanced by an active process that amplifies the mechanical inputs associated with sound, sharpens frequency selectivity, and compresses the range of responsiveness. The most striking manifestation of the active process is spontaneous otoacoustic emission, the unprovoked emergence of sound from an ear. Hair cells, the sensory receptors of the inner ear, are known to provide the energy for such emissions; it is unclear, though, how ensembles of such cells collude to power observable emissions.

Methodology and Principal Findings

We have measured and modeled spontaneous otoacoustic emissions from the ear of the tokay gecko, a convenient experimental subject that produces robust emissions. Using a van der Pol formulation to represent each cluster of hair cells within a tonotopic array, we have examined the factors that influence the cooperative interaction between oscillators.

Conclusions and Significance

A model that includes viscous interactions between adjacent hair cells fails to produce emissions similar to those observed experimentally. In contrast, elastic coupling yields realistic results, especially if the oscillators near the ends of the array are weakened so as to minimize boundary effects. Introducing stochastic irregularity in the strength of oscillators stabilizes peaks in the spectrum of modeled emissions, further increasing the similarity to the responses of actual ears. Finally, and again in agreement with experimental findings, the inclusion of a pure-tone external stimulus repels the spectral peaks of spontaneous emissions. Our results suggest that elastic coupling between oscillators of slightly differing strength explains several properties of the spontaneous otoacoustic emissions in the gecko.
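The elastic-versus-viscous comparison can be sketched numerically. This is an illustrative toy, not the paper's fitted model: a chain of van der Pol oscillators (one per hair-cell cluster) along a tonotopic gradient with stochastic irregularity, coupled through either displacements (springs, elastic) or velocities (dashpots, viscous). All parameter values are arbitrary.

```python
import numpy as np

def vdp_chain(mode="elastic", n=10, eps=0.2, k=0.1, jitter=0.05,
              dt=0.001, steps=150_000, seed=0):
    """Chain of van der Pol oscillators with nearest-neighbour coupling.
    mode="elastic": neighbours coupled via displacement (spring-like);
    mode="viscous": neighbours coupled via velocity (dashpot-like)."""
    rng = np.random.default_rng(seed)
    # Tonotopic frequency gradient plus stochastic irregularity.
    w = np.linspace(1.5, 0.8, n) * (1 + jitter * rng.standard_normal(n))
    x = 1e-3 * rng.standard_normal(n)
    v = np.zeros(n)
    rec = np.empty((steps, n))
    for t in range(steps):
        y = x if mode == "elastic" else v
        lap = np.zeros(n)
        lap[1:-1] = y[:-2] - 2 * y[1:-1] + y[2:]       # nearest-neighbour coupling
        a = eps * (1 - x**2) * v - w**2 * x + k * lap  # van der Pol dynamics
        v = v + dt * a      # semi-implicit Euler: velocity first...
        x = x + dt * v      # ...then position, for numerical stability
        rec[t] = x
    return rec
```

Comparing the spectra of `rec.sum(axis=1)` for the two modes is one way to reproduce the qualitative finding that the coupling type shapes the emission spectrum; the quantitative comparison with gecko data is beyond this sketch.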

10.
Zimmer U, Macaluso E. Neuron 2005, 47(6):893-905.
Our brain continuously receives complex combinations of sounds originating from different sources and relating to different events in the external world. Timing differences between the two ears can be used to localize sounds in space, but only when the inputs to the two ears have similar spectrotemporal profiles (high binaural coherence). We used fMRI to investigate any modulation of auditory responses by binaural coherence. We assessed how processing of these cues depends on whether spatial information is task relevant and whether brain activity correlates with subjects' localization performance. We found that activity in Heschl's gyrus increased with increasing coherence, irrespective of whether localization was task relevant. Posterior auditory regions also showed increased activity for high coherence, primarily when sound localization was required and subjects successfully localized sounds. We conclude that binaural coherence cues are processed throughout the auditory cortex and that these cues are used in posterior regions for successful auditory localization.

11.
Perception of movement in acoustic space depends on comparison of the sound waveforms reaching the two ears (binaural cues) as well as spectrotemporal analysis of the waveform at each ear (monaural cues). The relative importance of these two cues is different for perception of vertical or horizontal motion, with spectrotemporal analysis likely to be more important for perceiving vertical shifts. In humans, functional imaging studies have shown that sound movement in the horizontal plane activates brain areas distinct from the primary auditory cortex, in parietal and frontal lobes and in the planum temporale. However, no previous work has examined activations for vertical sound movement. It is therefore difficult to generalize previous imaging studies, based on horizontal movement only, to multidimensional auditory space perception. Using externalized virtual-space sounds in a functional magnetic resonance imaging (fMRI) paradigm to investigate this, we compared vertical and horizontal shifts in sound location. A common bilateral network of brain areas was activated in response to both horizontal and vertical sound movement. This included the planum temporale, superior parietal cortex, and premotor cortex. Sounds perceived laterally in virtual space were associated with contralateral activation of the auditory cortex. These results demonstrate that sound movement in vertical and horizontal dimensions engages a common processing network in the human cerebral cortex and show that multidimensional spatial properties of sounds are processed at this level.

12.
Apart from detecting sounds, vertebrate ears occasionally produce sounds. These spontaneous otoacoustic emissions are the most compelling evidence for the existence of the cochlear amplifier, an active force-generating process within the cochlea that resides in the motility of the hair cells. Insects have neither a cochlea nor hair cells, yet recent studies demonstrate that an active process that is equivalent to the cochlear amplifier occurs in at least some insect ears; like hair cells, the chordotonal sensory neurons that mediate hearing in Drosophila actively generate forces that augment the minute vibrations they transduce. This neuron-based force-generation, its impact on the ear's macroscopic performance, and the underlying molecular mechanism are the topics of this article, which summarizes some of the recent findings on how the Drosophila organ of hearing works. Functional parallels with vertebrate auditory systems are described that recommend the fly for the study of fundamental processes in hearing.

13.
Vertebrates inhabit and communicate acoustically in most natural environments. We review the influence of environmental factors on the hearing sensitivity of terrestrial vertebrates, and on the anatomy and mechanics of the middle ears. Evidence suggests that both biotic and abiotic environmental factors affect the evolution of bandwidth and frequency of peak sensitivity of the hearing spectrum. Relevant abiotic factors include medium type, temperature, and noise produced by nonliving sources. Biotic factors include heterospecific, conspecific, or self-produced sounds that animals are selected to recognize, and acoustic interference by sounds that other animals generate. Within each class of tetrapods, the size of the middle ear structures correlates directly to body size and inversely to frequency of peak sensitivity. Adaptation to the underwater medium in cetaceans involved reorganization of the middle ear for novel acoustic pathways, whereas adaptation to subterranean life in several mammals resulted in hypertrophy of the middle ear ossicles to enhance their inertial mass for detection of seismic vibrations. The comparative approach has revealed a number of generalities about the effect of environmental factors on hearing performance and middle ear structure across species. The current taxonomic sampling of the major tetrapod groups is still highly unbalanced and incomplete. Future expansion of the comparative evidence should continue to reveal general patterns and novel mechanisms.

14.
Sensitive hearing organs often employ nonlinear mechanical sound processing, which generates distortion-product otoacoustic emissions (DPOAE). Such emissions are also recordable from tympanal organs of insects. In vertebrates (including humans), otoacoustic emissions are considered by-products of active sound amplification through specialized sensory receptor cells in the inner ear. Force generated by these cells primarily augments the displacement amplitude of the basilar membrane and thus increases auditory sensitivity. As in vertebrates, the emissions from insect ears are based on nonlinear mechanical properties of the sense organ. Apparently, to achieve maximum sensitivity, convergent evolutionary principles have been realized in the micromechanics of these hearing organs, although vertebrates and insects possess quite different types of receptor cells in their ears. Just as in vertebrates, otoacoustic emissions from insect ears are vulnerable and depend on an intact metabolism, but so far it is not clear whether, in tympanal organs, auditory nonlinearity is achieved by active motility of the sensory neurons or whether passive cellular characteristics cause the nonlinear behavior. In the antennal ears of flies and mosquitoes, however, active vibrations of the flagellum have been demonstrated. Our review concentrates on experiments studying the tympanal organs of grasshoppers and moths; we show that their otoacoustic emissions are produced in a frequency-specific way and can be modified by electrical stimulation of the sensory cells. Even the simple ears of notodontid moths produce distinct emissions, although they have just one auditory neuron. At present it is still uncertain, both in vertebrates and in insects, whether the nonlinear amplification so essential for sensitive sound processing is primarily due to motility of the somata of specialized sensory cells or to active movement of their (stereo-)cilia. We anticipate that further experiments with the relatively simple ears of insects will help answer these questions.

15.
We are constantly exposed to a mixture of sounds of which only few are important to consider. In order to improve detectability and to segregate important sounds from less important sounds, the auditory system uses different aspects of natural sound sources. Among these are (a) its specific location and (b) synchronous envelope fluctuations in different frequency regions. Such a comodulation of different frequency bands facilitates the detection of tones in noise, a phenomenon known as comodulation masking release (CMR). Physiological as well as psychoacoustical studies usually investigate only one of these strategies to segregate sounds. Here we present psychoacoustical data on CMR for various virtual locations of the signal by varying its interaural phase difference (IPD). The results indicate that the masking release in conditions with binaural (interaural phase differences) and across-frequency (synchronous envelope fluctuations, i.e. comodulation) cues present is equal to the sum of the masking releases for each of the cues separately. Data and model predictions with a simplified model of the auditory system indicate an independent and serial processing of binaural cues and monaural across-frequency cues, maximizing the benefits from the envelope comparison across frequency and the comparison of fine structure across ears.

16.
In response to a sound stimulus, the inner ear emits sounds called otoacoustic emissions. While the exact mechanism for the production of otoacoustic emissions is not known, active motion of individual hair cells is thought to play a role. Two possible sources for otoacoustic emissions, both localized within individual hair cells, include somatic motility and hair bundle motility. Because physiological models of each of these systems are thought to be poised near a Hopf bifurcation, the dynamics of each can be described by the normal form for a system near a Hopf bifurcation. Here we demonstrate that experimental results from three-frequency suppression experiments can be predicted based on the response of an array of noninteracting Hopf oscillators tuned at different frequencies. This supports the idea that active motion of individual hair cells contributes to active processing of sounds in the ear. Interestingly, the model suggests an explanation for differing results recorded in mammals and nonmammals.
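The Hopf normal form invoked here has a hallmark compressive response that can be checked in a few lines. The sketch below (illustrative parameters, not the paper's simulation) integrates a single oscillator driven at its characteristic frequency; at the bifurcation point the steady-state amplitude follows the one-third power law, so an eightfold increase in forcing only doubles the response.

```python
import numpy as np

def hopf_response(F, omega0=1.0, mu=0.0, dt=0.005, steps=100_000):
    """Integrate dz/dt = (mu + i*omega0) z - |z|^2 z + F exp(i*omega0*t)
    and return the response amplitude after the transient. At the
    bifurcation (mu = 0), the rotating-frame fixed point satisfies
    R**3 = F, i.e. R = F**(1/3)."""
    z = 0j
    for t in range(steps):
        drive = F * np.exp(1j * omega0 * t * dt)
        z += dt * ((mu + 1j * omega0) * z - abs(z)**2 * z + drive)
    return abs(z)
```

For example, `hopf_response(1e-3)` settles near 0.1 and `hopf_response(8e-3)` near 0.2, consistent with the cube-root law.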

17.
The high sensitivity and effective frequency discrimination of sound detection performed by the auditory system rely on the dynamics of a system of hair cells. In the inner ear, these acoustic receptors are primarily attached to an overlying structure that provides mechanical coupling between the hair bundles. Although the dynamics of individual hair bundles has been extensively investigated, the influence of mechanical coupling on the motility of the system of bundles remains underdetermined. We developed a technique of mechanically coupling two active hair bundles, enabling us to probe the dynamics of the coupled system experimentally. We demonstrated that the coupling could enhance the coherence of hair bundles’ spontaneous oscillation, as well as their phase-locked response to sinusoidal stimuli, at the calcium concentration in the surrounding fluid near the physiological level. The empirical data were consistent with numerical results from a model of two coupled nonisochronous oscillators, each displaying a supercritical Hopf bifurcation. The model revealed that a weak coupling can poise the system of unstable oscillators closer to the bifurcation by a shift in the critical point. In addition, the dynamics of strongly coupled oscillators far from criticality suggested that individual hair bundles may be regarded as nonisochronous oscillators. An optimal degree of nonisochronicity was required for the observed tuning behavior in the coherence of autonomous motion of the coupled system.
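A stripped-down version of the two-bundle setup (a toy, not the authors' model, and with arbitrary parameters) is a pair of detuned Stuart-Landau oscillators with diffusive elastic coupling: once the coupling exceeds the detuning, the pair phase-locks, which is one simple route to the enhanced coherence reported above.

```python
import numpy as np

def phase_drift(k, dw=0.05, mu=0.1, dt=0.001, steps=200_000):
    """Total drift of the phase difference between two Hopf-type
    oscillators with characteristic frequencies 1 +/- dw, coupled
    diffusively with strength k, measured over the second half of
    the run. Near-zero drift means the pair has phase-locked."""
    w1, w2 = 1.0 + dw, 1.0 - dw
    z1, z2 = 0.01 + 0j, 0.01j
    phis = []
    for t in range(steps):
        dz1 = (mu + 1j * w1) * z1 - abs(z1)**2 * z1 + k * (z2 - z1)
        dz2 = (mu + 1j * w2) * z2 - abs(z2)**2 * z2 + k * (z1 - z2)
        z1 += dt * dz1
        z2 += dt * dz2
        if t >= steps // 2:
            phis.append(np.angle(z1) - np.angle(z2))
    dphi = np.unwrap(phis)
    return abs(dphi[-1] - dphi[0])
```

With `k = 0.1` the phase difference stays bounded (locked); with `k = 0` it winds steadily at the detuning rate.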

18.
Tympanal ears of female gypsy moths Lymantria dispar dispar (L.) (Lepidoptera: Erebidae: Lymantriinae) are reportedly more sensitive than ears of conspecific males to sounds below 20 kHz. The hypothesis is tested that this differential sensitivity is a result of sex‐specific functional roles of sound during sexual communication, with males sending and females receiving acoustic signals. Analyses of sounds produced by flying males reveal a 33‐Hz wing beat frequency and 14‐kHz associated clicks, which remain unchanged in the presence of female sex pheromone. Females exposed to playback sounds of flying conspecific males respond with wing raising, fluttering and walking, generating distinctive visual signals that may be utilized by mate‐seeking males at close range. By contrast, females exposed to playback sounds of flying heterospecific males (Lymantria fumida Butler) do not exhibit the above behavioural responses. Laser Doppler vibrometry reveals that female tympana are particularly sensitive to frequencies in the range produced by flying conspecific males, including the 33‐Hz wing beat frequency, as well as the 7‐kHz fundamental frequency and 14‐kHz dominant frequency of associated clicks. These results support the hypothesis that the female L. dispar ear is tuned to sounds of flying conspecific males. Based on previous findings and the data of the present study, sexual communication in L. dispar appears to proceed as: (i) females emitting sex pheromone that attracts males; (ii) males flying toward calling females; and (iii) sound signals from flying males at close range inducing movement in females, which, in turn, provides visual signals that could orient males toward females.

20.
The auditory sensory organ, the cochlea, not only detects but also generates sounds. Such sounds, otoacoustic emissions, are widely used for diagnosis of hearing disorders and to estimate cochlear nonlinearity. However, the fundamental question of how the otoacoustic emission exits the cochlea remains unanswered. In this study, emissions were provoked by two tones with a constant frequency ratio, and measured as vibrations at the basilar membrane and at the stapes, and as sound pressure in the ear canal. The propagation direction and delay of the emission were determined by measuring the phase difference between basilar membrane and stapes vibrations. These measurements show that cochlea-generated sound arrives at the stapes earlier than at the measured basilar membrane location. Data also show that basilar membrane vibration at the emission frequency is similar to that evoked by external tones. These results conflict with the backward-traveling-wave theory and suggest that at low and intermediate sound levels, the emission exits the cochlea predominantly through the cochlear fluids.
