Similar articles
Retrieved 20 similar articles (search time: 15 ms)
1.
2.
In this paper, we describe domain-general auditory processes that we believe are prerequisite to the linguistic analysis of speech. We discuss biological evidence for these processes and how they might relate to processes that are specific to human speech and language. We begin with a brief review of (i) the anatomy of the auditory system and (ii) the essential properties of speech sounds. Section 4 describes the general auditory mechanisms that we believe are applied to all communication sounds, and how functional neuroimaging is being used to map the brain networks associated with domain-general auditory processing. Section 5 discusses recent neuroimaging studies that explore where such general processes give way to those that are specific to human speech and language.

3.
The coding of complex sounds in the early auditory system has a 'standard model' based on the known physiology of the cochlea and main brainstem pathways. This model accounts for a wide range of perceptual capabilities. It is generally accepted that high cortical areas encode abstract qualities such as spatial location or speech sound identity. Between the early and late auditory system, the role of primary auditory cortex (A1) is still debated. A1 is clearly much more than a 'whiteboard' of acoustic information: neurons in A1 have complex response properties, showing sensitivity to both low-level and high-level features of sounds.

4.
5.
Songs mediate mate attraction and territorial defence in songbirds during the breeding season. Outside of the breeding season, the avian vocal repertoire often includes calls that function in foraging, antipredator and social behaviours. Songs and calls can differ substantially in their spectral and temporal content. Given seasonal variation in the vocal signals, the sender–receiver matching hypothesis predicts seasonal changes in auditory processing that match the physical properties of songs during the breeding season and calls outside of it. We tested this hypothesis in white-breasted nuthatches, Sitta carolinensis, tufted titmice, Baeolophus bicolor, and Carolina chickadees, Poecile carolinensis. We measured the envelope-following response (EFR), which quantifies phase locking to the amplitude envelope, and the frequency-following response (FFR), which quantifies phase locking to the temporal fine structure of sounds. Because songs and calls of nuthatches are amplitude modulated at different rates, we predicted seasonal changes in EFRs that match the rates of amplitude fluctuation in songs and calls. In chickadees and titmice, we predicted stronger FFRs during the spring and stronger EFRs during the winter because songs are tonal and calls include amplitude-modulated elements. In all three species, we found seasonal changes in EFRs and FFRs. EFRs varied across seasons and matched the amplitude modulations of songs and calls in nuthatches. In addition, female chickadees had stronger EFRs in the winter than in the spring. In all three species, FFRs during the spring tended to be stronger in females than in males. We also found species differences in EFRs and FFRs in both seasons; EFRs and FFRs tended to be higher in nuthatches than in chickadees and titmice. 
We discuss the potential mechanisms underlying seasonality in EFRs and FFRs and the implications of our results for communication during the breeding season and outside of it, when these three species form mixed-species flocks.
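The two measures used above can be sketched computationally. The following is a minimal illustration on simulated data (the spectral-magnitude definitions and all signal parameters are assumptions for illustration, not the study's actual analysis pipeline): the EFR is read off as the spectral magnitude of an averaged response at the amplitude-modulation rate, the FFR as the magnitude at the carrier frequency.

```python
import numpy as np

def efr_ffr(response, fs, am_hz, carrier_hz):
    """Estimate EFR and FFR strengths as spectral magnitudes at the
    amplitude-modulation rate and at the carrier frequency (a common
    definition; not necessarily the study's exact pipeline)."""
    spec = np.abs(np.fft.rfft(response)) / len(response)
    freqs = np.fft.rfftfreq(len(response), 1.0 / fs)
    efr = spec[np.argmin(np.abs(freqs - am_hz))]
    ffr = spec[np.argmin(np.abs(freqs - carrier_hz))]
    return efr, ffr

# Simulated "neural" response: a half-wave-rectified AM tone.  Rectification
# places energy at the 40 Hz envelope rate on top of the 400 Hz carrier.
fs = 8000
t = np.arange(fs) / fs                    # 1 s of samples
am_hz, carrier_hz = 40.0, 400.0
stimulus = (1 + np.sin(2 * np.pi * am_hz * t)) * np.sin(2 * np.pi * carrier_hz * t)
response = np.maximum(stimulus, 0.0)      # half-wave rectification

efr, ffr = efr_ffr(response, fs, am_hz, carrier_hz)
```

Both measures come out well above the background bins here, because the rectified response phase-locks to the envelope and to the fine structure simultaneously.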

6.
7.
Honey bee foragers use a "waggle dance" to inform nestmates about the direction and distance to attractive food sources. The sounds and air flows generated by the dancer's wing and abdominal vibrations have been implicated as important cues, but the mechanisms for decoding these dance messages are poorly understood. To understand the neural mechanisms of honey bee dance communication, we analyzed the anatomy of the antenna and of Johnston's organ (JO) in the antennal pedicel, as well as the mechanical response characteristics of the antenna and the neural response characteristics of the JO to acoustic stimuli. The honey bee JO consists of about 300-320 scolopidia connected with about 48 cuticular "knobs" around the circumference of the pedicel. Each scolopidium contains bipolar sensory neurons with both type I and type II cilia. The mechanical sensitivity of the antennal flagellum is high specifically in response to low- but not high-intensity stimuli at 265-350 Hz. The structural characteristics of the antenna, but not of the JO neurons, appear responsible for the nonlinear responses of the flagellum, in contrast to the mosquito and fruit fly. The honey bee flagellum is a sensitive movement detector, responding to tip displacements of 20 nm, comparable to the female mosquito. Furthermore, the JO neurons preserve both the frequency and the temporal information of acoustic stimuli, including the "waggle dance" sound. Intriguingly, the response of JO neurons was found to be age-dependent, suggesting that dance communication is only possible between aged foragers. These results suggest that mature honey bee antennae and JO neurons are best tuned to detect the 250-300 Hz sound generated during the waggle dance at a distance in the dark hive, and that sufficient responses of the JO neurons in the near field of a dancer are obtained by a reduction in the mechanical sensitivity of the flagellum. This nonlinear effect produces dynamic range compression in the honey bee auditory system.
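The dynamic range compression described above can be illustrated with a saturating nonlinearity. The transfer function and its parameters below are hypothetical, chosen only to show the effect, not a measured antennal response:

```python
def flagellum_displacement(stim_amp, sat=50.0):
    """Saturating (hyperbolic) mechanical response in arbitrary units:
    near-linear for faint inputs, compressive for intense ones.
    Purely illustrative; 'sat' is an assumed half-saturation level."""
    return stim_amp / (1.0 + stim_amp / sat)

faint = flagellum_displacement(10.0)     # far-field sound in the hive
loud = flagellum_displacement(1000.0)    # intense near field of a dancer

# A 100-fold increase in input amplitude yields far less than a
# 100-fold increase in output: dynamic range compression.
compression_ratio = (loud / faint) / (1000.0 / 10.0)
```

This is the qualitative behaviour the abstract describes: high gain for faint far-field sound, reduced mechanical sensitivity in the dancer's near field.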

8.
The past year has seen significant advances in our understanding of the structural (circuitry and chemistry of synaptic connections) and functional characteristics of the auditory brainstem. Some of the findings that shed light on the mechanisms underlying complex auditory information processing are highlighted.

9.
10.
Summary. Scaphiopus couchi is a primitive anuran whose vocal repertoire consists of a mating call and a release call. The two calls are distinct and differ in trill rate. Reception of airborne sound is achieved by means of a poorly differentiated region of skin on the head which serves as an eardrum. Whereas more modern anurans possess three distinct types of auditory nerve fibers, spadefoot toads possess only two types: a low-frequency-sensitive group which exhibits tone-on-tone inhibition and a high-frequency-sensitive group which is not inhibitable. The sharpness of frequency tuning of primary fibers in each group is comparable to that of more advanced vertebrate species. While the response properties of auditory fibers in the high-frequency-sensitive group are well matched to the spectral and temporal features of the spadefoot's mating call and release call, the low-frequency-sensitive fibers do not respond to these calls. Instead, they may be involved in the detection of bodily transmitted sounds during clasping, as well as other low-frequency sounds in the environment. The two groups of auditory fibers probably derive from separate auditory organs within the inner ear. Thresholds of auditory nerve fibers in spadefoot toads are poorer than in more advanced anurans, likely because of their less developed eardrum. The role of tone-on-tone inhibition in the peripheral auditory system is questioned with regard to its significance in processing sounds of biological value.
We dedicate this paper to Jasper J. Loftus-Hills, who was killed in a tragic accident near Austin, Texas on June 11, 1974. His post-doctoral appointment in our laboratory and his assistance in collecting spadefoot toads in the field recall fond memories. We also thank R. Sage for helping us collect animals and W. F. Blair for supplying tape recordings of Scaphiopus mating calls. The assistance of J. Paton in photographing the animal in Fig. 1 is gratefully appreciated.
This research was supported by the U.S. Public Health Service (NIH Research Grant NS-09244) and the National Science Foundation (Grant GB-18836); travel expenses involved in collecting animals were supported by a Cornell University Research Grant.

11.
Sparse representation of sounds in the unanesthetized auditory cortex
How do neuronal populations in the auditory cortex represent acoustic stimuli? Although sound-evoked neural responses in the anesthetized auditory cortex are mainly transient, recent experiments in the unanesthetized preparation have emphasized subpopulations with other response properties. To quantify the relative contributions of these different subpopulations in the awake preparation, we have estimated the representation of sounds across the neuronal population using a representative ensemble of stimuli. We used cell-attached recording with a glass electrode, a method for which single-unit isolation does not depend on neuronal activity, to quantify the fraction of neurons engaged by acoustic stimuli (tones, frequency-modulated sweeps, white-noise bursts, and natural stimuli) in the primary auditory cortex of awake head-fixed rats. We find that the population response is sparse, with stimuli typically eliciting high firing rates (>20 spikes/second) in less than 5% of neurons at any instant. Some neurons had very low spontaneous firing rates (<0.01 spikes/second). At the other extreme, some neurons had driven rates in excess of 50 spikes/second. Interestingly, the overall population response was well described by a lognormal distribution, rather than the exponential distribution that is often reported. Our results represent, to our knowledge, the first quantitative evidence for sparse representations of sounds in the unanesthetized auditory cortex. Our results are compatible with a model in which most neurons are silent much of the time, and in which representations are composed of small dynamic subsets of highly active neurons.
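A lognormal rate distribution with a low median directly produces the kind of sparseness reported above: most mass sits near zero, while a small tail of neurons fires intensely. A sketch with assumed parameters (the paper's fitted values are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical illustration: per-neuron firing rates drawn from a
# lognormal distribution with a median well below 1 spike/s (assumed
# median 0.5 spikes/s, log-sd 2).  These are NOT the paper's fits.
rates = rng.lognormal(mean=np.log(0.5), sigma=2.0, size=100_000)

# Fraction of the population exceeding a 20 spikes/s "highly active"
# criterion: a few percent, i.e. a sparse code.
active_fraction = float(np.mean(rates > 20.0))
median_rate = float(np.median(rates))
```

Under these assumed parameters only a few percent of neurons are "highly active" at once, while the median neuron fires below 1 spike/s, matching the qualitative picture in the abstract.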

12.
The past 30 years have seen a remarkable development in our understanding of how the auditory system--particularly the peripheral system--processes complex sounds. Perhaps the most significant advance has been our understanding of the mechanisms underlying auditory frequency selectivity and their importance for normal and impaired auditory processing. Physiologically vulnerable cochlear filtering can account for many aspects of normal and impaired psychophysical frequency selectivity, with important consequences for the perception of complex sounds. For normal hearing, remarkable mechanisms in the organ of Corti, involving enhancement of mechanical tuning (in mammals probably by feedback of electro-mechanically generated energy from the hair cells), produce exquisite tuning, reflected in the tuning properties of cochlear nerve fibres. Recent comparisons of physiological (cochlear nerve) and psychophysical frequency selectivity in the same species indicate that the ear's overall frequency selectivity can be accounted for by this cochlear filtering, at least in bandwidth terms. Because this cochlear filtering is physiologically vulnerable, it deteriorates under deleterious cochlear conditions--hypoxia, disease, drugs, noise overexposure, mechanical disturbance--and this deterioration is reflected in impaired psychophysical frequency selectivity. This is a fundamental feature of sensorineural hearing loss of cochlear origin, and is of diagnostic value. Cochlear filtering, particularly as reflected in the temporal firing patterns of cochlear fibres to complex sounds, is remarkably robust over a wide range of stimulus levels. Furthermore, cochlear filtering properties are a prime determinant of the 'place' and 'time' coding of frequency at the cochlear nerve level, both of which appear to be involved in pitch perception. The problem of how the place and time coding of complex sounds is effected over the ear's remarkably wide dynamic range is briefly addressed.
In the auditory brainstem, particularly the dorsal cochlear nucleus, inhibitory mechanisms enhance the spectral and temporal contrasts in complex sounds; these mechanisms are now being dissected neuropharmacologically. At the cortical level, mechanisms are evident that are capable of abstracting biologically relevant features of complex sounds. Fundamental studies of how the auditory system encodes and processes complex sounds are vital to promising recent applications in the diagnosis and rehabilitation of the hearing impaired.
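The cochlear frequency selectivity discussed above is commonly approximated in functional models by gammatone filters whose bandwidths follow the psychophysical ERB (equivalent rectangular bandwidth) scale. A minimal sketch, assuming the standard Glasberg-Moore ERB formula and a 4th-order gammatone; this is a generic textbook model, not the review's own analysis:

```python
import numpy as np

def erb_hz(f_hz):
    """Equivalent rectangular bandwidth of the auditory filter at centre
    frequency f_hz (Glasberg & Moore's widely used formula)."""
    return 24.7 * (4.37 * f_hz / 1000.0 + 1.0)

def gammatone_ir(fc, fs=16000, order=4, dur=0.05):
    """Impulse response of a 4th-order gammatone filter, a standard
    functional model of a single cochlear channel (sketch only)."""
    t = np.arange(int(fs * dur)) / fs
    b = 1.019 * erb_hz(fc)                  # bandwidth parameter
    return t**(order - 1) * np.exp(-2 * np.pi * b * t) * np.cos(2 * np.pi * fc * t)

# The auditory filter at 1 kHz is only ~133 Hz wide: sharp tuning
# relative to the full range of hearing.
bw_1k = erb_hz(1000.0)
ir = gammatone_ir(1000.0)
```

Narrow ERBs at each centre frequency are the model counterpart of the "exquisite tuning" of cochlear nerve fibres described in the abstract.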

13.
Objects and events can often be detected by more than one sensory system. Interactions between sensory systems can offer numerous benefits for the accuracy and completeness of the perception. Recent studies involving visual-auditory interactions have highlighted the perceptual advantages of combining information from these two modalities and have suggested that predominantly unimodal brain regions play a role in multisensory processing.

14.
Cohen L, Rothschild G, Mizrahi A. Neuron 2011, 72(2):357-369
Motherhood is associated with different forms of physiological alterations, including transient hormonal changes and brain plasticity. The impact of these changes on the emergence of maternal behaviors and on sensory processing within the mother's brain is largely unknown. Using in vivo cell-attached recordings in the primary auditory cortex (A1) of female mice, we discovered that exposure to pups' body odor reshapes neuronal responses to pure tones and natural auditory stimuli. This olfactory-auditory interaction appeared naturally in lactating mothers shortly after parturition and was long lasting. Naive virgins that had experience with pups also showed olfactory-auditory integration in A1, suggesting that this multisensory integration may be experience dependent. Neurons from lactating mothers were more sensitive to sounds than those from experienced mice, independent of the odor effects. These uni- and multisensory cortical changes may facilitate the detection and discrimination of pup distress calls and strengthen the bond between mothers and their neonates.

15.
The representation of alternative conspecific acoustic signals in the responses of a pair of local interneurons of the bushcricket Tettigonia viridissima was studied with variation in intensity and the direction of sound signals. The results suggest that the auditory world of the bushcricket is rather sharply divided into two azimuthal hemispheres, with signals arriving from any direction within one hemisphere being predominantly represented in the discharge of neurons of this side of the auditory pathway. In addition, each pathway also selects for the most intense of several alternative sounds. A low-intensity signal at 45 dB sound pressure level is quite effective when presented alone, but completely suppressed when given simultaneously with another signal at 60 dB sound pressure level. In a series of intracellular experiments the synaptic nature of the intensity-dependent suppression of competitive signals was investigated in a number of interneurons. The underlying synaptic mechanism is based on a membrane hyperpolarisation with a time-constant in the order of 5–10 s. The significance of this mechanism for hearing in choruses, and for the evolution of acoustic signals and signalling behaviour is discussed. Accepted: 20 November 1999
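The intensity-dependent suppression with a 5-10 s membrane time constant can be caricatured as a first-order hyperpolarising process driven by the louder signal. All numbers below (thresholds, gains, timings) are assumptions chosen to show the seconds-long suppression and recovery, not measured values:

```python
import numpy as np

def simulate(drive_db, masker_db, tau=7.0, dt=0.01, dur=20.0):
    """Response of an interneuron to a quiet signal (drive_db) with an
    optional louder masker present for the first 10 s.  Hyperpolarisation
    h relaxes toward a masker-driven target with time constant tau."""
    steps = int(dur / dt)
    h = 0.0
    out = np.zeros(steps)
    for i in range(steps):
        masker_on = (i * dt) < 10.0
        target = max(masker_db - 50.0, 0.0) if masker_on else 0.0
        h += dt * (target - h) / tau            # first-order relaxation
        out[i] = max(drive_db - 40.0 - h, 0.0)  # thresholded response
    return out

alone = simulate(45.0, 0.0)    # 45 dB SPL signal alone: responsive throughout
masked = simulate(45.0, 60.0)  # with a 60 dB masker: suppressed, slow recovery
```

With a ~7 s time constant, the weak signal's response is abolished a few seconds into the masker and only recovers a few seconds after the masker ends, consistent with the slow dynamics the abstract describes.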

16.
This paper reviews the basic aspects of auditory processing that play a role in the perception of speech. The frequency selectivity of the auditory system, as measured using masking experiments, is described and used to derive the internal representation of the spectrum (the excitation pattern) of speech sounds. The perception of timbre and distinctions in quality between vowels are related to both static and dynamic aspects of the spectra of sounds. The perception of pitch and its role in speech perception are described. Measures of the temporal resolution of the auditory system are described and a model of temporal resolution based on a sliding temporal integrator is outlined. The combined effects of frequency and temporal resolution can be modelled by calculation of the spectro-temporal excitation pattern, which gives good insight into the internal representation of speech sounds. For speech presented in quiet, the resolution of the auditory system in frequency and time usually markedly exceeds the resolution necessary for the identification or discrimination of speech sounds, which partly accounts for the robust nature of speech perception. However, for people with impaired hearing, speech perception is often much less robust.
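The excitation pattern referred to above can be sketched as the output power of a bank of rounded-exponential (roex) auditory filters applied to a line spectrum, in the style of the standard Moore/Glasberg calculation. This is a simplified version with assumed parameters; real calculations also include level-dependent filter asymmetry:

```python
import numpy as np

def erb(f):
    """Glasberg & Moore ERB of the auditory filter at frequency f (Hz)."""
    return 24.7 * (4.37 * f / 1000.0 + 1.0)

def excitation_pattern(freqs, powers, centres):
    """Excitation evoked by a line spectrum (component freqs/powers):
    power passed by a symmetric roex(p) filter at each centre frequency.
    A sketch of the standard calculation, not the paper's exact method."""
    ep = np.zeros(len(centres))
    for j, fc in enumerate(centres):
        p = 4.0 * fc / erb(fc)               # slope parameter from the ERB
        g = np.abs(freqs - fc) / fc          # normalised frequency deviation
        w = (1.0 + p * g) * np.exp(-p * g)   # roex(p) weighting function
        ep[j] = np.sum(w * powers)
    return ep

# A two-component "vowel-like" spectrum: equal components at 500 and 1500 Hz.
freqs = np.array([500.0, 1500.0])
powers = np.array([1.0, 1.0])
centres = np.linspace(300.0, 2000.0, 50)
ep = excitation_pattern(freqs, powers, centres)

ep_near_comp = ep[np.argmin(np.abs(centres - 500.0))]   # on a component
ep_between = ep[np.argmin(np.abs(centres - 1000.0))]    # in the gap
```

The pattern peaks at the component frequencies and falls steeply between them, which is how the excitation pattern preserves spectral contrasts such as vowel formants.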

17.
18.
Lesica NA, Grothe B. PLoS ONE 2008, 3(2):e1655
In this study, we investigate the ability of the mammalian auditory pathway to adapt its strategy for temporal processing under natural stimulus conditions. We derive temporal receptive fields from the responses of neurons in the inferior colliculus to vocalization stimuli with and without additional ambient noise. We find that the onset of ambient noise evokes a change in receptive field dynamics that corresponds to a change from bandpass to lowpass temporal filtering. We show that these changes occur within a few hundred milliseconds of the onset of the noise and are evident across a range of overall stimulus intensities. Using a simple model, we illustrate how these changes in temporal processing exploit differences in the statistical properties of vocalizations and ambient noises to increase the information in the neural response in a manner consistent with the principles of efficient coding.
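The reported switch from bandpass to lowpass temporal filtering can be illustrated with two toy temporal receptive fields. The filter shapes and time constants below are assumptions chosen to show the distinction, not the paper's fitted receptive fields:

```python
import numpy as np

# Toy temporal receptive fields: an excitatory lobe alone (lowpass), or
# excitation minus a slower suppressive lobe scaled so the net area is
# near zero (bandpass).  Time constants are illustrative only.
t = np.arange(0, 0.2, 0.001)              # 200 ms kernel, 1 ms steps
excite = np.exp(-t / 0.01)                # 10 ms excitatory lobe
suppress = np.exp(-t / 0.03)              # 30 ms suppressive lobe
bandpass = excite - 0.33 * suppress       # near-zero summed weight
lowpass = excite

def gain_at_dc(h):
    """Response of the filter to a constant input: its summed weights."""
    return abs(float(np.sum(h)))

# Bandpass filtering rejects the slow modulations that dominate ambient
# noise; the lowpass filter passes them.
dc_bp = gain_at_dc(bandpass)
dc_lp = gain_at_dc(lowpass)
```

The near-zero DC gain of the bandpass kernel is what makes it suppress steady backgrounds, so a switch toward lowpass filtering in noise trades that rejection for sensitivity to slower envelope structure.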

19.
20.
This evoked potential study of the bullfrog's auditory thalamic area (an auditory-responsive region in the posterior dorsal thalamus) shows that complex processing, distinct from that reported in lower auditory regions, occurs in this center. An acoustic stimulus consisting of two tones, one stimulating either the low- or the mid-frequency-sensitive population of auditory nerve fibers from the amphibian papilla and the other stimulating the high-frequency-sensitive population of fibers from the basilar papilla, evoked a maximal response. In some locations, the amplitude of the response to simultaneous stimulation of the two auditory organs was much larger than the linear sum of the responses to the individual tones presented separately. Bimodal spectral stimuli with relatively long rise-times (greater than or equal to 100 ms) evoked much larger responses than similar sounds with short rise-times; the optimal rise-times were close to those occurring in the bullfrog's mating call. The response depended on waveform periodicity and harmonic content, with a fundamental frequency of 200 Hz producing a larger response than fundamentals of 50, 100 or 300 Hz. Six of the natural calls in the bullfrog's vocal repertoire were tested, and the mating call and warning call evoked the best responses; each of these calls stimulates the two auditory organs simultaneously. The evoked response had a long refractory period which could not be altered by lesioning the efferent telencephalic pathways. The type of spectral and temporal information extracted by the auditory thalamic area suggests that this center is involved in processing complex sounds and likely plays an important role in the bullfrog's detection of some of its vocal signals.
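The supralinear two-tone facilitation can be captured by a response model with a multiplicative (AND-like) term combining the two papillar channels. The function and its weights are purely illustrative assumptions:

```python
def thalamic_response(ap_drive, bp_drive, w=1.0, w_and=3.0):
    """Toy thalamic response: linear drive from the amphibian-papilla
    (ap_drive) and basilar-papilla (bp_drive) channels plus an assumed
    multiplicative facilitation term that is nonzero only when both
    channels are active.  Illustrative, not a fitted model."""
    return w * (ap_drive + bp_drive) + w_and * ap_drive * bp_drive

r_low = thalamic_response(1.0, 0.0)     # low/mid-frequency tone alone
r_high = thalamic_response(0.0, 1.0)    # high-frequency tone alone
r_both = thalamic_response(1.0, 1.0)    # both tones simultaneously
```

Because the multiplicative term vanishes when either channel is silent, the two-tone response exceeds the sum of the single-tone responses, mirroring the supralinear summation reported for some recording locations.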


Copyright © Beijing Qinyun Technology Development Co., Ltd. 京ICP备09084417号