Similar articles
20 similar articles found.
1.
Speech and other communication signals contain components of frequency and amplitude modulation (FM, AM) that often occur together. The auditory midbrain (inferior colliculus, IC) is an important center for coding time-varying features of sounds, but it remains unclear how IC neurons respond when FM and AM stimuli are presented together. Here we studied IC neurons in urethane-anesthetized rats while the animals were simultaneously stimulated with FM and AM tones. Of 122 units that were sensitive to the dual stimuli, the responses could be grossly divided into two types: one that resembled the respective responses to FM or AM stimuli presented separately ("simple" sensitivity, 45% of units), and another that differed markedly from the respective responses to FM or AM tones ("complex" sensitivity, 55%). These types of combinational sensitivity were further correlated with each cell's frequency-tuning pattern (response area) and with its common response pattern to FM and AM sounds. The results suggest that such combinational sensitivity could reflect local synaptic interactions on IC neurons and that these neural mechanisms could underlie the more developed sensitivity to acoustic combinations found in the auditory cortex.

2.
Natural auditory stimuli are characterized by slow fluctuations in amplitude and frequency. However, the degree to which the neural responses to slow amplitude modulation (AM) and frequency modulation (FM) are capable of conveying independent time-varying information, particularly with respect to speech communication, is unclear. In the current electroencephalography (EEG) study, participants listened to amplitude- and frequency-modulated narrow-band noises with a 3-Hz modulation rate, and the resulting neural responses were compared. Spectral analyses revealed similar spectral amplitude peaks for AM and FM at the stimulation frequency (3 Hz), but amplitude at the second harmonic frequency (6 Hz) was much higher for FM than for AM. Moreover, the phase delay of neural responses with respect to the full-band stimulus envelope was shorter for FM than for AM. Finally, the critical analysis involved classification of single trials as being in response to either AM or FM based on either phase or amplitude information. Time-varying phase, but not amplitude, was sufficient to accurately classify AM and FM stimuli based on single-trial neural responses. Taken together, the current results support the dissociable nature of cortical signatures of slow AM and FM. These cortical signatures potentially provide an efficient means to dissect simultaneously communicated slow temporal and spectral information in acoustic communication signals.
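The phase-based single-trial classification described above can be sketched in a toy simulation. This is not the study's data or pipeline: two simulated "response" conditions differ only in the phase of their 3-Hz component, phase is read out from the FFT bin at the modulation frequency, and held-out trials are classified by circular distance to per-condition phase templates. All parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
fs, dur, f_mod = 250.0, 4.0, 3.0          # sample rate (Hz), trial length (s), modulation rate (Hz)
t = np.arange(int(fs * dur)) / fs
n_bin = int(round(f_mod * dur))           # FFT bin corresponding to 3 Hz

def simulate_trial(phase):
    """Toy 'neural response': 3-Hz oscillation with a condition-specific phase plus noise."""
    return np.sin(2 * np.pi * f_mod * t + phase) + rng.standard_normal(t.size)

def phase_at_fmod(trial):
    """Phase of the response at the modulation frequency."""
    return np.angle(np.fft.rfft(trial)[n_bin])

def circ_dist(a, b):
    """Absolute circular distance between two phases."""
    return np.abs(np.angle(np.exp(1j * (a - b))))

# Two conditions with identical 3-Hz amplitude but different phase.
am_trials = [simulate_trial(0.0) for _ in range(50)]
fm_trials = [simulate_trial(np.pi / 2) for _ in range(50)]

# Phase templates (circular means) from half the trials.
tmpl_am = np.angle(np.mean([np.exp(1j * phase_at_fmod(x)) for x in am_trials[:25]]))
tmpl_fm = np.angle(np.mean([np.exp(1j * phase_at_fmod(x)) for x in fm_trials[:25]]))

def classify(trial):
    p = phase_at_fmod(trial)
    return "AM" if circ_dist(p, tmpl_am) < circ_dist(p, tmpl_fm) else "FM"

# Classify the held-out trials.
correct = sum(classify(x) == "AM" for x in am_trials[25:]) + \
          sum(classify(x) == "FM" for x in fm_trials[25:])
accuracy = correct / 50
```

Because the two conditions have the same spectral amplitude at 3 Hz by construction, only the phase read-out separates them, mirroring the study's finding that time-varying phase, not amplitude, carries the discriminative information.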

3.
Receptive fields of single units in the auditory midbrain of anesthetized rats were studied using random FM-tone stimuli of narrow frequency ranges. Peri-spike averaging of the modulating waveform first produced a spectro-temporal receptive field (STRF). Combining STRFs obtained from the same unit at different frequency regions generated a composite receptive field covering a wider frequency range of 2 to 3 octaves. About 20% of the composite STRFs (26/122) showed a pattern of multiple bands that was not clear in the non-composite maps. Multiple bands in a given composite map were often oriented in the same direction (representing an upward or downward FM ramp) and separated by fairly regular frequency intervals. They reflect multiple FM trigger features in the stimulus rather than repetitive firing to a single trigger feature. The results show that the subcortical auditory pathways are capable of detecting multiple FM features, a sensitivity that could be useful in detecting the multiple harmonic FM bands present in vocalization sounds.
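The peri-spike averaging step can be illustrated with a one-dimensional toy model (not the study's stimuli or analysis code): a random modulating waveform is treated as an instantaneous-frequency trajectory, a model neuron fires at upward crossings of a trigger frequency, and the spike-triggered average of the waveform recovers that trigger feature. All names and parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000                                   # samples at a nominal 1-kHz analysis rate

# Random modulating waveform: low-pass filtered noise, read as instantaneous frequency (kHz).
kernel = np.hanning(101)
kernel /= kernel.sum()
freq = 10.0 + 8.0 * np.convolve(rng.standard_normal(n), kernel, mode="same")

# Toy neuron: fires (with probability 0.8) whenever the trajectory crosses 11 kHz upward.
f_trig = 11.0
crossings = np.flatnonzero((freq[:-1] < f_trig) & (freq[1:] >= f_trig)) + 1
spikes = crossings[rng.random(crossings.size) < 0.8]

# Peri-spike average of the modulating waveform (a 1-D analogue of the STRF).
lags = np.arange(-100, 101)
valid = spikes[(spikes > 100) & (spikes < n - 100)]
sta = np.mean([freq[s + lags] for s in valid], axis=0)
```

The average trajectory rises toward the trigger frequency just before the spike, so the recovered receptive field shows the upward-FM trigger feature; with several trigger frequencies, the same averaging would produce the multiple bands described in the abstract.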

4.
Barn owls have neurons in the midbrain that are sensitive to the direction of acoustic motion. We report here that acoustic motion-direction sensitive neurons with receptive-field centres in frontal auditory space are not randomly distributed. In the inferior colliculus and optic tectum of the left (right) brain, the responses of about two-thirds of the motion-direction sensitive neurons were sensitive to clockwise (counter-clockwise) motion. The midbrain contains maps of auditory space that represent about 15 degrees of ipsilateral and all of contralateral space. Since a similar bias in motion-direction sensitivity was observed for neurons with receptive-field centres in ipsilateral as well as in contralateral auditory space, the brain side at which a motion-direction sensitive neuron was recorded was a more important predictor of a cell's preferred direction than the spatial direction of the centre of its receptive field. Within one dorso-ventral electrode pass, motion-direction sensitivity typically stayed constant, suggesting a clustered or even columnar-like organization. From these distributions we hypothesize that the right brain is important for orienting movements toward the left hemifield and vice versa.

5.
We investigated responses of auditory nerve fibres (ANFs) and anteroventral cochlear nucleus (AVCN) units to narrowband 'single-formant' stimuli (SFSs). We found that low and medium spontaneous rate (SR) ANFs maintain greater amplitude modulation (AM) in their responses at high sound levels than do high SR units when sound level is considered in dB SPL. However, this partitioning of high and low SR units disappears if sound level is considered in dB relative to unit threshold. Stimuli with carrier frequencies away from unit best frequency (BF) were found to generate higher AM in responses at high sound levels than that observed even in most low and medium SR units for stimuli with carrier frequencies near BF. AVCN units were shown to have increased modulation depth in their responses when compared with high SR ANFs with similar BFs, and to have increased or comparable modulation depth when compared with low SR ANFs. At sound levels where AM almost completely disappears in high SR ANFs, most AVCN units we studied still showed significant AM in their responses. Using a dendritic model, we investigated possible mechanisms of enhanced AM in AVCN units, including the convergence of inputs from different SR groups of ANFs and a postsynaptic threshold mechanism in the soma.

6.
The mating (advertisement) calls of two sibling species of gray treefrogs, Hyla versicolor and Hyla chrysoscelis, are spectrally identical but differ in trill rate, which is higher in H. chrysoscelis. Single-unit recordings were made from the torus semicircularis of both species to investigate the neural mechanisms by which this species-specific temporal feature is analyzed. Using sinusoidally amplitude-modulated (AM) white noise as a stimulus, the temporal selectivity of these midbrain auditory neurons could be described by five response categories: 'AM nonselective' (34%), 'AM high-pass' (7%), 'AM low-pass' (6%), 'AM band-suppression' (12%), and 'AM tuned' (40%). The distributions of temporal tuning values (i.e., the modulation rate at which each AM-tuned unit responds maximally) are broad; in both species, neurons were found that were tuned to modulation rates greater than those found in their advertisement calls. Nevertheless, the temporal tuning values for H. versicolor (median = 25 Hz) were significantly lower than those for H. chrysoscelis (median = 32.5 Hz). The temporal selectivities of AM band-suppression neurons were found to be temperature dependent: the modulation rate at which a response minimum was observed shifted to higher values as the temperature was elevated. These results extend our earlier findings of temperature-dependent temporal selectivity in the gray treefrog. The selectivity of band-suppression and AM-tuned neurons to various rates of amplitude modulation was largely, but not completely, independent of whether sinusoidal or natural forms of AM were used.
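The five response categories above amount to classifying the shape of a unit's rate modulation transfer function (rate vs. modulation frequency). A minimal sketch of such a classifier follows; the decision rules and the 50% criterion are invented for illustration and are not the paper's actual criteria.

```python
import numpy as np

def classify_mtf(rates, criterion=0.5):
    """Crude classifier for a rate modulation transfer function.

    rates: firing rate at each tested modulation frequency, ordered low -> high.
    criterion: fraction of the peak rate used as the cut-off (an assumption here).
    """
    r = np.asarray(rates, dtype=float)
    lo, hi = r[0], r[-1]                      # rates at the lowest / highest tested fm
    peak, trough = r.max(), r.min()
    # Flat function: rate barely varies with modulation frequency.
    if peak == 0 or (peak - trough) < criterion * peak:
        return "AM nonselective"
    i_peak, i_trough = int(r.argmax()), int(r.argmin())
    interior_peak = 0 < i_peak < r.size - 1
    interior_trough = 0 < i_trough < r.size - 1
    # Dip in the middle with strong responses at both ends.
    if interior_trough and lo > criterion * peak and hi > criterion * peak:
        return "AM band-suppression"
    # Peak in the middle with weak responses at both ends.
    if interior_peak and lo < criterion * peak and hi < criterion * peak:
        return "AM tuned"
    return "AM high-pass" if hi >= lo else "AM low-pass"
```

For an AM-tuned unit, the temporal tuning value reported in the abstract would simply be the tested modulation frequency at the argmax of `rates`.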

7.
Liu X, Yan Y, Wang Y, Yan J. PLoS ONE 2010, 5(11): e14038

Background

Cortical neurons implement a highly frequency-specific modulation of subcortical nuclei, including the cochlear nucleus. Anatomical studies show that corticofugal fibers terminating in the auditory thalamus and midbrain are mostly ipsilateral. In contrast, corticofugal fibers terminating in the cochlear nucleus are bilateral, consistent with the demands of binaural hearing, which improves hearing quality. This leads to our hypothesis that corticofugal modulation of the initial neural processing of sound information from the contralateral and ipsilateral ears could be equivalent or coordinated at the first level of sound processing.

Methodology/Principal Findings

Using focal electrical stimulation of the auditory cortex and single-unit recording, this study examined corticofugal modulation of the ipsilateral cochlear nucleus. The same methods and procedures as in our previous study of corticofugal modulation of the contralateral cochlear nucleus were employed to allow direct comparison. We found that focal electrical stimulation of cortical neurons induced substantial changes in the response magnitude, response latency and receptive field of ipsilateral cochlear nucleus neurons. Cortical stimulation facilitated the auditory responses and shortened the response latencies of physiologically matched neurons, whereas it inhibited the auditory responses and lengthened the response latencies of unmatched neurons. Finally, cortical stimulation shifted the best frequencies of cochlear nucleus neurons towards those of the stimulated cortical neurons.

Conclusion

Our data suggest that cortical neurons enable a highly frequency-specific remodelling of sound-information processing in the ipsilateral cochlear nucleus, in the same manner as in the contralateral cochlear nucleus.

8.
Unique patterns of spike activity across neuron populations have been implicated in the coding of complex sensory stimuli. Delineating the patterns of neural activity in response to varying stimulus parameters and their relationships to the tuning characteristics of individual neurons is essential to ascertaining the nature of population coding within the brain. Here, we address these points in the midbrain coding of concurrent vocal signals of a sound-producing fish, the plainfin midshipman. Midshipman produce multiharmonic vocalizations which frequently overlap to produce beats. We used multivariate statistical analysis from single-unit recordings across multiple animals to assess the presence of a temporal population code. Our results show that distinct patterns of temporal activity emerge among midbrain neurons in response to concurrent signals that vary in their difference frequency. These patterns can serve to code beat difference frequencies. The patterns directly result from the differential temporal coding of difference frequency by individual neurons. Difference frequency encoding, based on temporal patterns of activity, could permit the segregation of concurrent vocal signals on time scales shorter than codes requiring averaging. Given the ubiquity across vertebrates of auditory midbrain tuning to the temporal structure of acoustic signals, a similar temporal population code is likely present in other species.

9.
Maps are a mainstay of visual, somatosensory, and motor coding in many species. However, auditory maps of space have not been reported in the primate brain. Instead, recent studies have suggested that sound location may be encoded via broadly responsive neurons whose firing rates vary roughly proportionately with sound azimuth. Within frontal space, maps and such rate codes involve different response patterns at the level of individual neurons. Maps consist of neurons exhibiting circumscribed receptive fields, whereas rate codes involve open-ended response patterns that peak in the periphery. This coding format discrepancy therefore poses a potential problem for brain regions responsible for representing both visual and auditory information. Here, we investigated the coding of auditory space in the primate superior colliculus (SC), a structure known to contain visual and oculomotor maps for guiding saccades. We report that, for visual stimuli, neurons showed circumscribed receptive fields consistent with a map, but for auditory stimuli, they had open-ended response patterns consistent with a rate or level-of-activity code for location. The discrepant response patterns were not segregated into different neural populations but occurred in the same neurons. We show that a read-out algorithm in which the site and level of SC activity both contribute to the computation of stimulus location is successful at evaluating the discrepant visual and auditory codes, and can account for subtle but systematic differences in the accuracy of auditory compared to visual saccades. This suggests that a given population of neurons can use different codes to support appropriate multimodal behavior.

10.
Species-specific vocalizations in mice have frequency-modulated (FM) components slower than the lower limit of FM direction selectivity in the core region of the mouse auditory cortex. To identify cortical areas selective to slow frequency modulation, we investigated tonal responses in the mouse auditory cortex using transcranial flavoprotein fluorescence imaging. For differentiating responses to frequency modulation from those to stimuli at constant frequencies, we focused on transient fluorescence changes after direction reversal of temporally repeated and superimposed FM sweeps. We found that the ultrasonic field (UF) in the belt cortical region selectively responded to the direction reversal. The dorsoposterior field (DP) also responded weakly to the reversal. Regarding the responses in UF, no apparent tonotopic map was found, and the right UF responses were significantly larger in amplitude than the left UF responses. The half-max latency in responses to FM sweeps was shorter in UF compared with that in the primary auditory cortex (A1) or anterior auditory field (AAF). Tracer injection experiments in the functionally identified UF and DP confirmed that these two areas receive afferent inputs from the dorsal part of the medial geniculate nucleus (MG). Calcium imaging of UF neurons stained with fura-2 was performed using a two-photon microscope, and the presence of UF neurons that were selective to both direction and direction reversal of slow frequency modulation was demonstrated. These results strongly suggest a role for UF, and possibly DP, as cortical areas specialized for processing slow frequency modulation in mice.

11.
In songbirds, species identity and developmental experience shape vocal behavior and behavioral responses to vocalizations. The interaction of species identity and developmental experience may also shape the coding properties of sensory neurons. We tested whether responses of auditory midbrain and forebrain neurons to songs differed between species and between groups of conspecific birds with different developmental exposure to song. We also compared responses of individual neurons to conspecific and heterospecific songs. We studied zebra and Bengalese finches that were raised and tutored by conspecific birds, and zebra finches that were cross-tutored by Bengalese finches. Single-unit responses to zebra and Bengalese finch songs were recorded and analyzed by calculating mutual information (MI), response reliability, mean spike rate, fluctuations in time-varying spike rate, distributions of time-varying spike rates, and neural discrimination of individual songs. MI quantifies a response's capacity to encode information about a stimulus. In midbrain and forebrain neurons, MI was significantly higher in normal zebra finch neurons than in Bengalese finch and cross-tutored zebra finch neurons, but did not differ between Bengalese finch and cross-tutored zebra finch neurons. Information rate differences were largely due to spike rate differences. MI did not differ between responses to conspecific and heterospecific songs. Therefore, neurons from normal zebra finches encoded more information about songs than did neurons from other birds, but conspecific and heterospecific songs were encoded equally. Neural discrimination of songs and MI were highly correlated. These results demonstrate that developmental exposure to vocalizations shapes the information-coding properties of songbird auditory neurons. © 2009 Wiley Periodicals, Inc. Develop Neurobiol 70: 235–252, 2010.

12.
Responses of multi-units in the auditory cortex (AC) of unanaesthetized Mongolian gerbils to pure tones and to linearly frequency modulated (FM) sounds were analysed. Three types of responses to pure tones could be clearly distinguished on the basis of spectral tuning properties, response latencies and overall temporal response pattern. In response to FM sweeps these three types discharged in a temporal pattern similar to tone responses. However, for all type-1 units the latencies of some phasic response components shifted systematically as a function of the range and/or speed of modulation. Measurements of response latencies to FMs revealed that such responses were evoked whenever the modulation reached a particular instantaneous frequency (Fi). The effective Fi: (1) was independent of modulation range and speed; (2) was always reached before the modulation arrived at a local maximum of the frequency response function (FRF) and consequently differed for downward and upward sweeps; and (3) was correlated with the steepest slope of that FRF maximum. The three different types of units were found in discrete and separate fields or regions of the AC. It is concluded that gross temporal response properties are one of the key features distinguishing auditory cortical regions in the Mongolian gerbil. Accepted: 13 August 1997

13.
Capturing nature’s statistical structure in behavioral responses is at the core of the ability to function adaptively in the environment. Bayesian statistical inference describes how sensory and prior information can be combined optimally to guide behavior. One outstanding open question is how neural coding supports Bayesian inference, including how sensory cues are optimally integrated over time. Here we address what neural response properties allow a neural system to perform Bayesian prediction, i.e., predicting where a source will be in the near future given sensory information and prior assumptions. We show that the population vector decoder will perform Bayesian prediction when the receptive fields of the neurons encode the target dynamics with shifting receptive fields. We test the model using the system that underlies sound localization in barn owls. Neurons in the owl’s midbrain show shifting receptive fields for moving sources that are consistent with the predictions of the model. We predict that neural populations can be specialized to represent the statistics of dynamic stimuli to allow for a vector read-out of Bayes-optimal predictions.
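The core idea above — that a plain population-vector read-out becomes predictive when receptive fields shift — can be sketched in a toy model. This is an illustration, not the paper's model: Gaussian azimuth tuning, a shift of every receptive field opposite to the motion by velocity × lag, and a standard population-vector decoder. All parameter values are assumptions.

```python
import numpy as np

centers = np.linspace(-90.0, 90.0, 37)   # preferred azimuths (deg)
sigma = 15.0                             # tuning width (deg)

def rates(az, shift=0.0):
    """Gaussian tuning curves; a positive `shift` moves every receptive
    field against the motion, so the population response leads the source."""
    return np.exp(-0.5 * ((az - (centers - shift)) / sigma) ** 2)

def population_vector(r):
    """Standard population-vector read-out: rate-weighted mean preferred azimuth."""
    return float(np.sum(r * centers) / np.sum(r))

az_now, velocity, lag = 10.0, 100.0, 0.1  # current azimuth (deg), deg/s, prediction lag (s)

est_static = population_vector(rates(az_now))                        # unshifted RFs
est_pred = population_vector(rates(az_now, shift=velocity * lag))    # shifted RFs
```

With unshifted receptive fields the decoder reports the current position (~10 deg); with receptive fields shifted by velocity × lag, the identical read-out reports the predicted future position (~20 deg), without any change to the decoder itself.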

14.
Visual attention has many effects on neural responses, producing complex changes in firing rates, as well as modifying the structure and size of receptive fields, both in topological and feature space. Several existing models of attention suggest that these effects arise from selective modulation of neural inputs. However, anatomical and physiological observations suggest that attentional modulation targets higher levels of the visual system (such as V4 or MT) rather than input areas (such as V1). Here we propose a simple mechanism that explains how a top-down attentional modulation, falling on higher visual areas, can produce the observed effects of attention on neural responses. Our model requires only the existence of modulatory feedback connections between areas, and short-range lateral inhibition within each area. Feedback connections redistribute the top-down modulation to lower areas, which in turn alters the inputs of other higher-area cells, including those that did not receive the initial modulation. This produces firing rate modulations and receptive field shifts. Simultaneously, short-range lateral inhibition between neighboring cells produces competitive effects that are automatically scaled to receptive field size in any given area. Our model reproduces the observed attentional effects on response rates (response gain, input gain, biased competition automatically scaled to receptive field size) and receptive field structure (shifts and resizing of receptive fields both spatially and in complex feature space), without modifying model parameters. Our model also makes the novel prediction that attentional effects on response curves should shift from response gain to contrast gain as the spatial focus of attention drifts away from the studied cell.

15.
Periodic envelope or amplitude modulations (AM) with periodicities up to several thousand Hertz are characteristic of many natural sounds. Throughout the auditory pathway, signal periodicity is evident in neuronal discharges phase-locked to the envelope. In contrast to lower levels of the auditory pathway, cortical neurons do not phase-lock to periodicities above about 100 Hz. Therefore, we investigated alternative coding strategies for high envelope periodicities at the cortical level. Neuronal responses in the primary auditory cortex (AI) of gerbils to tones and AM were analysed. Two groups of stimuli were tested: (1) AM with a carrier frequency set to the unit's best frequency evoked phase-locked responses which were confined to low modulation frequencies (fms) up to about 100 Hz, and (2) AM with a spectrum completely outside the unit's frequency-response range evoked completely different responses that never showed phase-locking but instead showed rate tuning to high fms (50 to about 3000 Hz). In contrast to the phase-locked responses, the best fms determined from these latter responses appeared to be topographically distributed, reflecting a periodotopic organization in the AI. Implications of these results for the cortical representation of the perceptual qualities rhythm, roughness and pitch are discussed. Accepted: 25 July 1997
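Phase-locking to the envelope, the property that cortical neurons lose above about 100 Hz, is conventionally quantified with the Goldberg–Brown vector strength. The sketch below computes it for two toy spike trains (not data from this study); the spike-train parameters are illustrative.

```python
import numpy as np

def vector_strength(spike_times, fm):
    """Goldberg & Brown vector strength: 1 = perfect phase-locking to the
    modulation period 1/fm, 0 = no locking."""
    phases = 2 * np.pi * fm * np.asarray(spike_times)
    return float(np.abs(np.mean(np.exp(1j * phases))))

rng = np.random.default_rng(2)
fm = 50.0                                  # modulation frequency (Hz)

# Phase-locked toy response: one spike near a fixed phase of every cycle.
locked = (np.arange(200) + 0.1 * rng.standard_normal(200)) / fm

# Unlocked toy response: spikes at random times, same mean rate.
unlocked = np.sort(rng.uniform(0.0, 200 / fm, 200))

vs_locked = vector_strength(locked, fm)
vs_unlocked = vector_strength(unlocked, fm)
```

A rate code for high fms, by contrast, would show up not in this metric but in the mean spike count as a function of fm, which is why the abstract's second stimulus group required a different analysis.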

16.
In the auditory system, the stimulus-response properties of single neurons are often described in terms of the spectrotemporal receptive field (STRF), a linear kernel relating the spectrogram of the sound stimulus to the instantaneous firing rate of the neuron. Several algorithms have been used to estimate STRFs from responses to natural stimuli; these algorithms differ in their functional models, cost functions, and regularization methods. Here, we characterize the stimulus-response function of auditory neurons using a generalized linear model (GLM). In this model, each cell's input is described by: 1) a stimulus filter (STRF); and 2) a post-spike filter, which captures dependencies on the neuron's spiking history. The output of the model is given by a series of spike trains rather than instantaneous firing rate, allowing the prediction of spike train responses to novel stimuli. We fit the model by maximum penalized likelihood to the spiking activity of zebra finch auditory midbrain neurons in response to conspecific vocalizations (songs) and modulation limited (ml) noise. We compare this model to normalized reverse correlation (NRC), the traditional method for STRF estimation, in terms of predictive power and the basic tuning properties of the estimated STRFs. We find that a GLM with a sparse prior predicts novel responses to both stimulus classes significantly better than NRC. Importantly, we find that STRFs from the two models derived from the same responses can differ substantially and that GLM STRFs are more consistent between stimulus classes than NRC STRFs. These results suggest that a GLM with a sparse prior provides a more accurate characterization of spectrotemporal tuning than does the NRC method when responses to complex sounds are studied in these neurons.
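A stripped-down version of such a Poisson GLM can be sketched as follows. This is a simplified illustration, not the paper's model: the stimulus is one-dimensional rather than a spectrogram, there is no post-spike filter and no sparse prior (plain maximum likelihood instead of penalized likelihood), and all parameter values are assumptions.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
T, D = 5000, 8                      # time bins, stimulus-filter length
stim = rng.standard_normal(T)
k_true = np.array([1.0, -0.6, 0.3, -0.15, 0.05, 0.0, 0.0, 0.0])  # ground-truth filter
b_true = -1.0                                                    # ground-truth bias

# Design matrix of lagged stimulus values (column d holds the stimulus d bins ago).
X = np.zeros((T, D))
for d in range(D):
    X[d:, d] = stim[:T - d]

# Simulated spike counts from a Poisson GLM with exponential nonlinearity.
y = rng.poisson(np.exp(X @ k_true + b_true))

# Negative log-likelihood of the Poisson GLM (convex in the parameters) and its gradient.
def nll(theta):
    eta = X @ theta[:D] + theta[D]
    return np.sum(np.exp(eta)) - y @ eta

def grad(theta):
    mu = np.exp(X @ theta[:D] + theta[D])
    return np.concatenate([X.T @ (mu - y), [np.sum(mu - y)]])

res = minimize(nll, np.zeros(D + 1), jac=grad, method="L-BFGS-B")
k_hat, b_hat = res.x[:D], res.x[D]
```

Because the Poisson negative log-likelihood with an exponential nonlinearity is convex, the fit has no local-optimum problem; the paper's sparse prior would add a penalty term to `nll` that shrinks small filter coefficients toward zero.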

17.
Experience-dependent plasticity of receptive fields in the auditory cortex has been demonstrated by electrophysiological experiments in animals. In the present study we used PET neuroimaging to measure regional brain activity in volunteer human subjects during discriminatory classical conditioning of high (8000 Hz) or low (200 Hz) frequency tones by an aversive 100 dB white noise burst. Conditioning-related, frequency-specific modulation of tonotopic neural responses in the auditory cortex was observed. The modulated regions of the auditory cortex positively covaried with activity in the amygdala, basal forebrain and orbitofrontal cortex, and showed context-specific functional interactions with the medial geniculate nucleus. These results accord with animal single-unit data and support neurobiological models of auditory conditioning and value-dependent neural selection.

18.
Schnupp J. Neuron 2006, 51(3): 278-280
Responses in auditory cortex tend to be weaker, more phasic, and noisier than those of auditory brainstem and midbrain nuclei. Is the activity in cortex therefore merely a "degraded echo" of lower-level neural representations? In this issue of Neuron, Chechik and colleagues show that, while cortical responses indeed convey less sensory information than auditory midbrain neurons, their responses are also much less redundant.

19.
Distributed coding of sound locations in the auditory cortex
Although the auditory cortex plays an important role in sound localization, that role is not well understood. In this paper, we examine the nature of spatial representation within the auditory cortex, focusing on three questions. First, are sound-source locations encoded by individual sharply tuned neurons or by activity distributed across larger neuronal populations? Second, do temporal features of neural responses carry information about sound-source location? Third, are any fields of the auditory cortex specialized for spatial processing? We present a brief review of recent work relevant to these questions along with the results of our investigations of spatial sensitivity in cat auditory cortex. Together, they strongly suggest that space is represented in a distributed manner, that response timing (notably first-spike latency) is a critical information-bearing feature of cortical responses, and that neurons in various cortical fields differ in both their degree of spatial sensitivity and their manner of spatial coding. The posterior auditory field (PAF), in particular, is well suited for the distributed coding of space and encodes sound-source locations partly by modulations of response latency. Studies of neurons recorded simultaneously from PAF and/or A1 reveal that spatial information can be decoded from the relative spike times of pairs of neurons - particularly when responses are compared between the two fields - thus partially compensating for the absence of an absolute reference to stimulus onset.

20.
Sexual selection and signal detection theories predict that females should be selective in their responses to mating signals in mate choice, while the response of males to signals in male competition should be less selective. The neural processes underlying this behavioural sex difference remain obscure. Differences in behavioural selectivity could result from differences in how sensitive sensory systems are to mating signals, distinct thresholds in motor areas regulating behaviour, or sex differences in selectivity at a gateway relaying sensory information to motor systems. We tested these hypotheses in frogs using the expression of egr-1 to quantify the neural responses of each sex to mating signals. We found that egr-1 expression in a midbrain auditory region was elevated in males in response to both conspecific and heterospecific calls, whereas in females, egr-1 induction occurred only in response to conspecific signals. This differential neural selectivity mirrored the sex differences in behavioural responsiveness to these stimuli. By contrast, egr-1 expression in lower brainstem auditory centres was not different in males and females. Our results support a model in which sex differences in behavioural selectivity arise from sex differences in the neural selectivity in midbrain areas relaying sensory information to the forebrain.
