Similar references
20 similar references found.
1.
Chang TR, Chung PC, Chiu TW, Poon PW. Biosystems 2005;79(1-3):213-222
Sensitivity of central auditory neurons to frequency modulated (FM) sound is often characterized based on the spectro-temporal receptive field (STRF), which is generated by spike-triggered averaging of a random stimulus. Due to the inherent time variability of neural responses, this method erroneously represents response jitter as stimulus jitter in the STRF. To reveal the trigger features more clearly, we have implemented a method that minimizes this error. Neural spikes from the brainstem of urethane-anesthetized rats were first recorded in response to two sets of FM stimuli: (a) a random FM tone for the generation of STRFs and (b) a family of linear FM ramps for the determination of the FM 'trigger point'. Based on the first dataset, STRFs were generated using spike-triggered averaging. Individual modulating waveforms were then matched with respect to their mean waveform at time-windows of systematically varied length. A stable or optimal variance time profile was found at a particular window length. At this optimal window length, we performed delay adjustments. A marked sharpening of the FM bands in the STRF was found. Results were consistent with the FM 'trigger point' as estimated by the linear FM ramps. We concluded that the present approach of adjusting response jitter was effective in delineating FM trigger features in the STRF.
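To make the peri-spike averaging step concrete, here is a minimal Python sketch of spike-triggered averaging of a modulating waveform; the variable names, sampling rate argument, and window length are illustrative assumptions, not the authors' code.

```python
import numpy as np

def spike_triggered_average(mod_waveform, spike_times, fs, window_s=0.05):
    """Average the stimulus modulating waveform preceding each spike.

    mod_waveform : 1-D array, instantaneous modulation (e.g. FM frequency) sampled at fs
    spike_times  : spike times in seconds
    fs           : sampling rate of mod_waveform (Hz)
    window_s     : length of the pre-spike window (s)
    """
    win = int(window_s * fs)
    segments = []
    for t in spike_times:
        idx = int(t * fs)
        if idx >= win:                      # keep only spikes with a full pre-spike window
            segments.append(mod_waveform[idx - win:idx])
    segments = np.asarray(segments)
    return segments.mean(axis=0), segments  # the STA and the individual pre-spike segments
```

The jitter-adjustment step described in the abstract would then re-align each pre-spike segment to the mean segment within the chosen window (e.g. by cross-correlation) before re-averaging.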

2.
Receptive fields of single units in the auditory midbrain of anesthetized rats were studied using random FM-tone stimuli of narrow frequency ranges. Peri-spike averaging of the modulating waveform first produced a spectro-temporal receptive field (STRF). Combining STRFs obtained from the same unit at different frequency regions generated a composite receptive field covering a wider frequency range of 2 to 3 octaves. About 20% of the composite STRFs (26/122) showed a pattern of multiple bands that was not clear in the non-composite maps. Multiple bands in a given composite map were often oriented in the same direction (representing upward or downward FM ramps), separated at rather regular frequency intervals. They reflect multiple FM trigger features in the stimulus rather than repetitive firing to a single trigger feature. The results showed that the subcortical auditory pathways are capable of detecting multiple FM features, and such sensitivity could be useful in detecting the multiple harmonic FM bands present in vocalization sounds.

3.
In the auditory system, the stimulus-response properties of single neurons are often described in terms of the spectrotemporal receptive field (STRF), a linear kernel relating the spectrogram of the sound stimulus to the instantaneous firing rate of the neuron. Several algorithms have been used to estimate STRFs from responses to natural stimuli; these algorithms differ in their functional models, cost functions, and regularization methods. Here, we characterize the stimulus-response function of auditory neurons using a generalized linear model (GLM). In this model, each cell's input is described by: 1) a stimulus filter (STRF); and 2) a post-spike filter, which captures dependencies on the neuron's spiking history. The output of the model is given by a series of spike trains rather than instantaneous firing rate, allowing the prediction of spike train responses to novel stimuli. We fit the model by maximum penalized likelihood to the spiking activity of zebra finch auditory midbrain neurons in response to conspecific vocalizations (songs) and modulation limited (ml) noise. We compare this model to normalized reverse correlation (NRC), the traditional method for STRF estimation, in terms of predictive power and the basic tuning properties of the estimated STRFs. We find that a GLM with a sparse prior predicts novel responses to both stimulus classes significantly better than NRC. Importantly, we find that STRFs from the two models derived from the same responses can differ substantially and that GLM STRFs are more consistent between stimulus classes than NRC STRFs. These results suggest that a GLM with a sparse prior provides a more accurate characterization of spectrotemporal tuning than does the NRC method when responses to complex sounds are studied in these neurons.
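As a rough illustration of this model class (not the authors' implementation), the sketch below fits a Poisson GLM with a stimulus filter and a post-spike history filter by penalized maximum likelihood; a ridge penalty stands in for the sparse prior, and all variable names are assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def fit_poisson_glm(X_stim, X_hist, y, l2=1.0):
    """Fit a Poisson GLM: rate = exp(stim_filter . x + hist_filter . h + bias).

    X_stim : (T, n_stim) design matrix of lagged spectrogram values
    X_hist : (T, n_hist) design matrix of lagged spike counts (spiking history)
    y      : (T,) observed spike counts per time bin
    l2     : ridge penalty weight (a simple stand-in for the paper's sparse prior)
    """
    X = np.hstack([X_stim, X_hist, np.ones((len(y), 1))])

    def neg_penalized_loglik(w):
        eta = X @ w
        rate = np.exp(np.clip(eta, -20, 20))       # guard against overflow
        ll = np.sum(y * eta - rate)                # Poisson log-likelihood (up to a constant)
        return -ll + l2 * np.sum(w[:-1] ** 2)      # penalize all weights except the bias

    w0 = np.zeros(X.shape[1])
    res = minimize(neg_penalized_loglik, w0, method="L-BFGS-B")
    n_stim = X_stim.shape[1]
    return res.x[:n_stim], res.x[n_stim:-1], res.x[-1]   # STRF, post-spike filter, bias
```

Sampling spikes from the fitted conditional intensity would then yield the predicted spike trains used to evaluate predictive power.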

4.
Spectro-Temporal Receptive Fields (STRFs) were estimated from both multi-unit sorted clusters and high-gamma power responses in human auditory cortex. Intracranial electrophysiological recordings were used to measure responses to a random chord sequence of Gammatone stimuli. Traditional methods for estimating STRFs from single-unit recordings, such as spike-triggered averages, tend to be noisy and are less robust to other response signals such as local field potentials. We present an extension to recently advanced methods for estimating STRFs from generalized linear models (GLM). A new variant of regression using regularization that penalizes non-zero coefficients is described, which results in a sparse solution. The frequency-time structure of the STRF tends toward grouping in different areas of frequency-time, and we demonstrate that group sparsity-inducing penalties applied to GLM estimates of STRFs reduce the background noise while preserving the complex internal structure. The contribution of local spiking activity to the high-gamma power signal was factored out of the STRF using the GLM method, and this contribution was significant in 85 percent of the cases. Although the GLM methods have been used to estimate STRFs in animals, this study examines the detailed structure directly from auditory cortex in the awake human brain. We used this approach to identify an abrupt change in the best frequency of estimated STRFs along posteromedial-to-anterolateral recording locations along the long axis of Heschl’s gyrus. This change correlates well with a proposed transition from core to non-core auditory fields previously identified using the temporal response properties of Heschl’s gyrus recordings elicited by click-train stimuli.
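One way a group-sparsity penalty of the kind described above can enter a GLM fit is through block soft-thresholding inside a proximal-gradient loop. The sketch below shows only the penalty and its proximal step, with a hypothetical grouping of STRF coefficients into time-frequency blocks; it is not the authors' estimator.

```python
import numpy as np

def group_l21_penalty(strf, groups):
    """Group-sparsity (L2,1) penalty: the sum of L2 norms over predefined coefficient groups.

    strf   : 1-D array of STRF coefficients (flattened frequency x lag matrix)
    groups : list of index arrays, one per time-frequency block
    """
    return sum(np.linalg.norm(strf[g]) for g in groups)

def prox_group_l21(strf, groups, step):
    """Block soft-thresholding: the proximal step for the L2,1 penalty, used inside a
    proximal-gradient fit of a penalized GLM.  Whole blocks shrink to exactly zero,
    which suppresses background noise while keeping structured regions intact."""
    out = strf.copy()
    for g in groups:
        nrm = np.linalg.norm(out[g])
        out[g] = 0.0 if nrm <= step else out[g] * (1.0 - step / nrm)
    return out
```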

5.
6.
Encoding properties of sensory neurons are commonly modeled using linear finite impulse response (FIR) filters. For the auditory system, the FIR filter is instantiated in the spectro-temporal receptive field (STRF), often in the framework of the generalized linear model. Despite widespread use of the FIR STRF, numerous formulations for linear filters are possible that require many fewer parameters, potentially permitting more efficient and accurate model estimates. To explore these alternative STRF architectures, we recorded single-unit neural activity from auditory cortex of awake ferrets during presentation of natural sound stimuli. We compared performance of >1000 linear STRF architectures, evaluating their ability to predict neural responses to a novel natural stimulus. Many were able to outperform the FIR filter. Two basic constraints on the architecture led to the improved performance: (1) factorization of the STRF matrix into a small number of spectral and temporal filters, and (2) low-dimensional parameterization of the factorized filters. The best parameterized model was able to outperform the full FIR filter in both primary and secondary auditory cortex, despite requiring fewer than 30 parameters, about 10% of the number required by the FIR filter. After accounting for noise from finite data sampling, these STRFs were able to explain an average of 40% of A1 response variance. The simpler models permitted more straightforward interpretation of sensory tuning properties. They also showed greater benefit from incorporating nonlinear terms, such as short-term plasticity, that provide theoretical advances over the linear model. Architectures that minimize parameter count while maintaining maximum predictive power provide insight into the essential degrees of freedom governing auditory cortical function. They also maximize statistical power available for characterizing additional nonlinear properties that limit current auditory models.
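A minimal sketch of the factorization idea, assuming a full FIR STRF has already been estimated: the rank-constrained approximation below separates it into a few spectral and temporal filters via the SVD, which is one simple way to realize the parameter reduction described above (the paper's own parameterized fitting procedure may differ).

```python
import numpy as np

def factorize_strf(strf_matrix, rank=1):
    """Approximate a full (n_freq x n_lag) STRF by a small number of separable components.

    Returns spectral filters (n_freq x rank) and temporal filters (rank x n_lag) whose
    product is the best rank-'rank' approximation in the least-squares sense.
    """
    U, s, Vt = np.linalg.svd(strf_matrix, full_matrices=False)
    spectral = U[:, :rank] * s[:rank]   # scaled spectral filters
    temporal = Vt[:rank, :]             # temporal filters
    return spectral, temporal

# A rank-1 factorization reduces n_freq * n_lag free parameters to n_freq + n_lag,
# in the spirit of the parameter reduction described in the abstract.
```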

7.
Spectro-temporal properties of auditory cortex neurons have been extensively studied with artificial sounds, but it is still unclear whether they help in understanding neuronal responses to communication sounds. Here, we directly compared spectro-temporal receptive fields (STRFs) obtained from the same neurons using both artificial stimuli (dynamic moving ripples, DMRs) and natural stimuli (conspecific vocalizations) that were matched in terms of spectral content, average power and modulation spectrum. On a population of auditory cortex neurons exhibiting reliable tuning curves when tested with pure tones, significant STRFs were obtained for 62% of the cells with vocalizations and 68% with DMRs. However, for many cells with significant vocalization-derived STRFs (STRFvoc) and DMR-derived STRFs (STRFdmr), the BF, latency, bandwidth and global STRF shape differed more than what would be predicted by spiking responses simulated by a linear model based on a non-homogeneous Poisson process. Moreover, STRFvoc predicted neural responses to vocalizations more accurately than STRFdmr predicted neural responses to DMRs, despite similar spike-timing reliability for both sets of stimuli. Cortical bursts, which potentially introduce nonlinearities in evoked responses, did not explain the differences between STRFvoc and STRFdmr. Altogether, these results suggest that the nonlinearity of auditory cortical responses makes it difficult to predict responses to communication sounds from STRFs computed from artificial stimuli.

8.

Background

Radial intra- and interlaminar connections form a basic microcircuit in primary auditory cortex (AI) that extracts acoustic information and distributes it to cortical and subcortical networks. Though the structure of this microcircuit is known, we do not know how the functional connectivity between layers relates to laminar processing.

Methodology/Principal Findings

We studied the relationships between functional connectivity and receptive field properties in this columnar microcircuit by simultaneously recording from single neurons in cat AI in response to broadband dynamic moving ripple stimuli. We used spectrotemporal receptive fields (STRFs) to estimate the relationship between receptive field parameters and the functional connectivity between pairs of neurons. Interlaminar connectivity obtained through cross-covariance analysis reflected a consistent pattern of information flow from thalamic input layers to cortical output layers. Connection strength and STRF similarity were greatest for intralaminar neuron pairs and in supragranular layers and weaker for interlaminar projections. Interlaminar connection strength co-varied with several STRF parameters: feature selectivity, phase locking to the stimulus envelope, best temporal modulation frequency, and best spectral modulation frequency. Connectivity properties and receptive field relationships differed for vertical and horizontal connections.

Conclusions/Significance

Thus, the mode of local processing in supragranular layers differs from that in infragranular layers. Specific connectivity patterns in the auditory cortex therefore shape the flow of information and constrain how spectrotemporal processing transformations progress in the canonical columnar auditory microcircuit.
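For reference, a minimal sketch of the binned spike-train cross-covariance underlying the connectivity analysis above; it assumes the spike trains are already binned, and the correction for stimulus-locked covariation (e.g. a shift predictor) that a real analysis would require is omitted.

```python
import numpy as np

def cross_covariance(spikes_a, spikes_b, max_lag):
    """Cross-covariance between two binned spike trains (counts per time bin).

    Subtracting the means removes the DC (mean-rate) component; a peak at positive
    lags suggests functional coupling from neuron a to neuron b.
    """
    a = spikes_a - spikes_a.mean()
    b = spikes_b - spikes_b.mean()
    lags = np.arange(-max_lag, max_lag + 1)
    cov = np.array([np.mean(a[max(0, -k):len(a) - max(0, k)] *
                            b[max(0, k):len(b) - max(0, -k)]) for k in lags])
    return lags, cov
```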

9.
The responses of cortical neurons are often characterized by measuring their spectro-temporal receptive fields (STRFs). The STRF of a cell can be thought of as a representation of its stimulus 'preference', but it is also a filter or 'kernel' that represents the best linear prediction of the response of that cell to any stimulus. A range of in vivo STRFs with varying properties have been reported in various species, although none in humans. Using a computational model, it has been shown that responses of ensembles of artificial STRFs, derived from limited sets of formative stimuli, preserve information about utterance class and prosody as well as the identity and sex of the speaker in a model speech classification system. In this work we help to put this idea on a biologically plausible footing by developing a simple model thalamo-cortical system built of conductance-based neurons and synapses, some of which exhibit spike-timing-dependent plasticity. We show that the neurons in such a model, when exposed to formative stimuli, develop STRFs with varying temporal properties exhibiting a range of heterotopic integration. These model neurons also, in common with neurons measured in vivo, exhibit a wide range of non-linearities; this deviation from linearity can be exposed by characterizing the difference between the measured response of each neuron to a stimulus, and the response predicted by the STRF estimated for that neuron. The proposed model, with its simple architecture, learning rule, and modest number of neurons (<1000), is suitable for implementation in neuromorphic analogue VLSI hardware and hence could form the basis of a developmental, real-time, neuromorphic sound classification system.

10.
So far, most studies of core auditory cortex (AC) have characterized the spectral and temporal tuning properties of cells in non-awake, anesthetized preparations. As experiments in awake animals are scarce, we here used dynamic spectral-temporal broadband ripples to study the properties of the spectrotemporal receptive fields (STRFs) of AC cells in awake monkeys. We show that AC neurons were typically most sensitive to low ripple densities (spectral) and low velocities (temporal), and that most cells were not selective for a particular spectrotemporal sweep direction. A substantial proportion of neurons preferred amplitude-modulated sounds (at zero ripple density) to dynamic ripples (at non-zero densities). The vast majority (>93%) of modulation transfer functions were separable with respect to spectral and temporal modulations, indicating that time and spectrum are independently processed in AC neurons. We also analyzed the linear predictability of AC responses to natural vocalizations on the basis of the STRF. We discuss our findings in the light of results obtained from the monkey midbrain inferior colliculus by comparing the spectrotemporal tuning properties and linear predictability of these two important auditory stages.
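A common way to quantify the spectral-temporal separability reported above is the first-singular-value index of the modulation transfer function; the sketch below is a generic implementation of that index, not the authors' analysis code.

```python
import numpy as np

def separability_index(mtf):
    """Quantify spectrotemporal separability of a modulation transfer function.

    mtf : 2-D array of response magnitude indexed by (ripple density, ripple velocity).
    Returns the fraction of power captured by the first singular value; values near 1
    indicate a separable (rank-1) MTF, i.e. independent spectral and temporal tuning.
    """
    s = np.linalg.svd(mtf, compute_uv=False)
    return s[0] ** 2 / np.sum(s ** 2)
```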

11.
The effects of nonlinear interactions between different sound frequencies on the responses of neurons in primary auditory cortex (AI) have only been investigated using two-tone paradigms. Here we stimulated with relatively dense, Poisson-distributed trains of tone pips (with frequency ranges spanning five octaves, 16 frequencies/octave, and mean rates of 20 or 120 pips/s), and examined within-frequency (or auto-frequency) and cross-frequency interactions in three types of AI unit responses by computing second-order “Poisson-Wiener” auto- and cross-kernels. Units were classified on the basis of their spectrotemporal receptive fields (STRFs) as “double-peaked”, “single-peaked” or “peak-valley”. Second-order interactions were investigated between the two bands of excitatory frequencies on double-peaked STRFs, between an excitatory band and various non-excitatory bands on single-peaked STRFs, and between an excitatory band and an inhibitory sideband on peak-valley STRFs. We found that auto-frequency interactions (i.e., those within a single excitatory band) were always characterized by a strong depression of (first-order) excitation that decayed with the interstimulus lag up to ~200 ms. That depression was weaker in cross-frequency compared to auto-frequency interactions for ~25% of double-peaked STRFs, evidence of “combination sensitivity” for the two bands. Non-excitatory and inhibitory frequencies (on single-peaked and peak-valley STRFs, respectively) typically weakly depressed the excitatory response at short interstimulus lags (<50 ms), but weakly facilitated it at longer lags (~50–200 ms). Both the depression and especially the facilitation were stronger for interactions with inhibitory frequencies rather than just non-excitatory ones. Finally, facilitation in single-peaked and peak-valley units decreased with increasing stimulus density. Our results indicate that the strong combination sensitivity and cross-frequency facilitation suggested by previous two-tone-paradigm studies are much less pronounced when using more temporally-dense stimuli.

12.
Analysis of sensory neurons' processing characteristics requires simultaneous measurement of presented stimuli and concurrent spike responses. The functional transformation from high-dimensional stimulus space to the binary space of spike and non-spike responses is commonly described with linear-nonlinear models, whose linear filter component describes the neuron's receptive field. From a machine learning perspective, this corresponds to the binary classification problem of discriminating spike-eliciting from non-spike-eliciting stimulus examples. The classification-based receptive field (CbRF) estimation method proposed here adapts a linear large-margin classifier to optimally predict experimental stimulus-response data and subsequently interprets learned classifier weights as the neuron's receptive field filter. Computational learning theory provides a theoretical framework for learning from data and guarantees optimality in the sense that the risk of erroneously assigning a spike-eliciting stimulus example to the non-spike class (and vice versa) is minimized. Efficacy of the CbRF method is validated with simulations and for auditory spectro-temporal receptive field (STRF) estimation from experimental recordings in the auditory midbrain of Mongolian gerbils. Acoustic stimulation is performed with frequency-modulated tone complexes that mimic properties of natural stimuli, specifically non-Gaussian amplitude distribution and higher-order correlations. Results demonstrate that the proposed approach successfully identifies correct underlying STRFs, even in cases where second-order methods based on the spike-triggered average (STA) do not. Applied to small data samples, the method is shown to converge on smaller amounts of experimental recordings and with lower estimation variance than the generalized linear model and recent information theoretic methods. Thus, CbRF estimation may prove useful for investigation of neuronal processes in response to natural stimuli and in settings where rapid adaptation is induced by experimental design.
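A schematic rendering of the CbRF idea, using scikit-learn's LinearSVC as the linear large-margin classifier; the original method's specific classifier, regularization, and preprocessing may differ, and the variable names are assumptions.

```python
import numpy as np
from sklearn.svm import LinearSVC

def cbrf_estimate(stim_segments, spike_labels, n_freq, n_lag, C=1.0):
    """Classification-based receptive field estimate.

    stim_segments : (n_samples, n_freq*n_lag) flattened spectrogram segments preceding each bin
    spike_labels  : (n_samples,) 1 for spike-eliciting segments, 0 otherwise
    The learned weight vector of the linear large-margin classifier is reshaped and
    interpreted as the STRF filter.
    """
    clf = LinearSVC(C=C)
    clf.fit(stim_segments, spike_labels)
    return clf.coef_.reshape(n_freq, n_lag)
```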

13.
Speech and other communication signals contain components of frequency and amplitude modulations (FM, AM) that often occur together. Auditory midbrain (or inferior colliculus, IC) is an important center for coding time-varying features of sounds. It remains unclear how IC neurons respond when FM and AM stimuli are both presented. Here we studied IC neurons in the urethane-anesthetized rats when animals were simultaneously stimulated with FM and AM tones. Of 122 units that were sensitive to the dual stimuli, the responses could be grossly divided into two types: one that resembled the respective responses to FM or AM stimuli presented separately ("simple" sensitivity, 45% of units), and another that appeared markedly different from their respective responses to FM or AM tones ("complex" sensitivity, 55%). These types of combinational sensitivities were further correlated with individual cell's frequency tuning pattern (response area) and with their common response pattern to FM and AM sounds. Results suggested that such combinational sensitivity could reflect local synaptic interactions on IC neurons and that the neural mechanisms could underlie more developed sensitivities to acoustic combinations found at the auditory cortex.

14.
Species-specific vocalizations in mice have frequency-modulated (FM) components slower than the lower limit of FM direction selectivity in the core region of the mouse auditory cortex. To identify cortical areas selective to slow frequency modulation, we investigated tonal responses in the mouse auditory cortex using transcranial flavoprotein fluorescence imaging. For differentiating responses to frequency modulation from those to stimuli at constant frequencies, we focused on transient fluorescence changes after direction reversal of temporally repeated and superimposed FM sweeps. We found that the ultrasonic field (UF) in the belt cortical region selectively responded to the direction reversal. The dorsoposterior field (DP) also responded weakly to the reversal. Regarding the responses in UF, no apparent tonotopic map was found, and the right UF responses were significantly larger in amplitude than the left UF responses. The half-max latency in responses to FM sweeps was shorter in UF compared with that in the primary auditory cortex (A1) or anterior auditory field (AAF). Tracer injection experiments in the functionally identified UF and DP confirmed that these two areas receive afferent inputs from the dorsal part of the medial geniculate nucleus (MG). Calcium imaging of UF neurons stained with fura-2 was performed using a two-photon microscope, and the presence of UF neurons that were selective to both direction and direction reversal of slow frequency modulation was demonstrated. These results strongly suggest a role for UF, and possibly DP, as cortical areas specialized for processing slow frequency modulation in mice.

15.
Spectro-temporal receptive fields (STRFs) have been widely used as linear approximations to the signal transform from sound spectrograms to neural responses along the auditory pathway. Their dependence on statistical attributes of the stimuli, such as sound intensity, is usually explained by nonlinear mechanisms and models. Here, we apply an efficient coding principle which has been successfully used to understand receptive fields in early stages of visual processing, in order to provide a computational understanding of the STRFs. According to this principle, STRFs result from an optimal tradeoff between maximizing the sensory information the brain receives, and minimizing the cost of the neural activities required to represent and transmit this information. Both terms depend on the statistical properties of the sensory inputs and the noise that corrupts them. The STRFs should therefore depend on the input power spectrum and the signal-to-noise ratio, which is assumed to increase with input intensity. We analytically derive the optimal STRFs when signal and noise are approximated as Gaussians. Under the constraint that they should be spectro-temporally local, the STRFs are predicted to adapt from being band-pass to low-pass filters as the input intensity reduces, or the input correlation becomes longer range in sound frequency or time. These predictions qualitatively match physiological observations. Our prediction as to how the STRFs should be determined by the input power spectrum could readily be tested, since this spectrum depends on the stimulus ensemble. The potential and limitations of the efficient coding principle are discussed.
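The toy computation below illustrates the qualitative prediction only; it is not the paper's analytical derivation. It combines a whitening term with a Wiener-style noise-suppression term, so that the resulting gain is band-pass-like at high SNR and shifts toward low-pass as the noise power grows.

```python
import numpy as np

def illustrative_optimal_gain(signal_power, noise_power):
    """Qualitative illustration of an efficient-coding filter shape (not the paper's formula).

    Combines a whitening term (1/sqrt(S)) with a Wiener-style noise-suppression term
    S/(S+N). At high SNR the whitening term dominates (band-pass behaviour); at low SNR
    the suppression term dominates and the filter becomes low-pass.
    """
    S = np.asarray(signal_power, dtype=float)
    N = float(noise_power)
    return (1.0 / np.sqrt(S)) * (S / (S + N))

# Example: a 1/f-like input spectrum evaluated at two noise levels (i.e. two SNRs)
freqs = np.arange(1, 65)
S = 1.0 / freqs
gain_high_snr = illustrative_optimal_gain(S, noise_power=1e-3)   # band-pass-like
gain_low_snr = illustrative_optimal_gain(S, noise_power=1e-1)    # shifts toward low-pass
```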

16.
Poon PW, Chiu TW. Biosystems 2000;58(1-3):229-237
Complex sounds, including human speech, contain time-varying signals like frequency modulation (FM) and amplitude modulation (AM) components. In spite of various attempts to characterize their neuronal coding in the mammalian auditory system, a unified view of their responses has not been reached. We compared FM and AM coding in terms of receptive space with reference to the input-output relationship of the underlying neural circuits. Using extracellular recording, single unit responses to a novel stimulus (i.e. random AM or FM tone) were obtained at the auditory midbrain of the anesthetized rat. Responses could be classified into three general types, corresponding to selective sensitivity to one of the following aspects of the modulation: (a) steady state, (b) dynamic state, or (c) steady-and-dynamic states. Such response typing was basically similar between FM and AM stimuli. Furthermore, the receptive space of each unit could be characterized in a three-dimensional Cartesian co-ordinate system formed by three modulation parameters: velocity, range and intensity. This representation applies to both FM and AM responses. We concluded that FM and AM coding are very similar at the auditory midbrain and likely involve similar neural mechanisms.

17.
The spectro-temporal receptive field (STRF) of an auditory neuron describes the linear relationship between the sound stimulus in a time-frequency representation and the neural response. Time-frequency representations of a sound in turn require a nonlinear operation on the sound pressure waveform, and many different forms for this non-linear transformation are possible. Here, we systematically investigated the effects of four factors in the non-linear step in the STRF model: the choice of logarithmic or linear filter frequency spacing, the time-frequency scale, stimulus amplitude compression and adaptive gain control. We quantified the goodness of fit of these different STRF models on data obtained from auditory neurons in the songbird midbrain and forebrain. We found that adaptive gain control and the correct stimulus amplitude compression scheme are paramount to correctly modelling neurons. The time-frequency scale and frequency spacing also affected the goodness of fit of the model but to a lesser extent, and the optimal values were stimulus dependent.
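To make the front-end choices concrete, here is a toy spectrogram front end with switchable frequency spacing and amplitude compression (adaptive gain control omitted); it is a stand-in for illustration, not the model-comparison pipeline used in the study, and the window parameters are arbitrary.

```python
import numpy as np
from scipy.signal import spectrogram

def compressed_spectrogram(sound, fs, log_freq=True, compression="log", n_bands=32):
    """Time-frequency front end with two of the nonlinear choices discussed above:
    frequency-axis spacing (log vs linear) and amplitude compression (log vs linear)."""
    f, t, power = spectrogram(sound, fs=fs, nperseg=256, noverlap=128)
    # choose band centre frequencies on a log or linear axis
    if log_freq:
        centres = np.geomspace(f[1], f[-1], n_bands)
    else:
        centres = np.linspace(f[1], f[-1], n_bands)
    # nearest-neighbour pooling of FFT bins into bands (a crude stand-in for a filterbank)
    idx = np.array([np.argmin(np.abs(f - c)) for c in centres])
    bands = power[idx, :]
    if compression == "log":
        bands = np.log(bands + 1e-10)
    return centres, t, bands
```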

18.
Responses of multi-units in the auditory cortex (AC) of unanaesthetized Mongolian gerbils to pure tones and to linearly frequency modulated (FM) sounds were analysed. Three types of responses to pure tones could be clearly distinguished on the basis of spectral tuning properties, response latencies and overall temporal response pattern. In response to FM sweeps, these three types discharged in a temporal pattern similar to tone responses. However, for all type-1 units the latencies of some phasic response components shifted systematically as a function of range and/or speed of modulation. Measurements of response latencies to FMs revealed that such responses were evoked whenever the modulation reached a particular instantaneous frequency (Fi). Effective Fi was: (1) independent of modulation range and speed, (2) always reached before the modulation arrived at a local maximum of the frequency response function (FRF) and consequently differed for downward and upward sweeps, and (3) correlated with the steepest slope of that FRF maximum. The three different types of units were found in discrete and separate fields or regions of the AC. It is concluded that gross temporal response properties are one of the key features distinguishing auditory cortical regions in the Mongolian gerbil.
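A small arithmetic sketch of how an effective instantaneous frequency (Fi) can be read off from the response latency to a linear FM sweep; the fixed 12-ms delay is purely an illustrative assumption, not a value from the study.

```python
def effective_instantaneous_frequency(f_start, f_end, sweep_duration, response_latency,
                                      fixed_delay=0.012):
    """Estimate the instantaneous frequency (Fi) that triggered a response to a linear FM sweep.

    Assumes a fixed neural conduction/response delay (here 12 ms, illustrative only):
    the sweep frequency at (response_latency - fixed_delay) is taken as the effective Fi.
    """
    rate = (f_end - f_start) / sweep_duration      # Hz per second, signed for up/down sweeps
    t_trigger = response_latency - fixed_delay
    return f_start + rate * max(0.0, min(t_trigger, sweep_duration))

# Example: a 4 -> 8 kHz upward sweep over 100 ms with a response at 45 ms
fi = effective_instantaneous_frequency(4000.0, 8000.0, 0.1, 0.045)
```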

19.
The representation of sound information in the central nervous system relies on the analysis of time-varying features in communication and other environmental sounds. How are auditory physiologists and theoreticians to choose an appropriate method for characterizing spectral and temporal acoustic feature representations in single neurons and neural populations? A brief survey of currently available scientific methods and their potential usefulness is given, with a focus on the strengths and weaknesses of using noise analysis techniques for approximating spectrotemporal response fields (STRFs). Noise analysis has been used to foster several conceptual advances in describing neural acoustic feature representation in a variety of species and auditory nuclei. STRFs have been used to quantitatively assess spectral and temporal transformations across mutually connected auditory nuclei, to identify neuronal interactions between spectral and temporal sound dimensions, and to compare linear vs. nonlinear response properties through state-dependent comparisons. We propose that noise analysis techniques used in combination with novel stimulus paradigms and parametric experiment designs will provide powerful means of exploring acoustic feature representations in the central nervous system.

20.
Spectral integration properties show topographical order in cat primary auditory cortex (AI). Along the iso-frequency domain, regions with predominantly narrowly tuned (NT) neurons are segregated from regions with more broadly tuned (BT) neurons, forming distinct processing modules. Despite their prominent spatial segregation, spectrotemporal processing has not been compared for these regions. We identified these NT and BT regions with broad-band ripple stimuli and characterized processing differences between them using both spectrotemporal receptive fields (STRFs) and nonlinear stimulus/firing rate transformations. The durations of STRF excitatory and inhibitory subfields were shorter and the best temporal modulation frequencies were higher for BT neurons than for NT neurons. For NT neurons, the bandwidth of excitatory and inhibitory subfields was matched, whereas for BT neurons it was not. Phase locking and feature selectivity were higher for NT neurons. Properties of the nonlinearities showed only slight differences across the bandwidth modules. These results indicate fundamental differences in spectrotemporal preferences, and thus distinct physiological functions, for neurons in BT and NT spectral integration modules. However, some global processing aspects, such as spectrotemporal interactions and nonlinear input/output behavior, appear to be similar for both neuronal subgroups. The findings suggest that spectral integration modules in AI differ in what specific stimulus aspects are processed, but they are similar in the manner in which stimulus information is processed.
