Similar Articles
20 similar articles found (search time: 31 ms)
1.
Given the extraordinary ability of humans and animals to recognize communication signals over a background of noise, describing noise-invariant neural responses is critical not only to pinpoint the brain regions that mediate our robust perceptions but also to understand the neural computations that perform these tasks and the underlying circuitry. Although invariant neural responses, such as rotation-invariant face cells, are well described in the visual system, high-level auditory neurons that can represent the same behaviorally relevant signal in a range of listening conditions have yet to be discovered. Here we found neurons in a secondary area of the avian auditory cortex that exhibit noise-invariant responses in the sense that they responded with similar spike patterns to song stimuli presented in silence and over a background of naturalistic noise. By characterizing the neurons' tuning in terms of their responses to modulations in the temporal and spectral envelope of the sound, we then show that noise invariance is partly achieved by selectively responding to long sounds with sharp spectral structure. Finally, to demonstrate that such computations could explain noise invariance, we designed a biologically inspired noise-filtering algorithm that can be used to separate song or speech from noise. This novel noise-filtering method performs as well as other state-of-the-art de-noising algorithms and could be used in clinical or consumer-oriented applications. Our biologically inspired model also shows how high-level noise-invariant responses could be created from neural responses typically found in primary auditory cortex.
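The abstract does not specify the algorithm, but its core idea — keep slow temporal modulations with sharp spectral structure and discard fast, noise-like modulations — can be sketched as a modulation-domain filter on a log-spectrogram. The array sizes, cutoff, and noise level below are hypothetical, chosen only to illustrate the principle:

```python
import numpy as np

def modulation_filter(log_spec, keep_frac=0.25):
    """Keep only slow temporal modulations of a log-spectrogram
    (frequency x time); fast, noise-like modulations are removed.
    keep_frac is a hypothetical cutoff, expressed as a fraction of
    the temporal-modulation Nyquist rate."""
    mod = np.fft.fft2(log_spec)                # 2-D modulation spectrum
    t_mod = np.fft.fftfreq(log_spec.shape[1])  # cycles per time bin
    mask = (np.abs(t_mod) <= keep_frac * 0.5)[None, :]
    return np.real(np.fft.ifft2(mod * mask))

# toy example: one sustained, spectrally sharp component (song-like)
# buried in broadband noise with fast temporal modulations
rng = np.random.default_rng(0)
spec = np.zeros((64, 128))
spec[20, :] = 1.0
noise = rng.normal(0.0, 0.3, spec.shape)
cleaned = modulation_filter(spec + noise)
```

The sustained component survives almost untouched while most of the noise energy is removed — the qualitative behavior the abstract attributes to the noise-invariant neurons.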

2.
Previous research has shown that postnatal exposure to simple, synthetic sounds can affect the sound representation in the auditory cortex, as reflected by changes in the tonotopic map or in other relatively simple tuning properties, such as AM tuning. However, the functional implications for neural processing in the generation of ethologically based perception remain unexplored. Here we examined the effects of noise-rearing and social isolation on the neural processing of communication sounds, such as species-specific song, in the primary auditory cortex analog of adult zebra finches. Our electrophysiological recordings reveal that neural tuning to simple frequency-based synthetic sounds is initially established in all laminae independent of patterned acoustic experience; however, we provide the first evidence that early exposure to patterned sound statistics, such as those found in native sounds, is required for the subsequent emergence of neural selectivity for complex vocalizations, for shaping neural spiking precision in superficial and deep cortical laminae, and for creating efficient neural representations of song and a less redundant ensemble code in all laminae. Our study also provides the first causal evidence for ‘sparse coding’: when the statistics of the stimuli were changed during rearing, as in noise-rearing, the sparse or optimal representation of species-specific vocalizations disappeared. Taken together, these results imply that layer-specific differential development of the auditory cortex requires patterned acoustic input, and that a specialized and robust sensory representation of complex communication sounds in the auditory cortex requires a rich acoustic and social environment.

3.
Functional neuroimaging research provides detailed observations of the response patterns that natural sounds (e.g. human voices and speech, animal cries, environmental sounds) evoke in the human brain. The computational and representational mechanisms underlying these observations, however, remain largely unknown. Here we combine high spatial resolution (3 and 7 Tesla) functional magnetic resonance imaging (fMRI) with computational modeling to reveal how natural sounds are represented in the human brain. We compare competing models of sound representations and select the model that most accurately predicts fMRI response patterns to natural sounds. Our results show that the cortical encoding of natural sounds entails the formation of multiple representations of sound spectrograms with different degrees of spectral and temporal resolution. The cortex derives these multi-resolution representations through frequency-specific neural processing channels and through the combined analysis of the spectral and temporal modulations in the spectrogram. Furthermore, our findings suggest that a spectral-temporal resolution trade-off may govern the modulation tuning of neuronal populations throughout the auditory cortex. Specifically, our fMRI results suggest that neuronal populations in posterior/dorsal auditory regions preferably encode coarse spectral information with high temporal precision. Vice versa, neuronal populations in anterior/ventral auditory regions preferably encode fine-grained spectral information with low temporal precision. We propose that such a multi-resolution analysis may be crucially relevant for flexible and behaviorally relevant sound processing and may constitute one of the computational underpinnings of functional specialization in auditory cortex.
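The spectral-temporal resolution trade-off described here follows directly from the uncertainty principle of time-frequency analysis: a short analysis window gives fine temporal but coarse spectral resolution, and vice versa. A minimal illustration (the window lengths and test tone are arbitrary choices, not the study's stimuli):

```python
import numpy as np
from scipy.signal import spectrogram

fs = 16000
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 440 * t)          # 1 s, 440-Hz test tone

# short window: fine temporal / coarse spectral resolution
f_fast, t_fast, S_fast = spectrogram(x, fs, nperseg=64)
# long window: coarse temporal / fine spectral resolution
f_slow, t_slow, S_slow = spectrogram(x, fs, nperseg=1024)

peak_hz = f_slow[np.argmax(S_slow.mean(axis=1))]
```

The long-window spectrogram localizes the tone to within one narrow frequency bin but has few time frames; the short-window one is the reverse — the same trade-off the abstract attributes to posterior/dorsal vs. anterior/ventral populations.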

4.

Background

Previous work on the human auditory cortex has revealed areas specialized in spatial processing, but how the neurons in these areas represent the location of a sound source remains unknown.

Methodology/Principal Findings

Here, we performed a magnetoencephalography (MEG) experiment with the aim of revealing the neural code of auditory space implemented by the human cortex. In a stimulus-specific adaptation paradigm, realistic spatial sound stimuli were presented in pairs of adaptor and probe locations. We found that the attenuation of the N1m response depended strongly on the spatial arrangement of the two sound sources. These location-specific effects showed that sounds originating from locations within the same hemifield activated the same neuronal population regardless of the spatial separation between the sound sources. In contrast, sounds originating from opposite hemifields activated separate groups of neurons.

Conclusions/Significance

These results are highly consistent with a rate code of spatial location formed by two opponent populations, one tuned to locations in the left and the other to those in the right. This indicates that the neuronal code of sound source location implemented by the human auditory cortex is similar to that previously found in other primates.
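The proposed opponent-population rate code can be sketched with two broadly tuned channels whose sigmoidal tuning is steepest at the midline; the tuning shape and slope below are hypothetical, not fitted to the MEG data:

```python
import numpy as np

def opponent_rates(azimuth_deg, slope=0.05):
    """Rates of two opponent populations, one preferring the left
    and one the right hemifield (hypothetical sigmoidal tuning,
    steepest at the interaural midline)."""
    right = 1.0 / (1.0 + np.exp(-slope * azimuth_deg))
    return np.array([1.0 - right, right])   # [left pop, right pop]

def decode_hemifield(rates):
    return "right" if rates[1] > rates[0] else "left"
```

Two sources within the same hemifield (e.g. +30° and +60°) drive nearly identical population patterns, whereas sources in opposite hemifields drive distinct ones — matching the adaptation result reported above.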

5.
Neuronal responses in auditory cortex show a fascinating mixture of characteristics that span the range from almost perfect copies of physical aspects of the stimuli to extremely complex context-dependent responses. Fast, highly stimulus-specific adaptation and slower plastic mechanisms work together to constantly adjust neuronal response properties to the statistics of the auditory scene. Evidence with converging implications suggests that the neuronal activity in primary auditory cortex represents sounds in terms of auditory objects rather than in terms of invariant acoustic features.

6.
The representation of sound information in the central nervous system relies on the analysis of time-varying features in communication and other environmental sounds. How are auditory physiologists and theoreticians to choose an appropriate method for characterizing spectral and temporal acoustic feature representations in single neurons and neural populations? A brief survey of currently available scientific methods and their potential usefulness is given, with a focus on the strengths and weaknesses of using noise analysis techniques for approximating spectrotemporal response fields (STRFs). Noise analysis has been used to foster several conceptual advances in describing neural acoustic feature representation in a variety of species and auditory nuclei. STRFs have been used to quantitatively assess spectral and temporal transformations across mutually connected auditory nuclei, to identify neuronal interactions between spectral and temporal sound dimensions, and to compare linear vs. nonlinear response properties through state-dependent comparisons. We propose that noise analysis techniques used in combination with novel stimulus paradigms and parametric experiment designs will provide powerful means of exploring acoustic feature representations in the central nervous system.
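For a white-noise stimulus, the noise-analysis approach surveyed here reduces to the spike-triggered average: averaging the spectrogram segments that precede each spike recovers the STRF. A self-contained toy version with a known ground-truth filter (the stimulus sizes and threshold are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
n_f, n_t, lags = 8, 5000, 10
stim = rng.normal(size=(n_f, n_t))        # white-noise "spectrogram"

# ground-truth STRF: excitation at one frequency, short latency
true_strf = np.zeros((n_f, lags))
true_strf[3, 2] = 1.0

# threshold-linear spike generation from the filtered stimulus
drive = np.array([(stim[:, t - lags:t] * true_strf).sum()
                  for t in range(lags, n_t)])
spikes = drive > 1.0

# spike-triggered average approximates the STRF under white noise
sta = np.zeros((n_f, lags))
for i in np.flatnonzero(spikes):
    sta += stim[:, i:i + lags]
sta /= max(spikes.sum(), 1)
```

The STA peaks at the same frequency-lag bin as the true filter; with correlated natural stimuli, the same estimate would need the stimulus autocorrelation divided out, which is one of the weaknesses the survey discusses.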

7.
In mammalian auditory cortex, sound source position is represented by a population of broadly tuned neurons whose firing is modulated by sounds located at all positions surrounding the animal. Peaks of their tuning curves are concentrated at lateral positions, while their slopes are steepest at the interaural midline, allowing for the maximum localization accuracy in that area. These experimental observations contradict initial assumptions that the auditory space is represented as a topographic cortical map. It has been suggested that a “panoramic” code has evolved to match specific demands of the sound localization task. This work provides evidence suggesting that the properties of spatial auditory neurons identified experimentally follow from a general design principle: learning a sparse, efficient representation of natural stimuli. Natural binaural sounds were recorded and served as input to a hierarchical sparse-coding model. In the first layer, left and right ear sounds were separately encoded by a population of complex-valued basis functions which separated phase and amplitude. Both parameters are known to carry information relevant for spatial hearing. Monaural input converged in the second layer, which learned a joint representation of amplitude and interaural phase difference. Spatial selectivity of each second-layer unit was measured by exposing the model to natural sound sources recorded at different positions. The obtained tuning curves match well the tuning characteristics of neurons in the mammalian auditory cortex. This study connects neuronal coding of the auditory space with natural stimulus statistics and generates new experimental predictions. Moreover, the results presented here suggest that cortical regions with seemingly different functions may implement the same computational strategy: efficient coding.
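The paper's hierarchical complex-valued model is not reproduced here, but the underlying principle — inferring a sparse, efficient code for each input — can be sketched with plain L1-regularized sparse coding solved by ISTA. The dictionary, signal, and constants below are illustrative stand-ins:

```python
import numpy as np

def ista(D, x, lam=0.05, n_iter=300):
    """Sparse code a minimizing 0.5*||x - D a||^2 + lam*||a||_1
    via iterative shrinkage-thresholding (a generic stand-in for
    the paper's hierarchical sparse-coding model)."""
    L = np.linalg.norm(D, 2) ** 2            # Lipschitz constant
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        a = a - D.T @ (D @ a - x) / L        # gradient step
        a = np.sign(a) * np.maximum(np.abs(a) - lam / L, 0.0)
    return a

rng = np.random.default_rng(2)
D = rng.normal(size=(32, 64))
D /= np.linalg.norm(D, axis=0)               # unit-norm dictionary
a_true = np.zeros(64)
a_true[[5, 40]] = [1.0, -0.7]                # two active "causes"
x = D @ a_true                               # synthetic input signal
a_hat = ista(D, x)
```

The inferred code is sparse and concentrates on the generating atoms, which is the sense in which an efficient-coding model "explains" broadly tuned response properties: the tuning falls out of the learned dictionary rather than being hand-designed.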

8.
Much of what we know regarding the effect of stimulus repetition on neuroelectric adaptation comes from studies using artificially produced pure tones or harmonic complex sounds. Little is known about the neural processes associated with the representation of everyday sounds and how these may be affected by aging. In this study, we used real-life, meaningful sounds presented at various azimuth positions and found that auditory evoked responses peaking at about 100 and 180 ms after sound onset decreased in amplitude with stimulus repetition. This neural adaptation was greater in young than in older adults and was more pronounced when the same sound was repeated at the same location. Moreover, the P2 waves showed differential patterns of domain-specific adaptation when location and identity were repeated among young adults. Background noise decreased ERP amplitudes and modulated the magnitude of repetition effects on both the N1 and P2 amplitude, and the effects were comparable in young and older adults. These findings reveal an age-related difference in the neural processes associated with adaptation to meaningful sounds, which may relate to older adults’ difficulty in ignoring task-irrelevant stimuli.

9.
We have developed a sparse mathematical representation of speech that minimizes the number of active model neurons needed to represent typical speech sounds. The model learns several well-known acoustic features of speech such as harmonic stacks, formants, onsets and terminations, but we also find more exotic structures in the spectrogram representation of sound such as localized checkerboard patterns and frequency-modulated excitatory subregions flanked by suppressive sidebands. Moreover, several of these novel features resemble neuronal receptive fields reported in the Inferior Colliculus (IC), as well as auditory thalamus and cortex, and our model neurons exhibit the same tradeoff in spectrotemporal resolution as has been observed in IC. To our knowledge, this is the first demonstration that receptive fields of neurons in the ascending mammalian auditory pathway beyond the auditory nerve can be predicted based on coding principles and the statistical properties of recorded sounds.

10.
Schwabe L, Obermayer K. BioSystems 2002, 67(1-3): 239-244
Rapid adaptation is a prominent feature of biological neuronal systems. From a functional perspective the adaptation of neuronal properties, namely the input-output relation of sensory neurons, is usually interpreted as an adaptation of the sensory system to changing environments as characterized by their stimulus statistics. Here we argue that this interpretation is only applicable as long as the adaptation processes are slower than the time-scale at which the stimulus statistics change. We present a definition of optimality of a neuronal code which still captures the idea of efficient coding, but which can also explain rapid adaptation without referring to an adaptation to different sensory environments. Finally, we apply our new idea to a simple model of an orientation hypercolumn in the primary visual cortex and predict that the interactions between orientation columns should adapt at the time-scale of a single stimulus presentation.

11.
Sparse representation of sounds in the unanesthetized auditory cortex
How do neuronal populations in the auditory cortex represent acoustic stimuli? Although sound-evoked neural responses in the anesthetized auditory cortex are mainly transient, recent experiments in the unanesthetized preparation have emphasized subpopulations with other response properties. To quantify the relative contributions of these different subpopulations in the awake preparation, we have estimated the representation of sounds across the neuronal population using a representative ensemble of stimuli. We used cell-attached recording with a glass electrode, a method for which single-unit isolation does not depend on neuronal activity, to quantify the fraction of neurons engaged by acoustic stimuli (tones, frequency modulated sweeps, white-noise bursts, and natural stimuli) in the primary auditory cortex of awake head-fixed rats. We find that the population response is sparse, with stimuli typically eliciting high firing rates (>20 spikes/second) in less than 5% of neurons at any instant. Some neurons had very low spontaneous firing rates (<0.01 spikes/second). At the other extreme, some neurons had driven rates in excess of 50 spikes/second. Interestingly, the overall population response was well described by a lognormal distribution, rather than the exponential distribution that is often reported. Our results represent, to our knowledge, the first quantitative evidence for sparse representations of sounds in the unanesthetized auditory cortex. Our results are compatible with a model in which most neurons are silent much of the time, and in which representations are composed of small dynamic subsets of highly active neurons.
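The two headline numbers — fewer than 5% of neurons above 20 spikes/s at any instant, and a lognormal rather than exponential rate distribution — are easy to reproduce with a synthetic population. The lognormal parameters below are hypothetical, chosen only to mimic that regime, not fitted to the recordings:

```python
import numpy as np

rng = np.random.default_rng(3)
# hypothetical population: log firing rates roughly normal
rates = rng.lognormal(mean=-0.5, sigma=1.5, size=10_000)  # spikes/s

frac_active = np.mean(rates > 20.0)   # fraction firing >20 spikes/s
```

Most simulated neurons sit at very low rates while a small, heavy tail fires intensely — the "small dynamic subsets of highly active neurons" picture, and a signature (median far below mean) that distinguishes a lognormal from a light-tailed distribution.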

12.
Immediate early genes (IEGs) are widely used as markers to delineate neuronal circuits because they show fast and transient expression induced by various behavioral paradigms. In this study, we investigated the expression of the IEGs c-fos and Arc in the auditory cortex of the mouse after auditory cued fear conditioning using quantitative polymerase chain reaction and microarray analysis. To test for the specificity of the IEG induction, we included several control groups that allowed us to test for factors other than associative learning to sounds that could lead to an induction of IEGs. We found that both c-fos and Arc showed strong and robust induction after auditory fear conditioning. However, we also observed increased expression of both genes in any control paradigm that involved shocks, even when no sounds were presented. Using mRNA microarrays and comparing the effect of the various behavioral paradigms on mRNA expression levels, we did not find genes being selectively upregulated in the auditory fear conditioned group. In summary, our results indicate that the use of IEGs to identify neuronal circuits involved specifically in processing of sound cues in the fear conditioning paradigm can be limited by the effects of the aversive unconditional stimulus and that activity levels in a particular primary sensory cortical area can be strongly influenced by stimuli mediated by other modalities.

13.
The present study used an optical imaging paradigm to investigate plastic changes in the auditory cortex induced by fear conditioning, in which a sound (conditioned stimulus, CS) was paired with an electric foot-shock (unconditioned stimulus, US). We report that, after conditioning, auditory information could be retrieved on the basis of an electric foot-shock alone. Before conditioning, the auditory cortex showed no response to a foot-shock presented in the absence of sound. In contrast, after conditioning, the mere presentation of a foot-shock without any sound succeeded in eliciting activity in the auditory cortex. Additionally, the magnitude of the optical response in the auditory cortex correlated with variation in the electrocardiogram (correlation coefficient: −0.68). In the normal conditioning group, the area activated in the auditory cortex in response to the electric foot-shock had a significantly larger cross-correlation with the tone response to the CS sound (12 kHz) than with the responses to non-CS sounds. These results suggest that integration of different sensory modalities in the auditory cortex was established by fear conditioning.

14.
Sensory systems adapt their neural code to changes in the sensory environment, often on multiple time scales. Here, we report a new form of adaptation in a first-order auditory interneuron (AN2) of crickets. We characterize the response of the AN2 neuron to amplitude-modulated sound stimuli and find that adaptation shifts the stimulus-response curves toward higher stimulus intensities, with a time constant of 1.5 s for adaptation and recovery. The spike responses were thus reduced for low-intensity sounds. We then address the question whether adaptation leads to an improvement of the signal's representation and compare the experimental results with the predictions of two competing hypotheses: infomax, which predicts that information conveyed about the entire signal range should be maximized, and selective coding, which predicts that "foreground" signals should be enhanced while "background" signals should be selectively suppressed. We test how adaptation changes the input-response curve when presenting signals with two or three peaks in their amplitude distributions, for which selective coding and infomax predict conflicting changes. By means of Bayesian data analysis, we quantify the shifts of the measured response curves and also find a slight reduction of their slopes. These decreases in slopes are smaller, and the absolute response thresholds are higher than those predicted by infomax. Most remarkably, and in contrast to the infomax principle, adaptation actually reduces the amount of encoded information when considering the whole range of input signals. The response curve changes are also not consistent with the selective coding hypothesis, because the amount of information conveyed about the loudest part of the signal does not increase as predicted but remains nearly constant. Less information is transmitted about signals with lower intensity.
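The reported adaptation — stimulus-response curves shifting toward higher intensities with a ~1.5 s time constant — can be sketched as a sigmoid whose midpoint relaxes toward the running mean intensity. This is an illustrative model with made-up slope and intensity values, not the paper's fitted one:

```python
import numpy as np

def adapt_response(intensity_db, dt=0.01, tau=1.5, slope=1.0):
    """Sigmoidal rate response whose midpoint x0 tracks the recent
    mean intensity with time constant tau (seconds)."""
    x0 = intensity_db[0]
    rates = np.empty_like(intensity_db)
    for i, x in enumerate(intensity_db):
        rates[i] = 1.0 / (1.0 + np.exp(-slope * (x - x0)))
        x0 += (x - x0) * dt / tau            # midpoint adapts
    return rates

# step from a quiet to a loud background (dB values are arbitrary)
stim = np.concatenate([np.full(500, 40.0), np.full(500, 70.0)])
r = adapt_response(stim)
```

The response is near-maximal right after the step and then declines as the curve shifts rightward — so, as in the data, sounds of low intensity relative to the adapted state end up evoking reduced responses.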

15.
Environmental sounds are highly complex stimuli whose recognition depends on the interaction of top-down and bottom-up processes in the brain. Their semantic representations were shown to yield repetition suppression effects, i.e., a decrease in activity during exposure to a sound that is perceived as belonging to the same source as a preceding sound. Making use of the high spatial resolution of 7T fMRI, we have investigated the representations of sound objects within early-stage auditory areas on the supratemporal plane. The primary auditory cortex was identified by means of tonotopic mapping and the non-primary areas by comparison with previous histological studies. Repeated presentations of different exemplars of the same sound source, as compared to the presentation of different sound sources, yielded significant repetition suppression effects within a subset of early-stage areas. This effect was found within the right hemisphere in primary areas A1 and R as well as two non-primary areas on the antero-medial part of the planum temporale, and within the left hemisphere in A1 and a non-primary area on the medial part of Heschl’s gyrus. Thus, several, but not all, early-stage auditory areas encode the meaning of environmental sounds.

16.
Ospeck M. PLoS ONE 2012, 7(3): e32384
Mammalian auditory nerve fibers (ANF) are remarkable for being able to encode a 40 dB, or hundred-fold, range of sound pressure levels into their firing rate. Most of the fibers are very sensitive and raise their quiescent spike rate by a small amount for a faint sound at auditory threshold. Then as the sound intensity is increased, they slowly increase their spike rate, with some fibers going up as high as ~300 Hz. In this way mammals are able to combine sensitivity and wide dynamic range. They are also able to discern sounds embedded within background noise. ANF receive efferent feedback, which suggests that the fibers are readjusted according to the background noise in order to maximize the information content of their auditory spike trains. Inner hair cells activate currents in the unmyelinated distal dendrites of ANF where sound intensity is rate-coded into action potentials. We model this spike generator compartment as an attenuator that employs fast negative feedback. Input current induces rapid and proportional leak currents. This way ANF are able to have a linear frequency to input current (f-I) curve that has a wide dynamic range. The ANF spike generator remains very sensitive to threshold currents, but efferent feedback is able to lower its gain in response to noise.
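An attenuator with fast, proportional negative feedback can be sketched as divisive gain control: input current recruits a proportional leak, and efferent feedback is modeled as an increase in the feedback strength k. The functional form and all constants below are illustrative assumptions, not the paper's model:

```python
import numpy as np

def anf_rate(i_in, gain=300.0, k=1.0):
    """Spike-generator rate under fast proportional negative
    feedback: the input current i_in recruits a proportional leak,
    giving divisive gain control (rate stays below gain/k)."""
    return gain * i_in / (1.0 + k * i_in)

currents = np.logspace(0, 2, 50)          # hundred-fold input range
rates_quiet = anf_rate(currents)
rates_noise = anf_rate(currents, k=2.0)   # efferent feedback: lower gain
```

The rate rises monotonically over the whole hundred-fold input range without hard saturation, and raising k compresses the curve at every intensity — the qualitative role the abstract assigns to efferent feedback in background noise.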

17.
The auditory cortex
The division of the auditory cortex into various fields, functional aspects of these fields, and neuronal coding in the primary auditory cortical field (AI) are reviewed with stress on features that may be common to mammals. On the basis of 14 topographies and clustered distributions of neuronal response characteristics in the primary auditory cortical field, a hypothesis is developed of how a certain complex acoustic pattern may be encoded in an equivalent spatial activity pattern in AI, generated by time-coordinated firing of groups of neurons. The auditory cortex, demonstrated specifically for AI, appears to perform sound analysis by synthesis, i.e. by combining spatially distributed coincident or time-coordinated neuronal responses. The dynamics of sounds and the plasticity of cortical responses are considered as a topic for research. Accepted: 25 July 1997

18.
A set of impulsive transient signals has been synthesized for earphone delivery whose waveform and amplitude spectra, measured at the eardrum, mimic those of sounds arriving from a free-field source. The complete stimulus set forms a "virtual acoustic space" (VAS) for the cat. VAS stimuli are delivered via calibrated earphones sealed into the external meatus in cats under barbiturate anesthesia. Neurons recorded extracellularly in primary (AI) auditory cortex exhibit sensitivity to the direction of sound in VAS. The aggregation of effective sound directions forms a virtual space receptive field (VSRF). At about 20 dB above minimal threshold, VSRFs recorded in otherwise quiet and anechoic space fall into categories based on spatial dimension and location. The size, shape and location of VSRFs remain stable over many hours of recording and are found to be shaped by excitatory and inhibitory interactions of activity arriving from the two ears. Within the VSRF, response latency and strength vary systematically with stimulus direction. In an ensemble of such neurons these functional gradients provide information about stimulus direction, which closely accounts for a human listener's spatial acuity. Raising stimulus intensity, introducing continuous background noise or presenting a conditioning stimulus all influence the extent of the VSRF but leave intact the gradient structure of the field. These and other findings suggest that such functional gradients in VSRFs of ensembles of AI neurons are instrumental in coding sound direction and robust enough to overcome interference from competing environmental sounds.

19.
Categorical perception is a process by which a continuous stimulus space is partitioned to represent discrete sensory events. Early experience has been shown to shape categorical perception and enlarge cortical representations of experienced stimuli in the sensory cortex. The present study examines the hypothesis that enlargement in cortical stimulus representations is a mechanism of categorical perception. Perceptual discrimination and identification behaviors were analyzed in model auditory cortices that incorporated sound exposure-induced plasticity effects. The model auditory cortex with over-representations of specific stimuli exhibited categorical perception behaviors for those specific stimuli. These results indicate that enlarged stimulus representations in the sensory cortex may be a mechanism for categorical perceptual learning.

20.
Selectively attending to task-relevant sounds whilst ignoring background noise is one of the most amazing feats performed by the human brain. Here, we studied the underlying neural mechanisms by recording magnetoencephalographic (MEG) responses of 14 healthy human subjects while they performed a near-threshold auditory discrimination task vs. a visual control task of similar difficulty. The auditory stimuli consisted of notch-filtered continuous noise masker sounds, and of 1020-Hz target tones occasionally replacing 1000-Hz standard tones of 300-ms duration that were embedded at the center of the notches, the widths of which were parametrically varied. As a control for masker effects, tone-evoked responses were additionally recorded without the masker sound. Selective attention to tones significantly increased the amplitude of the onset M100 response at 100 ms to the standard tones in the presence of the masker sounds, especially with notches narrower than the critical band. Further, attention modulated the sustained response most clearly in the 300–400 ms range from sound onset, with narrower notches than in the case of the M100, thus selectively reducing the masker-induced suppression of the tone-evoked response. Our results show evidence of a multiple-stage filtering mechanism of sensory input in the human auditory cortex: 1) one at early (100 ms) latencies bilaterally in posterior parts of the secondary auditory areas, and 2) adaptive filtering of attended sounds from the task-irrelevant background masker at longer latency (300 ms) in more medial auditory cortical regions, predominantly in the left hemisphere, enhancing the processing of near-threshold sounds.
