Similar Documents
20 similar documents found.
1.
Pitch perception is crucial for vocal communication, music perception, and auditory object processing in a complex acoustic environment. How pitch is represented in the cerebral cortex has long remained an unanswered question in auditory neuroscience. Several lines of evidence now point to a distinct non-primary region of auditory cortex in primates that contains a cortical representation of pitch.

2.
Previous research has shown that postnatal exposure to simple, synthetic sounds can affect the sound representation in the auditory cortex, as reflected by changes in the tonotopic map or in other relatively simple tuning properties, such as AM tuning. However, the functional implications of such changes for neural processing in the generation of ethologically based perception remain unexplored. Here we examined the effects of noise-rearing and social isolation on the neural processing of communication sounds, such as species-specific song, in the primary auditory cortex analog of adult zebra finches. Our electrophysiological recordings reveal that neural tuning to simple frequency-based synthetic sounds is initially established in all the laminae independent of patterned acoustic experience; however, we provide the first evidence that early exposure to patterned sound statistics, such as those found in native sounds, is required for the subsequent emergence of neural selectivity for complex vocalizations, for shaping neural spiking precision in superficial and deep cortical laminae, and for creating efficient neural representations of song and a less redundant ensemble code in all the laminae. Our study also provides the first causal evidence for 'sparse coding': when the statistics of the stimuli were changed during rearing, as in noise-rearing, the sparse or optimal representation of species-specific vocalizations disappeared. Taken together, these results imply that layer-specific differential development of the auditory cortex requires patterned acoustic input, and that a specialized and robust sensory representation of complex communication sounds in the auditory cortex requires a rich acoustic and social environment.

3.
The cochleotopic organization of the primary auditory cortex was studied by the evoked potentials method in cats anesthetized with pentobarbital. Two foci of maximal activity (dorsal and ventral) were found in the primary auditory cortex of 85% of animals during local electrical stimulation of different areas of the cochlea. Analysis of projection maps of the primary auditory cortex of the cats showed that different areas of the cochlea are represented in this region disproportionately. The basal portion projects to a larger cortical surface than the middle and apical portions together, evidence of unequal representation of different parts of the receptor apparatus of the cochlea in the primary auditory area. Considerable differences were observed in the arrangement of the projections of the cochlea in the primary auditory cortex of different animals. A. A. Bogomolets Institute of Physiology, Academy of Sciences of the Ukrainian SSR, Kiev. Translated from Neirofiziologiya, Vol. 11, No. 2, pp. 117–124, March–April, 1979.

4.
Luo H, Poeppel D. Neuron, 2007, 54(6): 1001–1010
How natural speech is represented in the auditory cortex constitutes a major challenge for cognitive neuroscience. Although many single-unit and neuroimaging studies have yielded valuable insights about the processing of speech and matched complex sounds, the mechanisms underlying the analysis of speech dynamics in human auditory cortex remain largely unknown. Here, we show that the phase pattern of theta band (4-8 Hz) responses recorded from human auditory cortex with magnetoencephalography (MEG) reliably tracks and discriminates spoken sentences and that this discrimination ability is correlated with speech intelligibility. The findings suggest that an approximately 200 ms temporal window (period of theta oscillation) segments the incoming speech signal, resetting and sliding to track speech dynamics. This hypothesized mechanism for cortical speech analysis is based on the stimulus-induced modulation of inherent cortical rhythms and provides further evidence implicating the syllable as a computational primitive for the representation of spoken language.
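The phase-tracking analysis described above depends on extracting the instantaneous phase of the theta band (4-8 Hz) from each recorded channel. The following sketch is not the authors' pipeline; it is only a minimal, hedged illustration of one common way to obtain such a phase time series (band-pass filtering followed by the Hilbert transform), with an assumed sampling rate and a synthetic toy signal.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def theta_phase(signal, fs, low=4.0, high=8.0, order=4):
    """Band-pass a single-channel signal in the theta band and return the
    instantaneous phase (radians) of the analytic signal over time."""
    nyq = fs / 2.0
    b, a = butter(order, [low / nyq, high / nyq], btype="band")
    theta = filtfilt(b, a, signal)      # zero-phase theta-band component
    return np.angle(hilbert(theta))     # phase pattern used for tracking

# Toy usage: a noisy 6 Hz oscillation sampled at 200 Hz (illustrative values).
fs = 200
t = np.arange(0, 3, 1 / fs)
x = np.sin(2 * np.pi * 6 * t) + 0.5 * np.random.default_rng(0).standard_normal(t.size)
phase = theta_phase(x, fs)
```

Phase patterns extracted this way can then be compared across sentences, for example by computing trial-to-trial phase consistency within each stimulus.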

5.
In mammalian auditory cortex, sound source position is represented by a population of broadly tuned neurons whose firing is modulated by sounds located at all positions surrounding the animal. The peaks of their tuning curves are concentrated at lateral positions, while their slopes are steepest at the interaural midline, allowing maximum localization accuracy in that region. These experimental observations contradict initial assumptions that auditory space is represented as a topographic cortical map. It has been suggested that a “panoramic” code evolved to match the specific demands of the sound localization task. This work provides evidence that the properties of spatial auditory neurons identified experimentally follow from a general design principle: learning a sparse, efficient representation of natural stimuli. Natural binaural sounds were recorded and served as input to a hierarchical sparse-coding model. In the first layer, left- and right-ear sounds were separately encoded by a population of complex-valued basis functions that separated phase and amplitude, both of which are known to carry information relevant for spatial hearing. Monaural input converged in the second layer, which learned a joint representation of amplitude and interaural phase difference. The spatial selectivity of each second-layer unit was measured by exposing the model to natural sound sources recorded at different positions. The obtained tuning curves match the tuning characteristics of neurons in the mammalian auditory cortex well. This study connects neuronal coding of auditory space with natural stimulus statistics and generates new experimental predictions. Moreover, the results presented here suggest that cortical regions with seemingly different functions may implement the same computational strategy: efficient coding.
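The first layer of the model above encodes each ear's signal with complex-valued basis functions that separate amplitude from phase, and the second layer combines amplitude with interaural phase difference. Learning such a dictionary is beyond a short snippet, but the hedged sketch below illustrates the simpler preprocessing idea of splitting a narrow-band binaural pair into per-ear amplitude envelopes and a wrapped interaural phase difference; the function name and toy signals are assumptions, not taken from the study.

```python
import numpy as np
from scipy.signal import hilbert

def amplitude_and_ipd(left, right):
    """Analytic-signal decomposition of a narrow-band binaural pair: returns
    each ear's amplitude envelope and the wrapped interaural phase difference
    (IPD), both of which carry information relevant for spatial hearing."""
    al, ar = hilbert(left), hilbert(right)
    ipd = np.angle(al * np.conj(ar))    # wrapped to (-pi, pi]
    return np.abs(al), np.abs(ar), ipd

# Toy usage: a 500 Hz tone that reaches the right ear 0.3 ms later and slightly
# attenuated, as a crude stand-in for a lateralized sound source.
fs, f = 44100, 500.0
t = np.arange(0, 0.05, 1 / fs)
left = np.sin(2 * np.pi * f * t)
right = 0.8 * np.sin(2 * np.pi * f * (t - 0.0003))
amp_l, amp_r, ipd = amplitude_and_ipd(left, right)
```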

6.
Sparse representation of sounds in the unanesthetized auditory cortex
How do neuronal populations in the auditory cortex represent acoustic stimuli? Although sound-evoked neural responses in the anesthetized auditory cortex are mainly transient, recent experiments in the unanesthetized preparation have emphasized subpopulations with other response properties. To quantify the relative contributions of these different subpopulations in the awake preparation, we have estimated the representation of sounds across the neuronal population using a representative ensemble of stimuli. We used cell-attached recording with a glass electrode, a method for which single-unit isolation does not depend on neuronal activity, to quantify the fraction of neurons engaged by acoustic stimuli (tones, frequency modulated sweeps, white-noise bursts, and natural stimuli) in the primary auditory cortex of awake head-fixed rats. We find that the population response is sparse, with stimuli typically eliciting high firing rates (>20 spikes/second) in less than 5% of neurons at any instant. Some neurons had very low spontaneous firing rates (<0.01 spikes/second). At the other extreme, some neurons had driven rates in excess of 50 spikes/second. Interestingly, the overall population response was well described by a lognormal distribution, rather than the exponential distribution that is often reported. Our results represent, to our knowledge, the first quantitative evidence for sparse representations of sounds in the unanesthetized auditory cortex. Our results are compatible with a model in which most neurons are silent much of the time, and in which representations are composed of small dynamic subsets of highly active neurons.
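The lognormal-versus-exponential claim above can be checked on one's own data by fitting both distributions to the measured firing rates and comparing likelihoods. The snippet below is a generic illustration on synthetic rates, not the authors' analysis; the distribution parameters are assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Synthetic stand-in for per-neuron driven firing rates (spikes/s): most
# neurons fire little while a small fraction fires strongly.
rates = rng.lognormal(mean=0.0, sigma=1.5, size=300)

# Fit a lognormal and an exponential (both with location fixed at zero).
ln_shape, _, ln_scale = stats.lognorm.fit(rates, floc=0)
_, exp_scale = stats.expon.fit(rates, floc=0)

# Compare total log-likelihoods; the larger value indicates the better fit.
ll_lognorm = stats.lognorm.logpdf(rates, ln_shape, 0, ln_scale).sum()
ll_expon = stats.expon.logpdf(rates, 0, exp_scale).sum()
print(f"lognormal logL = {ll_lognorm:.1f}, exponential logL = {ll_expon:.1f}")
```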

7.
The cochleotopic organization of the second auditory cortical area was investigated in cats anesthetized with pentobarbital by the evoked potentials method. Two independent representations of the cochlea were shown to exist in area AII: one in the dorsocaudal portion, the other in its ventrorostral portion. These projections of the cochlea differ in size and in the order of representation of its different parts. The dorsocaudal part of the auditory projection area of the cochlea, which extends over a distance of 2.6–2.8 mm from the center of the basal to the center of the apical focus, is arc-shaped. The order of arrangement of projections of different parts of the cochlea in this region of the auditory cortex coincides with that in the first auditory area, whereas the projection of the cochlea in the ventrorostral part of area AII, the length of which is 1.4–1.6 mm, has the opposite order of representation. The localization of projections of the cochlea in different cats shows considerable variability not only as regards anatomical topography of the auditory cortex, but also from one animal to another. The basal region of the cochlea was shown to project to a larger area of the cortex than the middle and apical portions taken together. It is suggested that the basal turn of the cochlea is functionally the most important for perception and primary analysis of auditory information. A. A. Bogomolets Institute of Physiology, Academy of Sciences of the Ukrainian SSR, Kiev. Translated from Neirofiziologiya, Vol. 12, No. 1, pp. 18–27, January–February, 1980.

8.
Multisensory integration was once thought to be the domain of brain areas high in the cortical hierarchy, with early sensory cortical fields devoted to unisensory processing of inputs from their given set of sensory receptors. More recently, a wealth of evidence documenting visual and somatosensory responses in auditory cortex, even as early as the primary fields, has changed this view of cortical processing. These multisensory inputs may serve to enhance responses to sounds that are accompanied by other sensory cues, effectively making them easier to hear, but may also act more selectively to shape the receptive field properties of auditory cortical neurons to the location or identity of these events. We discuss the new, converging evidence that multiplexing of neural signals may play a key role in informatively encoding and integrating signals in auditory cortex across multiple sensory modalities. We highlight some of the many open research questions that exist about the neural mechanisms that give rise to multisensory integration in auditory cortex, which should be addressed in future experimental and theoretical studies.

9.
Neurons in sensory cortices are often assumed to be feature detectors, computing simple and then successively more complex features out of the incoming sensory stream. These features are somehow integrated into percepts. Despite many years of research, a convincing candidate for such a feature in primary auditory cortex has not been found. We argue that feature detection is actually a secondary issue in understanding the role of primary auditory cortex. Instead, the major contribution of primary auditory cortex to auditory perception is in processing previously derived features on a number of different timescales. We hypothesize that, as a result, neurons in primary auditory cortex represent sounds in terms of auditory objects rather than in terms of feature maps. According to this hypothesis, primary auditory cortex has a pivotal role in the auditory system in that it generates the representation of auditory objects to which higher auditory centers assign properties such as spatial location, source identity, and meaning. Abbreviations: A1, primary auditory cortex; MGB, medial geniculate body; IC, inferior colliculus; STRF, spectrotemporal receptive field.

10.
We examined the auditory response properties of neurons in the medial geniculate body of unanesthetized little brown bats (Myotis lucifugus). The units' selectivities to stimulus frequency, amplitude and duration were not significantly different from those of neurons in the inferior colliculus (Condon et al. 1994), which provides the primary excitatory input to the medial geniculate body, or in the auditory cortex (Condon et al. 1997), which receives primary input from the medial geniculate body. However, in response to trains of unmodulated tone pulses, the upper cutoff frequency for time-locked discharges (64 ± 46.9 pulses per second, or pps) and the mean number of spikes per pulse (19.2 ± 12.2) were intermediate to those for the inferior colliculus and auditory cortex. Further, in response to amplitude-modulated pulse trains, medial geniculate body units displayed a degree of response facilitation that was intermediate to that of the inferior colliculus and auditory cortex (inferior colliculus: 1.32 ± 0.33; medial geniculate body: 1.75 ± 0.26; auditory cortex: 2.52 ± 0.96; P < 0.01). These data suggest that the representation of isolated tone pulses is not significantly altered along the colliculo-thalamo-cortical axis, but that the fidelity of representation of temporally patterned signals progressively degrades along this axis. The degradation in response fidelity allows the system to better extract the salient features in complex amplitude-modulated signals.

11.
Recent data on learning-related changes in animal and human auditory cortex indicate functions beyond mere stimulus representation and simple recognition memory for stimuli. Rather, auditory cortex seems to process and represent stimuli in a task-dependent fashion. This implies plasticity in neural processing, which can be observed at the level of single neuron firing and the level of spatiotemporal activity patterns in cortical areas. Auditory cortex is a structure in which behaviorally relevant aspects of stimulus processing are highly developed because of the fugitive nature of auditory stimuli.

12.
Early in auditory processing, neural responses faithfully reflect acoustic input. At higher stages of auditory processing, however, neurons become selective for particular call types, eventually leading to specialized regions of cortex that preferentially process calls at the highest auditory processing stages. We previously proposed that an intermediate step in how nonselective responses are transformed into call-selective responses is the detection of informative call features. But how neural selectivity for informative call features emerges from nonselective inputs, whether feature selectivity gradually emerges over the processing hierarchy, and how stimulus information is represented in nonselective and feature-selective populations remain open questions. In this study, using unanesthetized guinea pigs (GPs), a highly vocal and social rodent, as an animal model, we characterized the neural representation of calls in 3 auditory processing stages—the thalamus (ventral medial geniculate body (vMGB)), and thalamorecipient (L4) and superficial layers (L2/3) of primary auditory cortex (A1). We found that neurons in vMGB and A1 L4 did not exhibit call-selective responses and responded throughout the call durations. However, A1 L2/3 neurons showed high call selectivity, with about a third of neurons responding to only 1 or 2 call types. These A1 L2/3 neurons responded only to restricted portions of calls, suggesting that they were highly selective for call features. Receptive fields of these A1 L2/3 neurons showed complex spectrotemporal structures that could underlie their high call feature selectivity. Information theoretic analysis revealed that in A1 L4, stimulus information was distributed over the population and was spread out over the call durations. In contrast, in A1 L2/3, individual neurons showed brief bursts of high stimulus-specific information and conveyed high levels of information per spike. These data demonstrate that a transformation in the neural representation of calls occurs between A1 L4 and A1 L2/3, leading to the emergence of a feature-based representation of calls in A1 L2/3. Our data thus suggest that observed cortical specializations for call processing emerge in A1 and set the stage for further mechanistic studies.

A study of the neuronal representations elicited in guinea pigs by conspecific calls at different auditory processing stages reveals insights into where call-selective neuronal responses emerge; the transformation from nonselective to call-selective responses occurs in the superficial layers of the primary auditory cortex.
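One standard way to quantify information per spike from a trial-averaged response is the single-spike information of Brenner and colleagues, computed from the peristimulus time histogram. The sketch below is offered only as a generic, hedged illustration of that kind of measure; the study's own information-theoretic analysis may differ, and the example rates are invented.

```python
import numpy as np

def bits_per_spike(rate):
    """Single-spike information from a trial-averaged rate r(t) in equal-width
    bins: I = (1/T) * sum_t dt * (r/rbar) * log2(r/rbar), in bits per spike."""
    r = np.asarray(rate, dtype=float)
    ratio = r / r.mean()
    terms = np.zeros_like(ratio)
    pos = ratio > 0                       # bins with zero rate contribute 0
    terms[pos] = ratio[pos] * np.log2(ratio[pos])
    return terms.mean()                   # (dt / T) * sum reduces to the mean

# Toy PSTH (spikes/s): a brief, reliable burst on a low background yields a
# high value, as for the feature-selective A1 L2/3 responses described above.
psth = np.array([1, 1, 1, 40, 60, 30, 1, 1, 1, 1], dtype=float)
print(f"{bits_per_spike(psth):.2f} bits per spike")
```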

13.
Speech is the most interesting and one of the most complex sounds dealt with by the auditory system. The neural representation of speech needs to capture those features of the signal on which the brain depends in language communication. Here we describe the representation of speech in the auditory nerve and in a few sites in the central nervous system from the perspective of the neural coding of important aspects of the signal. The representation is tonotopic, meaning that the speech signal is decomposed by frequency and different frequency components are represented in different populations of neurons. Essential to the representation are the properties of frequency tuning and nonlinear suppression. Tuning creates the decomposition of the signal by frequency, and nonlinear suppression is essential for maintaining the representation across sound levels. The representation changes in central auditory neurons by becoming more robust against changes in stimulus intensity and more transient. However, it is probable that the form of the representation at the auditory cortex is fundamentally different from that at lower levels, in that stimulus features other than the distribution of energy across frequency are analysed.

14.
Tonotopic organization and frequency representation in the auditory cortex of Greater Horseshoe Bats were studied using multi-unit recordings. The auditory responsive cortical area can be divided into a primary and a secondary region on the basis of response characteristics, forming a core/belt structure. In the primary area, units with best frequencies in the range of echolocation signals are strongly overrepresented (Figs. 6–8). There are two separate large areas concerned with the processing of the two components of the echolocation signals. In one area, frequencies between the individual resting frequency and about 2 kHz above it are represented, which normally occur in the constant frequency (CF) part of the echoes (CF-area); in a second area, best frequencies between resting frequency and about 8 kHz below it are found (FM-area). In the CF-area, tonotopic organization differs from the usual mammalian scheme of dorso-ventral isofrequency slabs: here, isofrequency contours are arranged in a semicircular pattern. The representation of the cochlear partition (cochleotopic organization) was calculated. In the inferior colliculus and auditory cortex there is a disproportionate representation of the basilar membrane, a finding that contradicts the current opinion that frequency representation in the auditory system of Horseshoe Bats is determined only by the mechanical tuning properties of the basilar membrane. Response characteristics of single units were studied using pure-tone stimuli. Most units showed transient responses; in 25% of units, response characteristics depended on the combination of frequency and sound pressure level used. Frequency selectivity of units with best frequencies in the range of echolocation sounds is very high: Q-10dB values of up to 400 were found in a small frequency band just above resting frequency. Abbreviations: BF, best frequency; CF, constant frequency; FM, frequency modulated; MT, minimal threshold.
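Q-10dB, cited above, is a standard sharpness-of-tuning index: the unit's best frequency divided by the bandwidth of its threshold tuning curve measured 10 dB above minimum threshold. The sketch below only shows the arithmetic; the example frequencies are illustrative values chosen to give a quotient in the range reported above, not data from the study.

```python
def q10db(best_freq_khz, f_low_khz, f_high_khz):
    """Sharpness of frequency tuning: best frequency divided by the width of
    the tuning curve measured 10 dB above the unit's minimum threshold."""
    return best_freq_khz / (f_high_khz - f_low_khz)

# Illustrative numbers: a unit tuned just above the bat's resting frequency
# whose tuning curve is only 0.2 kHz wide 10 dB above threshold.
print(q10db(80.0, 79.9, 80.1))   # ~400, i.e. an extremely narrow filter
```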

15.
Anatomical studies propose that the primate auditory cortex contains more fields than have actually been functionally confirmed or described. Spatially resolved functional magnetic resonance imaging (fMRI) with carefully designed acoustic stimulation could be ideally suited to extend our understanding of the processing within these fields. However, after numerous experiments in humans, many auditory fields remain poorly characterized. Imaging the macaque monkey is of particular interest because this species has a richer set of anatomical and neurophysiological data with which to clarify the source of the imaged activity. We functionally mapped the auditory cortex of behaving and of anesthetized macaque monkeys with high-resolution fMRI. By optimizing our imaging and stimulation procedures, we obtained robust activity throughout auditory cortex using tonal and band-passed noise sounds. Then, by varying the frequency content of the sounds, spatially specific activity patterns were observed over this region. As a result, the activity patterns could be assigned to many auditory cortical fields, including those whose functional properties were previously undescribed. The results provide an extensive functional tessellation of the macaque auditory cortex and suggest that 11 fields contain neurons tuned to the frequency of sounds. This study provides functional support for a model in which three fields in primary auditory cortex are surrounded by eight neighboring “belt” fields in non-primary auditory cortex. The findings can now guide neurophysiological recordings in the monkey to expand our understanding of the processing within these fields. Additionally, this work will improve fMRI investigations of the human auditory cortex.

16.
Distributed coding of sound locations in the auditory cortex
Although the auditory cortex plays an important role in sound localization, that role is not well understood. In this paper, we examine the nature of spatial representation within the auditory cortex, focusing on three questions. First, are sound-source locations encoded by individual sharply tuned neurons or by activity distributed across larger neuronal populations? Second, do temporal features of neural responses carry information about sound-source location? Third, are any fields of the auditory cortex specialized for spatial processing? We present a brief review of recent work relevant to these questions along with the results of our investigations of spatial sensitivity in cat auditory cortex. Together, they strongly suggest that space is represented in a distributed manner, that response timing (notably first-spike latency) is a critical information-bearing feature of cortical responses, and that neurons in various cortical fields differ in both their degree of spatial sensitivity and their manner of spatial coding. The posterior auditory field (PAF), in particular, is well suited for the distributed coding of space and encodes sound-source locations partly by modulations of response latency. Studies of neurons recorded simultaneously from PAF and/or A1 reveal that spatial information can be decoded from the relative spike times of pairs of neurons, particularly when responses are compared between the two fields, thus partially compensating for the absence of an absolute reference to stimulus onset.
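The idea that relative spike timing between simultaneously recorded neurons carries spatial information can be made concrete with a toy decoding exercise: given first-spike latencies from a hypothetical PAF/A1 pair, classify azimuth from the latency difference alone. Everything below is synthetic and assumed (latency model, noise, classifier); it illustrates relative-timing decoding, not the authors' analysis.

```python
import numpy as np

rng = np.random.default_rng(0)
azimuths = np.array([-60, -20, 20, 60])   # degrees, hypothetical stimulus set
n_trials = 50
labels = np.repeat(np.arange(azimuths.size), n_trials)

# Synthetic first-spike latencies (ms): the PAF unit's latency shifts with
# azimuth while the A1 unit's stays roughly constant, so their difference is
# informative even without an absolute reference to stimulus onset.
lat_paf = 20 + 0.05 * azimuths[labels] + rng.normal(0, 1, labels.size)
lat_a1 = 15 + rng.normal(0, 1, labels.size)
diff = lat_paf - lat_a1                    # relative spike timing

# Nearest-class-mean decoding of azimuth from the latency difference.
train = rng.random(diff.size) < 0.5
means = np.array([diff[train & (labels == k)].mean() for k in range(azimuths.size)])
pred = np.abs(diff[~train, None] - means[None, :]).argmin(axis=1)
print(f"decoding accuracy from relative latency: {(pred == labels[~train]).mean():.2f}")
```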

17.
In stimulus-response-outcome learning, different regions in the cortico-basal ganglia network are progressively involved according to the stage of learning. However, the involvement of sensory cortex remains elusive, even though massive cortical projections to the striatum imply a significant role in this learning. Here we show that the global tonotopic representation in the auditory cortex changed progressively depending on the stage of training in auditory operant conditioning. At the early stage, tone-responsive areas mainly in the core cortex expanded, while both the core and belt cortices shrank at the late stage as behavior became conditioned. Taken together with previous findings, this progressive global plasticity from the core to the belt cortices suggests differentiated roles for these areas: the core cortex serves as a filter to better identify auditory objects for hierarchical computation within the belt cortex, while the belt stores auditory objects and affects decision making through direct projections to the limbic system and higher association cortex. Thus, the progressive plasticity in the present study reflects a shift from identification to storage of a behaviorally relevant auditory object, which is potentially associated with a habitual behavior.

18.
1. Frequency and space representation in the auditory cortex of the big brown bat, Eptesicus fuscus, were studied by recording responses of 223 neurons to acoustic stimuli presented in the bat's frontal auditory space. 2. The majority of the auditory cortical neurons were recorded at a depth of less than 500 microns with a response latency between 8 and 20 ms. They generally discharged phasically and had nonmonotonic intensity-rate functions. The minimum threshold (MT) of these neurons was between 8 and 82 dB sound pressure level (SPL). Half of the cortical neurons showed spontaneous activity. All 55 threshold curves are V-shaped and can be described as broad, intermediate, or narrow. 3. Auditory cortical neurons are tonotopically organized along the anteroposterior axis of the auditory cortex. High-frequency-sensitive neurons are located anteriorly and low-frequency-sensitive neurons posteriorly. An overwhelming majority of neurons were sensitive to a frequency range between 30 and 75 kHz. 4. When a sound was delivered from the response center of a neuron in the bat's frontal auditory space, the neuron had its lowest MT. When the stimulus amplitude was increased above the MT, the neuron responded to sound delivered within a defined spatial area. The response center was not always at the geometric center of the spatial response area. The latter also expanded with stimulus amplitude. High-frequency-sensitive neurons tended to have smaller spatial response areas than low-frequency-sensitive neurons. 5. Response centers of all 223 neurons were located between 0 degrees and 50 degrees in azimuth, 2 degrees up and 25 degrees down in elevation of the contralateral frontal auditory space. Response centers of auditory cortical neurons tended to move toward the midline and slightly downward with increasing best frequency. 6. Auditory space representation appears to be systematically arranged according to the tonotopic axis of the auditory cortex. Thus, the lateral space is represented posteriorly and the middle space anteriorly. Space representation, however, is less systematic in the vertical direction. 7. Auditory cortical neurons are columnarly organized. Thus, the BFs, MTs, threshold curves, azimuthal location of response centers, and auditory spatial response areas of neurons sequentially isolated from an orthogonal electrode penetration are similar.

19.
Perception of sound categories is an important aspect of auditory perception. The extent to which the brain’s representation of sound categories is encoded in specialized subregions or distributed across the auditory cortex remains unclear. Recent studies using multivariate pattern analysis (MVPA) of brain activations have provided important insights into how the brain decodes perceptual information. In the large existing literature on brain decoding using MVPA methods, relatively few studies have been conducted on multi-class categorization in the auditory domain. Here, we investigated the representation and processing of auditory categories within the human temporal cortex using high resolution fMRI and MVPA methods. More importantly, we considered decoding multiple sound categories simultaneously through multi-class support vector machine-recursive feature elimination (MSVM-RFE) as our MVPA tool. Results show that for all classifications the model MSVM-RFE was able to learn the functional relation between the multiple sound categories and the corresponding evoked spatial patterns and classify the unlabeled sound-evoked patterns significantly above chance. This indicates the feasibility of decoding multiple sound categories not only within but also across subjects. However, the across-subject variation affects classification performance more than the within-subject variation, as the across-subject analysis has significantly lower classification accuracies. Sound category-selective brain maps were identified based on multi-class classification and revealed distributed patterns of brain activity in the superior temporal gyrus and the middle temporal gyrus. This is in accordance with previous studies, indicating that information in the spatially distributed patterns may reflect a more abstract perceptual level of representation of sound categories. Further, we show that the across-subject classification performance can be significantly improved by averaging the fMRI images over items, because the irrelevant variations between different items of the same sound category are reduced and, in turn, the proportion of signals relevant to sound categorization increases.
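Multi-class SVM with recursive feature elimination (the MSVM-RFE mentioned above) can be approximated with standard tools: a linear SVM supplies per-voxel weights, RFE repeatedly discards the weakest voxels, and a classifier is then trained on the surviving pattern. The snippet below is a generic scikit-learn sketch on random data, not the authors' implementation; the array shapes, number of retained voxels, and elimination step are assumptions.

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.feature_selection import RFE
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.standard_normal((120, 500))   # trials x voxels (synthetic patterns)
y = np.repeat(np.arange(4), 30)       # four sound-category labels

# RFE drops 10% of voxels per iteration based on the linear SVM weights,
# then a fresh linear SVM classifies from the 50 surviving voxels.
decoder = make_pipeline(
    RFE(LinearSVC(max_iter=5000), n_features_to_select=50, step=0.1),
    LinearSVC(max_iter=5000),
)
scores = cross_val_score(decoder, X, y, cv=5)
print(scores.mean())                  # ~chance (0.25) on pure-noise data
```

Averaging trials that belong to the same item before classification, as described above, would simply amount to replacing rows of X by item-wise means before the cross-validation loop.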

20.
Several types and subtypes of vocalizations that have a behavioral impact on degu pups were identified. Among these, the complex “mothering call”, which is uttered exclusively by females, first during extensive nursing periods in the nest, is a candidate for filial learning. In 14C-2-fluoro-2-deoxyglucose (FDG) experiments, two-week-old pups raised by normal mothers showed higher metabolic activity in somatosensory frontoparietal and frontal cortex upon playback of a mothering call than pups raised by muted mothers. It is suggested that pups learn to associate the mothering call with close body contact with their mother early in life. In addition, the FDG representation of the call, of its components, and of tone and noise stimuli was studied in degu auditory cortex. Five fields and some aspects of tonotopic organization were identified. The mothering call activated all fields, but with a greater spatial extent of labeling in normally raised pups. A rostral field was activated by playback of the mothering call, noise, and two-tone sequences, but hardly by single-frequency tones or the narrow-band component of the mothering call.
