Similar Articles
 20 similar articles found (search time: 15 ms)
1.
The processing of species-specific communication signals in the auditory system is an important aspect of animal behavior and is crucial for social interactions, reproduction, and survival. This article reviews the neuronal mechanisms underlying the processing of communication signals in the higher centers of the auditory system (the inferior colliculus, IC; medial geniculate body, MGB; and auditory cortex, AC), with particular attention to the guinea pig. The selectivity of neuronal responses for individual calls in these auditory centers is usually low: most neurons respond to calls as well as to artificial sounds, and the coding of complex sounds in the central auditory nuclei appears to be based on the representation of temporal and spectral features of acoustical stimuli in neural networks. Neuronal response patterns in the IC reliably match the sound envelope for calls consisting of one or more short impulses, but do not exactly fit the envelope of long calls. The main spectral peaks are also represented by neuronal firing rates in the IC. Compared with the IC, response patterns in the MGB and AC represent the sound envelope less precisely, especially for longer calls. The spectral representation is poorer for low-frequency calls but not for broad-band calls. The emotional content of a call may influence neuronal responses in the auditory pathway, as can be demonstrated by stimulation with time-reversed calls or by measurements performed under different levels of anesthesia. Investigating the principles of the neural coding of species-specific vocalizations offers keys to understanding the neural mechanisms underlying human speech perception.

2.
Reduction of information redundancy in the ascending auditory pathway
Information processing by a sensory system is reflected in the changes in stimulus representation along its successive processing stages. We measured information content and stimulus-induced redundancy in the neural responses to a set of natural sounds in three successive stations of the auditory pathway: the inferior colliculus (IC), the auditory thalamus (medial geniculate body, MGB), and the primary auditory cortex (A1). Information about stimulus identity was somewhat reduced in single A1 and MGB neurons relative to single IC neurons, whether information was measured using spike counts, latency, or temporal spiking patterns; most of this difference, however, was due to differences in firing rates. On the other hand, IC neurons were substantially more redundant than A1 and MGB neurons, and IC redundancy was largely related to frequency selectivity. Redundancy reduction may be a generic organizing principle of neural systems, allowing for easier readout of the identity of complex stimuli in A1 relative to the IC.
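The information and redundancy measures referred to in this abstract can be illustrated with a minimal sketch. The Python snippet below is not the authors' estimator; it simply shows a plug-in estimate of the mutual information between stimulus identity and single-neuron spike counts on synthetic data (stimulus set, trial counts and firing rates are all invented). Redundancy between two neurons could then be assessed in the same spirit as I(S;R1) + I(S;R2) - I(S;R1,R2).

```python
import numpy as np

def mutual_information(stimulus_ids, spike_counts):
    """Plug-in estimate of I(stimulus; spike count) in bits from paired samples."""
    joint = {}
    for s, c in zip(stimulus_ids, spike_counts):
        joint[(s, c)] = joint.get((s, c), 0) + 1
    n = len(stimulus_ids)
    p_joint = {k: v / n for k, v in joint.items()}
    p_s, p_c = {}, {}
    for (s, c), p in p_joint.items():
        p_s[s] = p_s.get(s, 0) + p
        p_c[c] = p_c.get(c, 0) + p
    return sum(p * np.log2(p / (p_s[s] * p_c[c])) for (s, c), p in p_joint.items())

# Synthetic example: 4 natural sounds, 50 trials each, Poisson spike counts
rng = np.random.default_rng(0)
stimuli = np.repeat(np.arange(4), 50)
rates = np.array([2, 5, 9, 14])          # hypothetical mean counts per stimulus
counts = rng.poisson(rates[stimuli])
print(f"I(stim; count) ~ {mutual_information(stimuli, counts):.2f} bits")
```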

3.
For humans and animals, the ability to discriminate speech and conspecific vocalizations is an important function of the auditory system. To reveal the underlying neural mechanisms, many electrophysiological studies have investigated the responses of the auditory cortex to conspecific vocalizations in monkeys. The data suggest that vocalizations may be processed hierarchically along an anterior/ventral stream from the primary auditory cortex (A1) to the ventral prefrontal cortex. To date, the organization of vocalization processing has not been well investigated in the auditory cortex of other mammals. In this study, we examined the spike activity of single neurons in two early auditory cortical regions at different anteroposterior locations, the anterior auditory field (AAF) and the posterior auditory field (PAF), in awake cats passively listening to forward and backward conspecific calls (meows) and human vowels. We found that neural response patterns in PAF were more complex and had longer latencies than those in AAF. Selectivity for different vocalizations based on mean firing rate was low in both AAF and PAF and did not differ significantly between them; however, more vocalization information was transmitted when the temporal response profiles were considered, and the maximum information transmitted by PAF neurons was higher than that transmitted by AAF neurons. Discrimination accuracy based on the activity of an ensemble of PAF neurons was also better than that of AAF neurons. Our results suggest that AAF and PAF are similar with regard to which vocalizations they represent but differ in how they represent them, and that there may be a complex processing stream between them.
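As a rough illustration of decoding vocalization identity from the temporal response profiles of a neural ensemble, the sketch below uses nearest-template classification on binned, synthetic population responses. The numbers of calls, neurons, trials and bins, and the response statistics, are assumptions for illustration, not the recordings or the decoder used in the study.

```python
import numpy as np

rng = np.random.default_rng(1)
n_calls, n_trials, n_bins, n_neurons = 4, 20, 40, 8  # hypothetical sizes

# Synthetic PSTH templates per (neuron, call); trials are noisy versions of them
templates = rng.gamma(2.0, 1.0, size=(n_neurons, n_calls, n_bins))

def simulate_trial(call):
    return rng.poisson(templates[:, call, :])        # shape (n_neurons, n_bins)

def classify(trial, train_means):
    """Nearest-template decoding on the concatenated population response."""
    d = [np.linalg.norm(trial.ravel() - train_means[c].ravel()) for c in range(n_calls)]
    return int(np.argmin(d))

train_means = np.array([np.mean([simulate_trial(c) for _ in range(n_trials)], axis=0)
                        for c in range(n_calls)])
test = [(c, simulate_trial(c)) for c in range(n_calls) for _ in range(n_trials)]
acc = np.mean([classify(r, train_means) == c for c, r in test])
print(f"ensemble decoding accuracy: {acc:.2f} (chance = {1 / n_calls:.2f})")
```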

4.
The numerical density of neurons in the ventral part of the cat medial geniculate body (vMGB) was calculated; 1 mm3 of vMGB tissue was found to contain 29,460 neurons. Six months after unilateral removal of the auditory cortex, the number of large (presumably thalamocortical) neurons in the ipsilateral vMGB was reduced on average by 78.1%, whereas the number of small neurons was reduced by only 10.7%.
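The two reported quantities are simple arithmetic, worked through in the snippet below; the sampled counts and volumes used here are invented so that the calculation reproduces the reported figures (29,460 neurons/mm3 and a 78.1% reduction).

```python
# Worked example of the density and percent-change arithmetic in the abstract.
# The raw counts/volumes below are invented; only the final figures come from the text.
neurons_counted = 1473          # hypothetical count in the sampled volume
sampled_volume_mm3 = 0.05       # hypothetical sampled volume
density = neurons_counted / sampled_volume_mm3
print(f"numerical density ~ {density:,.0f} neurons/mm^3")      # ~29,460 in the study

large_before, large_after = 1000, 219    # hypothetical large-neuron counts per unit volume
reduction = 100 * (large_before - large_after) / large_before
print(f"large (thalamocortical) neurons reduced by {reduction:.1f}%")  # 78.1% reported
```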

5.

Background

Male songbirds learn their songs from an adult tutor when they are young. A network of brain nuclei known as the ‘song system’ is the likely neural substrate for sensorimotor learning and production of song, but the neural networks involved in processing the auditory feedback signals necessary for song learning and maintenance remain unknown. Determining which regions show preferential responsiveness to the bird's own song (BOS) is of great importance because neurons sensitive to self-generated vocalisations could mediate this auditory feedback process. Neurons in the song nuclei and in a secondary auditory area, the caudal medial mesopallium (CMM), show selective responses to the BOS. The aim of the present study is to investigate the emergence of BOS selectivity within the network of primary auditory sub-regions in the avian pallium.

Methods and Findings

Using blood oxygen level-dependent (BOLD) fMRI, we investigated neural responsiveness to natural and manipulated self-generated vocalisations and compared the selectivity for BOS and conspecific song in different sub-regions of the thalamo-recipient area Field L. Zebra finch males were exposed to conspecific song, BOS and synthetic variations on BOS that differed in spectro-temporal and/or modulation phase structure. We found significant differences in the strength of BOLD responses between regions L2a, L2b and CMM, but no inter-stimulus differences within regions. In particular, we showed that the overall signal strength to song and synthetic variations thereof differed between two sub-regions of Field L2: zone L2a was significantly more activated than the adjacent sub-region L2b.

Conclusions

Based on our results, we suggest that, unlike the nuclei of the song system, sub-regions of the primary auditory pallium do not show selectivity for the BOS, but appear to show different levels of activity upon exposure to any sound according to their place in the auditory processing stream.

6.
Distributed coding of sound locations in the auditory cortex
Although the auditory cortex plays an important role in sound localization, that role is not well understood. In this paper, we examine the nature of spatial representation within the auditory cortex, focusing on three questions. First, are sound-source locations encoded by individual sharply tuned neurons or by activity distributed across larger neuronal populations? Second, do temporal features of neural responses carry information about sound-source location? Third, are any fields of the auditory cortex specialized for spatial processing? We present a brief review of recent work relevant to these questions along with the results of our investigations of spatial sensitivity in cat auditory cortex. Together, they strongly suggest that space is represented in a distributed manner, that response timing (notably first-spike latency) is a critical information-bearing feature of cortical responses, and that neurons in various cortical fields differ in both their degree of spatial sensitivity and their manner of spatial coding. The posterior auditory field (PAF), in particular, is well suited for the distributed coding of space and encodes sound-source locations partly by modulations of response latency. Studies of neurons recorded simultaneously from PAF and/or A1 reveal that spatial information can be decoded from the relative spike times of pairs of neurons, particularly when responses are compared between the two fields, thus partially compensating for the absence of an absolute reference to stimulus onset.
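A toy sketch of the last point: if one neuron's first-spike latency varies strongly with source azimuth and another's varies weakly, their latency difference carries location information without any reference to stimulus onset. The linear latency model, slopes and jitter below are assumptions for illustration only, not the recorded PAF/A1 data.

```python
import numpy as np

rng = np.random.default_rng(2)
azimuths = np.array([-80, -40, 0, 40, 80])           # degrees, hypothetical

def first_spike_latency(slope_ms_per_deg, base_ms, az, jitter_ms=1.0):
    """Toy latency model: latency shifts linearly with azimuth, plus jitter."""
    return base_ms + slope_ms_per_deg * az + rng.normal(0, jitter_ms)

def relative_latency(az):
    """Latency of a PAF-like neuron minus an A1-like neuron: no onset reference needed."""
    paf = first_spike_latency(0.10, 25.0, az)         # strong latency tuning (assumed)
    a1 = first_spike_latency(0.01, 15.0, az)          # weak latency tuning (assumed)
    return paf - a1

# Build per-azimuth templates, then decode single trials by nearest template
templates = {az: np.mean([relative_latency(az) for _ in range(100)]) for az in azimuths}

def decode(az_true):
    obs = relative_latency(az_true)
    return min(templates, key=lambda az: abs(templates[az] - obs))

acc = np.mean([decode(az) == az for az in rng.choice(azimuths, 500)])
print(f"pairwise relative-latency decoding accuracy: {acc:.2f}")
```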

7.

Background

A key aspect of representations for object recognition and scene analysis in the ventral visual stream is the spatial frame of reference, be it a viewer-centered, object-centered, or scene-based coordinate system. Coordinate transforms from retinocentric space to other reference frames involve combining neural visual responses with extraretinal postural information.

Methodology/Principal Findings

We examined whether such spatial information is available to anterior inferotemporal (AIT) neurons in the macaque monkey by measuring the effect of eye position on responses to a set of simple 2D shapes. We report, for the first time, a significant eye position effect in over 40% of recorded neurons with small gaze angle shifts from central fixation. Although eye position modulates responses, it does not change shape selectivity.

Conclusions/Significance

These data demonstrate that spatial information is available in AIT for the representation of objects and scenes within a non-retinocentric frame of reference. More generally, the availability of spatial information in AIT calls into question the classic dichotomy in visual processing that associates object shape processing with ventral structures such as AIT but places spatial processing in a separate anatomical stream projecting to dorsal structures.

8.
Neurons in the auditory cortex are believed to use temporal patterns of neural activity to accurately process auditory information, but the intrinsic neuronal mechanism underlying the control of auditory neural activity is not known. The slowly activating, persistent K+ channel, also called the M-channel and belonging to the Kv7 family, is known to be important in regulating subthreshold neural excitability and synaptic summation in neocortical and hippocampal pyramidal neurons. However, its functional role in the primary auditory cortex (A1) has never been characterized. In this study, we investigated the roles of M-channels in the neuronal excitability, short-term plasticity, and synaptic summation of A1 layer 2/3 regular-spiking pyramidal neurons using whole-cell current-clamp recordings in vitro. We found that blocking M-channels with the selective M-channel blocker XE991 significantly increased the excitability of A1 layer 2/3 pyramidal neurons. Furthermore, M-channels controlled intralaminar-evoked excitatory postsynaptic potentials (EPSPs): XE991 significantly increased EPSP amplitude, decreased the rate of short-term depression, and increased synaptic summation. These results suggest that M-channels are involved in controlling the spike output patterns and synaptic responses of A1 layer 2/3 pyramidal neurons, which has important implications for auditory information processing.

9.
The activity of 194 neurons was recorded in three subdivisions of the medial geniculate body (74 neurons in the ventral, 62 in the medial and 44 in the dorsal subdivision, i.e. vMGB, mMGB and dMGB) of guinea pigs anesthetized with ketamine-xylazine. The discharge properties of neurons were evaluated by means of peristimulus time histograms (PSTHs), interval histograms (INTHs) and auto-correlograms (ACGs). In the whole MGB, the most frequent PSTH responses to pure-tone stimuli were onset (43%) or chopper (32%); onset responses were mostly present in the vMGB, whereas chopper responses dominated in the dMGB. In the whole MGB, Poisson-like and bimodal INTHs were found in 46% and 40% of neurons, respectively; the mMGB showed fewer bimodal and more symmetrical INTH types. In the whole MGB, 60% of units had ACGs typical of short bursts (<100 ms), 23% of long bursts (>100 ms), and 15% of units fired without bursts. Neurons in the vMGB were characterized by short bursting, whereas those in the mMGB and dMGB showed more long-burst activity. The results demonstrate that the type of information processing in the vMGB, which belongs to the "primary" auditory system, differs from that in the two other subdivisions of the MGB.
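For readers unfamiliar with the three descriptors used here, the sketch below shows one way PSTHs, interspike-interval histograms and autocorrelograms can be computed from spike times; the synthetic spike trains and the bin widths are assumptions, not the study's parameters.

```python
import numpy as np

rng = np.random.default_rng(3)
# Synthetic spike times (s) over 50 repeated 1-s trials of a pure tone
trials = [np.sort(rng.uniform(0, 1.0, rng.poisson(30))) for _ in range(50)]

def psth(trials, bin_ms=5, dur_s=1.0):
    """Trial-averaged firing rate (spikes/s) per time bin."""
    edges = np.arange(0, dur_s + bin_ms / 1000, bin_ms / 1000)
    counts = sum(np.histogram(t, edges)[0] for t in trials)
    return counts / (len(trials) * bin_ms / 1000)

def interval_histogram(trials, bin_ms=2, max_ms=200):
    """Histogram of interspike intervals pooled across trials."""
    isis = np.concatenate([np.diff(t) * 1000 for t in trials])
    edges = np.arange(0, max_ms + bin_ms, bin_ms)
    return np.histogram(isis, edges)[0]

def autocorrelogram(trials, bin_ms=2, max_ms=200):
    """Histogram of all nonzero pairwise spike-time lags within each trial."""
    edges = np.arange(bin_ms, max_ms + bin_ms, bin_ms)
    acg = np.zeros(len(edges) - 1)
    for t in trials:
        lags = np.abs(t[:, None] - t[None, :]) * 1000
        acg += np.histogram(lags[lags > 0], edges)[0]
    return acg

print(psth(trials)[:10], interval_histogram(trials)[:10], autocorrelogram(trials)[:10])
```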

10.
We investigated the representation of four typical guinea pig vocalizations in the auditory cortex (AI) of anesthetized guinea pigs, with the aim of comparing cortical data to data already published for identical calls in subcortical structures, the inferior colliculus (IC) and medial geniculate body (MGB). Like subcortical neurons, cortical neurons typically responded to many calls with a response time-locked to one or more temporal elements of the calls. The neuronal response patterns in the AI correlated well with the temporal sound envelope of chirp (an isolated short phrase), but correlated less well in the case of chutter and whistle (longer calls) or purr (a call with a fast repetition rate of phrases). Neuronal rate vs. characteristic frequency profiles provided only a coarse representation of the calls' frequency spectra. A comparison between activity in the AI and that in subcortical structures showed a different transformation of the neuronal response patterns from the IC to the AI for individual calls: i) while the temporal representation of chirp remained unchanged, the representations of whistle and chutter were transformed at the thalamic level and the response to purr at the cortical level; ii) for the wideband calls (whistle, chirp) the rate representation of the call spectra was preserved in the AI and MGB at the level present in the IC, while for the low-frequency calls (chutter, purr) the representation was less precise in the AI and MGB than in the IC; iii) the difference in response strength to natural and time-reversed whistle was smaller in the AI than in the IC or MGB.
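The envelope comparison described above can be illustrated with a short sketch: extract a call's amplitude envelope (here via the Hilbert transform), bin it at PSTH resolution and correlate the two. The synthetic waveform and the envelope-following PSTH below are invented for illustration; this is not the authors' analysis pipeline.

```python
import numpy as np
from scipy.signal import hilbert, resample

rng = np.random.default_rng(4)
fs = 20000
t = np.arange(0, 0.5, 1 / fs)                        # 500-ms synthetic "call"
waveform = np.sin(2 * np.pi * 800 * t) * (0.5 + 0.5 * np.sin(2 * np.pi * 8 * t))

# Amplitude envelope via Hilbert transform, then downsample to PSTH resolution
envelope = np.abs(hilbert(waveform))
n_bins = 100                                         # 5-ms bins over 500 ms
envelope_binned = resample(envelope, n_bins)

# Synthetic PSTH that roughly follows the envelope plus Poisson variability
psth = rng.poisson(5 + 40 * np.clip(envelope_binned, 0, None))

r = np.corrcoef(envelope_binned, psth)[0, 1]
print(f"envelope-PSTH correlation: r = {r:.2f}")
```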

11.
Songbirds rely on auditory processing of natural communication signals for a number of social behaviors, including mate selection, individual recognition and the rare behavior of vocal learning: the ability to learn vocalizations through imitation of an adult model, rather than by instinct. Like mammals, songbirds possess a set of interconnected ascending and descending auditory brain pathways that process acoustic information and that are presumably involved in the perceptual processing of vocal communication signals. Most auditory areas studied to date are located in the caudomedial forebrain of the songbird and include the thalamo-recipient Field L (subfields L1, L2 and L3), the caudomedial and caudolateral mesopallium (CMM and CLM, respectively) and the caudomedial nidopallium (NCM). This review focuses on NCM, an auditory area previously proposed to be analogous to parts of the primary auditory cortex in mammals. Stimulation of songbirds with auditory stimuli drives vigorous electrophysiological responses and the expression of several activity-regulated genes in NCM. Interestingly, NCM neurons are tuned to species-specific songs and undergo some forms of experience-dependent plasticity in vivo. These activity-dependent changes may underlie long-term modifications in the functional performance of NCM and constitute a potential neural substrate for auditory discrimination. We end this review by discussing evidence that suggests that NCM may be a site of auditory memory formation and/or storage.

12.
While the neural circuitry and physiology of the auditory system are well studied among vertebrates, far less is known about how the auditory system interacts with other neural substrates to mediate behavioral responses to social acoustic signals. One species that has been the subject of intensive neuroethological investigation with regard to the production and perception of social acoustic signals is the plainfin midshipman fish, Porichthys notatus, in part because acoustic communication is essential to its reproductive behavior. Nesting male midshipman vocally court females by producing a long-duration advertisement call. Females localize males by their advertisement call, spawn and deposit all their eggs in their mate’s nest. As multiple courting males establish nests in close proximity to one another, the perception of another male’s call may modulate individual calling behavior in competition for females. We tested the hypothesis that nesting males exposed to the advertisement calls of other males would show elevated neural activity in auditory and vocal-acoustic brain centers, as well as differential activation of catecholaminergic neurons, compared to males exposed only to ambient noise. Experimental brains were double-labeled by immunofluorescence (-ir) for tyrosine hydroxylase (TH), an enzyme necessary for catecholamine synthesis, and cFos, an immediate-early gene product used as a marker for neural activation. Males exposed to other advertisement calls showed a significantly greater percentage of TH-ir cells colocalized with cFos-ir in the noradrenergic locus coeruleus and the dopaminergic periventricular posterior tuberculum, as well as increased numbers of cFos-ir neurons at several levels of the auditory and vocal-acoustic pathway. Increased activation of catecholaminergic neurons may serve to coordinate appropriate behavioral responses to male competitors. Additionally, these results implicate specific catecholaminergic neuronal groups in auditory-driven social behavior in fishes, consistent with a conserved function in social acoustic behavior across vertebrates.

13.

Background

Many situations involving animal communication are dominated by recurring, stereotyped signals. How do receivers optimally distinguish between frequently recurring signals and novel ones? Cortical auditory systems are known to be pre-attentively sensitive to short-term delivery statistics of artificial stimuli, but it is unknown if this phenomenon extends to the level of behaviorally relevant delivery patterns, such as those used during communication.

Methodology/Principal Findings

We recorded and analyzed complete auditory scenes of spontaneously communicating zebra finch (Taeniopygia guttata) pairs over a week-long period, and show that they can produce tens of thousands of short-range contact calls per day. Individual calls recur at time scales (median interval 1.5 s) matching those at which mammalian sensory systems are sensitive to recent stimulus history. Next, we presented to anesthetized birds sequences of frequently recurring calls interspersed with rare ones, and recorded, in parallel, action potential and local field potential responses in the medio-caudal auditory forebrain at 32 unique sites. Variation in call recurrence rate over natural ranges leads to widespread and significant modulation in the strength of neural responses. Such modulation is highly call-specific in secondary auditory areas, but not in the main thalamo-recipient, primary auditory area.

Conclusions/Significance

Our results support the hypothesis that pre-attentive neural sensitivity to short-term stimulus recurrence is involved in the analysis of auditory scenes at the level of delivery patterns of meaningful sounds. This may enable birds to efficiently and automatically distinguish frequently recurring vocalizations from other events in their auditory scene.

14.
The place theory proposed by Jeffress (1948) is still the dominant model of how the brain represents the movement of sensory stimuli between sensory receptors. According to the place theory, delays in signalling between neurons, dependent on the distances between them, compensate for time differences in the stimulation of sensory receptors. Hence the location of the neurons activated by the coincident arrival of multiple signals reports the stimulus movement velocity. Despite its generality, most evidence for the place theory has been provided by studies of the auditory system of auditory specialists such as the barn owl, and in the study of mammalian auditory systems the evidence is inconclusive. We ask to what extent the somatosensory systems of tactile specialists like rats and mice use distance-dependent delays between neurons to compute the motion of tactile stimuli between the facial whiskers (or 'vibrissae'). We present a model in which synaptic inputs evoked by whisker deflections arrive at neurons in layer 2/3 (L2/3) of somatosensory 'barrel' cortex at different times. The timing of synaptic inputs to each neuron depends on its location relative to sources of input in layer 4 (L4) that represent stimulation of each whisker. Constrained by the geometry and timing of projections from L4 to L2/3, the model can account for a range of experimentally measured responses to two-whisker stimuli. Consistent with those data, the responses of model neurons located between the barrels to paired stimulation of two whiskers are greater than the sum of the responses to either whisker input alone. The model predicts that for neurons located closer to either barrel these supralinear responses are tuned for longer inter-whisker stimulation intervals, yielding a topographic map of the inter-whisker deflection interval across the surface of L2/3. This map constitutes a neural place code for the relative timing of sensory stimuli.
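The core of the Jeffress-style place code can be written down in a few lines: an array of coincidence detectors, each combining one input delayed by a detector-specific amount with the other input, so that the identity (place) of the maximally activated detector reports the inter-input time difference. The sketch below uses a Gaussian coincidence window and illustrative delays; it is a schematic of the idea, not the paper's barrel-cortex model.

```python
import numpy as np

# Array of coincidence detectors; each prefers a different inter-input delay
detector_delays_ms = np.linspace(-2.0, 2.0, 21)

def coincidence_response(input_delta_ms, sigma_ms=0.3):
    """Response of each detector: maximal when its internal delay cancels the input delay."""
    mismatch = detector_delays_ms - input_delta_ms
    return np.exp(-0.5 * (mismatch / sigma_ms) ** 2)

for delta in (-1.5, 0.0, 0.8):
    resp = coincidence_response(delta)
    best = detector_delays_ms[np.argmax(resp)]
    print(f"input delay {delta:+.1f} ms -> peak at detector tuned to {best:+.1f} ms")
```

Because the readout is the position of the peak across the detector array, the code is a place code: stimulus timing is mapped onto a spatial map of activity.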

15.
Communication signals are important for social interactions and survival and are thought to receive specialized processing in the visual and auditory systems. Whereas the neural processing of faces by face clusters and face cells has been repeatedly studied [1-5], less is known about the neural representation of voice content. Recent functional magnetic resonance imaging (fMRI) studies have localized voice-preferring regions in the primate temporal lobe [6, 7], but the hemodynamic response cannot directly assess neurophysiological properties. We investigated the responses of neurons in an fMRI-identified voice cluster in awake monkeys, and here we provide the first systematic evidence for voice cells. "Voice cells" were identified, in analogy to "face cells," as neurons responding at least 2-fold more strongly to conspecific voices than to "nonvoice" sounds or heterospecific voices. Importantly, whereas face clusters are thought to contain high proportions of face cells [4] responding broadly to many faces [1, 2, 4, 5, 8-10], we found that voice clusters contain moderate proportions of voice cells. Furthermore, individual voice cells exhibit high stimulus selectivity. The results reveal the neurophysiological bases for fMRI-defined voice clusters in the primate brain and highlight potential differences in how the auditory and visual systems generate selective representations of communication signals.
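The 2-fold criterion used to define voice cells, together with a generic stimulus-selectivity index, can be stated explicitly. The sketch below applies them to hypothetical mean firing rates; the rates and the choice of sparseness index are assumptions for illustration, not the paper's exact analysis.

```python
import numpy as np

def is_voice_cell(r_conspecific, r_heterospecific, r_nonvoice, factor=2.0):
    """Criterion from the abstract: conspecific-voice response at least `factor`
    times stronger than both heterospecific-voice and nonvoice responses."""
    return (np.mean(r_conspecific) >= factor * np.mean(r_heterospecific) and
            np.mean(r_conspecific) >= factor * np.mean(r_nonvoice))

def sparseness(rates):
    """Simple selectivity index (0 = unselective, ~1 = responds to one stimulus)."""
    rates = np.asarray(rates, float)
    n = len(rates)
    return (1 - rates.mean() ** 2 / np.mean(rates ** 2)) / (1 - 1 / n)

# Hypothetical mean firing rates (spikes/s) for one neuron
conspecific = [18, 22, 30, 6, 4, 25]      # per conspecific voice stimulus
heterospecific = [5, 6, 4]
nonvoice = [3, 7, 5]
print("voice cell:", is_voice_cell(conspecific, heterospecific, nonvoice))
print(f"selectivity across conspecific voices: {sparseness(conspecific):.2f}")
```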

16.
Female choice plays a critical role in the evolution of male acoustic displays, yet there is limited information on the neurophysiological basis of female songbirds' auditory recognition systems. To understand the neural mechanisms by which non-singing female songbirds perceive behaviorally relevant vocalizations, we recorded the responses of single neurons to acoustic stimuli in two auditory forebrain regions, the caudal lateral mesopallium (CLM) and Field L, in anesthetized adult female zebra finches (Taeniopygia guttata). Using various metrics of response selectivity, we found consistently higher response strengths for unfamiliar conspecific songs compared to tone pips and white noise in Field L but not in CLM. We also found that neurons in the left auditory forebrain had lower response strengths to synthetic sounds, leading to overall higher neural selectivity for song in neurons of the left hemisphere. This laterality effect is consistent with previously published behavioral data in zebra finches. Overall, our results from Field L parallel, and those from CLM contrast with, the patterns of response selectivity reported for conspecific songs over synthetic sounds in male zebra finches, suggesting some degree of sexual dimorphism in the auditory perception mechanisms of songbirds.

17.
Vocal communication is an important aspect of guinea pig behaviour and a large contributor to their acoustic environment. We postulated that some cortical areas have distinctive roles in processing conspecific calls. To test this hypothesis, we presented exemplars of all ten of their main adult vocalizations to urethane-anesthetised animals while recording from each of the eight areas of the auditory cortex. We demonstrate that the primary area (AI) and three adjacent auditory belt areas contain many units that give isomorphic responses to vocalizations: the ventrorostral belt (VRB), the transitional belt area (T) that is ventral to AI, and the small area (area S) that is rostral to AI. Area VRB has a denser representation of cells that discriminate among calls, using either a rate code or a temporal code, than any other area. Furthermore, 10% of VRB cells responded to communication calls but did not respond to stimuli such as clicks, broadband noise or pure tones. Area S has a sparse distribution of call-responsive cells that showed excellent temporal locking, 31% of which responded selectively to a single call. AI responded well to all vocalizations and was much more responsive to vocalizations than the adjacent dorsocaudal core area. Areas VRB, AI and S contained units with the highest levels of mutual information about the call stimuli. Area T also responded well to some calls but seems to be specialized for low sound levels. The two dorsal belt areas are comparatively unresponsive to vocalizations and contain little information about the calls. AI projects to areas S, VRB and T, so there may be both rostral and ventral pathways for processing vocalizations in the guinea pig.

18.
Species-specific vocalizations in mice have frequency-modulated (FM) components slower than the lower limit of FM direction selectivity in the core region of the mouse auditory cortex. To identify cortical areas selective for slow frequency modulation, we investigated tonal responses in the mouse auditory cortex using transcranial flavoprotein fluorescence imaging. To differentiate responses to frequency modulation from responses to stimuli at constant frequencies, we focused on transient fluorescence changes after direction reversal of temporally repeated and superimposed FM sweeps. We found that the ultrasonic field (UF) in the belt cortical region responded selectively to the direction reversal; the dorsoposterior field (DP) also responded weakly to the reversal. Within UF, no apparent tonotopic map was found, and the right UF responses were significantly larger in amplitude than the left UF responses. The half-max latency of responses to FM sweeps was shorter in UF than in the primary auditory cortex (A1) or the anterior auditory field (AAF). Tracer injection experiments in the functionally identified UF and DP confirmed that these two areas receive afferent inputs from the dorsal part of the medial geniculate nucleus (MG). Calcium imaging of UF neurons stained with fura-2 was performed using a two-photon microscope, demonstrating the presence of UF neurons selective for both the direction and the direction reversal of slow frequency modulation. These results strongly suggest a role for UF, and possibly DP, as cortical areas specialized for processing slow frequency modulation in mice.

19.
Liu X, Yan Y, Wang Y, Yan J. PLoS ONE 2010, 5(11): e14038

Background

Cortical neurons implement a highly frequency-specific modulation of subcortical nuclei that includes the cochlear nucleus. Anatomical studies show that corticofugal fibers terminating in the auditory thalamus and midbrain are mostly ipsilateral. In contrast, corticofugal fibers terminating in the cochlear nucleus are bilateral, which fits the needs of binaural hearing and improves hearing quality. This led to our hypothesis that corticofugal modulation of the initial neural processing of sound information from the contralateral and ipsilateral ears could be equivalent or coordinated at the first level of sound processing.

Methodology/Principal Findings

Using focal electrical stimulation of the auditory cortex and single-unit recording, this study examined corticofugal modulation of the ipsilateral cochlear nucleus. The same methods and procedures described in our previous study of corticofugal modulation of the contralateral cochlear nucleus were employed to allow direct comparison. We found that focal electrical stimulation of cortical neurons induced substantial changes in the response magnitude, response latency and receptive field of ipsilateral cochlear nucleus neurons. Cortical stimulation facilitated the auditory responses and shortened the response latencies of physiologically matched neurons, whereas it inhibited the auditory responses and lengthened the response latencies of unmatched neurons. Finally, cortical stimulation shifted the best frequencies of cochlear nucleus neurons towards those of the stimulated cortical neurons.

Conclusion

Our data suggest that cortical neurons enable a highly frequency-specific remodelling of sound information processing in the ipsilateral cochlear nucleus, in the same manner as in the contralateral cochlear nucleus.

20.
Processing of information in the cerebral cortex of primates is characterized by distributed representations and processing in neuronal assemblies rather than by detector neurons, cardinal cells or command neurons. Responses of individual neurons in sensory cortical areas contain limited and ambiguous information about common features of the natural environment, which is disambiguated by comparison with the responses of other, related neurons. Distributed representations are also able to represent the enormous complexity and variability of the natural environment through the large number of possible combinations of neurons that can engage in the representation of a stimulus or other content. A critical problem of distributed representation and processing is the superposition of several assemblies activated at the same time, since the interpretation and processing of a population code requires that the responses related to a single representation can be identified and distinguished from other, related activity. A possible mechanism that tags related responses is the synchronization of the neuronal responses of the same assembly with a precision in the millisecond range. This mechanism also supports the separate processing of distributed activity and dynamic assembly formation. Experimental evidence from electrophysiological investigations of non-human primates and human subjects shows that synchronous activity can be found in visual, auditory and motor areas of the cortex. Simultaneous recordings of neurons in the visual cortex indicate that individual neurons synchronize their activity with each other if they respond to the same stimulus, but not if they are part of different assemblies representing different contents. Furthermore, evidence for synchronous activity related to perception, expectation, memory and attention has been observed.
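Millisecond-range synchrony between simultaneously recorded neurons is commonly assessed with a cross-correlogram of their spike times; a central peak near zero lag indicates synchronized firing. The sketch below builds two synthetic spike trains that share part of their drive and computes such a correlogram; all parameters are illustrative and do not reproduce any specific dataset.

```python
import numpy as np

rng = np.random.default_rng(5)
dur_s, rate_hz = 10.0, 20.0

def poisson_train():
    return np.sort(rng.uniform(0, dur_s, int(dur_s * rate_hz)))

# Two neurons sharing part of their drive -> co-firing within ~1 ms
common = poisson_train()
spk_a = np.sort(np.concatenate([common + rng.normal(0, 0.001, common.size), poisson_train()]))
spk_b = np.sort(np.concatenate([common + rng.normal(0, 0.001, common.size), poisson_train()]))

def cross_correlogram(a, b, bin_ms=1.0, max_ms=50.0):
    """Histogram of spike-time differences (b - a) within +/- max_ms."""
    edges = np.arange(-max_ms, max_ms + bin_ms, bin_ms)
    lags = (b[None, :] - a[:, None]) * 1000.0
    lags = lags[np.abs(lags) <= max_ms]
    return np.histogram(lags, edges)[0], edges

ccg, edges = cross_correlogram(spk_a, spk_b)
centers = (edges[:-1] + edges[1:]) / 2
print(f"CCG peak at {centers[np.argmax(ccg)]:+.1f} ms lag "
      f"(a central peak indicates millisecond-scale synchrony)")
```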
