Similar Documents
20 similar documents found (search time: 62 ms)
1.
For humans and animals, the ability to discriminate speech and conspecific vocalizations is an important task of the auditory system. To reveal the underlying neural mechanism, many electrophysiological studies have investigated the neural responses of the auditory cortex to conspecific vocalizations in monkeys. The data suggest that vocalizations may be hierarchically processed along an anterior/ventral stream from the primary auditory cortex (A1) to the ventral prefrontal cortex. To date, the organization of vocalization processing has not been well investigated in the auditory cortex of other mammals. In this study, we examined the spike activities of single neurons in two early auditory cortical regions at different anteroposterior locations, the anterior auditory field (AAF) and the posterior auditory field (PAF), in awake cats passively listening to forward and backward conspecific calls (meows) and human vowels. We found that the neural response patterns in PAF were more complex and had longer latencies than those in AAF. The selectivity for different vocalizations based on mean firing rate was low in both AAF and PAF and did not differ significantly between them; however, more vocalization information was transmitted when the temporal response profiles were considered, and the maximum information transmitted by PAF neurons was higher than that by AAF neurons. Discrimination accuracy based on the activities of an ensemble of PAF neurons was also better than that of AAF neurons. Our results suggest that AAF and PAF are similar with regard to which vocalizations they represent but differ in how they represent them, and that there may be a complex processing stream between them.
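The decoding idea in this abstract — classifying which vocalization evoked a response from the temporal profile of spiking — can be sketched minimally as a template-matching classifier. Everything below (bin width, stimulus names, simulated spike times) is a hypothetical illustration, not the study's actual analysis:

```python
import numpy as np

def psth(spike_times, t_max=1.0, bin_ms=10):
    """Bin spike times (in seconds) into a PSTH count vector."""
    bins = np.arange(0.0, t_max + 1e-9, bin_ms / 1000.0)
    counts, _ = np.histogram(spike_times, bins=bins)
    return counts.astype(float)

def classify(trial, templates):
    """Assign a trial PSTH to the template it correlates with best."""
    scores = {name: np.corrcoef(trial, tmpl)[0, 1]
              for name, tmpl in templates.items()}
    return max(scores, key=scores.get)

# Toy data: two "vocalizations" evoking distinct temporal firing profiles.
rng = np.random.default_rng(0)
templates = {
    "meow": psth(rng.uniform(0.0, 0.3, 40)),   # early burst of spikes
    "vowel": psth(rng.uniform(0.6, 0.9, 40)),  # late burst of spikes
}
trial = psth(rng.uniform(0.0, 0.3, 35))        # an early-burst trial
pred = classify(trial, templates)
```

Note that a classifier based only on the mean firing rate would confuse these two stimuli (both have similar spike counts); the temporal profile is what carries the discriminative information, which is the abstract's point.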

2.
In sensory physiology, various system identification methods are used to formalize stimulus-response relationships. We applied the Volterra approach to characterize the input-output relationships of cells in the medial geniculate body (MGB) of an awake squirrel monkey. Intraspecific communication calls served as the inputs, and the corresponding evoked cellular responses as the outputs. One set of vocalizations was used to calculate the kernels of the transformation, and these kernels were then used to predict the responses of the cell to a different set of vocalizations. We found that it is possible to predict the response (PSTH) of MGB cells to natural vocalizations based on the envelopes of the spectral components of the vocalization. Some of the responses could be predicted by assuming a linear transformation function, whereas others required non-linear (second-order) kernels. These two modes of transformation, which are also reflected in a distinct spatial distribution of the linear vis-à-vis non-linear responding cells, apparently represent a new manifestation of parallel processing of auditory information.
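The first-order (linear) stage of the Volterra approach can be sketched as a regularized least-squares fit of a kernel mapping a stimulus envelope to a response prediction. This is a generic illustration with assumed names and toy data, not the paper's implementation:

```python
import numpy as np

def fit_linear_kernel(env, target, n_lags=20, ridge=1e-3):
    """Ridge least-squares estimate of a first-order (linear) Volterra
    kernel mapping a stimulus envelope to a response prediction."""
    T = len(env)
    # Design matrix whose k-th column is the envelope delayed by k bins.
    X = np.column_stack([np.r_[np.zeros(k), env[:T - k]] for k in range(n_lags)])
    w = np.linalg.solve(X.T @ X + ridge * np.eye(n_lags), X.T @ target)
    return w, X @ w

# Toy check: if the response really is a linearly filtered envelope,
# the recovered kernel should match the true one and predict well.
rng = np.random.default_rng(1)
env = rng.standard_normal(500)
true_k = np.exp(-np.arange(20) / 5.0)          # decaying "neural" filter
target = np.convolve(env, true_k)[:500]
w, pred = fit_linear_kernel(env, target)
r = np.corrcoef(pred, target)[0, 1]
```

Responses requiring second-order kernels, as reported here, are exactly those for which a fit like this leaves large systematic prediction error.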

3.
Research strategy in the auditory system has tended to parallel that in the visual system, where neurons have been shown to respond selectively to specific stimulus parameters. Auditory neurons have been shown to be sensitive to changes in acoustic parameters, but only rarely have neurons been reported that respond exclusively to a single biologically significant sound. Even at higher levels of the auditory system, very few cells have been found that could be described as "vocalization detectors." In addition, variability in responses to artificial sounds has been reported for auditory cortical neurons, similar to the response variability reported in the visual system. Recent evidence indicates that the responses of auditory cortical neurons to species-specific vocalizations can also be labile, varying in both strength and selectivity. This is especially true of the secondary auditory cortex. This variability, coupled with the lack of extreme specificity in the secondary auditory cortex, suggests that secondary cortical neurons are not well suited for the role of "vocalization detectors."

4.
The function of ultrasonic vocalizations (USVs) produced by mice (Mus musculus) is a topic of broad interest to many researchers. These USVs differ widely in spectrotemporal characteristics, suggesting different categories of vocalizations, although this has never been behaviorally demonstrated. Although electrophysiological studies indicate that neurons can discriminate among vocalizations at the level of the auditory midbrain, perceptual acuity for vocalizations has yet to be determined. Here, we trained CBA/CaJ mice using operant conditioning to discriminate between different vocalizations and between a spectrotemporally modified vocalization and its original version. Mice were able to discriminate between vocalization types and between manipulated vocalizations, with performance negatively correlating with spectrotemporal similarity. That is, discrimination performance was higher for dissimilar vocalizations and much lower for similar vocalizations. The behavioral data match previous neurophysiological results in the inferior colliculus (IC), using the same stimuli. These findings suggest that the different vocalizations could carry different meanings for the mice. Furthermore, the finding that behavioral discrimination matched neural discrimination in the IC suggests that the IC plays an important role in the perceptual discrimination of vocalizations.
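One simple way to quantify the spectrotemporal similarity that discrimination performance tracked is to correlate the magnitude spectrograms of two stimuli. The STFT parameters and toy FM-sweep stimuli below are illustrative assumptions, not the metric used in the study:

```python
import numpy as np

def stft_mag(x, win=256, hop=128):
    """Magnitude spectrogram via a plain framed FFT with a Hann window."""
    w = np.hanning(win)
    frames = [x[i:i + win] * w for i in range(0, len(x) - win + 1, hop)]
    return np.abs(np.fft.rfft(np.array(frames), axis=1)).T

def spec_similarity(a, b):
    """Pearson correlation between two equally sized spectrograms."""
    return np.corrcoef(a.ravel(), b.ravel())[0, 1]

# Toy stimuli: an upward FM sweep compared with itself and with its
# downward mirror image.
fs = 20000
t = np.arange(0, 0.2, 1 / fs)
up = np.sin(2 * np.pi * (1000 * t + 5000 * t ** 2))    # 1 -> 3 kHz
down = np.sin(2 * np.pi * (3000 * t - 5000 * t ** 2))  # 3 -> 1 kHz
s_same = spec_similarity(stft_mag(up), stft_mag(up))
s_diff = spec_similarity(stft_mag(up), stft_mag(down))
```

Under a metric like this, the behavioral result reads as: the lower the spectrogram correlation between two calls, the easier the discrimination.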

5.
The ability to recognize abstract features of voice during auditory perception is an intricate feat of human audition. For the listener, this occurs in near-automatic fashion to seamlessly extract complex cues from a highly variable auditory signal. Voice perception depends on specialized regions of auditory cortex, including superior temporal gyrus (STG) and superior temporal sulcus (STS). However, the nature of voice encoding at the cortical level remains poorly understood. We leverage intracerebral recordings across human auditory cortex during presentation of voice and nonvoice acoustic stimuli to examine voice encoding at the cortical level in 8 patient-participants undergoing epilepsy surgery evaluation. We show that voice selectivity increases along the auditory hierarchy from supratemporal plane (STP) to the STG and STS. Results show accurate decoding of vocalizations from human auditory cortical activity even in the complete absence of linguistic content. These findings show an early, less-selective temporal window of neural activity in the STG and STS followed by a sustained, strongly voice-selective window. Encoding models demonstrate divergence in the encoding of acoustic features along the auditory hierarchy, wherein STG/STS responses are best explained by voice category and acoustics, as opposed to acoustic features of voice stimuli alone. This is in contrast to neural activity recorded from STP, in which responses were accounted for by acoustic features. These findings support a model of voice perception that engages categorical encoding mechanisms within STG and STS to facilitate feature extraction.

Voice perception occurs via specialized networks in higher order auditory cortex, but how voice features are encoded remains a central unanswered question. Using human intracerebral recordings of auditory cortex, this study provides evidence for categorical encoding of voice.

6.
The zebra finch learns his song by memorizing a tutor's vocalization and then using auditory feedback to match his current vocalization to this memory, or template. The neural song system of adult and young birds responds to auditory stimuli, and exhibits selective tuning to the bird's own song (BOS). We have directly examined the development of neural tuning in the song motor system. We measured song system responses to vocalizations produced at various ages during sleep. We now report that the auditory response of the song motor system and motor output are linked early in song development. During sleep, playback of the current BOS induced a response in the song nucleus HVC during the song practice period, even when the song consisted of little more than repeated begging calls. Halfway through the sensorimotor period when the song was not yet in its final form, the response to BOS already exceeded that to all other auditory stimuli tested. Moreover, responses to previous, plastic versions of BOS decayed over time. This indicates that selective tuning to BOS mirrors the vocalization that the bird is currently producing.

7.
Spectro-temporal properties of auditory cortex neurons have been extensively studied with artificial sounds, but it is still unclear whether they help in understanding neuronal responses to communication sounds. Here, we directly compared spectro-temporal receptive fields (STRFs) obtained from the same neurons using both artificial stimuli (dynamic moving ripples, DMRs) and natural stimuli (conspecific vocalizations) that were matched in terms of spectral content, average power and modulation spectrum. In a population of auditory cortex neurons exhibiting reliable tuning curves when tested with pure tones, significant STRFs were obtained for 62% of the cells with vocalizations and 68% with DMRs. However, for many cells with significant vocalization-derived STRFs (STRFvoc) and DMR-derived STRFs (STRFdmr), the best frequency, latency, bandwidth and global STRF shape differed more than would be predicted from spiking responses simulated by a linear model based on a non-homogeneous Poisson process. Moreover, STRFvoc predicted neural responses to vocalizations more accurately than STRFdmr predicted neural responses to DMRs, despite similar spike-timing reliability for both sets of stimuli. Cortical bursts, which potentially introduce nonlinearities into evoked responses, did not explain the differences between STRFvoc and STRFdmr. Altogether, these results suggest that the nonlinearity of auditory cortical responses makes it difficult to predict responses to communication sounds from STRFs computed from artificial stimuli.
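A common estimator for STRFs like those discussed here, valid when the stimulus is spectrally white, is the spike-triggered average of the stimulus spectrogram. The sketch below uses simulated data and assumed dimensions; it is not the study's fitting procedure, which would typically need regularization for correlated stimuli such as vocalizations:

```python
import numpy as np

def strf_sta(stim, spikes, n_lags=15):
    """Spike-triggered-average STRF from a (freq x time) spectrogram
    and a binned spike-count vector; column k holds time lag k."""
    n_f, T = stim.shape
    sta = np.zeros((n_f, n_lags))
    for t in range(n_lags, T):
        if spikes[t]:
            sta += spikes[t] * stim[:, t - n_lags + 1:t + 1][:, ::-1]
    return sta / max(spikes[n_lags:].sum(), 1)

# Simulated ground truth: a cell driven by frequency channel 5 at lag 3.
rng = np.random.default_rng(2)
stim = rng.standard_normal((12, 4000))
rate = np.clip(stim[5, :-3], 0, None)          # rectified, delayed drive
spikes = np.r_[np.zeros(3), rng.poisson(rate)]
strf = strf_sta(stim, spikes)
peak = np.unravel_index(np.abs(strf).argmax(), strf.shape)
```

Even in this toy case the recovered filter misses the rectifying nonlinearity entirely, which is the kind of mismatch between linear STRFs and actual responses that the abstract describes.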

8.
It is presently unknown whether our response to affective vocalizations is specific to those generated by humans or more universal, triggered by emotionally matched vocalizations generated by other species. Here, we used functional magnetic resonance imaging in normal participants to measure cerebral activity during auditory stimulation with affectively valenced animal vocalizations, some familiar (cats) and others not (rhesus monkeys). Positively versus negatively valenced vocalizations from cats and monkeys elicited different cerebral responses despite the participants' inability to differentiate the valence of these animal vocalizations by overt behavioural responses. Moreover, comparison with human non-speech affective vocalizations revealed a common response to valence in the orbitofrontal cortex, a key component of the limbic system. These findings suggest that the neural mechanisms involved in processing human affective vocalizations may be recruited by heterospecific affective vocalizations at an unconscious level, supporting claims of shared emotional systems across species.

9.
Tschida KA, Mooney R. Neuron. 2012;73(5):1028-1039
Hearing loss prevents vocal learning and causes learned vocalizations to deteriorate, but how vocalization-related auditory feedback acts on neural circuits that control vocalization remains poorly understood. We deafened adult zebra finches, which rely on auditory feedback to maintain their learned songs, to test the hypothesis that deafening modifies synapses on neurons in a sensorimotor nucleus important to song production. Longitudinal in vivo imaging revealed that deafening selectively decreased the size and stability of dendritic spines on neurons that provide input to a striatothalamic pathway important to audition-dependent vocal plasticity, and changes in spine size preceded and predicted subsequent vocal degradation. Moreover, electrophysiological recordings from these neurons showed that structural changes were accompanied by functional weakening of both excitatory and inhibitory synapses, increased intrinsic excitability, and changes in spontaneous action potential output. These findings shed light on where and how auditory feedback acts within sensorimotor circuits to shape learned vocalizations.

10.
The goal of this study was to determine if auditory cues are important in maternal recognition by domestic cattle calves, Bos taurus. Cows and their calves were separated and the vocalizations of the mothers were recorded. During experimental playbacks in a test enclosure, each calf (n = 9) was given a choice between a tape-recorded vocalization of its mother and that of a strange mother. Calves significantly preferred their own mother's vocalization as compared to the vocalization of the unfamiliar mother. Calves spent significantly more time near the speaker that played their own mother's call, and approached significantly closer to their mother's speaker. These results demonstrate that 3–5-wk-old calves can recognize their mothers by auditory cues alone. Visual inspection of audiospectrograms of the cows' vocalizations suggests that there are individual differences among cows.

11.
The neural mechanisms underlying processing of auditory feedback during self-vocalization are poorly understood. One technique used to study the role of auditory feedback involves shifting the pitch of the feedback that a speaker receives, known as pitch-shifted feedback. We utilized a pitch-shift self-vocalization and playback paradigm to investigate the underlying neural mechanisms of audio-vocal interaction. High-resolution electrocorticography (ECoG) signals were recorded directly from auditory cortex of 10 human subjects while they vocalized and received brief downward (−100 cents) pitch perturbations in their voice auditory feedback (speaking task). ECoG was also recorded when subjects passively listened to playback of their own pitch-shifted vocalizations. Feedback pitch perturbations elicited average evoked potential (AEP) and event-related band power (ERBP) responses, primarily in the high gamma (70–150 Hz) range, in focal areas of non-primary auditory cortex on superior temporal gyrus (STG). The AEPs and high gamma responses were both modulated by speaking compared with playback in a subset of STG contacts. Of these contacts, a majority showed significant enhancement of high gamma power and AEP responses during speaking, while the remaining contacts showed attenuated response amplitudes. The speaking-induced enhancement effect suggests that engaging the vocal motor system can modulate auditory cortical processing of self-produced sounds in such a way as to increase neural sensitivity for feedback pitch error detection. It is likely that mechanisms such as efference copies are involved in this process, and the modulation of AEP and high gamma responses implies that such modulatory effects may act on different cortical generators within distinct functional networks that drive voice production and control.
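Two small computations from this paradigm translate directly into code: converting a pitch shift in cents to a frequency ratio, and measuring power in the high-gamma band. The periodogram-based band-power function below is a crude stand-in for the study's event-related band power analysis, and all signals are simulated:

```python
import numpy as np

def cents_to_ratio(cents):
    """Frequency ratio for a pitch shift in cents (100 cents = 1 semitone)."""
    return 2.0 ** (cents / 1200.0)

def band_power(x, fs, lo=70.0, hi=150.0):
    """Mean periodogram power of x within [lo, hi] Hz — a rough
    stand-in for event-related band power in the high-gamma range."""
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    power = np.abs(np.fft.rfft(x)) ** 2
    sel = (freqs >= lo) & (freqs <= hi)
    return power[sel].mean()

# A -100 cent perturbation lowers frequency by one semitone (~5.6%).
ratio = cents_to_ratio(-100)

fs = 1000
t = np.arange(0, 1, 1 / fs)
hg = np.sin(2 * np.pi * 100 * t)   # 100 Hz: inside the 70-150 Hz band
lf = np.sin(2 * np.pi * 10 * t)    # 10 Hz: outside the band
```

In practice high-gamma power is usually tracked over time with a filter-Hilbert or wavelet approach rather than a single periodogram, but the band definition is the same.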

12.
Our study examined whether vocalizations of domestic pigs Sus scrofa domestica provide reliable cues to particular endocrine stress responses. To induce stress responses, we separated subjects individually from groupmates (SEP) and controlled for potential effects of motor activity with a second treatment in which subjects were also immobilized (SEP + IM). We analysed blood samples taken at short intervals via an indwelling catheter for titres of stress hormones to estimate endocrine stress responses. To identify behavioural responses, we analysed recordings of vocalizations and non-vocal activities. Data evaluation yielded the following results. Multi-parametric sound analysis enabled us to distinguish four categories of vocalizations within treatments. Increasing rates of ‘squeal-grunts’ indicated increasing plasma levels of adrenaline, whereas the rates of ‘grunts’ decreased when the levels of cortisol increased. Acoustic parameters within the vocal categories did not correlate consistently with levels of any of the measured stress hormones; thus, our results show that peripheral endocrine stress responses are accompanied by changing rates of specific types of vocalizations. These relationships remained consistent even when subjects' motor activity was restricted. Our results suggest possible effects of central stress reactions on both the control of vocalization and the activation of endocrine stress responses.

13.
Single-pulse magnetic coil stimulation (Cadwell MES 10) over the cranium painlessly induces an electric pulse in the underlying cerebral cortex. Stimulation over the motor cortex can elicit a muscle twitch. In 10 subjects, we tested whether motor cortical stimulation could also elicit skin sympathetic nerve activity (SSNA; n = 8) and muscle sympathetic nerve activity (MSNA; n = 5) in the peroneal nerve. Focal motor cortical stimulation predictably elicited bursts of SSNA but not MSNA; with successive stimuli, the SSNA responses did not readily extinguish (94% of discharges to the motor cortex evoked SSNA responses) and had predictable latencies [739 +/- 33 (SE) to 895 +/- 13 ms]. The SSNA responses were similar after stimulation of dominant and nondominant sides. Focal stimulation posterior to the motor cortex elicited extinguishable SSNA responses. In three of six subjects, anterior cortical stimulation evoked SSNA responses similar to those seen with motor cortex stimulation but without detectable movement; in the other subjects, anterior stimulation evoked less SSNA discharge than that seen with motor cortex stimulation. Contrasting with motor cortical stimulation, evoked SSNA responses were more readily extinguished with 1) peripheral stimulation that directly elicited forearm muscle activation accompanied by electromyograms similar to those with motor cortical stimulation; 2) auditory stimulation by the click of the energized coil when off the head; and 3) in preliminary experiments, finger afferent stimulation sufficient to cause tingling. Our findings are consistent with the hypothesis that motor cortex stimulation can cause activation of both alpha-motoneurons and SSNA.

14.
Mice are of paramount importance in biomedical research, and their vocalizations are of interest to researchers across a wide range of health-related disciplines due to their increasing value as a phenotyping tool in models of neural, speech and language disorders. However, the mechanisms underlying the auditory processing of vocalizations in mice are not well understood. The mouse audiogram shows a peak in sensitivity at frequencies between 15 and 25 kHz, but weaker sensitivity for the higher ultrasonic frequencies at which mice typically vocalize. To investigate the auditory processing of vocalizations in mice, we measured evoked potential, single-unit, and multi-unit responses to tones and vocalizations at three different stages along the auditory pathway: the auditory nerve and the cochlear nucleus in the periphery, and the inferior colliculus in the midbrain. Auditory brainstem response measurements suggested stronger responses in the midbrain relative to the periphery for frequencies higher than 32 kHz. This result was confirmed by single- and multi-unit recordings showing that high ultrasonic frequency tones and vocalizations elicited responses from only a small fraction of cells in the periphery, while a much larger fraction of cells responded in the inferior colliculus. These results suggest that the processing of communication calls in mice is supported by a specialization of the auditory system for high frequencies that emerges at central stations of the auditory pathway.

15.
Songbirds, like humans, are among the few animal groups that learn their vocalizations. Vocal learning requires coordinating auditory input and vocal output, using auditory feedback to guide one's own vocalizations during a specific developmental stage known as the critical period. Songbirds are good animal models for understanding the neural basis of vocal learning, a complex form of imitation, because they parallel humans in many features of vocal behavior and in the neural circuits dedicated to vocal learning. In this review, we summarize the behavioral, neural, and genetic traits of birdsong. We also discuss how studies of birdsong can help us understand how the development of neural circuits for vocal learning and production is driven by sensory input (auditory information) and motor output (vocalization).

16.
Plasticity studies suggest that behavioral relevance can change the cortical processing of trained or conditioned sensory stimuli. However, whether this occurs in the context of natural communication, where stimulus significance is acquired through social interaction, has not been well investigated, perhaps because neural responses to species-specific vocalizations can be difficult to interpret within a systematic framework. The ultrasonic communication system between isolated mouse pups and adult females that either do or do not recognize the calls' significance provides an opportunity to explore this issue. We applied an information-based analysis to multi- and single-unit data collected from anesthetized mothers and pup-naïve females to quantify how the communicative significance of pup calls affects their encoding in the auditory cortex. The timing and magnitude of information that cortical responses convey (at a 2-ms resolution) for pup call detection and discrimination was significantly improved in mothers compared to naïve females, most likely because of changes in call frequency encoding. This was not the case for a non-natural sound ensemble outside the mouse vocalization repertoire. The results demonstrate that a sensory cortical change in the timing code for communication sounds is correlated with the vocalizations' behavioral relevance, potentially enhancing functional processing by improving its signal-to-noise ratio.
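The information-based analysis described here rests on estimating the mutual information between stimulus identity and a discretized neural response. A minimal plug-in estimator is sketched below; the toy labels and response "words" are hypothetical, and a real analysis would also need bias correction for limited trial counts:

```python
import numpy as np

def mutual_info(stim_ids, responses):
    """Plug-in estimate of I(stimulus; response) in bits from paired
    discrete stimulus labels and discretized response words."""
    n = len(stim_ids)
    joint, ps, pr = {}, {}, {}
    for s, r in zip(stim_ids, responses):
        joint[(s, r)] = joint.get((s, r), 0) + 1
        ps[s] = ps.get(s, 0) + 1
        pr[r] = pr.get(r, 0) + 1
    return sum((c / n) * np.log2(c * n / (ps[s] * pr[r]))
               for (s, r), c in joint.items())

# Responses that uniquely identify each call type carry 1 bit for two
# equiprobable classes; identical responses to every call carry 0 bits.
mi_perfect = mutual_info([0, 0, 1, 1], ["early", "early", "late", "late"])
mi_none = mutual_info([0, 0, 1, 1], ["x", "x", "x", "x"])
```

In the study's framing, the response "words" would be spike patterns binned at 2-ms resolution, and the comparison of interest is how much sooner and how much more information accumulates in mothers than in naïve females.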

17.

Background

Recent research has addressed the suppression of cortical sensory responses to altered auditory feedback at the onset of speech utterances. However, there is reason to assume that the mechanisms underlying sensorimotor processing at mid-utterance differ from those involved in sensorimotor control at utterance onset. The present study examined the dynamics of event-related potentials (ERPs) in response to different acoustic versions of auditory feedback at mid-utterance.

Methodology/Principal findings

Subjects produced a vowel sound while hearing their pitch-shifted voice (100 cents), a sum of their vocalization and pure tones, or a sum of their vocalization and white noise at mid-utterance via headphones. Subjects also passively listened to playback of what they heard during active vocalization. Cortical ERPs were recorded in response to different acoustic versions of feedback changes during both active vocalization and passive listening. The results showed that, relative to passive listening, active vocalization yielded enhanced P2 responses to the 100 cents pitch shifts, whereas suppression effects of P2 responses were observed when voice auditory feedback was distorted by pure tones or white noise.

Conclusion/Significance

The present findings, for the first time, demonstrate a dynamic modulation of cortical activity as a function of the quality of acoustic feedback at mid-utterance, suggesting that auditory cortical responses can be enhanced or suppressed to distinguish self-produced speech from externally-produced sounds.
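The ERP measurement underlying results like these is epoch averaging around event onsets. The sketch below runs on simulated data; the sampling rate, epoch window, and the "P2-like" bump at 200 ms are all illustrative assumptions:

```python
import numpy as np

def erp(signal, onsets, fs, pre=0.1, post=0.4):
    """Average event-locked epochs of a signal into an evoked potential."""
    a, b = int(pre * fs), int(post * fs)
    epochs = [signal[o - a:o + b] for o in onsets
              if o - a >= 0 and o + b <= len(signal)]
    return np.mean(epochs, axis=0)

# Simulated recording: a fixed "P2-like" bump 200 ms after each event,
# buried in noise; averaging across events recovers it.
rng = np.random.default_rng(3)
fs, n = 500, 500 * 60
eeg = 0.2 * rng.standard_normal(n)
onsets = np.arange(1000, n - 1000, 1500)
bump_t = np.arange(250) / fs
bump = np.exp(-0.5 * ((bump_t - 0.2) / 0.02) ** 2)
for o in onsets:
    eeg[o:o + 250] += bump
avg = erp(eeg, onsets, fs)
peak_time = avg.argmax() / fs - 0.1   # subtract the 0.1-s pre-window
```

Comparing such averages between active vocalization and passive playback, component by component, is what yields the P2 enhancement and suppression effects reported above.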

18.
The effects of rapid eye movement sleep restriction (REMSR) in rats during late pregnancy were studied through the ultrasonic vocalizations (USVs) made by the pups. USVs are distress calls inaudible to the human ear. Rapid eye movement (REM) sleep was restricted in one group of pregnant rats for 22 hours, from gestational day 14 to 20, using the standard single-platform method. The USVs of male pups were recorded after a brief two-minute isolation from their mother on alternate post-natal days, from day one until weaning. The USVs were recorded using microphones and analysed qualitatively and quantitatively using SASPro software. Control pups produced maximum vocalization on post-natal days 9 to 11. In comparison, the pups born to REMSR mothers showed not only a reduction in vocalization but also a delay in peak call-making days. The experimental group showed variations in the types and characteristics of calls and an alteration in temporal profile. The blunting of the distress-call response in these pups indicates that maternal sleep plays a role in regulating the neural development involved in vocalization and possibly in shaping emotional behaviour in neonates. It is suggested that reduced ultrasonic vocalization can be used as a reliable early marker of affective state in rat pups. Such impaired vocalization responses could provide an important lead in understanding mother-child bonding for optimal cognitive development during post-partum life. This is the first report showing a potential link between maternal REM sleep deprivation and vocalization in neonates and infants.

19.
Species-specific vocalizations in mice have frequency-modulated (FM) components slower than the lower limit of FM direction selectivity in the core region of the mouse auditory cortex. To identify cortical areas selective for slow frequency modulation, we investigated tonal responses in the mouse auditory cortex using transcranial flavoprotein fluorescence imaging. To differentiate responses to frequency modulation from those to stimuli at constant frequencies, we focused on transient fluorescence changes after direction reversal of temporally repeated and superimposed FM sweeps. We found that the ultrasonic field (UF) in the belt cortical region selectively responded to the direction reversal. The dorsoposterior field (DP) also responded weakly to the reversal. Regarding the responses in UF, no apparent tonotopic map was found, and the right UF responses were significantly larger in amplitude than the left UF responses. The half-max latency in responses to FM sweeps was shorter in UF than in the primary auditory cortex (A1) or the anterior auditory field (AAF). Tracer injection experiments in the functionally identified UF and DP confirmed that these two areas receive afferent inputs from the dorsal part of the medial geniculate nucleus (MG). Calcium imaging of UF neurons stained with fura-2 was performed using a two-photon microscope, demonstrating the presence of UF neurons selective for both the direction and the direction reversal of slow frequency modulation. These results strongly suggest a role for UF, and possibly DP, as cortical areas specialized for processing slow frequency modulation in mice.

20.
For many species, the presence of a significant social partner can lessen the behavioral and physiological responses to stressful stimuli. This study examined whether a single, individually specific, signature vocalization (phee call) could attenuate the physiological stress response that is induced in marmosets by housing them in short-term social isolation. Utilizing a repeated-measures design, adult marmosets (n=10) were temporarily isolated from their long-term pair mate and exposed to three conditions: signature vocalizations from the pair mate, phee calls from an unfamiliar opposite-sex individual, or no auditory stimuli. Levels of urinary cortisol were monitored as a physiological indicator of the stress response. Urinary cortisol levels were also monitored while subjects remained undisturbed in their home cages to provide baseline levels. Temporarily isolated marmosets showed significantly higher levels of urinary cortisol than undisturbed marmosets. However, the nature of the acoustic stimulus experienced during isolation led to differences in the excretion of urinary cortisol. Isolated marmosets exposed to a familiar pair mate's vocalization showed significantly lower levels of urinary cortisol than when exposed to unfamiliar marmoset vocalizations (P < 0.04) or to no auditory stimuli (P < 0.03). Neither the duration of pairing nor the quality of relationship in the pair (indexed by spatial proximity scores) predicted the magnitude of reduction in cortisol in the familiar vocalization condition. The results presented here provide the first evidence that a single, individually specific communication signal can decrease the magnitude of a physiological stress response in a manner analogous to the physical presence of a social partner, a process we term "vocal buffering."


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司) · 京ICP备09084417号