Similar Literature
20 similar articles found.
1.
Mirror neurons are theorized to serve as a neural substrate for spoken language in humans, but the existence and functions of auditory–vocal mirror neurons in the human brain remain largely matters of speculation. Songbirds resemble humans in their capacity for vocal learning and depend on their learned songs to facilitate courtship and individual recognition. Recent neurophysiological studies have detected putative auditory–vocal mirror neurons in a sensorimotor region of the songbird's brain that plays an important role in expressive and receptive aspects of vocal communication. This review discusses the auditory and motor-related properties of these cells, considers their potential role in song learning and communication in relation to classical studies of birdsong, and points to the circuit and developmental mechanisms that may give rise to auditory–vocal mirroring in the songbird's brain.

2.
Songbirds are one of the few vertebrate groups, humans being another, that evolved the ability to learn vocalizations. During song learning, social interactions with adult models are crucial, and young songbirds raised without direct contact with adults typically produce abnormal songs showing phonological and syntactical deficits. This raises the question of what functional representation of their vocalizations such deprived animals develop. Here we show that young starlings that we raised without any direct contact with adults not only failed to differentiate starlings' typical song classes in their vocalizations but also failed to develop differential neural responses to these songs. These deficits appear to be linked to a failure to acquire the songs' functions and may provide a model for abnormal development of communicative skills, including speech.

3.
Songbirds rely on auditory processing of natural communication signals for a number of social behaviors, including mate selection, individual recognition and the rare behavior of vocal learning: the ability to learn vocalizations through imitation of an adult model, rather than by instinct. Like mammals, songbirds possess a set of interconnected ascending and descending auditory brain pathways that process acoustic information and that are presumably involved in the perceptual processing of vocal communication signals. Most auditory areas studied to date are located in the caudomedial forebrain of the songbird and include the thalamo-recipient field L (subfields L1, L2 and L3), the caudomedial and caudolateral mesopallium (CMM and CLM, respectively) and the caudomedial nidopallium (NCM). This review focuses on NCM, an auditory area previously proposed to be analogous to parts of the primary auditory cortex in mammals. Stimulation of songbirds with auditory stimuli drives vigorous electrophysiological responses and the expression of several activity-regulated genes in NCM. Interestingly, NCM neurons are tuned to species-specific songs and undergo some forms of experience-dependent plasticity in vivo. These activity-dependent changes may underlie long-term modifications in the functional performance of NCM and constitute a potential neural substrate for auditory discrimination. We end this review by discussing evidence that suggests that NCM may be a site of auditory memory formation and/or storage.

4.
Zorović M. PLoS ONE 2011, 6(10): e26843
During mating, males and females of N. viridula (Heteroptera: Pentatomidae) produce sex- and species-specific calling and courtship substrate-borne vibratory signals, grouped into songs. Recognition and localization of these signals are fundamental for successful mating. Recognition is based mainly on the temporal pattern, i.e. the amplitude modulation, while the frequency spectrum of the signals usually plays only a minor role. We examined the temporal selectivity for vibratory signals in four types of ascending vibratory interneurons in N. viridula. Using intracellular recording and labelling techniques, we analyzed the neurons' responses to 30 pulse duration/interval duration (PD/ID) combinations. Two response arrays were created for each neuron type, showing the intensity of the responses either as time-averaged spike counts or as peak instantaneous spike rates (see the sketch after this abstract for how such arrays can be computed from recorded spike trains). The mean spike rate response arrays showed a preference of the neurons for short PDs (below 600 ms) and no selectivity towards interval duration, while the peak spike rate response arrays exhibited either short PD/long ID selectivity or no selectivity at all. The long PD/short ID combinations elicited the weakest responses in all neurons tested. No response array showed a receiver preference for either constant period or constant duty cycle. The vibratory song pattern selectivity matched the PD of N. viridula male vibratory signals, thus pointing to temporal filtering for the conspecific vibratory signals already at the level of the ascending interneurons. In some neurons the responses elicited by the vibratory stimuli were followed by distinct, regular oscillations of the membrane potential. The distance between the oscillation peaks matched the temporal structure of the male calling song, indicating a possible resonance-based mechanism for signal recognition.

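The two response arrays described in abstract 4 can be illustrated with a short sketch: given the spike times recorded for each pulse-duration/interval-duration (PD/ID) combination, one array holds the time-averaged spike rate and the other the peak instantaneous rate (the reciprocal of the shortest inter-spike interval). This is only a minimal illustration, not the authors' analysis code; the PD/ID grid, the synthetic spike generator and the peak-rate estimate are assumptions.

```python
# Minimal sketch: build the two response arrays described in abstract 4
# (time-averaged spike rate and peak instantaneous spike rate) from spike
# trains recorded for each PD/ID combination. The spike data are synthetic;
# the 6 x 5 grid mimics the 30 stimulus combinations mentioned in the abstract.
import numpy as np

rng = np.random.default_rng(0)

pds = [100, 200, 400, 600, 1000, 1500]    # pulse durations (ms), assumed values
intervals = [100, 200, 400, 600, 1000]    # interval durations (ms), assumed values

def fake_spike_train(pd_ms, id_ms, n_pulses=5):
    """Stand-in for a recorded spike train: spikes occur mostly during pulses."""
    period = pd_ms + id_ms
    spikes = []
    for k in range(n_pulses):
        onset = k * period
        n = rng.poisson(0.02 * min(pd_ms, 600))          # fewer spikes for very long PDs
        spikes.extend(onset + rng.uniform(0, pd_ms, n))
    return np.sort(np.array(spikes)), n_pulses * period   # spike times (ms), stimulus length (ms)

mean_rate = np.zeros((len(pds), len(intervals)))   # time-averaged spike rate (spikes/s)
peak_rate = np.zeros((len(pds), len(intervals)))   # peak instantaneous spike rate (spikes/s)

for i, pd_ms in enumerate(pds):
    for j, id_ms in enumerate(intervals):
        spikes, dur_ms = fake_spike_train(pd_ms, id_ms)
        mean_rate[i, j] = 1000.0 * len(spikes) / dur_ms
        if len(spikes) > 1:
            isi_ms = np.diff(spikes)                   # inter-spike intervals
            peak_rate[i, j] = 1000.0 / isi_ms.min()    # reciprocal of the shortest ISI

print("mean-rate array (rows = PD, cols = ID):\n", np.round(mean_rate, 1))
print("peak-rate array (rows = PD, cols = ID):\n", np.round(peak_rate, 1))
```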
5.
The neuronal system underlying learning, generation and recognition of song in birds is one of the best-studied systems in the neurosciences. Here, we use these experimental findings to derive a neurobiologically plausible, dynamic, hierarchical model of birdsong generation and transform it into a functional model of birdsong recognition. The generation model consists of neuronal rate models and includes critical anatomical components such as the premotor song-control nucleus HVC (proper name), the premotor nucleus RA (robust nucleus of the arcopallium), and a model of the syringeal and respiratory organs (a toy sketch of such a two-level rate model is given after this abstract). We use Bayesian inference on this dynamical system to derive a possible mechanism for how birds can efficiently and robustly recognize the songs of their conspecifics in an online fashion. Our results indicate that the specific way birdsong is generated enables a listening bird to robustly and rapidly perceive embedded information at multiple time scales of a song. The resulting mechanism can be useful for investigating the functional roles of auditory recognition areas and for providing predictions for future birdsong experiments.

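The generative architecture in abstract 5 (a slow, sparse HVC sequence driving faster RA rate dynamics, which in turn drive the vocal output) can be caricatured in a few lines. This is a minimal sketch under assumed parameters, not the published model: the population sizes, time constants, random weights and tanh nonlinearity are illustrative choices, and the published model additionally includes a syringeal/respiratory stage and is inverted by Bayesian inference (see the recognition sketch after item 10).

```python
# Caricature of a two-level rate model in the spirit of abstract 5: a slow HVC
# sequence drives faster RA dynamics, which drive a sound-feature output.
# All parameter values below are illustrative assumptions.
import numpy as np

dt = 1e-3                 # 1 ms integration step
T = int(1.0 / dt)         # simulate 1 s of song
n_hvc, n_ra = 8, 20

rng = np.random.default_rng(1)
W_ra = rng.normal(0, 1, (n_ra, n_hvc))     # HVC -> RA weights (assumed random)
w_out = rng.normal(0, 1, n_ra) / n_ra      # RA -> output weights

def hvc_sequence(t_idx):
    """Sparse sequential HVC code: one 'chain' unit active at a time."""
    x = np.zeros(n_hvc)
    x[(t_idx * n_hvc) // T] = 1.0
    return x

ra = np.zeros(n_ra)
tau_ra = 0.02             # 20 ms RA time constant (assumed)
song_feature = np.zeros(T)

for t in range(T):
    drive = W_ra @ hvc_sequence(t)
    ra += dt / tau_ra * (-ra + np.tanh(drive))   # leaky rate dynamics
    song_feature[t] = w_out @ ra                 # proxy for a syringeal control signal

print("generated feature trace: mean %.3f, std %.3f" % (song_feature.mean(), song_feature.std()))
```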
6.
Acoustic signals during intrasexual interactions may help receivers to establish the costs and benefits of engaging in a confrontation versus avoiding the cost of escalation. Although birdsong repertoires have previously been suggested as providing information during agonistic encounters, the cost (time/neural resources) of assessing large repertoires may decrease the efficiency of the signal for mutual assessment. Acoustic-structural features may therefore be used to enable fast and accurate assessment during this kind of encounter. Recently, it has been suggested that the consistency of songs may play a key role during intrasexual interactions in bird species. Using a playback experiment in a colour-ringed great tit population, we tested the hypothesis that songs differing in consistency may elicit a differential response, indicating that the signal is salient for the receivers. Great tit males clearly responded more aggressively towards highly consistent songs. Our findings, together with previous evidence of increased song consistency with age in the great tit, suggest that song consistency provides information on experience or dominance in this species, and this phenomenon may be more widespread than currently acknowledged.

7.
The processing of species-specific communication signals in the auditory system represents an important aspect of animal behavior and is crucial for social interactions, reproduction, and survival. In this article the neuronal mechanisms underlying the processing of communication signals in the higher centers of the auditory system, namely the inferior colliculus (IC), the medial geniculate body (MGB) and the auditory cortex (AC), are reviewed, with particular attention to the guinea pig. The selectivity of neuronal responses for individual calls in these auditory centers in the guinea pig is usually low: most neurons respond to calls as well as to artificial sounds, and the coding of complex sounds in the central auditory nuclei is apparently based on the representation of temporal and spectral features of acoustical stimuli in neural networks. Neuronal response patterns in the IC reliably match the sound envelope for calls characterized by one or more short impulses, but do not exactly fit the envelope for long calls. The main spectral peaks are also represented by neuronal firing rates in the IC. In comparison to the IC, response patterns in the MGB and AC demonstrate a less precise representation of the sound envelope, especially in the case of longer calls. The spectral representation is worse in the case of low-frequency calls, but not in the case of broad-band calls. The emotional content of a call may influence neuronal responses in the auditory pathway, which can be demonstrated by stimulation with time-reversed calls or by measurements performed under different levels of anesthesia. The investigation of the principles of the neural coding of species-specific vocalizations offers some keys for understanding the neural mechanisms underlying human speech perception.

8.
Considerable knowledge is available on the neural substrates for speech and language from brain-imaging studies in humans, but until recently there was a lack of data for comparison from other animal species on the evolutionarily conserved brain regions that process species-specific communication signals. To obtain new insights into the relationships among the substrates for communication in primates, we compared the results from several neuroimaging studies in humans with those that have recently been obtained from macaque monkeys and chimpanzees. The recent work in humans challenges the longstanding notion of highly localized speech areas. Moreover, the brain regions that have been identified in humans for speech and nonlinguistic voice processing show a striking general correspondence to how the brains of other primates analyze species-specific vocalizations or information in the voice, such as voice identity. The comparative neuroimaging work has begun to clarify evolutionary relationships in brain function, supporting the notion that the brain regions that process communication signals in the human brain arose from a precursor network of regions that is present in nonhuman primates and is used for processing species-specific vocalizations. We conclude by considering how the stage now seems to be set for comparative neurobiology to characterize the ancestral state of the network that evolved in humans to support language.

9.
Learned birdsong is a widely used animal model for understanding the acquisition of human speech. Male songbirds often learn songs from adult males during sensitive periods early in life, and sing to attract mates and defend territories. In presumably all of the 350+ parrot species, individuals of both sexes commonly learn vocal signals throughout life to serve a wide variety of social functions. Despite intriguing parallels with humans, there have been no experimental studies demonstrating learned vocal production in wild parrots. We studied contact call learning in video-rigged nests of a well-known marked population of green-rumped parrotlets (Forpus passerinus) in Venezuela. Both sexes of naive nestlings developed individually unique contact calls in the nest, and we demonstrate experimentally that signature attributes are learned from both primary caregivers. This represents the first experimental evidence for the mechanisms underlying the transmission of a socially acquired trait in a wild parrot population.

10.
Our knowledge about the computational mechanisms underlying human learning and recognition of sound sequences, especially speech, is still very limited. One difficulty in deciphering the exact means by which humans recognize speech is that experimental findings at a neuronal, microscopic level are scarce. Here, we show that our neuronal-computational understanding of speech learning and recognition may be vastly improved by looking at an animal model, the songbird, which faces the same challenge as humans: to learn and decode complex auditory input in an online fashion. Motivated by striking similarities between the human and songbird neural recognition systems at the macroscopic level, we assumed that the human brain uses the same computational principles at a microscopic level and translated a birdsong model into a novel human sound learning and recognition model with an emphasis on speech. We show that the resulting Bayesian model with a hierarchy of nonlinear dynamical systems can learn speech samples such as words rapidly and recognize them robustly, even in adverse conditions (a toy illustration of such online Bayesian recognition is given after this abstract). In addition, we show that recognition can be performed even when words are spoken by different speakers and with different accents, an everyday situation in which current state-of-the-art speech recognition models often fail. The model can also be used to qualitatively explain behavioral data on human speech learning and to derive predictions for future experiments.

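Online recognition by Bayesian inversion, as used in abstracts 5 and 10, can be illustrated with a toy recursive Bayes update: each candidate "word" is a fixed feature trajectory, noisy observations arrive frame by frame, and the posterior over candidates is renormalized after every frame. The templates, Gaussian noise model and feature dimensionality are invented for illustration; the published model inverts hierarchical nonlinear dynamical systems rather than matching fixed templates.

```python
# Toy illustration of online Bayesian recognition: the posterior over a small
# set of candidate "words" is updated frame by frame as noisy feature
# observations arrive. Fixed templates and Gaussian observation noise are
# simplifying assumptions for this sketch.
import numpy as np

rng = np.random.default_rng(2)

n_frames, n_feat = 40, 3
templates = {                       # hypothetical feature trajectories per word
    "word_A": np.cumsum(rng.normal(0, 0.3, (n_frames, n_feat)), axis=0),
    "word_B": np.cumsum(rng.normal(0, 0.3, (n_frames, n_feat)), axis=0),
    "word_C": np.cumsum(rng.normal(0, 0.3, (n_frames, n_feat)), axis=0),
}
sigma = 0.5                         # assumed observation noise (std)

true_word = "word_B"
observed = templates[true_word] + rng.normal(0, sigma, (n_frames, n_feat))

log_post = {w: np.log(1.0 / len(templates)) for w in templates}   # uniform prior
for t in range(n_frames):
    for w, traj in templates.items():
        err = observed[t] - traj[t]
        log_post[w] += -0.5 * np.sum(err ** 2) / sigma ** 2        # Gaussian log-likelihood
    # renormalize so the values remain log-probabilities
    m = max(log_post.values())
    z = np.log(sum(np.exp(v - m) for v in log_post.values())) + m
    for w in log_post:
        log_post[w] -= z
    if t in (5, 15, n_frames - 1):
        best = max(log_post, key=log_post.get)
        print(f"frame {t:2d}: best guess {best}, p = {np.exp(log_post[best]):.3f}")
```

Because the posterior sharpens with every frame, a decision is usually available well before the whole trajectory has been heard, which is the sense in which recognition here is online.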
11.
In studies of birdsong learning, imitation-based assays of stimulus memorization do not take into account that tutored song types may have been stored but not retrieved from memory. Such a 'silent' reservoir of song material could be used later in the bird's life, e.g. during vocal interactions. We examined this possibility in hand-reared nightingales during their second year. The males had been exposed to songs both as fledglings and later, during their first full song period, in an interactive playback design. Our design allowed us to compare the performance of imitations from the following categories: (i) songs experienced only during the early tutoring; (ii) songs experienced both during early tutoring and interactive playbacks; and (iii) novel songs experienced only during the simulated interactions. In their second year, birds imitated song types from each category, including those from categories (i) and (ii) which they had failed to imitate before. In addition, the performance of these song types differed (category (ii) > category (i)) and was more pronounced than for category (iii) songs. Our results demonstrate 'silent' song storage in nightingales and point to a graded influence of the time and the social context of experience on subsequent vocal imitation.

12.
“Pure tones” are a distinctive acoustic feature of many birdsongs. Recent research on songbird vocal physiology suggests that such tonal sounds result from a coordinated interaction between the syrinx and a vocal filter, as demonstrated by the emergence of harmonic overtones when a bird sings in helium. To investigate the communicative significance of vocal tract filtration in the production of birdsong, we used field playback experiments to compare the responses of male swamp sparrows Melospiza georgiana to normal songs and those same songs recorded in helium. We also measured responses to pure tone songs that had been shifted upward in frequency to match the average spectra of those songs with added harmonics. Male sparrows were significantly more responsive to the playback of normal songs than to either helium songs with added harmonics or frequency-shifted pure tone songs. Songs with harmonics retained a high degree of salience, however. We conclude that explanations for the occurrence of tonal sounds in birdsongs must consider perceptual attributes of songs as communicative signals, as well as problems of song production and transmission.

13.
While the neural circuitry and physiology of the auditory system is well studied among vertebrates, far less is known about how the auditory system interacts with other neural substrates to mediate behavioral responses to social acoustic signals. One species that has been the subject of intensive neuroethological investigation with regard to the production and perception of social acoustic signals is the plainfin midshipman fish, Porichthys notatus, in part because acoustic communication is essential to its reproductive behavior. Nesting male midshipman vocally court females by producing a long duration advertisement call. Females localize males by their advertisement call, spawn and deposit all their eggs in their mate’s nest. As multiple courting males establish nests in close proximity to one another, the perception of another male’s call may modulate individual calling behavior in competition for females. We tested the hypothesis that nesting males exposed to advertisement calls of other males would show elevated neural activity in auditory and vocal-acoustic brain centers as well as differential activation of catecholaminergic neurons compared to males exposed only to ambient noise. Experimental brains were then double labeled by immunofluorescence (-ir) for tyrosine hydroxylase (TH), an enzyme necessary for catecholamine synthesis, and cFos, an immediate-early gene product used as a marker for neural activation. Males exposed to other males' advertisement calls showed a significantly greater percentage of TH-ir cells colocalized with cFos-ir in the noradrenergic locus coeruleus and the dopaminergic periventricular posterior tuberculum, as well as increased numbers of cFos-ir neurons at several levels of the auditory and vocal-acoustic pathway. Increased activation of catecholaminergic neurons may serve to coordinate appropriate behavioral responses to male competitors. Additionally, these results implicate specific catecholaminergic neuronal groups in auditory-driven social behavior in fishes, consistent with a conserved function in social acoustic behavior across vertebrates.

14.
Norepinephrine (NE) is thought to play important roles in the consolidation and retrieval of long-term memories, but its role in the processing and memorization of complex acoustic signals used for vocal communication has yet to be determined. We have used a combination of gene expression analysis, electrophysiological recordings and pharmacological manipulations in zebra finches to examine the role of noradrenergic transmission in the brain's response to birdsong, a learned vocal behavior that shares important features with human speech. We show that noradrenergic transmission is required for both the expression of activity-dependent genes and the long-term maintenance of stimulus-specific electrophysiological adaptation that are induced in central auditory neurons by stimulation with birdsong. Specifically, we show that the caudomedial nidopallium (NCM), an area directly involved in the auditory processing and memorization of birdsong, receives strong noradrenergic innervation. Song-responsive neurons in this area express α-adrenergic receptors and are in close proximity to noradrenergic terminals. We further show that local α-adrenergic antagonism interferes with song-induced gene expression, without affecting spontaneous or evoked electrophysiological activity, thus dissociating the molecular and electrophysiological responses to song. Moreover, α-adrenergic antagonism disrupts the maintenance but not the acquisition of the adapted physiological state. We suggest that the noradrenergic system regulates long-term changes in song-responsive neurons by modulating the gene expression response that is associated with the electrophysiological activation triggered by song. We also suggest that this mechanism may be an important contributor to long-term auditory memories of learned vocalizations.

15.
Geographic variation in birdsong and differential responses of territorial males to local and non-local song variants have been documented in a number of songbird species in which males learn their songs through imitation. Here, we investigated geographic song variation and responses to local and non-local song in the grasshopper sparrow (Ammodramus savannarum), a species in which males develop song by improvisation rather than imitation, as a first step toward understanding how the extent and salience of geographic song variation are related to the mode of song development. To describe the geographic variation in song, we compared songs from populations in eastern Maryland and central Ohio, USA, using multiple acoustic analysis techniques. We then conducted a playback experiment in Maryland using local and non-local (Ohio) songs to test how territorial males responded to this geographic variation. We found acoustic differences between songs from the two sites. However, males responded similarly to playback of these songs, suggesting that this geographic variation is not behaviorally salient in a territorial context. Together with previous studies, our results suggest that, across species, geographic song variation and the extent to which this variation functions in communication may be correlated with the accuracy with which song models are imitated during song development.

16.
Given the extraordinary ability of humans and animals to recognize communication signals over a background of noise, describing noise-invariant neural responses is critical not only to pinpoint the brain regions that mediate our robust perceptions but also to understand the neural computations that perform these tasks and the underlying circuitry. Although invariant neural responses, such as rotation-invariant face cells, are well described in the visual system, high-level auditory neurons that can represent the same behaviorally relevant signal in a range of listening conditions have yet to be discovered. Here we found neurons in a secondary area of the avian auditory cortex that exhibited noise-invariant responses in the sense that they responded with similar spike patterns to song stimuli presented in silence and over a background of naturalistic noise. By characterizing the neurons' tuning in terms of their responses to modulations in the temporal and spectral envelope of the sound, we then show that noise invariance is partly achieved by selectively responding to long sounds with sharp spectral structure. Finally, to demonstrate that such computations could explain noise invariance, we designed a biologically inspired noise-filtering algorithm that can be used to separate song or speech from noise (a crude sketch of modulation-based noise suppression is given after this abstract). This novel noise-filtering method performs as well as other state-of-the-art de-noising algorithms and could be used in clinical or consumer-oriented applications. Our biologically inspired model also shows how high-level noise-invariant responses could be created from neural responses typically found in primary auditory cortex.

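The de-noising idea in abstract 16 (favouring long sounds with sharp spectral structure) can be crudely approximated by smoothing a spectrogram along time, so that only slowly varying structure survives, and turning the result into a soft mask. The STFT settings, smoothing window, median noise-floor estimate and the synthetic "song" below are assumptions for illustration; this is not the published algorithm.

```python
# Crude sketch of modulation-based noise suppression: keep slow temporal
# modulations of the spectrogram, estimate a per-channel noise floor, and
# apply the resulting soft mask. All settings are illustrative assumptions.
import numpy as np
from scipy.signal import stft, istft
from scipy.ndimage import uniform_filter1d

fs = 16000
t = np.arange(0, 1.0, 1 / fs)
# stand-in "song": two slowly gated tones, with white noise as the background
clean = (np.sin(2 * np.pi * 2000 * t) * (np.sin(2 * np.pi * 3 * t) > 0) +
         np.sin(2 * np.pi * 3500 * t) * (np.sin(2 * np.pi * 5 * t) > 0))
noisy = clean + 0.8 * np.random.default_rng(3).normal(size=t.size)

f, tt, Z = stft(noisy, fs=fs, nperseg=512)
power = np.abs(Z) ** 2

# keep only slow temporal modulations: smooth each frequency channel over time
slow = uniform_filter1d(power, size=9, axis=1)
noise_floor = np.median(slow, axis=1, keepdims=True)       # rough per-channel noise estimate
mask = np.clip((slow - noise_floor) / (slow + 1e-12), 0.0, 1.0)

_, denoised = istft(Z * mask, fs=fs, nperseg=512)

def snr(ref, sig):
    """Signal-to-noise ratio of `sig` relative to the clean reference, in dB."""
    n = min(len(ref), len(sig))
    err = ref[:n] - sig[:n]
    return 10 * np.log10(np.sum(ref[:n] ** 2) / np.sum(err ** 2))

print(f"SNR before: {snr(clean, noisy):.1f} dB, after: {snr(clean, denoised):.1f} dB")
```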
17.
18.
Humans can recognize spoken words with unmatched speed and accuracy. Hearing the initial portion of a word such as "formu…" is sufficient for the brain to identify "formula" from the thousands of other words that partially match. Two alternative computational accounts propose that partially matching words (1) inhibit each other until a single word is selected ("formula" inhibits "formal" by lexical competition) or (2) are used to predict upcoming speech sounds more accurately (segment prediction error is minimal after sequences like "formu…"). To distinguish these accounts, we taught participants novel words (e.g., "formubo") that sound like existing words ("formula") on two successive days. Computational simulations show that knowing "formubo" increases lexical competition when hearing "formu…", but reduces segment prediction error (a toy computation of segment surprisal over a small lexicon is given after this abstract). Conversely, when the sounds in "formula" and "formubo" diverge, the reverse is observed. The time course of magnetoencephalographic brain responses in the superior temporal gyrus (STG) is uniquely consistent with the segment prediction account. We propose a predictive coding model of spoken word recognition in which STG neurons represent the difference between predicted and heard speech sounds. This prediction error signal explains the efficiency of human word recognition and simulates neural responses in auditory regions.

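The segment-prediction account in abstract 18 can be made concrete with a toy lexicon: the probability of each upcoming segment given the prefix heard so far is computed from frequency-weighted prefix-matching words, and the prediction error is the surprisal of the segment actually heard. The letter-string "phonemes", the word list and the frequencies below are invented for illustration; with them, learning a competitor like "formubo" lowers surprisal on the segments it shares with "formula" and raises it at the point where the two words diverge, mirroring the qualitative pattern described above.

```python
# Toy segment-prediction model: surprisal (-log2 p) of each upcoming segment
# given the prefix heard so far, computed over a frequency-weighted lexicon.
# Words are written as letter strings instead of phonemes, and the lexicon
# and frequencies are invented, purely to illustrate the computation.
import numpy as np

def surprisal_profile(word, lexicon):
    """Per-segment surprisal of `word` under a prefix-matching lexicon."""
    out = []
    for i, seg in enumerate(word):
        prefix = word[:i]
        cands = {w: f for w, f in lexicon.items() if w.startswith(prefix)}
        total = sum(cands.values())
        p = sum(f for w, f in cands.items() if len(w) > i and w[i] == seg) / total
        out.append(-np.log2(max(p, 1e-12)))
    return out

base_lexicon = {"formula": 50, "formal": 80, "formidable": 10, "fortune": 40}
learned = dict(base_lexicon, formubo=5)   # lexicon after learning the novel word

before = surprisal_profile("formula", base_lexicon)
after = surprisal_profile("formula", learned)
for i, seg in enumerate("formula"):
    print(f"segment '{seg}': surprisal {before[i]:.2f} -> {after[i]:.2f} bits")
```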
19.
The visual world is complex and continuously changing. Yet, our brain transforms patterns of light falling on our retina into a coherent percept within a few hundred milliseconds. Possibly, low-level neural responses already carry substantial information to facilitate rapid characterization of the visual input. Here, we computationally estimated low-level contrast responses to computer-generated naturalistic images, and tested whether spatial pooling of these responses could predict image similarity at the neural and behavioral level. Using EEG, we show that statistics derived from pooled responses explain a large amount of variance between single-image evoked potentials (ERPs) in individual subjects. Dissimilarity analysis on multi-electrode ERPs demonstrated that large differences between images in pooled response statistics are predictive of more dissimilar patterns of evoked activity, whereas images with little difference in statistics give rise to highly similar evoked activity patterns. In a separate behavioral experiment, images with large differences in statistics were judged as different categories, whereas images with little difference were confused. These findings suggest that statistics derived from low-level contrast responses can be extracted in early visual processing and can be relevant for rapid judgment of visual similarity (a minimal sketch of such pooled contrast statistics is given after this abstract). We compared our results with two other well-known contrast statistics: Fourier power spectra and higher-order properties of contrast distributions (skewness and kurtosis). Interestingly, whereas these statistics allow for accurate image categorization, they do not predict ERP response patterns or behavioral categorization confusions. These converging computational, neural and behavioral results suggest that statistics of pooled contrast responses contain information that corresponds with perceived visual similarity in a rapid, low-level categorization task.

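The pooled contrast statistics of abstract 19 can be sketched by filtering an image with a local contrast operator and summarizing the pooled response distribution, alongside the comparison statistics mentioned in the abstract (Fourier power and skewness/kurtosis). The Laplacian-of-Gaussian filter, its scale, the simple mean/spread summaries and the two synthetic images are assumptions for illustration, not the statistics used in the study.

```python
# Sketch: summary statistics of spatially pooled local-contrast responses,
# alongside higher-order contrast moments and a crude Fourier-power measure.
# Filter scale and the two synthetic test images are illustrative assumptions.
import numpy as np
from scipy.ndimage import gaussian_laplace
from scipy.stats import skew, kurtosis

rng = np.random.default_rng(4)

def contrast_stats(img, sigma=2.0):
    """Summaries of spatially pooled local-contrast responses of one image."""
    resp = np.abs(gaussian_laplace(img.astype(float), sigma))  # local contrast responses
    pooled = resp.ravel()                                      # pool over the whole image
    return {
        "contrast_energy": pooled.mean(),     # overall strength of pooled responses
        "contrast_spread": pooled.std(),      # width of the pooled response distribution
        "skewness": skew(pooled),             # higher-order alternatives from the abstract
        "kurtosis": kurtosis(pooled),
        # total Fourier power with the mean removed: a crude stand-in for
        # full power-spectrum statistics
        "fourier_power": float(np.mean(np.abs(np.fft.fft2(img - img.mean())) ** 2)),
    }

# two toy "scenes": a smooth gradient versus high-contrast texture
smooth = np.outer(np.linspace(0, 1, 64), np.linspace(0, 1, 64))
texture = rng.uniform(0, 1, (64, 64))

for name, img in [("smooth scene", smooth), ("textured scene", texture)]:
    stats = contrast_stats(img)
    print(name, {k: round(v, 3) for k, v in stats.items()})
```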
20.
Neurogenesis continues in the brain of adult birds. These cells are born in the ventricular zone of the lateral ventricles. Young neurons then migrate long distances, guided in part by radial cell processes, and become incorporated throughout most of the telencephalon. In songbirds, the high vocal center (HVC), which is important for the production of learned song, receives many of its neurons after hatching. HVC neurons that project to the robust nucleus of the archistriatum, forming part of the efferent pathway for song production, and HVC interneurons continue to be added throughout life. In contrast, Area X-projecting HVC cells, thought to be part of a circuit necessary for song learning but not essential for adult song production, are only born in the embryo. New neurons in HVC of juvenile and adult birds replace older cells that die. There is a correlation between seasonal cell turnover rates (addition and loss) and testosterone levels in adult male canaries. Available evidence suggests that steroid hormones control the recruitment and/or survival of new HVC neurons, but not their production. The functions of neuronal replacement in adult birds remain unclear. However, rates of HVC neuron turnover are highest at times of year when canaries modify their songs. Replaceable HVC neurons may participate in the modification of perceptual memories or motor programs for song production. In contrast, permanent HVC neurons could hold long-lasting song-related information. The unexpected large-scale production of neurons in the adult brain holds important clues about brain function and, in particular, about the neural control of a learned behavior: birdsong. © 1997 John Wiley & Sons, Inc. J Neurobiol 33: 585–601, 1997
