Similar Documents

20 similar documents retrieved (search time: 13 ms)
1.
Songbirds are one of the few groups of animals that learn the sounds used for vocal communication during development. Like humans, songbirds memorize vocal sounds based on auditory experience with vocalizations of adult “tutors”, and then use auditory feedback of self-produced vocalizations to gradually match their motor output to the memory of tutor sounds. In humans, investigations of early vocal learning have focused mainly on perceptual skills of infants, whereas studies of songbirds have focused on measures of vocal production. In order to fully exploit songbirds as a model for human speech, understand the neural basis of learned vocal behavior, and investigate links between vocal perception and production, studies of songbirds must examine both behavioral measures of perception and neural measures of discrimination during development. Here we used behavioral and electrophysiological assays of the ability of songbirds to distinguish vocal calls of varying frequencies at different stages of vocal learning. The results show that neural tuning in auditory cortex mirrors behavioral improvements in the ability to make perceptual distinctions of vocal calls as birds are engaged in vocal learning. Thus, separate measures of neural discrimination and behavioral perception yielded highly similar trends during the course of vocal development. The timing of this improvement in the ability to distinguish vocal sounds correlates with our previous work showing substantial refinement of axonal connectivity in cortico-basal ganglia pathways necessary for vocal learning.

2.
Auditory Gestalt perception by grouping of species-specific vocalizations to a perceptual stream with a defined meaning is typical for human speech perception but has not been studied in non-human mammals so far. Here we use synthesized models of vocalizations (series of wriggling calls) of mouse pups (Mus domesticus) and show that their mothers perceive the call series as a meaningful Gestalt for the release of instinctive maternal behavior, if the inter-call intervals have durations of 100–400 ms. Shorter or longer inter-call intervals significantly reduce the maternal responsiveness. We also show that series of natural wriggling calls have inter-call intervals mainly in the range of 100–400 ms. Thus, series of natural wriggling calls of pups match the time-domain auditory filters of their mothers in order to be optimally perceived and recognized. A similar time window exists for the production of human speech and the perception of series of sounds by humans. Neural mechanisms for setting the boundaries of the time window are discussed.

3.
We carried out a comparative study of spectral-prosodic characteristics of bird vocalization and human speech. Comparison was made between the relative characteristics of the fundamental frequency and spectral maxima. Criteria were formulated for the comparison of birds' signals and human speech. A certain correspondence was found between the vocal structures of birds and humans. It was proposed that, in the course of evolution, humans adopted the main structural principles of their acoustic signalling from birds.

4.
Peter Marler made a number of significant contributions to the field of ethology, particularly in the area of animal communication. His research on birdsong learning gave rise to a thriving subfield. An important tenet of this growing subfield is that parallels between birdsong and human speech make songbirds valuable as models in comparative and translational research, particularly in the case of vocal learning and development. Decades ago, Marler pointed out several phenomena common to the processes of vocal development in songbirds and humans—including a dependence on early acoustic experience, sensitive periods, predispositions, auditory feedback, intrinsic reinforcement, and a progression through distinct developmental stages—and he advocated for the value of comparative study in this domain. We review Marler's original comparisons between birdsong and speech ontogeny and summarize subsequent progress in research into these and other parallels. We also revisit Marler's arguments in support of the comparative study of vocal development in the context of its widely recognized value today.

5.
Language is a uniquely human trait, and questions of how and why it evolved have been intriguing scientists for years. Nonhuman primates (hereafter, primates) are our closest living relatives, and their behavior can be used to estimate the capacities of our extinct ancestors. As humans and many primate species rely on vocalizations as their primary mode of communication, the vocal behavior of primates has been an obvious target for studies investigating the evolutionary roots of human speech and language. By studying the similarities and differences between human and primate vocalizations, comparative research has the potential to clarify the evolutionary processes that shaped human speech and language. This review examines some of the seminal and recent studies that contribute to our knowledge regarding the link between primate calls and human language and speech. We focus on three main aspects of primate vocal behavior: functional reference, call combinations, and vocal learning. Studies in these areas indicate that despite important differences, primate vocal communication exhibits some key features characterizing human language. They also indicate, however, that some critical aspects of speech, such as vocal plasticity, are not shared with our primate cousins. We conclude that comparative research on primate vocal behavior is a very promising tool for deepening our understanding of the evolution of human speech and language, but much is still to be done as many aspects of monkey and ape vocalizations remain largely unexplored.

6.
Although vocal communication is widespread in the animal kingdom, the use of learned (in contrast to innate) vocalizations is very rare, found only in a few animal taxa: humans, bats, whales and dolphins, elephants, parrots, hummingbirds, and songbirds. There are several parallels between human and songbird perception and production of vocal signals; hence, many studies use songbird singing to investigate the neural bases of learning and memory. The brain circuits controlling song learning and maintenance consist of two pathways: a vocal motor pathway responsible for the production of learned vocalizations and an anterior forebrain pathway responsible for learning and modifying the vocalizations. This review provides an overview of song organization, its behavioural traits, and its neural regulation. The expanding area of molecular mapping of behaviour-driven gene expression in the brain represents one modern approach to studying the function of vocal and auditory areas in song learning and maintenance in birds.

7.
Vocal imitation in human infants and in some orders of birds relies on auditory-guided motor learning during a sensitive period of development. It proceeds from 'babbling' (in humans) and 'subsong' (in birds) through distinct phases towards the full-fledged communication system. Language development and birdsong learning have parallels at the behavioural, neural and genetic levels. Different orders of birds have evolved networks of brain regions for song learning and production that have a surprisingly similar gross anatomy, with analogies to human cortical regions and basal ganglia. Comparisons between different songbird species and humans point towards both general and species-specific principles of vocal learning and have identified common neural and molecular substrates, including the forkhead box P2 (FOXP2) gene.

8.
Speech production involves the movement of the mouth and other regions of the face resulting in visual motion cues. These visual cues enhance intelligibility and detection of auditory speech. As such, face-to-face speech is fundamentally a multisensory phenomenon. If speech is fundamentally multisensory, it should be reflected in the evolution of vocal communication: similar behavioral effects should be observed in other primates. Old World monkeys share with humans vocal production biomechanics and communicate face-to-face with vocalizations. It is unknown, however, if they, too, combine faces and voices to enhance their perception of vocalizations. We show that they do: monkeys combine faces and voices in noisy environments to enhance their detection of vocalizations. Their behavior parallels that of humans performing an identical task. We explored what common computational mechanism(s) could explain the pattern of results we observed across species. Standard explanations or models such as the principle of inverse effectiveness and a "race" model failed to account for their behavior patterns. Conversely, a "superposition model", positing the linear summation of activity patterns in response to visual and auditory components of vocalizations, served as a straightforward but powerful explanatory mechanism for the observed behaviors in both species. As such, it represents a putative homologous mechanism for integrating faces and voices across primates.

9.
Species-specific vocalizations fall into two broad categories: those that emerge during maturation, independent of experience, and those that depend on early life interactions with conspecifics. Human language and the communication systems of a small number of other species, including songbirds, fall into this latter class of vocal learning. Self-monitoring has been assumed to play an important role in the vocal learning of speech, and studies demonstrate that perception of one's own voice is crucial for both the development and lifelong maintenance of vocalizations in humans and songbirds. Experimental modifications of auditory feedback can also change vocalizations in both humans and songbirds. However, with the exception of large manipulations of timing, no study to date has directly examined the use of auditory feedback in speech production under the age of 4. Here we use a real-time formant perturbation task to compare the responses of toddlers, children, and adults to altered feedback. Children and adults reacted to this manipulation by changing their vowels in a direction opposite to the perturbation. Surprisingly, toddlers' speech did not change in response to altered feedback, suggesting that long-held assumptions regarding the role of self-perception in articulatory development need to be reconsidered.

10.
Considerable knowledge is available on the neural substrates for speech and language from brain-imaging studies in humans, but until recently there was a lack of data for comparison from other animal species on the evolutionarily conserved brain regions that process species-specific communication signals. To obtain new insights into the relationship of the substrates for communication in primates, we compared the results from several neuroimaging studies in humans with those that have recently been obtained from macaque monkeys and chimpanzees. The recent work in humans challenges the longstanding notion of highly localized speech areas. As a result, the brain regions that have been identified in humans for speech and nonlinguistic voice processing show a striking general correspondence to how the brains of other primates analyze species-specific vocalizations or information in the voice, such as voice identity. The comparative neuroimaging work has begun to clarify evolutionary relationships in brain function, supporting the notion that the brain regions that process communication signals in the human brain arose from a precursor network of regions that is present in nonhuman primates and is used for processing species-specific vocalizations. We conclude by considering how the stage now seems to be set for comparative neurobiology to characterize the ancestral state of the network that evolved in humans to support language.

11.
We conducted a comparative study of the peripheral auditory system in six avian species (downy woodpeckers, Carolina chickadees, tufted titmice, white-breasted nuthatches, house sparrows, and European starlings). These species differ in the complexity and frequency characteristics of their vocal repertoires. Physiological measures of hearing were collected on anesthetized birds using the auditory brainstem response to broadband click stimuli. If auditory brainstem response patterns are phylogenetically conserved, we predicted woodpeckers, sparrows, and starlings to be outliers relative to the other species, because woodpeckers are in a different Order (Piciformes) and, within the Order Passeriformes, sparrows and starlings are in different Superfamilies than the nuthatches, chickadees, and titmice. However, nuthatches and woodpeckers have the simplest vocal repertoires at the lowest frequencies of these six species. If auditory brainstem responses correlate with vocal complexity, therefore, we would predict nuthatches and woodpeckers to be outliers relative to the other four species. Our results indicate that auditory brainstem response measures in the spring broadly correlated with both vocal complexity and, in some cases, phylogeny. However, these auditory brainstem response patterns shift from spring to winter due to species-specific seasonal changes. These seasonal changes suggest plasticity at the auditory periphery in adult birds.

12.
Formants are important phonetic elements of human speech that are also used by humans and non-human mammals to assess the body size of potential mates and rivals. As a consequence, it has been suggested that formant perception, which is crucial for speech perception, may have evolved through sexual selection. Somewhat surprisingly, though, no previous studies have examined whether sexes differ in their ability to use formants for size evaluation. Here, we investigated whether men and women differ in their ability to use the formant frequency spacing of synthetic vocal stimuli to make auditory size judgements over a wide range of fundamental frequencies (the main determinant of vocal pitch). Our results reveal that men are significantly better than women at comparing the apparent size of stimuli, and that lower pitch improves the ability of both men and women to perform these acoustic size judgements. These findings constitute the first demonstration of a sex difference in formant perception, and lend support to the idea that acoustic size normalization, a crucial prerequisite for speech perception, may have been sexually selected through male competition. We also provide the first evidence that vocalizations with relatively low pitch improve the perception of size-related formant information.

13.
Although formants (vocal tract resonances) can often be observed in avian vocalizations, and several bird species have been shown to perceive formants in human speech sounds, no studies have examined formant perception in birds' own species-specific calls. We used playbacks of computer-synthesized crane calls in a modified habituation-dishabituation paradigm to test for formant perception in whooping cranes (Grus americana). After habituating birds to recordings of natural contact calls, we played a synthesized replica of one of the habituating stimuli as a control to ensure that the synthesizer worked adequately; birds dishabituated in only one of 13 cases. Then, we played the same call with its formant frequencies shifted. The birds dishabituated to the formant-shifted calls in 10 out of 12 playbacks. These data suggest that cranes perceive and attend to changes in formant frequencies in their own species-specific vocalizations, and are consistent with the hypothesis that formants can provide acoustic cues to individuality and body size.

14.
范艳珠, 方光战. 《动物学杂志》 2016, 51(6): 1118-1128
Acoustic communication encompasses the production and transmission of calls and the perception of, and behavioral response to, those calls. For most anurans, competition among males (male-male competition) and female mate recognition and choice depend almost entirely on acoustic communication, so accurate and timely transmission and reception of acoustic information plays a decisive role in the survival and reproduction of frogs. This paper summarizes the characteristics of frog calls and the mechanisms of their production, reviews the functions of acoustic communication in anuran sexual selection and their coevolution, and discusses the neural mechanisms of call perception and the endocrine mechanisms of acoustic communication. Finally, we outline future directions for research on anuran acoustic communication and propose possible approaches.

15.
Our ability to perceive person identity from other human voices has been described as prodigious. However, emerging evidence points to limitations in this skill. In this study, we investigated the recent and striking finding that identity perception from spontaneous laughter, a frequently occurring and important social signal in human vocal communication, is significantly impaired relative to identity perception from volitional (acted) laughter. We report the findings of an experiment in which listeners made speaker discrimination judgements from pairs of volitional and spontaneous laughter samples. The experimental design employed a range of different conditions, designed to disentangle the effects of laughter production mode versus perceptual features on the extraction of speaker identity. We find that the major driving factor of reduced accuracy for spontaneous laughter is not its perceived emotional quality but rather its distinct production mode, which is phylogenetically homologous with other primates. These results suggest that identity-related information is less successfully encoded in spontaneously produced (laughter) vocalisations. We therefore propose that claims for a limitless human capacity to process identity-related information from voices may be linked to the evolution of volitional vocal control and the emergence of articulate speech.

16.
Among topics related to the evolution of language, the evolution of speech is particularly fascinating. Early theorists believed that it was the ability to produce articulate speech that set the stage for the evolution of the "special" speech processing abilities that exist in modern-day humans. Prior to the evolution of speech production, speech processing abilities were presumed not to exist. The data reviewed here support a different view. Two lines of evidence, one from young human infants and the other from infrahuman species, neither of whom can produce articulate speech, show that in the absence of speech production capabilities, the perception of speech sounds is robust and sophisticated. Human infants and non-human animals evidence auditory perceptual categories that conform to those defined by the phonetic categories of language. These findings suggest the possibility that in evolutionary history the ability to perceive rudimentary speech categories preceded the ability to produce articulate speech. This in turn suggests that it may be audition that structured, at least initially, the formation of phonetic categories.

17.
Behavioral coordination and synchrony contribute to a common biological mechanism that maintains communication, cooperation and bonding within many social species, such as primates and birds. Similarly, human language and social systems may also be attuned to coordination to facilitate communication and the formation of relationships. Gross similarities in movement patterns and convergence in the acoustic properties of speech have already been demonstrated between interacting individuals. In the present studies, we investigated how coordinated movements contribute to observers' perception of affiliation (friends vs. strangers) between two conversing individuals. We used novel computational methods to quantify motor coordination and demonstrated that individuals familiar with each other coordinated their movements more frequently. Observers used coordination to judge affiliation between conversing pairs but only when the perceptual stimuli were restricted to head and face regions. These results suggest that observed movement coordination in humans might contribute to perceptual decisions based on availability of information to perceivers.

18.
The perception of vowels was studied in chimpanzees and humans, using a reaction time task in which reaction times for discrimination of vowels were taken as an index of similarity between vowels. The vowels used were five synthetic and natural Japanese vowels and eight natural French vowels. The chimpanzees required long reaction times for discrimination of synthetic [i] from [u] and [e] from [o]; that is, they needed long latencies for discrimination between vowels based on differences in the frequency of the second formant. A similar tendency was observed for discrimination of natural [i] from [u]. The human subject required long reaction times for discrimination between vowels along the first formant axis. These differences can be explained by differences in auditory sensitivity between the two species and the motor theory of speech perception. A vowel pronounced by different speakers has different acoustic properties; however, humans can perceive these speech sounds as the same vowel. This phenomenon of perceptual constancy in speech perception was studied in chimpanzees using natural vowels and a synthetic [o]-[a] continuum. The chimpanzees ignored the difference in the sex of the speakers and showed a capacity for vocal tract normalization.

19.
The ethological approach has already provided rich insights into the auditory neurobiology of a number of different taxa (e.g. birds, frogs and insects). Understanding the ethology of primates is likely to yield similar insights into the specializations of this taxon's auditory system for processing species-specific vocalisations. Here, we review the recent advances made in our understanding of primate vocal perception and its neural basis.

20.
Vocal learning in songbirds and humans occurs by imitation of adult vocalizations. In both groups, vocal learning includes a perceptual phase during which juvenile birds and infants memorize adult vocalizations. Despite intensive research, the neural mechanisms supporting this auditory memory are still poorly understood. The present functional MRI study demonstrates that in adult zebra finches, the right auditory midbrain nucleus responds selectively to the copied vocalizations. The selective signal is distinct from selectivity for the bird's own song and does not simply reflect acoustic differences between the stimuli. Furthermore, the amplitude of the selective signal is positively correlated with the strength of vocal learning, measured by the amount of song that experimental birds copied from the adult model. These results indicate that early sensory experience can generate a long-lasting memory trace in the auditory midbrain of songbirds that may support song learning.


Copyright © 北京勤云科技发展有限公司 | 京ICP备09084417号