Similar Articles
20 similar articles found.
1.
Human speech and bird vocalization are complex communicative behaviors with notable similarities in development and underlying mechanisms. However, there is an important difference between humans and birds in the way vocal complexity is generally produced. Human speech originates from independent modulatory actions of a sound source, e.g., the vibrating vocal folds, and an acoustic filter, formed by the resonances of the vocal tract (formants). Modulation in bird vocalization, in contrast, is thought to originate predominantly from the sound source, whereas the role of the resonance filter is only subsidiary, emphasizing the complex time-frequency patterns of the source (although exceptions have been noted). However, it has been suggested that, analogous to human speech production, tongue movements observed in parrot vocalizations modulate formant characteristics independently of the vocal source. As yet, direct evidence of such a causal relationship is lacking. In five Monk parakeets, Myiopsitta monachus, we replaced the vocal source, the syrinx, with a small speaker that generated a broad-band sound, and we measured the effects of tongue placement on the sound emitted from the beak. The results show that tongue movements cause significant frequency changes in two formants and amplitude changes in all four formants present between 0.5 and 10 kHz. We suggest that lingual articulation may thus in part explain the well-known ability of parrots to mimic human speech and, even more intriguingly, may also underlie a speech-like formant system in natural parrot vocalizations.

2.
Radio-transmitted (RT) calls of monkeys equipped with laryngeal microtransmitters are compared with those recorded by an external microphone (AT). Sharp attenuation of background noise and echoes results in better sonograms with RT than with AT sounds. Sensitive detection of unvoiced calls and phonatory noises provides insight into the motivational state of the animals and the mechanisms of their vocal production. However, the laryngophone acts as a low-pass filter that limits RT spectra to below 3 kHz. The constant distance and orientation between sound source and microphone allow absolute (low-pitched calls) or relative (high-pitched calls) intensity measurements. These measurements could be generalized by using a specific weighting filter that reconstitutes the original energy of the calls. The system has interesting applications in behavioral and ecological studies.

3.
High background noise is an important obstacle to successful signal detection and perception of an intended acoustic signal. To overcome this problem, many animals modify their acoustic signal by increasing the repetition rate, duration, amplitude or frequency range of the signal. An alternative strategy to ensure successful signal reception, yet to be tested in animals, involves the use of two different types of signal, where one signal type may enhance the other in periods of high background noise. Humpback whale communication signals comprise two different types: vocal signals, and surface-generated signals such as 'breaching' or 'pectoral slapping'. We found that humpback whales gradually switched from primarily vocal to primarily surface-generated communication as wind speeds and background noise levels increased, while keeping both signal types in their repertoire. Vocal signals have the advantage of higher information content but may have the disadvantage of losing this information in a noisy environment. Surface-generated sounds have energy distributed over a greater frequency range and may be less likely to be masked in periods of high wind-generated noise, but they have lower information content than vocal sounds. Therefore, surface-generated sounds may improve detection or enhance the perception of vocal signals in a noisy environment.

4.
Speech production involves the movement of the mouth and other regions of the face resulting in visual motion cues. These visual cues enhance intelligibility and detection of auditory speech. As such, face-to-face speech is fundamentally a multisensory phenomenon. If speech is fundamentally multisensory, it should be reflected in the evolution of vocal communication: similar behavioral effects should be observed in other primates. Old World monkeys share with humans vocal production biomechanics and communicate face-to-face with vocalizations. It is unknown, however, if they, too, combine faces and voices to enhance their perception of vocalizations. We show that they do: monkeys combine faces and voices in noisy environments to enhance their detection of vocalizations. Their behavior parallels that of humans performing an identical task. We explored what common computational mechanism(s) could explain the pattern of results we observed across species. Standard explanations or models such as the principle of inverse effectiveness and a "race" model failed to account for their behavior patterns. Conversely, a "superposition model", positing the linear summation of activity patterns in response to visual and auditory components of vocalizations, served as a straightforward but powerful explanatory mechanism for the observed behaviors in both species. As such, it represents a putative homologous mechanism for integrating faces and voices across primates.
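To make the contrast between the two candidate mechanisms concrete, the toy simulation below compares a "race" between two independent unisensory detection processes with a single detector whose auditory and visual inputs sum linearly, in the spirit of the superposition account. It is only an illustrative sketch: the drift rates, noise level and threshold are arbitrary assumptions, not parameters from the study.

```python
import numpy as np

rng = np.random.default_rng(1)
THRESH, DT, TRIALS = 1.0, 0.002, 1000   # detection threshold, time step (s), simulated trials

def first_passage(drift, noise=0.5):
    """Time for a noisy evidence accumulator with the given drift to reach THRESH."""
    t, x = 0.0, 0.0
    while x < THRESH:
        x += drift * DT + noise * np.sqrt(DT) * rng.standard_normal()
        t += DT
    return t

drift_aud, drift_vis = 2.0, 1.2          # arbitrary unisensory drift rates (assumptions)

rt_aud = np.array([first_passage(drift_aud) for _ in range(TRIALS)])
rt_vis = np.array([first_passage(drift_vis) for _ in range(TRIALS)])
rt_race = np.minimum(rt_aud, rt_vis)                      # race: first unisensory process to finish wins
rt_sup = np.array([first_passage(drift_aud + drift_vis)   # superposition: inputs sum into one accumulator
                   for _ in range(TRIALS)])

print(f"median RT  auditory: {np.median(rt_aud):.3f} s   visual: {np.median(rt_vis):.3f} s")
print(f"median RT  race: {np.median(rt_race):.3f} s   superposition: {np.median(rt_sup):.3f} s")
```

With these illustrative parameters, the summed-input accumulator reaches threshold sooner than either unisensory process alone; the race and superposition predictions generated this way can then be compared against observed bimodal reaction times.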

5.
Research into speech perception by nonhuman animals can be crucially informative in assessing whether specific perceptual phenomena in humans have evolved to decode speech or reflect more general traits. Birds share with humans not only the capacity to use complex vocalizations for communication but also many characteristics of the underlying developmental and mechanistic processes; thus, birds are a particularly interesting group for comparative study. This review first discusses commonalities between birds and humans in the perception of speech sounds. Several psychoacoustic studies have shown striking parallels in seemingly speech-specific perceptual phenomena, such as categorical perception of voice-onset-time variation, categorization of consonants that lack phonetic invariance, and compensation for coarticulation. Such findings are often regarded as evidence for the idea that the objects of human speech perception are auditory or acoustic events rather than articulations. Next, I highlight recent research on the production side of avian communication, which has revealed that vocal tract filtering and articulation, traditionally considered hallmarks of human speech production, also occur in birds' species-specific vocalizations. Together, findings in birds show that many characteristics of human speech perception are not uniquely human, but also that a comparative approach to the question of what the objects of perception are (articulatory or auditory events) requires careful consideration of species-specific vocal production mechanisms.

6.
Francis CD, Ortega CP, Cruz A. PLoS ONE. 2011;6(11):e27052

Background

Human-generated noise pollution now permeates natural habitats worldwide, presenting acoustic conditions that are evolutionarily novel for most landscapes. This noise not only harms humans but also threatens wildlife, especially birds, through changes in species densities, foraging behavior, reproductive success, and predator-prey interactions. Proposed explanations for the negative effects of noise on birds include disruption of acoustic communication through energetic masking, which could force species that rely on acoustic communication to abandon otherwise suitable areas. However, this hypothesis has not been adequately tested because confounding stimuli often co-vary with noise and are difficult to separate from noise exposure.

Methodology/Principal Findings

Using a natural experiment that controls for confounding stimuli, we evaluated whether species' vocal features or urban-tolerance classifications explain their responses to noise, measured through habitat use. Two data sets representing nesting and abundance responses reveal that noise filters bird communities nonrandomly. Signal duration and urban tolerance failed to explain species-specific responses; instead, birds with low-frequency signals, which are more susceptible to masking by noise, avoided noisy areas, whereas birds with higher-frequency vocalizations remained. Signal frequency was also negatively correlated with body mass, suggesting that larger birds may be more sensitive to noise because of the link between body size and vocal frequency.

Conclusions/Significance

Our findings suggest that acoustic masking by noise may be a strong selective force shaping the ecology of birds worldwide. Larger birds with lower-frequency signals may be excluded from noisy areas, whereas smaller species persist via transmission of higher-frequency signals. We discuss our findings as they relate to interspecific relationships among body size, vocal amplitude and frequency, and suggest that they are immediately relevant to the global problem of increasing noise by providing critical insight into which species traits influence tolerance of these novel acoustic conditions.

7.
Structural variation in acoustic signals may be related either to factors affecting sound production, such as bird morphology, or to vocal adaptations that improve sound transmission in different environments. Variation in acoustic signals can thus influence intraspecific communication processes and will ultimately affect divergence in allopatric populations. The study of geographical variation in the vocalizations of suboscines provides an opportunity to compare acoustic signals from different populations without the additional biases caused by the song learning and cultural evolution typical of oscines. The aim of this study was to compare vocalizations of distinct populations of a suboscine species, the Thorn-tailed Rayadito. Four types of vocalizations were recorded in five populations, including all three currently accepted subspecies. Comparisons of each type of vocalization among the five populations showed some variation in the repetitive trill, whereas no differences were found among alarm calls or loud trills. Variation in repetitive trills among populations and forest types suggests that sound transmission is involved in vocal differences in suboscines. The acoustic differences are also consistent with distinguishing the subspecies bullocki from spinicauda and fulva, but not the two latter subspecies from each other. Our results suggest that the geographical differentiation in vocalizations observed among Thorn-tailed Rayadito populations is likely a consequence of different ecological pressures. Given the innate origin of suboscine vocalizations, incipient genetic isolation of these populations is therefore suggested.

8.
Humans excel at assessing conspecific emotional valence and intensity, based solely on non-verbal vocal bursts that are also common in other mammals. It is not known, however, whether human listeners rely on similar acoustic cues to assess emotional content in conspecific and heterospecific vocalizations, and which acoustical parameters affect their performance. Here, for the first time, we directly compared the emotional valence and intensity perception of dog and human non-verbal vocalizations. We revealed similar relationships between acoustic features and emotional valence and intensity ratings of human and dog vocalizations: those with shorter call lengths were rated as more positive, whereas those with a higher pitch were rated as more intense. Our findings demonstrate that humans rate conspecific emotional vocalizations along basic acoustic rules, and that they apply similar rules when processing dog vocal expressions. This suggests that humans may utilize similar mental mechanisms for recognizing human and heterospecific vocal emotions.

9.
Vocal indicators of welfare have proven useful for many farmed and zoo animals and may be applicable to farmed silver foxes, as these animals display high vocal activity toward humans. Farmed silver foxes have been selected mainly for fur, size, and litter size, but not for their attitudes toward people, so they are fearful of humans and experience short-term welfare problems in their proximity. Here, we used a human approach test in which the fox–human distance was steadily increased and decreased, and we registered the vocal responses of 25 farmed silver foxes. We analyzed the features of vocalizations produced by the foxes at different fox–human distances, assuming that changes in vocal responses reflect the degree of human-related discomfort. To reveal discomfort-related vocal traits in farmed silver foxes, we proposed and tested a "joint calls" approach that is equally applicable to all calls regardless of their structure, whether tonal or noisy. We discuss how an increased proportion of time spent vocalizing and a shift of call energy toward higher frequencies may be integral vocal characteristics of short-term welfare problems in farmed silver foxes, and probably in other captive mammals.

10.
While vocal tract resonances or formants are key acoustic parameters that define differences between phonemes in human speech, little is known about their function in animal communication. Here, we used playback experiments to present red deer stags with re-synthesized vocalizations in which formant frequencies were systematically altered to simulate callers of different body sizes. In response to stimuli where lower formants indicated callers with longer vocal tracts, stags were more attentive, replied with more roars and extended their vocal tracts further in these replies. Our results indicate that mammals other than humans use formants in vital vocal exchanges and can adjust their own formant frequencies in relation to those that they hear.
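The link between vocal tract length and formant frequencies can be illustrated with the standard uniform-tube approximation (a tube closed at the glottis and open at the lips), in which the n-th resonance falls at (2n − 1)c / 4L. The sketch below uses hypothetical resting and extended tract lengths for a roaring stag; the numbers are illustrative assumptions, not measurements from the playback study.

```python
C = 350.0  # approximate speed of sound in warm, humid air (m/s)

def tube_formants(tract_length_m: float, n_formants: int = 4) -> list[float]:
    """Formant frequencies (Hz) of an idealized uniform tube closed at one end."""
    return [(2 * n - 1) * C / (4.0 * tract_length_m) for n in range(1, n_formants + 1)]

# Hypothetical resting vs. extended vocal-tract lengths for a roaring stag
for length in (0.55, 0.75):
    values = ", ".join(f"F{i + 1} = {f:.0f} Hz" for i, f in enumerate(tube_formants(length)))
    print(f"L = {length * 100:.0f} cm -> {values}")
```

Extending the tract lowers every formant, which is why lower formant frequencies can serve as a cue to a longer vocal tract and hence, within limits, a larger caller.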

11.
To advance knowledge of the vocal communication associated with close-proximity social interactions in Garnett's greater bush baby (Otolemur garnettii), we measured acoustic and temporal properties of vocalizations from videotaped recordings of captives in two main social contexts: mother-infant interactions and adult male-female pair introductions and reintroductions. We used a real-time sonagraph or software program to display, edit, and analyze vocal waveforms, and to produce wideband and narrowband spectrograms. The measured vocalization characteristics included fundamental frequency (assessed via inspection of harmonics), formant frequencies, intensity, and duration. The vocal repertoire contained four major types of vocalizations: 1) barks and complex multiple-bark sequences, 2) low-frequency flutter/hums and growls, 3) high-frequency clicks and spits, and 4) noisy shrieks. We describe several vocalizations for the first time and provide a clear classification of some of them on the basis of call duration (long/short growls). Complex bark sequences, previously described as long-distance communication calls, were invariant and were not often emitted by individuals in close proximity. When classified spectrographically, the remaining three call types, which occurred when individuals were in close proximity, were less stereotyped, and gradations within call types were apparent. Our results show that, although bush babies are nocturnal and non-gregarious, their complex communicative signals constitute a vocal repertoire formerly thought to be characteristic only of diurnal, gregarious primates.

12.
Bats rely heavily on acoustic signals in order to communicate with each other in a variety of social contexts. Among those, agonistic interactions and their accompanying vocalizations have received comparatively little study. Here, we studied the communicative behaviour between male greater mouse-eared bats (Myotis myotis) during agonistic encounters. Two randomly paired adult males were placed in a box that allowed us to record video and sound synchronously. We describe their vocal repertoire and compare the acoustic structure of vocalizations between two aggression levels, which we quantified via the bats' behaviour. By inspecting thirty one-minute-long encounters, we identified a rich variety of social calls that can be described as two basic call types: echolocation-like, low-frequency sweeps and long, broadband squawks. Squawks, the most common vocalization, were often noisy, i.e. exhibited a chaotic spectral structure. We further provide evidence for individual signatures and the presence of nonlinear phenomena in this species' vocal repertoire. As the usage and acoustic structure of vocalizations are known to encode the internal state of the caller, we predicted that the spectral structure of squawks would be affected by the caller's aggression level. Confirming our hypothesis, we found that higher aggression levels were associated with increased call frequency and tonality. We hypothesize that the extreme spectral variability between and within squawks can be explained by small fluctuations in vocal control parameters (e.g. subglottal pressure) caused by elevated arousal, which is in turn influenced by the aggression level.

13.
Many animals defend territories against conspecific individuals using acoustic signals. In birds, male vocalizations are known to play a critical role in territory defence. Territorial acoustic signals in females have been poorly studied, perhaps because female song is uncommon in north-temperate ecosystems. In this study, we compare male vs. female territorial singing behaviour in Neotropical rufous-and-white wrens Thryothorus rufalbus, a species where both sexes produce solo songs and often coordinate their songs in vocal duets. We recorded free-living birds in Costa Rica using an eight-microphone Acoustic Location System capable of passively triangulating the position of animals based on their vocalizations. We recorded 17 pairs of birds for 2–4 consecutive mornings and calculated the territory of each individual as a 95% fixed kernel estimate around their song posts. We compared territories calculated around male vs. female song posts, including separate analyses of solo vs. duet song posts. These spatial analyses of singing behaviour reveal that males and females use similarly sized territories with more than 60% overlap between breeding partners. Territories calculated based on solo vs. duet song posts were of similar size and similar degrees of overlap. Solos and duets were performed at similar distances from the nest for both sexes. Overall, male and female rufous-and-white wrens exhibit very similar spatial territorial singing behaviour, demonstrating congruent patterns of male and female territoriality.
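As an aside on the spatial method, a 95% fixed kernel estimate treats the recorded song posts as a sample from an underlying utilization distribution and takes the smallest region containing 95% of that distribution's mass as the territory. The sketch below illustrates the idea with Gaussian kernels, SciPy's default bandwidth and synthetic song-post coordinates; the study's exact bandwidth rule and software are not restated here.

```python
import numpy as np
from scipy.stats import gaussian_kde

def kernel_territory_area(x, y, iso=0.95, grid=200):
    """Approximate the area (same squared units as x, y) of the `iso` fixed-kernel isopleth."""
    kde = gaussian_kde(np.vstack([x, y]))            # Gaussian kernels, default bandwidth
    pad = 0.25 * max(np.ptp(x), np.ptp(y))
    xs = np.linspace(x.min() - pad, x.max() + pad, grid)
    ys = np.linspace(y.min() - pad, y.max() + pad, grid)
    X, Y = np.meshgrid(xs, ys)
    dens = kde(np.vstack([X.ravel(), Y.ravel()]))
    cell = (xs[1] - xs[0]) * (ys[1] - ys[0])
    order = np.sort(dens)[::-1]                      # densities from highest to lowest
    cum = np.cumsum(order) * cell                    # cumulative probability mass covered
    idx = min(np.searchsorted(cum, iso), len(order) - 1)
    threshold = order[idx]                           # density level enclosing `iso` of the mass
    return float(np.sum(dens >= threshold) * cell)

# Synthetic song posts (metres) for one hypothetical bird
rng = np.random.default_rng(0)
x, y = rng.normal(0.0, 30.0, 60), rng.normal(0.0, 30.0, 60)
print(f"95% kernel territory ~ {kernel_territory_area(x, y):.0f} m^2")
```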

14.
Undeniably, acoustic signals are the predominant mode of communication in frogs and toads. Acoustically active species are found throughout the vast diversity of anuran families. However, additional or alternative signal modalities have gained increasing attention. In several anurans, seismic, visual and chemical communication has convergently evolved in response to ecological constraints such as noisy environments. The production of a visual cue, such as the inevitably moving vocal sac of acoustically advertising males, is emphasized by conspicuously coloured throats. Limb movements accompanied by dynamic displays of bright colours are additional examples of striking visual signals independent of vocalizations. In some multimodal anuran communication systems, the acoustic component acts as an alert signal that draws the receiver's attention to the following visual display. Recent findings of colourful glands on vocal sacs that produce volatile, species-specific scent bouquets suggest that acoustic, visual and chemical cues may be integrated in species recognition and mate choice. The combination of signal components facilitates a broadened display repertoire under challenging environmental conditions. Thus, the complexity of the communication systems of frogs and toads may have been underestimated.

15.
We studied the advertisement signals in two clades of North American hylid frogs in order to characterize the relationships between signal acoustic structure and the underlying behavior. A mismatch was found between the acoustic structure and the mechanism of sound production. Two separate sets of phylogenetic characters were coded following acoustic versus mechanistic criteria, and exploratory treatments were made to compare their respective phylogenetic content against the molecular phylogeny (Faivovich et al., 2005). We discuss the consequences of the acoustic/mechanistic mismatch for the significance of acoustic characters in phylogenetic and comparative studies, and for the evolution of vocalizations in North American treefrogs. Considering only the acoustic structure of frog vocalizations can lead to misleading results in terms of both phylogenetic signal and the evolution of vocalizations. In contrast, interpreting the acoustic signals with regard to the mechanism of sound production results in consistent phylogenetic information. The mechanistic coding also provides strong homologies for use in comparative studies of frog vocalizations and for deriving and testing evolutionary hypotheses. © The Willi Hennig Society 2005.

16.
Antipredator vocalizations of social companions are important for facilitating long-term changes in the responses of prey to novel predator stimuli. However, dynamic variation in the time course of acoustic communication has important implications for learning of predator cues associated with auditory signals. While animals often experience acoustic signals simultaneously with predator cues, they may also at times experience signals and predator stimuli in succession. The ability to learn about stimuli that are perceived not only together with, but also after, acoustic signals has the potential to expand the range of opportunities for learning about novel events. Earlier work in Indian mynahs (Acridotheres tristis) has revealed that subjects acquire a visual exploratory response to a novel avian mount after they have experienced it together with conspecific distress vocalizations, a call type produced in response to seizure by a predator. The present study explored to what extent such learning occurred if the avian mount was experienced after, rather than simultaneously with, distress calls, as might happen if call production is interrupted by the prey's death. Results showed that mynahs that experienced a novel avian mount simultaneously with the sound of distress calls exhibited a sustained exploratory response to the mount after training relative to before, a response that was not apparent in birds that received the distress calls and the mount in succession. This finding suggests that vocal antipredator signals may only trigger learning of environmental stimuli with which they share some temporal overlap. Recipients may need to access complementary non-vocal cues from the prey victim to learn about predator stimuli that are perceived after the vocal behaviour.

17.
Social groups of capybaras are stable and cohesive. The species' vocal communication is complex and mediates social interaction. The click call is emitted in a variety of contexts by animals of all age groups but differs among groups; its attributed function is to maintain contact among animals. To evaluate the presence of individual characteristics in the click call of capybaras, we recorded the vocalizations emitted spontaneously by six adults kept either solitarily or in groups. We selected and measured the acoustic parameters of 300 click call phrases, 50 per individual. The parameters were submitted to a discriminant function analysis, which revealed a classification accuracy of 76.8%. A general linear model analysis revealed significant differences among the six individuals, and post hoc comparisons showed that each individual differed from every other individual. The acoustic parameters that contributed most to discriminating individual calls were click interval duration and click duration, suggesting that temporal parameters are more important than frequency parameters for individual discrimination. The finding of individual characteristics in the click calls indicates that these vocalizations can be used as vocal signatures during social interactions.
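For readers unfamiliar with the approach, the sketch below shows what a cross-validated discriminant function analysis of per-call acoustic parameters can look like in practice. The data are synthetic and the four feature columns (for example click duration, inter-click interval, peak frequency, bandwidth) are stand-ins, not the measurements from the capybara study.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

# Synthetic stand-in data: 6 individuals x 50 call phrases, 4 acoustic parameters per phrase.
rng = np.random.default_rng(0)
n_individuals, n_calls, n_features = 6, 50, 4
X = np.vstack([rng.normal(loc=rng.uniform(-1.0, 1.0, n_features), scale=0.6,
                          size=(n_calls, n_features))
               for _ in range(n_individuals)])
y = np.repeat(np.arange(n_individuals), n_calls)     # caller identity labels

# Linear discriminant analysis with 5-fold cross-validated classification accuracy.
lda = LinearDiscriminantAnalysis()
accuracy = cross_val_score(lda, X, y, cv=5).mean()
print(f"mean cross-validated classification accuracy: {accuracy:.1%}")
```

An accuracy well above the 1/6 chance level for six callers, such as the 76.8% reported in the study, indicates that the measured parameters carry individual signatures.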

18.
The physiological mechanisms and acoustic principles underlying sound production in primates are important for analyzing and synthesizing primate vocalizations, for determining the range of calls that are physically producible, and for understanding primate communication in the broader comparative context of what is known about communication in other vertebrates. In this paper we discuss what is known about vocal production in nonhuman primates, relying heavily on models from speech and musical acoustics. We first describe the role of the lungs and larynx in generating the sound source, and then discuss the effects of the supralaryngeal vocal tract in modifying this source. We conclude that more research is needed to resolve several important questions about the acoustics of primate calls, including the nature of the vocal tract's contribution to call production. Nonetheless, enough is known to explore the implications of call acoustics for the evolution of primate communication. In particular, we discuss how anatomy and physiology may provide constraints resulting in "honest" acoustic indicators of body size. © 1995 Wiley-Liss, Inc.

19.
Nityananda V, Bee MA. PLoS ONE. 2011;6(6):e21191
Vocal communication in crowded social environments is a difficult problem for both humans and nonhuman animals. Yet many important social behaviors require listeners to detect, recognize, and discriminate among signals in a complex acoustic milieu comprising the overlapping signals of multiple individuals, often of multiple species. Humans exploit a relatively small number of acoustic cues to segregate overlapping voices (as well as other mixtures of concurrent sounds, like polyphonic music). By comparison, we know little about how nonhuman animals are adapted to solve similar communication problems. One important cue enabling source segregation in human speech communication is that of frequency separation between concurrent voices: differences in frequency promote perceptual segregation of overlapping voices into separate "auditory streams" that can be followed through time. In this study, we show that frequency separation (ΔF) also enables frogs to segregate concurrent vocalizations, such as those routinely encountered in mixed-species breeding choruses. We presented female gray treefrogs (Hyla chrysoscelis) with a pulsed target signal (simulating an attractive conspecific call) in the presence of a continuous stream of distractor pulses (simulating an overlapping, unattractive heterospecific call). When the ΔF between target and distractor was small (e.g., ≤3 semitones), females exhibited low levels of responsiveness, indicating a failure to recognize the target as an attractive signal when the distractor had a similar frequency. Subjects became increasingly more responsive to the target, as indicated by shorter latencies for phonotaxis, as the ΔF between target and distractor increased (e.g., ΔF = 6–12 semitones). These results support the conclusion that gray treefrogs, like humans, can exploit frequency separation as a perceptual cue to segregate concurrent voices in noisy social environments. The ability of these frogs to segregate concurrent voices based on frequency separation may involve ancient hearing mechanisms for source segregation shared with humans and other vertebrates.
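To give the ΔF values a concrete scale: a separation of n semitones corresponds to a frequency ratio of 2^(n/12), so 12 semitones is a doubling. The snippet below converts the separations used in the experiment into hertz for an arbitrary, assumed target frequency of 1 kHz; the actual call frequencies used in the study are not restated here.

```python
def semitone_ratio(n_semitones: float) -> float:
    """Frequency ratio spanned by n semitones: 2 ** (n / 12)."""
    return 2.0 ** (n_semitones / 12.0)

TARGET_HZ = 1000.0  # assumed target-call frequency, for illustration only
for delta_f in (3, 6, 12):
    distractor_hz = TARGET_HZ * semitone_ratio(delta_f)
    print(f"ΔF = {delta_f:2d} semitones -> ratio {semitone_ratio(delta_f):.2f}, "
          f"distractor at {distractor_hz:.0f} Hz for a {TARGET_HZ:.0f} Hz target")
```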

20.
