Similar Documents
20 similar documents found (search time: 62 ms)
1.
Reaction time and recognition accuracy for emotional speech intonations in short, meaningless words differing in only one phoneme, with and without background noise, were studied in 49 adults aged 20-79 years. The results were compared with the same parameters for emotional intonations in meaningful speech utterances under similar conditions. Perception of emotional intonations at different linguistic levels (phonological and lexico-semantic) was found to have both common features and certain peculiarities. The recognition characteristics of emotional intonations as a function of listener gender and age appeared to be invariant with regard to the linguistic level of the speech stimuli. The phonemic composition of the pseudowords was found to influence emotional perception, especially against background noise. Under both experimental conditions, i.e. with and without background noise, the acoustic characteristic of the stimuli most responsible for the perception of emotional speech prosody in short meaningless words was variation in the fundamental frequency.

2.
Shrews have rich vocal repertoires that include vocalizations within the human audible frequency range and ultrasonic vocalizations. Here, we recorded and analyzed in detail the acoustic structure of a vocalization with unclear functional significance that was spontaneously produced by 15 adult, captive Asian house shrews (Suncus murinus) while they were lying motionless and resting in their nests. This vocalization was usually emitted repeatedly in a long series with regular intervals. It showed some structural variability; however, the shrews most frequently emitted a tonal, low-frequency vocalization with minimal frequency modulation and a low, non-vocal click that was clearly noticeable at its beginning. There was no effect of sex, but the acoustic structure of the analyzed vocalizations differed significantly between individual shrews. The encoded individuality was low, but it cannot be excluded that this individuality would allow discrimination of family members, i.e., a male and female with their young, collectively resting in a common nest. The question remains whether the Asian house shrews indeed perceive the presence of their mates, parents or young resting in a common nest via the resting-associated vocalization and whether they use it to discriminate among their family members. Additional studies are needed to explain the possible functional significance of resting-associated vocalizations emitted by captive Asian house shrews. Our study highlights that the acoustic communication of shrews is a relatively understudied topic, particularly considering that they are highly vocal mammals.

3.
Selective breeding and natural selection that select for one trait often bring along other correlated traits via coselection. Selective breeding for an infantile trait, high or low call rates of isolation-induced ultrasonic vocalization of rat pups, also alters functions of some brain systems and emotional behaviors throughout life. We examined the effect of breeding for call rate on acoustic parameters that are of communicative significance. Selecting for higher call rate produced calls of significantly increased amplitude and bandwidth relative to a randomly bred line. Selecting for lower rate produced calls of decreased duration. These nonmorphological, functional trait changes demonstrate enhanced communicatory potential and energy expenditure for the High line and the opposite for the Low line. This demonstration of coselection in a communicatory system suggests an underlying heritable suite of linked acoustic vocalization characteristics that in noisy environments could enhance dam–pup communication and lead to selection of emotionality traits with beneficial responses to stress.

4.

Background

Recent research has addressed the suppression of cortical sensory responses to altered auditory feedback at the onset of a speech utterance. However, there is reason to assume that the mechanisms underlying sensorimotor processing at mid-utterance differ from those involved in sensorimotor control at utterance onset. The present study examined the dynamics of event-related potentials (ERPs) to different acoustic versions of auditory feedback at mid-utterance.

Methodology/Principal findings

Subjects produced a vowel sound while hearing their pitch-shifted voice (100 cents), a sum of their vocalization and pure tones, or a sum of their vocalization and white noise at mid-utterance via headphones. Subjects also passively listened to playback of what they heard during active vocalization. Cortical ERPs were recorded in response to different acoustic versions of feedback changes during both active vocalization and passive listening. The results showed that, relative to passive listening, active vocalization yielded enhanced P2 responses to the 100 cents pitch shifts, whereas suppression effects of P2 responses were observed when voice auditory feedback was distorted by pure tones or white noise.

Conclusion/Significance

The present findings, for the first time, demonstrate a dynamic modulation of cortical activity as a function of the quality of acoustic feedback at mid-utterance, suggesting that auditory cortical responses can be enhanced or suppressed to distinguish self-produced speech from externally-produced sounds.

5.
All species in the genus Macaca produce a set of harmonically rich vocalizations known as “coos”. Extensive acoustic variation occurs within this call type, a large proportion of which is thought to be associated with different social contexts such as mother-infant separation and the discovery of food. Prior studies of these calls have not taken into account the potential contributions of individual differences and changes in emotional or motivational state. To understand the function of a call and the perceptual salience of different acoustic features, however, it is important to determine the different sources of acoustic variation. I present data on the coo vocalization of rhesus macaques (M. mulatta) and attempt to establish some of the causes of acoustic variation. A large proportion of the variation observed was due to differences between individuals and to putative changes in arousal, not to differences in social context. Specifically, results from a discriminant-function analysis indicated that coo exemplars were accurately assigned to the appropriate individual, but vocal “signatures” were more variable in some contexts than in others. Moreover, vocal signatures may not always be reliable cues to caller identity because closely related individuals sound alike. Rhesus macaque coos evidently provide sufficient acoustic information for individual recognition and possibly kin recognition, but are unlikely to provide sufficient information about an external referent.

6.
7.
In an earlier study, we found that humans were able to correctly categorize dog barks recorded in various situations. Acoustic parameters such as tonality, pitch and inter-bark time intervals seemed to have a strong effect on how human listeners described the emotionality of these dog vocalisations. In this study, we investigated whether the acoustic parameters of dog barks affect human listeners in the way that studies of other mammalian species would predict (for example, low, hoarse sounds indicating aggression; high-pitched, tonal sounds indicating subordinance/fear). People with different levels of experience with dogs were asked to describe the emotional content of several artificially assembled bark sequences in terms of five emotional states (aggressiveness, fear, despair, playfulness, happiness). The barks were selected for low, medium and high values of tonality and peak frequency, and the artificial sequences were assembled with short, middle or long inter-bark intervals. We found that humans with different levels of experience with dogs described the emotional content of the bark sequences quite similarly, and the extent of previous experience with the given breed (Mudi), or with dogs in general, did not cause characteristic differences in the emotionality scores. The scoring of the emotional content of the bark sequences accorded with the so-called Morton's structural–acoustic rules: low-pitched barks were described as aggressive, while tonal and high-pitched barks were scored as either fearful or desperate, but always without aggressiveness. In general, the tonality of a bark sequence had much less effect than the pitch of the sounds.
We also found that inter-bark intervals had a strong effect on how human listeners judged the emotionality of dog barks: bark sequences with short inter-bark intervals were scored as aggressive, whereas sequences with longer inter-bark intervals received low aggression scores. High-pitched bark sequences with long inter-bark intervals were considered happy and playful, independent of their tonality. These findings show that dog barks function as predicted by the structural–motivational rules developed for acoustic signals in other species, suggesting that dog barks may constitute a functional communication system, at least in the dog–human relationship. In sum, it seems that many different emotions can be expressed by varying as few as three acoustic parameters.

8.
In this study, we present a methodology that identifies acoustic units in Gunnison's prairie dog alarm calls and then uses those units to classify the alarm calls and bouts according to the species of predator that was present when the calls were vocalized. While traditional methods measure specific acoustic parameters in order to describe a vocalization, our method uses the variation in the internal structure of a vocalization to define possible information structures. Using a simple representation similar to that used in human speech to identify vowel sounds, a software system was developed that uses this representation to recognize acoustic units in prairie dog alarm calls. These acoustic units are then used to classify alarm calls and their associated bouts according to the species of predator that was present when the alarm calls were vocalized. Identification of bouts with up to 100% accuracy was obtained. This work represents a first step toward revealing the details of how information is encoded in a complex nonhuman communication system. Furthermore, the techniques discussed in this paper are not restricted to a database of prairie dog alarm calls. They could be applied to any animal whose vocalizations include multiple simultaneous frequencies.
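The unit-based classification described above can be sketched in a toy form: assign each spectral frame of a call to its nearest prototype "unit", summarize the call as a histogram of units, and label it with the predator class whose reference histogram it overlaps most. The prototypes, feature space and reference classes below are illustrative assumptions, not values from the study.

```python
from collections import Counter
import math

# Hypothetical acoustic units: points in a 2-D feature space
# (e.g., first two spectral peaks, in kHz). Illustrative values only.
UNIT_PROTOTYPES = {
    "u1": (1.0, 2.0),
    "u2": (2.5, 4.0),
    "u3": (4.0, 6.5),
}

def nearest_unit(frame):
    """Assign a spectral frame to the closest prototype unit."""
    return min(UNIT_PROTOTYPES,
               key=lambda u: math.dist(frame, UNIT_PROTOTYPES[u]))

def call_signature(frames):
    """Summarize a call as a histogram of its acoustic units."""
    return Counter(nearest_unit(f) for f in frames)

def classify(call_frames, reference_signatures):
    """Label a call with the predator class whose reference histogram
    it overlaps most (simple histogram-intersection score)."""
    sig = call_signature(call_frames)
    def overlap(ref):
        return sum(min(sig[u], ref[u]) for u in UNIT_PROTOTYPES)
    return max(reference_signatures,
               key=lambda label: overlap(reference_signatures[label]))
```

A real system would learn the units from data and work on full spectrogram frames, but the pipeline shape (frames → units → histogram → class) is the same.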

9.
In this paper we report results from experiments with a database of emotional speech in English, aimed at finding the most important acoustic features for estimating the emotion primitives that determine the emotional content of speech. We are interested in exploiting the potential benefits of continuous emotion models; we therefore demonstrate the feasibility of applying this approach to the annotation of emotional speech and explore ways to take advantage of this kind of annotation to improve the automatic classification of basic emotions.
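The two-stage idea behind continuous emotion models can be sketched minimally: a learned linear mapping estimates a primitive (e.g., arousal or valence) from acoustic features, and a point in primitive space is then mapped to the nearest basic-emotion category. The prototype coordinates and weights below are assumptions for illustration, not values from the paper.

```python
import math

# Illustrative only: basic emotions placed in a 2-D primitive space
# (valence, arousal); coordinates are assumed, not from the paper.
EMOTION_PROTOTYPES = {
    "happy":   ( 0.8,  0.6),
    "angry":   (-0.6,  0.8),
    "sad":     (-0.7, -0.5),
    "neutral": ( 0.0,  0.0),
}

def estimate_primitive(features, weights, bias):
    """Linear mapping from acoustic features (e.g., pitch mean, energy)
    to one emotion primitive; the weights would be learned by regression."""
    return sum(f * w for f, w in zip(features, weights)) + bias

def primitives_to_emotion(valence, arousal):
    """Map a point in primitive space to the nearest basic emotion."""
    return min(EMOTION_PROTOTYPES,
               key=lambda e: math.dist((valence, arousal),
                                       EMOTION_PROTOTYPES[e]))
```

The benefit of the continuous representation is that annotation and classification decouple: annotators rate primitives on continuous scales, and category labels fall out of the geometry.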

10.
How the human auditory system extracts perceptually relevant acoustic features of speech is unknown. To address this question, we used intracranial recordings from nonprimary auditory cortex in the human superior temporal gyrus to determine what acoustic information in speech sounds can be reconstructed from population neural activity. We found that slow and intermediate temporal fluctuations, such as those corresponding to syllable rate, were accurately reconstructed using a linear model based on the auditory spectrogram. However, reconstruction of fast temporal fluctuations, such as syllable onsets and offsets, required a nonlinear sound representation based on temporal modulation energy. Reconstruction accuracy was highest within the range of spectro-temporal fluctuations that have been found to be critical for speech intelligibility. The decoded speech representations allowed readout and identification of individual words directly from brain activity during single trial sound presentations. These findings reveal neural encoding mechanisms of speech acoustic parameters in higher order human auditory cortex.
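The linear reconstruction step used in such decoding studies can be illustrated with a toy least-squares decoder. The data here are synthetic and one-dimensional (the actual model mapped population activity to a full auditory spectrogram), and reconstruction accuracy is measured, as is typical, by the correlation between actual and reconstructed stimulus.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (assumed dimensions): reconstruct a stimulus feature time
# series S (T samples) from N neural channels R (T x N) via S ~ R @ w.
T, N = 200, 8
true_w = rng.normal(size=N)
R = rng.normal(size=(T, N))                 # population neural activity
S = R @ true_w + 0.1 * rng.normal(size=T)   # stimulus feature + noise

def fit_linear_decoder(R, S):
    """Least-squares linear reconstruction filter."""
    w, *_ = np.linalg.lstsq(R, S, rcond=None)
    return w

def reconstruction_accuracy(R, S, w):
    """Correlation between actual and reconstructed stimulus."""
    return float(np.corrcoef(S, R @ w)[0, 1])

w = fit_linear_decoder(R, S)
acc = reconstruction_accuracy(R, S, w)
```

The paper's contrast between linear and nonlinear representations amounts to swapping the input features of this regression: a spectrogram for slow fluctuations, modulation energy for fast ones.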

11.
Limited information is available regarding the acoustic communication of Antillean manatees; however, studies have shown that other manatee taxa produce vocalizations as a method of individual recognition and communication. Here, the acoustic signals of 15 Antillean manatees in captivity were recorded, aiming to (1) describe their acoustic repertoire, (2) investigate the influence of sex and age on vocalization, and (3) examine manatee responses to call playback. Six acoustic signals ranging in mean fundamental frequency from 0.64 kHz to 5.23 kHz were identified: squeaks and screeches were common to adult males, adult females, and juveniles; trills were common to adult males and females; whines were specific to males; creaks were specific to females; and rubbing was specific to juveniles. The structure of squeak vocalizations was significantly different between age and sex classes, and screech structure was significantly different between age classes. Squeaks and screeches produced by juveniles had higher frequencies of maximum energy than those produced by adult males and females. A significant increase in vocalization rate following playbacks was found for all three age/sex groups. Our results point to the potential of using acoustic signals to identify and noninvasively monitor manatees in the wild in Brazil.

12.
Individually distinct communication signals (‘signatures’) have been documented in a variety of taxa across signal modalities and they serve a host of important functions. However, studies have rarely examined the temporal stability of these signals. Cooperatively breeding species, such as marmosets and tamarins, are characterized by long-term group membership, complex social organization, and high levels of interindividual coordination of behaviour. These social attributes may promote complex, individually distinct and stable acoustic signals to facilitate the expression of cooperative behaviour. In this study, the long calls of socially housed individual Wied's black tufted-ear marmosets, Callithrix kuhli, were examined for a ‘signature system’ potentially important in such interactions. Vocalizations were recorded at three different times (1993, 1995, 1996), digitized, and then measured by spectrographic analysis. Acoustic and temporal features of the calls were examined, including number of syllables, length of syllables, intersyllable interval, frequency range, start/stop frequency, peak frequency, and total call duration. A number of significant intra-individual changes in acoustic parameters were identified across the recording periods. Discriminant analysis revealed that many variables contributed to differentiation among individuals, and average classification accuracy for calls within a given year was high, ranging from 91.7% to 93.5%. However, reclassification accuracy for calls between years was much poorer, averaging less than 50%. In addition, classification confidence was higher for within-year scores than for between-year values. Thus, tufted-ear marmosets have an individually distinct vocalization that is acoustically modified across time. Our findings suggest that, to the extent that the vocalization is used for individual recognition, recognition mechanisms must be modified over time as well.
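The within-year versus between-year classification contrast reported above can be demonstrated with a toy nearest-centroid classifier on synthetic "signatures" that drift between recording years. All numbers below (feature space, drift magnitude, noise) are assumptions for illustration, not the marmoset data.

```python
import numpy as np

rng = np.random.default_rng(1)

def make_calls(mean, n=30, sd=0.3):
    """Synthetic calls: noisy samples around an individual's signature."""
    return rng.normal(mean, sd, size=(n, len(mean)))

def centroids(calls_by_id):
    return {k: v.mean(axis=0) for k, v in calls_by_id.items()}

def classify(call, cents):
    """Nearest-centroid assignment (stand-in for discriminant analysis)."""
    return min(cents, key=lambda k: np.linalg.norm(call - cents[k]))

def accuracy(calls_by_id, cents):
    hits = total = 0
    for k, calls in calls_by_id.items():
        for c in calls:
            hits += classify(c, cents) == k
            total += 1
    return hits / total

year1_means = {"A": np.array([0.0, 0.0]),
               "B": np.array([2.0, 0.0]),
               "C": np.array([0.0, 2.0])}
drift = np.array([1.2, 1.2])  # assumed signature modification over years

year1 = {k: make_calls(m) for k, m in year1_means.items()}
year2 = {k: make_calls(m + drift) for k, m in year1_means.items()}

cents = centroids(year1)
within_acc = accuracy(year1, cents)   # same-year model, high accuracy
between_acc = accuracy(year2, cents)  # year-1 model on year-2 calls, degraded
```

The qualitative pattern matches the abstract: a model trained in one recording period classifies contemporaneous calls well but degrades on calls from a later period.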

13.
Loss of acoustic habitat due to anthropogenic noise is a key environmental stressor for vocal amphibian species, a taxonomic group that is experiencing global population declines. The Pacific chorus frog (Pseudacris regilla) is the most common vocal species of the Pacific Northwest and can occupy human-dominated habitat types, including agricultural and urban wetlands. This species is exposed to anthropogenic noise, which can interfere with vocalizations during the breeding season. We hypothesized that Pacific chorus frogs would alter the spatial and temporal structure of their breeding vocalizations in response to road noise, a widespread anthropogenic stressor. We compared Pacific chorus frog call structure and ambient road noise levels along a gradient of road noise exposures in the Willamette Valley, Oregon, USA. We used both passive acoustic monitoring and directional recordings to determine source level (i.e., amplitude or volume), dominant frequency (i.e., pitch), call duration, and call rate of individual frogs and to quantify ambient road noise levels. Pacific chorus frogs were unable to change their vocalizations to compensate for road noise. A model of the active space and time (“spatiotemporal communication”) over which a Pacific chorus frog vocalization could be heard revealed that in high-noise habitats, spatiotemporal communication was drastically reduced for an individual. This may have implications for the reproductive success of this species, which relies on specific call repertoires to convey relative fitness and attract mates. Using the acoustic call parameters defined by this study (frequency, source level, call rate, and call duration), we developed a simplified model of acoustic communication space–time for this species. This model can be used in combination with models that determine the insertion loss of various acoustic barriers to define the impact of anthropogenic noise on the radius of communication in threatened species. Additionally, this model can be applied to other vocal taxonomic groups provided the necessary acoustic parameters are determined, including the frequency parameters and perception thresholds. Reduction in acoustic habitat by anthropogenic noise may emerge as a compounding environmental stressor for an already sensitive taxonomic group.
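The core of an active-space model like the one described above is a propagation equation: the communication radius is the distance at which the received call level just exceeds the background noise by a detection margin. The sketch below assumes simple spherical spreading (6 dB loss per doubling of distance) and an arbitrary 3 dB detection margin; the study's model is more detailed, and the numbers here are illustrative.

```python
def active_space_radius(source_db, noise_db, detection_margin_db=3.0,
                        ref_distance_m=1.0):
    """Distance at which a call just exceeds background noise by the
    detection margin, assuming spherical spreading from a reference
    distance: received = source - 20*log10(r / ref)."""
    excess = source_db - (noise_db + detection_margin_db)
    if excess <= 0:
        return 0.0  # call is masked even at the reference distance
    return ref_distance_m * 10 ** (excess / 20)

# Raising ambient noise by 30 dB shrinks the radius ~32-fold.
quiet = active_space_radius(source_db=90, noise_db=40)
noisy = active_space_radius(source_db=90, noise_db=70)
```

Multiplying the radius-derived area by calling time yields the "spatiotemporal communication" quantity; insertion loss from a barrier model would simply be subtracted from `source_db`.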

14.
This study examined whether piglet distress vocalizations vary with age, body weight and health status, according to the predictions of the honest-signalling-of-need evolutionary model. Vocalizations were recorded during manual squeezing (a simulation of being crushed by the mother sow) and during isolation on Days 1 and 7 after birth in piglets from 15 litters. We predicted that during squeezing, younger, lighter and sick piglets would call more intensely because they are at higher risk of dying during crushing and therefore benefit more from the sow’s reaction to intense vocalization. For isolation, we predicted that lighter and younger piglets would call more because they are more vulnerable to the adverse effects of separation. Calls were analyzed in the time and frequency domains. The rate of calling, call duration, proportion of high-pitched calls and eight acoustic parameters characterizing frequency distribution and tonality were used as indicators of acoustic signalling intensity. Piglets that experienced squeezing on Day 1 produced more intense acoustic distress signalling than on Day 7. Lighter piglets called more during squeezing than heavier piglets. Health status did not significantly affect any of the indicators of vocalization intensity during squeezing. In isolation, none of the parameters of vocalization intensity was affected by either the age or the weight of the piglets. In summary, the model of honest signalling of need was confirmed in the squeezing situation, but not in the isolation situation.

15.
The source-filter theory of vocal production supports the idea that acoustic signatures are preferentially coded by the fundamental frequency (source-induced variability) and the distribution of energy across the frequency spectrum (filter-induced variability). By investigating the acoustic parameters supporting individuality in lamb bleats, a vocalization that mediates recognition by ewes, here we show that amplitude modulation – an acoustic feature largely independent of the shape of the vocal tract – can also be an important cue defining an individual vocal signature. Female sheep (Ovis aries) show an acoustic preference for their own lamb. Although playback experiments have shown that this preference is established soon after birth and relies on a unique vocal signature contained in the bleats of the lamb, the physical parameters that encode this individual identity remained poorly identified. We recorded 152 bleats from 13 fifteen-day-old lambs and analyzed their acoustic structure with four complementary statistical methods (ANOVA, potential for individual identity coding PIC, entropy calculation 2Hs, discriminant function analysis DFA). Although there were slight differences in the acoustic parameters identified by the four methods, the individual signature nevertheless relies on both the temporal and frequency domains. The coding of identity is thus multi-parametric and integrates amplitude modulation and energy parameters. Specifically, the contribution of amplitude modulation is important, together with the fundamental frequency F0 and the distribution of energy across the frequency spectrum.
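Of the four methods listed above, the PIC statistic is simple enough to sketch: for one acoustic parameter, it is commonly computed as the between-individual coefficient of variation divided by the mean within-individual coefficient of variation, with PIC > 1 indicating that the parameter varies more between than within individuals and so could carry a signature. The toy F0 values below are invented for illustration.

```python
import statistics

def cv(values):
    """Coefficient of variation (sd / mean) of one acoustic parameter."""
    return statistics.stdev(values) / statistics.mean(values)

def pic(calls_by_individual):
    """Potential for individual identity coding (PIC) of one parameter:
    between-individual CV over the pooled sample, divided by the mean
    within-individual CV. Values > 1 suggest individual coding."""
    pooled = [v for calls in calls_by_individual.values() for v in calls]
    cv_between = cv(pooled)
    cv_within = statistics.mean(cv(calls)
                                for calls in calls_by_individual.values())
    return cv_between / cv_within
```

Definitions of PIC vary slightly across the literature (some compute the between-individual CV from individual means rather than the pooled sample); this is one common form.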

16.
We carried out a comparative study of the spectral-prosodic characteristics of bird vocalization and human speech, comparing the relative characteristics of the fundamental frequency and spectral maxima. Criteria were formulated for comparing bird signals and human speech. A certain correspondence was found between the vocal structures of birds and humans. It was proposed that, in the course of evolution, humans adopted the main structural principles of acoustic signalling from birds.

17.
Acute and chronic electromyographic (EMG) recordings from individual syringeal muscles were used to study syringeal participation in respiration and vocalization. In anesthetized birds, all syringeal muscles recorded were active to some degree during the expiratory phase of respiration, following activity in the abdominal musculature and preceding the emergence of breath from the nostril. In awake birds, the ventralis (V) muscle fired a strong, consistent burst, but the dorsalis (D) was variable both in strength and timing. Denervation of V is sufficient to produce the wheezing respiration originally seen in birds with complete bilateral section of the tracheosyringeal nerve. Complete syringeal denervation also removed almost all the acoustic features that distinguish individual song syllables, but had a minor effect on the temporal structure of song. When activity in V and D was recorded in awake, vocalizing birds, D was active before and during sound production, and V showed a small burst before sound onset and a vigorous burst timed to the termination of sound. During song, V was consistently active at sound offset, but also participated during sound for narrow bandwidth syllables. For some syllables (simple harmonic stacks), neither muscle was active. These data suggest that V contributes to syllable termination during vocalization and may silence the syrinx during normal respiration. D contributes to the acoustic structure of most syllables, and V may contribute to a special subset of syllables. In summary, the syringeal muscles show different activity patterns during respiration and vocalization and can be independently activated during vocalization, depending on the syllable produced.

18.
Phee calls were recorded from five captive common marmosets on three occasions. An initial recording session was followed by further sessions 1–12 days later and, finally, 12 months after the initial sample. Sonograms from the first recordings were measured using one duration and five frequency parameters, and significant differences between individuals were found for all six parameters. Discriminant function analysis was then applied to classify each call to a particular individual, with a resulting classification accuracy of 97.27%. Analysis of the second and third recordings demonstrated accurate classification to the same caller using the measurements obtained from the initial sample. The accuracy remained high despite intra-individual differences in acoustic structure among the three recording periods. Such differences may well reflect proximate changes in the underlying arousal state of the caller. Stability over time in the vocal signature of the phee call supports the view that this vocalization may be important in signalling individual identity over long distances, in a habitat where visual contact is limited. © 1993 Wiley-Liss, Inc.

19.
Adult male rats subjected to a two-way avoidance task emitted ultrasonic vocalizations (20-30 kHz) both during presentation of the conditioned stimulus and during the intertrial interval. The rate of ultrasonic calling decreased over the 75-trial session, indicating that acquisition of the conditioned avoidance response (CAR) was inversely correlated with the rate of vocalization. Acquisition of the CAR was most rapid in those rats that did not emit any vocalization during learning. These data suggest that ultrasonic calling during stressful situations may be a sensitive indicator of underlying emotional states that interfere with the acquisition of a complex task.

20.

Copyright © Beijing Qinyun Technology Development Co., Ltd. (京ICP备09084417号)