Similar Literature
19 similar records found
1.
One of the primary functions of music is to convey emotion, yet how music accomplishes this task remains unclear. For example, simple correlations between mode (major vs. minor) and emotion (happy vs. sad) do not adequately explain the enormous range, subtlety or complexity of musically induced emotions. In this study, we examined the structural features of unconstrained musical improvisations generated by jazz pianists in response to emotional cues. We hypothesized that musicians would not utilize any universal rules to convey emotions, but would instead combine heterogeneous musical elements in order to depict positive and negative emotions. Our findings demonstrate a lack of simple correspondence between emotions and the musical features of spontaneous improvisation. While improvisations in response to positive emotional cues were more likely to be in major keys and to have faster tempos, higher key-press velocities and more staccato notes than negative improvisations, there was a wide distribution for each emotion, with components that directly violated these primary associations. The finding that musicians often combine disparate features in order to convey emotion during improvisation suggests that structural diversity may be an essential feature of music's ability to express a wide range of emotion.

2.
Whereas the use of discrete pitch intervals is characteristic of most musical traditions, the size of the intervals and the way in which they are used are culturally specific. Here we examine the hypothesis that these differences arise because of a link between the tonal characteristics of a culture's music and its speech. We tested this idea by comparing pitch intervals in the traditional music of three tone-language cultures (Chinese, Thai and Vietnamese) and three non-tone-language cultures (American, French and German) with pitch intervals between voiced speech segments. Changes in pitch direction occur more frequently, and pitch intervals are larger, in the music of tone-language compared to non-tone-language cultures. More frequent changes in pitch direction and larger pitch intervals are also apparent in the speech of tone-language compared to non-tone-language cultures. These observations suggest that the different tonal preferences apparent in music across cultures are closely related to differences in the tonal characteristics of voiced speech.
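The core measurement in such a comparison is easy to sketch. Below is a minimal illustration, assuming fundamental-frequency (F0) values in Hz have already been extracted per note or per voiced speech segment; the example sequence is hypothetical.

```python
import numpy as np

def interval_stats(f0_hz):
    """Pitch-interval statistics for a sequence of F0 values (Hz),
    e.g. note pitches or median F0 of successive voiced segments."""
    f0 = np.asarray(f0_hz, dtype=float)
    # Successive F0 ratios expressed as signed intervals in semitones.
    semitones = 12.0 * np.log2(f0[1:] / f0[:-1])
    # A direction change is a sign flip between consecutive intervals.
    signs = np.sign(semitones)
    changes = int(np.sum(signs[1:] * signs[:-1] < 0))
    return {
        "mean_abs_interval_semitones": float(np.mean(np.abs(semitones))),
        "direction_change_rate": changes / max(len(semitones) - 1, 1),
    }

# Hypothetical F0 sequence (Hz) for a short melodic phrase.
print(interval_stats([220.0, 247.0, 220.0, 294.0, 262.0, 220.0]))
```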

3.
Several studies have investigated the encoding and perception of emotional expressivity in music performance. A relevant question concerns how the ability to communicate emotions in music performance is acquired. In accordance with recent theories on the embodiment of emotion, we suggest here that both the expression and recognition of emotion in music might, at least in part, rely on knowledge about the sounds of expressive body movements. We test this hypothesis by drawing parallels between musical expression of emotions and expression of emotions in sounds associated with a non-musical motor activity: walking. In a combined production-perception design, two experiments were conducted, and expressive acoustical features were compared across modalities. An initial performance experiment tested for similar feature use in walking sounds and music performance, and revealed that strong similarities exist. Features related to sound intensity, tempo and tempo regularity were identified as being used similarly in both domains. Participants in a subsequent perception experiment were able to recognize both non-emotional and emotional properties of the sound-generating walkers. An analysis of the acoustical correlates of the behavioral data revealed that variations in sound intensity, tempo, and tempo regularity were likely used to recognize expressed emotions. Taken together, these results lend support to the motor-origin hypothesis for the musical expression of emotions.

4.
Expectancy for an upcoming musical chord, harmonic expectancy, is thought to be based on automatic activation of tonal knowledge. Since previous studies implicitly relied on interpretations based on Western music theory, the computational processes underlying harmonic expectancy, and how it relates to tonality, need further clarification. In particular, short chord sequences that do not point to a unique key are difficult to interpret in music-theoretic terms. In this study, we examined effects of preceding chords on harmonic expectancy from a computational perspective, using stochastic modeling. We conducted a behavioral experiment in which participants listened to short chord sequences and evaluated the subjective relatedness of the last chord to the preceding ones. Based on these judgments, we built stochastic models of the computational process underlying harmonic expectancy and compared their explanatory power. Our results imply that, even when listening to short chord sequences, internally constructed and updated tonal assumptions determine the expectancy of the upcoming chord.
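The abstract does not spell out the form of the stochastic models, but the idea of "internally constructed and updated tonal assumptions" can be sketched as a small Bayesian model in which keys are hidden states and chords are observations. The chord-given-key probabilities below are toy values for illustration, not the paper's fitted parameters.

```python
import numpy as np

# Minimal Bayesian sketch: keys are hidden states, chords are observations.
KEYS = ["C", "G"]
CHORDS = ["C", "F", "G", "D"]
# Rows: keys; columns: P(chord | key). Toy numbers for illustration only.
LIK = np.array([
    [0.45, 0.25, 0.25, 0.05],   # key of C
    [0.35, 0.05, 0.40, 0.20],   # key of G
])

def expectancy(sequence, next_chord):
    """Posterior-weighted probability of `next_chord` after `sequence`."""
    post = np.full(len(KEYS), 1.0 / len(KEYS))   # uniform prior over keys
    for ch in sequence:
        post *= LIK[:, CHORDS.index(ch)]         # Bayesian update per chord
        post /= post.sum()
    return float(post @ LIK[:, CHORDS.index(next_chord)])

# Even a two-chord context shifts the inferred key, and with it
# the expectancy of the upcoming chord.
print(expectancy(["G", "D"], "G"))   # G-leaning context: higher
print(expectancy(["C", "F"], "G"))   # C-leaning context: lower
```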

5.
A common but nonetheless remarkable human faculty is the ability to recognize and reproduce familiar pieces of music. No two performances of a given piece will ever be acoustically identical, but a listener can perceive, in both, the same rhythmic and tonal relationships, and can judge whether a particular note or phrase was played out of time or out of tune. The problem considered in this lecture is that of describing the conceptual structures by which we represent Western classical music and the processes by which these structures are created. Some new hypotheses about the perception of rhythm and tonality have been cast in the form of a computer program which will transcribe a live keyboard performance of a classical melody into the equivalent of standard musical notation.
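One subproblem any such transcription program must solve is rhythm quantization: mapping inexact performed onset times onto a notational grid. A minimal sketch, assuming the beat period has already been tracked (the lecture's actual hypotheses are richer than this):

```python
from fractions import Fraction

def quantize_onsets(onsets_sec, beat_sec, max_denom=4):
    """Snap performed onset times to the nearest simple fraction of a
    beat. `beat_sec` is assumed to be known/tracked already; with
    max_denom=4 the grid is a quarter of a beat (16th notes in 4/4)."""
    notated = []
    for t in onsets_sec:
        beats = t / beat_sec
        # Nearest fraction with a small denominator.
        notated.append(Fraction(beats).limit_denominator(max_denom))
    return notated

# A slightly uneven performance of four 8th notes at 120 bpm (beat = 0.5 s).
print(quantize_onsets([0.02, 0.27, 0.49, 0.76], beat_sec=0.5))
# -> beat positions 0, 1/2, 1, 3/2
```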

6.
C Jiang, JP Hamm, VK Lim, IJ Kirk, X Chen, Y Yang. PLoS ONE, 2012, 7(7): e41411
Pitch processing is a critical ability on which humans' tonal musical experience depends, and it is also of paramount importance for decoding prosody in speech. Congenital amusia refers to deficits in the ability to properly process musical pitch, and recent evidence has suggested that this musical pitch disorder may affect the processing of speech sounds. Here we present the first electrophysiological evidence that individuals with amusia who speak Mandarin Chinese are impaired in classifying prosody as appropriate or inappropriate during a speech comprehension task. In control participants, inappropriate prosody elicited a larger P600 and a smaller N100 relative to the appropriate condition. In contrast, amusics showed no significant differences between the appropriate and inappropriate conditions in either the N100 or the P600 component. This provides further evidence that the pitch perception deficits associated with amusia may also affect intonation processing during speech comprehension in speakers of a tonal language such as Mandarin, and suggests that music and language share some cognitive and neural resources.

7.
Tonal relationships are foundational in music, providing the basis upon which musical structures, such as melodies, are constructed and perceived. A recent dynamic theory of musical tonality predicts that networks of auditory neurons resonate nonlinearly to musical stimuli. Nonlinear resonance leads to stability and attraction relationships among neural frequencies, and these neural dynamics give rise to the perception of relationships among tones that we collectively refer to as tonal cognition. Because this model describes the dynamics of neural populations, it makes specific predictions about human auditory neurophysiology. Here, we show how predictions about the auditory brainstem response (ABR) are derived from the model. To illustrate, we derive a prediction about population responses to musical intervals that has been observed in the human brainstem. Our modeled ABR shows qualitative agreement with important features of the human ABR. This provides evidence that fundamental principles of auditory neurodynamics might underlie the perception of tonal relationships, and forces a reevaluation of the role of learning and enculturation in tonal cognition.
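The published neurodynamic model is not reproduced here, but its central ingredient, frequency-selective nonlinear resonance, can be illustrated with a gradient-frequency bank of damped nonlinear oscillators driven by a two-tone interval. All parameters below are illustrative assumptions, not the model's published values.

```python
import numpy as np

# Minimal sketch: a bank of damped nonlinear oscillators,
# dz/dt = z*(alpha + i*2*pi*f + beta*|z|^2) + stimulus(t),
# driven by two tones a perfect fifth apart (220 Hz and 330 Hz).
fs = 4000.0                                   # integration rate (Hz)
t = np.arange(0, 1.0, 1.0 / fs)
stim = 0.1 * (np.sin(2 * np.pi * 220 * t)
            + np.sin(2 * np.pi * 330 * t))

freqs = np.linspace(100.0, 500.0, 81)         # oscillator natural frequencies
alpha, beta = -1.0, -10.0                     # damping, amplitude saturation
z = np.zeros(freqs.size, dtype=complex)

# Exponential-Euler steps: the linear part (alpha + i*omega) is integrated
# exactly, which keeps the high-frequency oscillators numerically stable.
lin = np.exp((alpha + 1j * 2 * np.pi * freqs) / fs)
for x in stim:
    z = lin * (z + (beta * np.abs(z) ** 2 * z + x) / fs)

# The oscillators closest to the stimulus frequencies respond most strongly.
top = freqs[np.argsort(np.abs(z))[-2:]]
print(sorted(top))   # ~[220.0, 330.0]
```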

8.
How does music induce or evoke feeling states in listeners? A number of mechanisms have been proposed for how sounds induce emotions, including innate auditory responses, learned associations and mirror-neuron processes. Drawing on ethology, it is suggested that the ethological concepts of signals, cues and indices offer additional analytic tools for better understanding induced affect. It is proposed that these concepts help explain why music is able to induce only certain emotions, why some induced emotions are similar to the displayed emotion (whereas others differ considerably from it), why listeners often report feeling mixed emotions, and why only some musical expressions evoke similar responses across cultures.

9.
Relationship of skin temperature changes to the emotions accompanying music
One hundred introductory psychology students were given tasks that caused their skin temperatures to either fall or rise. Then they listened to two musical selections, one of which they rated as evoking arousing, negative emotions while the other was rated as evoking calm, positive emotions. During the first musical selection that was presented, the arousing, negative emotion music terminated skin temperature increases and perpetuated skin temperature decreases, whereas the calm, positive emotion selection terminated skin temperature decreases and perpetuated skin temperature increases. During the second musical selection, skin temperature tended to increase whichever music was played; however, the increases were significant only during the calm, positive emotion music. It was concluded that music initially affects skin temperature in ways that can be predicted from affective rating scales, although the effect of some selections may depend upon what, if any, music had been previously heard.

10.
This study explores listeners' experience of music-evoked sadness. Sadness is typically assumed to be undesirable and is therefore usually avoided in everyday life. Yet the question remains: why do people seek and appreciate sadness in music? We present findings from an online survey with both Western and Eastern participants (N = 772). The survey investigates the rewarding aspects of music-evoked sadness, as well as the relative contribution of listener characteristics and situational factors to the appreciation of sad music. The survey also examines the different principles through which sadness is evoked by music, and their interaction with personality traits. Results show four different rewards of music-evoked sadness: reward of imagination, emotion regulation, empathy, and the absence of "real-life" implications. Moreover, appreciation of sad music follows a mood-congruent pattern and is greater among individuals with high empathy and low emotional stability. Surprisingly, nostalgia rather than sadness is the most frequent emotion evoked by sad music. Correspondingly, memory was rated as the most important principle through which sadness is evoked. Finally, trait empathy contributes to the evocation of sadness via contagion, appraisal, and the engagement of social functions. The present findings indicate that emotional responses to sad music are multifaceted, are modulated by empathy, and are linked with a multidimensional experience of pleasure. These results were corroborated by a follow-up survey on happy music, which indicated differences between the emotional experiences resulting from listening to sad versus happy music. This is the first comprehensive survey of music-evoked sadness, revealing that listening to sad music can lead to beneficial emotional effects such as regulation of negative emotion and mood, as well as consolation. Such beneficial emotional effects constitute the prime motivations for engaging with sad music in everyday life.

11.
In an earlier study, we found that humans were able to correctly categorize dog barks recorded in various situations. Acoustic parameters such as tonality, pitch and inter-bark time intervals seemed to have a strong effect on how human listeners described the emotionality of these dog vocalisations. In this study, we investigated whether the acoustic parameters of dog barks affect human listeners in the way we would expect from studies of other mammalian species (for example, low, hoarse sounds indicating aggression; high-pitched, tonal sounds indicating subordinance or fear). People with different levels of experience with dogs were asked to describe the emotional content of several artificially assembled bark sequences in terms of five emotional states (aggressiveness, fear, despair, playfulness, happiness). The barks were selected for low, medium and high values of tonality and peak frequency, and the artificial sequences were assembled with short, medium or long inter-bark intervals. We found that humans with different levels of experience with dogs described the emotional content of the bark sequences quite similarly, and the extent of previous experience with the given breed (Mudi), or with dogs in general, did not cause characteristic differences in the emotionality scores. The scoring of the emotional content of the bark sequences was in accordance with Morton's structural-acoustic rules: low-pitched barks were described as aggressive, while tonal and high-pitched barks were scored as either fearful or desperate, but always without aggressiveness. In general, the tonality of a bark sequence had much less effect than the pitch of the sounds. We also found that inter-bark intervals had a strong effect on the perceived emotionality of dog barks: sequences with short inter-bark intervals were scored as aggressive, whereas sequences with longer inter-bark intervals received low aggression scores. High-pitched bark sequences with long inter-bark intervals were considered happy and playful, independently of their tonality. These findings show that dog barks function as predicted by the structural-motivational rules developed for acoustic signals in other species, suggesting that dog barks may constitute a functional system for communication, at least in the dog-human relationship. In sum, it seems that many different emotions can be expressed by varying at least three acoustic parameters.
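The reported pattern can be condensed into a toy rule set in the spirit of Morton's structural-motivational rules; the thresholds and weights below are illustrative assumptions, not values fitted in the study.

```python
def judge_bark_sequence(peak_hz, tonality, interbark_s):
    """Toy scorer following the reported pattern. Thresholds and
    weights are illustrative, not the study's fitted values."""
    scores = {"aggressive": 0.0, "fearful": 0.0, "playful": 0.0}
    if peak_hz < 800:            # low-pitched barks read as aggressive
        scores["aggressive"] += 1
    if interbark_s < 0.3:        # rapid barking reads as aggressive
        scores["aggressive"] += 1
    if peak_hz >= 800 and tonality > 0.5:
        scores["fearful"] += 1   # high, tonal barks: fear/despair
    if peak_hz >= 800 and interbark_s > 0.6:
        scores["playful"] += 2   # high pitch + long gaps: playful,
                                 # regardless of tonality (as reported)
    return max(scores, key=scores.get)

print(judge_bark_sequence(peak_hz=600, tonality=0.2, interbark_s=0.2))   # aggressive
print(judge_bark_sequence(peak_hz=1200, tonality=0.8, interbark_s=0.9))  # playful
print(judge_bark_sequence(peak_hz=1200, tonality=0.8, interbark_s=0.4))  # fearful
```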

12.
A prevalent conceptual metaphor is the association of the concepts of good and evil with brightness and darkness, respectively. Music cognition, like metaphor, is possibly embodied, yet no study has addressed whether musical emotion can modulate brightness judgment in a metaphor-consistent fashion. In three separate experiments, participants judged the brightness of a grey square that was presented after a short excerpt of emotional music. The results of Experiment 1 showed that short musical excerpts are effective emotional primes that cross-modally influence brightness judgments of visual stimuli: grey squares were consistently judged as brighter after listening to music with positive valence than after music with negative valence. The results of Experiment 2 revealed that this bias does not require an active evaluation of the emotional content of the music. By applying a different experimental procedure in Experiment 3, we showed that the brightness judgment bias is a robust effect. Altogether, our findings demonstrate a powerful role of musical emotion in biasing brightness judgment, and show that this bias is aligned with the metaphor viewpoint.

13.
Apart from its natural relevance to cognition, music provides a window into the intimate relationships between production, perception, experience, and emotion. Here, emotional responses and neural activity were observed as they evolved together with stimulus parameters over several minutes. Participants listened to a skilled music performance that included the natural fluctuations in timing and sound intensity that musicians use to evoke emotional responses. A mechanical performance of the same piece served as a control. Before and after fMRI scanning, participants reported real-time emotional responses on a two-dimensional rating scale (arousal and valence) as they listened to each performance. During fMRI scanning, participants listened without reporting emotional responses. Limbic and paralimbic brain areas responded to the expressive dynamics of human music performance, and both emotion- and reward-related activations during music listening were dependent upon musical training. Moreover, dynamic changes in timing predicted ratings of emotional arousal, as well as real-time changes in neural activity. BOLD signal changes correlated with expressive timing fluctuations in cortical and subcortical motor areas consistent with pulse perception, and in a network consistent with the human mirror neuron system. These findings show that expressive music performance evokes emotion- and reward-related neural activations, and that music's affective impact on the brains of listeners is altered by musical training. Our observations are consistent with the idea that music performance evokes an emotional response through a form of empathy that is based, at least in part, on the perception of movement and on violations of pulse-based temporal expectancies.

14.
General theoretical and applied studies on the effects of emotional and functional states on the acoustic parameters of human speech are reviewed. In most studies, frequency, temporal, and intensity characteristics of vocalization were used as the most informative acoustic correlates of emotional and functional states. As a rule, sthenic states lead to an increase, and asthenic states to a decrease, in pitch, formant frequencies, and intensity. The relationship between the acoustic parameters of speech and emotional and functional states was found to depend on individual features of the speaker, appearing as diverse changes in temporal and intensity parameters. For more accurate identification of an individual's psychoemotional state, the study of general tonal phonemes that are common and easily recognizable across languages may be helpful. The study of the acoustic correlates of individual speech parameters is a promising approach to diagnosing a person's functional and emotional states from vocalization parameters.

15.
In recent years, many studies have used speech-related features for speech emotion recognition; however, recent work shows a strong correlation between emotional states and glottal features. In this work, Mel-frequency cepstral coefficients (MFCCs), linear predictive cepstral coefficients (LPCCs), perceptual linear predictive (PLP) features, gammatone filter outputs, timbral texture features, stationary wavelet transform based timbral texture features, and relative wavelet packet energy and entropy features were extracted from emotional speech (ES) signals and their glottal waveforms (GW). Particle swarm optimization based clustering (PSOC) and wrapper based particle swarm optimization (WPSO) were proposed to enhance the discriminative ability of the features and to select the discriminating features, respectively. Three different emotional speech databases were used to evaluate the proposed method. An extreme learning machine (ELM) was employed to classify the different types of emotions. Several experiments were conducted, and the results show that the proposed method significantly improves speech emotion recognition performance compared to previous works published in the literature.
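As a minimal sketch of one slice of this pipeline using off-the-shelf tools: MFCC statistics are extracted with librosa and fed to a stand-in classifier (logistic regression, since neither an ELM nor PSO-based feature selection ships with scikit-learn). The glottal, wavelet and timbral features are omitted, and the file names and labels below are hypothetical.

```python
import numpy as np
import librosa                                   # audio feature extraction
from sklearn.linear_model import LogisticRegression

def mfcc_features(path, n_mfcc=13):
    """Mean and std of MFCCs over an utterance -- one of the several
    feature families used in the paper (glottal, wavelet and timbral
    features are omitted in this sketch)."""
    y, sr = librosa.load(path, sr=None)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

# Hypothetical file lists and labels (e.g. "angry"/"happy"/"sad").
train_paths, train_labels = ["a.wav", "b.wav"], ["angry", "happy"]
X = np.vstack([mfcc_features(p) for p in train_paths])

# Stand-in classifier in place of the paper's ELM.
clf = LogisticRegression(max_iter=1000).fit(X, train_labels)
print(clf.predict([mfcc_features("test.wav")]))
```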

16.
Traditionally, electrodermal measurements were taken from the non-dominant hand and considered a valid measure of arousal for the whole body. Some, however, argue for a complex and asynchronous electrodermal system, with lateral and dermatome differences in emotional responding. The present study measured skin conductance responses to emotionally laden musical stimuli from the left and right index and middle fingers, as well as the left and right plantar surfaces, of right-handed participants (N = 39). The 7-s musical segments conveyed four emotional categories: fear, sadness, happiness and peacefulness. Our results suggest that the electrodermal system responds to emotional musical stimuli in a lateralized manner on the palmar surfaces: fear, sadness and peacefulness prompted right-hand dominance, while happiness elicited a left-hand dominant response. Lateralization of the palmar and plantar surfaces differed significantly. Moreover, an association was found between lateralization of the electrodermal system in response to fear and state anxiety. These results suggest that the electrodermal system displays lateral preferences, reacting with varying intensity to different emotions, and that music-induced emotions show dermatome as well as lateral differences. These findings fit well with Multiple Arousal Theory and prompt a reevaluation of the notion of uniform electrodermal arousal.
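A simple per-trial lateralization index makes the palmar analysis concrete; this is a generic index rather than the study's exact measure, and the SCR amplitudes below are hypothetical.

```python
import numpy as np

def lateralization_index(left_scr, right_scr):
    """(R - L) / (R + L) per trial: positive values indicate
    right-dominant responding. A generic toy index, not the
    study's exact analysis."""
    left = np.asarray(left_scr, dtype=float)
    right = np.asarray(right_scr, dtype=float)
    return (right - left) / (right + left)

# Hypothetical SCR amplitudes (microsiemens) for a set of fear trials.
print(lateralization_index([0.21, 0.18, 0.30], [0.29, 0.25, 0.33]).mean())
```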

17.
The ability to recognize emotions in facial expressions is affected by both affective traits and states, and varies widely between individuals. While affective traits are stable over time, affective states can be regulated more rapidly by environmental stimuli, such as music, that indirectly modulate brain state. Here, we tested whether a relaxing or irritating sound environment affects implicit processing of facial expressions, and whether and how individual traits of anxiety and emotional control interact with this process. Thirty-two healthy subjects performed an implicit emotion processing task (presented to subjects as a gender discrimination task) while the sound environment was defined either by (a) a therapeutic music sequence (MusiCure), (b) a noise sequence, or (c) silence. Individual changes in mood were sampled before and after the task by a computerized questionnaire; emotional control and trait anxiety were assessed in a separate session by paper-and-pencil questionnaires. Results showed better mood after the MusiCure condition compared with the other experimental conditions, and faster responses to happy faces during MusiCure compared with angry faces during Noise. Moreover, individuals with higher trait anxiety were faster in performing the implicit emotion processing task during MusiCure compared with Silence. These findings suggest that sound-induced affective states are associated with differential responses to angry and happy emotional faces at an implicit stage of processing, and that a relaxing sound environment facilitates implicit emotional processing in anxious individuals.

18.
A better understanding of animal emotion is an important goal in disciplines ranging from neuroscience to animal welfare science. The conscious experience of emotion cannot be assessed directly, but neural, behavioural and physiological indicators of emotion can be measured. Researchers have used these measures to characterize how animals respond to situations assumed to induce discrete emotional states (e.g. fear). While advancing our understanding of specific emotions, this discrete emotion approach lacks an overarching framework that can incorporate and integrate the wide range of possible emotional states. Dimensional approaches that conceptualize emotions in terms of universal core affective characteristics (e.g. valence (positivity versus negativity) and arousal) can provide such a framework. Here, we bring together discrete and dimensional approaches to: (i) offer a structure for integrating different discrete emotions that provides a functional perspective on the adaptive value of emotional states, (ii) suggest how long-term mood states arise from short-term discrete emotions, how they also influence these discrete emotions through a bi-directional relationship and how they may function to guide decision-making, and (iii) generate novel hypothesis-driven measures of animal emotion and mood.

19.

Background

Studies of cross-cultural variations in the perception of emotion have typically compared rates of recognition of static posed stimulus photographs. That research has provided evidence for universality in the recognition of a range of emotions, but also for some systematic cross-cultural variation in the interpretation of emotional expression. However, questions remain about how widely such findings can be generalised to real-life emotional situations. The present study provides the first evidence that the previously reported interplay between universal and cultural influences extends to ratings of natural, dynamic emotional stimuli.

Methodology/Principal Findings

Participants from Northern Ireland, Serbia, Guatemala and Peru used a computer based tool to continuously rate the strength of positive and negative emotion being displayed in twelve short video sequences by people from the United Kingdom engaged in emotional conversations. Generalized additive mixed models were developed to assess the differences in perception of emotion between countries and sexes. Our results indicate that the temporal pattern of ratings is similar across cultures for a range of emotions and social contexts. However, there are systematic differences in intensity ratings between the countries, with participants from Northern Ireland making the most extreme ratings in the majority of the clips.

Conclusions/Significance

The results indicate that there is strong agreement across cultures in the valence and patterns of ratings of natural emotional situations but that participants from different cultures show systematic variation in the intensity with which they rate emotion. Results are discussed in terms of both ‘in-group advantage’ and ‘display rules’ approaches. This study indicates that examples of natural spontaneous emotional behaviour can be used to study cross-cultural variations in the perception of emotion.
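As a rough, self-contained analogue of the modeling step, the sketch below fits a plain GAM (a smooth of time plus a country factor) to synthetic continuous ratings with pygam; the mixed-effects, per-participant structure of the paper's GAMMs is omitted, and all data are synthetic.

```python
import numpy as np
from pygam import LinearGAM, s, f   # GAM fitting (pip install pygam)

# Toy data: continuous emotion-intensity ratings over time for two
# country groups (coded 0/1). Synthetic, for illustration only.
rng = np.random.default_rng(0)
time = np.tile(np.linspace(0, 30, 120), 2)      # seconds into the clip
country = np.repeat([0, 1], 120)
rating = np.sin(time / 5) + 0.4 * country + rng.normal(0, 0.2, 240)

# smooth(time) captures the shared temporal pattern; the country factor
# captures a systematic intensity offset between groups.
X = np.column_stack([time, country])
gam = LinearGAM(s(0) + f(1)).fit(X, rating)
gam.summary()
```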
