Similar Articles
1.
In Western music, the major mode is typically used to convey excited, happy, bright or martial emotions, whereas the minor mode typically conveys subdued, sad or dark emotions. Recent studies indicate that the differences between these modes parallel differences between the prosodic and spectral characteristics of voiced speech sounds uttered in corresponding emotional states. Here we ask whether tonality and emotion are similarly linked in an Eastern musical tradition. The results show that the tonal relationships used to express positive/excited and negative/subdued emotions in classical South Indian music are much the same as those used in Western music. Moreover, tonal variations in the prosody of English and Tamil speech uttered in different emotional states are parallel to the tonal trends in music. These results are consistent with the hypothesis that the association between musical tonality and emotion is based on universal vocal characteristics of different affective states.
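Comparisons of tonal relationships like those above are usually made in cents, a logarithmic unit of interval size. As a minimal illustration (not the study's own analysis), the sketch below computes the sizes of the just-intonation major and minor thirds, the interval contrast most associated with the major/minor distinction:

```python
import math

def cents(f1, f2):
    """Size of the interval between two frequencies, in cents
    (100 cents = 1 equal-tempered semitone)."""
    return 1200 * math.log2(f2 / f1)

# Just-intonation thirds: the major third (5:4) spans ~386 cents, the
# minor third (6:5) ~316 cents -- a ~70-cent difference of the kind
# compared across musical tonality and speech prosody.
print(f"major third: {cents(4, 5):.1f} cents")   # ~386.3
print(f"minor third: {cents(5, 6):.1f} cents")   # ~315.6
```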

2.
In witnessing face-to-face conversation, observers perceive authentic communication according to the social contingency of nonverbal feedback cues (‘back-channeling’) by non-speaking interactors. The current study investigated the generality of this function by focusing on nonverbal communication in musical improvisation. A perceptual experiment was conducted to test whether observers can reliably identify genuine versus fake (mismatched) duos from musicians’ nonverbal cues, and how this judgement is affected by observers’ musical background and rhythm perception skill. Twenty-four musicians were recruited to perform duo improvisations, which included solo episodes, in two styles: standard jazz (where rhythm is based on a regular pulse) or free improvisation (where rhythm is non-pulsed). The improvisations were recorded using a motion capture system to generate 16 ten-second point-light displays (with audio) of the soloist and the silent non-soloing musician (‘back-channeler’). Sixteen further displays were created by splicing soloists with back-channelers from different duos. Participants (N = 60) with various musical backgrounds were asked to rate the point-light displays as either real or fake. Results indicated that participants were sensitive to the real/fake distinction in the free improvisation condition independently of musical experience. Individual differences in rhythm perception skill did not account for performance in the free condition, but were positively correlated with accuracy in the standard jazz condition. These findings suggest that the perception of back-channeling in free improvisation is not dependent on music-specific skills but is a general ability. The findings invite further study of the links between interpersonal dynamics in conversation and musical interaction.
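Sensitivity to a real/fake distinction from binary judgements is conventionally quantified with signal detection theory's d′. The sketch below is a generic illustration with hypothetical counts, not necessarily the analysis used in the paper:

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity index d' for a yes/no task: z(hit rate) - z(false-alarm rate).
    A log-linear correction keeps both rates away from 0 and 1."""
    z = NormalDist().inv_cdf
    hr = (hits + 0.5) / (hits + misses + 1)
    far = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    return z(hr) - z(far)

# Hypothetical counts for one observer: 16 genuine and 16 spliced displays.
print(round(d_prime(hits=12, misses=4, false_alarms=5, correct_rejections=11), 2))
```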

3.
Several studies have investigated the encoding and perception of emotional expressivity in music performance. A relevant question concerns how the ability to communicate emotions in music performance is acquired. In accordance with recent theories on the embodiment of emotion, we suggest here that both the expression and recognition of emotion in music might at least in part rely on knowledge about the sounds of expressive body movements. We test this hypothesis by drawing parallels between musical expression of emotions and expression of emotions in sounds associated with a non-musical motor activity: walking. In a combined production-perception design, two experiments were conducted, and expressive acoustical features were compared across modalities. An initial performance experiment tested for similar feature use in walking sounds and music performance, and revealed that strong similarities exist. Features related to sound intensity, tempo and tempo regularity were identified as being used similarly in both domains. Participants in a subsequent perception experiment were able to recognize both non-emotional and emotional properties of the sound-generating walkers. An analysis of the acoustical correlates of behavioral data revealed that variations in sound intensity, tempo, and tempo regularity were likely used to recognize expressed emotions. Taken together, these results lend support to the motor origin hypothesis for the musical expression of emotions.
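Tempo and tempo regularity of walking sounds can be derived directly from event onset times. The sketch below shows one plausible operationalization (tempo as mean event rate, regularity as the coefficient of variation of inter-onset intervals); the study's exact feature definitions may differ:

```python
import numpy as np

def tempo_features(onsets_s):
    """Tempo (events per minute) and tempo regularity from event onset times.
    Regularity is expressed as the coefficient of variation of the
    inter-onset intervals (lower CV = more regular)."""
    iois = np.diff(onsets_s)                 # inter-onset intervals in seconds
    tempo = 60.0 / iois.mean()               # mean event rate in BPM
    cv = iois.std(ddof=1) / iois.mean()      # regularity proxy
    return tempo, cv

# Hypothetical footstep onsets (seconds): a brisk, fairly regular walker.
steps = np.array([0.00, 0.48, 0.97, 1.44, 1.93, 2.41, 2.90])
print(tempo_features(steps))   # ~124 BPM, CV ~0.02
```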

4.
Apart from its natural relevance to cognition, music provides a window into the intimate relationships between production, perception, experience, and emotion. Here, emotional responses and neural activity were observed as they evolved together with stimulus parameters over several minutes. Participants listened to a skilled music performance that included the natural fluctuations in timing and sound intensity that musicians use to evoke emotional responses. A mechanical performance of the same piece served as a control. Before and after fMRI scanning, participants reported real-time emotional responses on a 2-dimensional rating scale (arousal and valence) as they listened to each performance. During fMRI scanning, participants listened without reporting emotional responses. Limbic and paralimbic brain areas responded to the expressive dynamics of human music performance, and both emotion and reward related activations during music listening were dependent upon musical training. Moreover, dynamic changes in timing predicted ratings of emotional arousal, as well as real-time changes in neural activity. BOLD signal changes correlated with expressive timing fluctuations in cortical and subcortical motor areas consistent with pulse perception, and in a network consistent with the human mirror neuron system. These findings show that expressive music performance evokes emotion and reward related neural activations, and that music's affective impact on the brains of listeners is altered by musical training. Our observations are consistent with the idea that music performance evokes an emotional response through a form of empathy that is based, at least in part, on the perception of movement and on violations of pulse-based temporal expectancies.
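The reported link between expressive timing and arousal can be pictured as a correlation between inter-onset-interval deviations from the mechanical rendition and a time-aligned rating series. A minimal sketch with purely illustrative numbers, not the study's data:

```python
import numpy as np

# Expressive timing as the deviation of each performed inter-onset
# interval (IOI) from the mechanical (metronomic) rendition, correlated
# with a time-aligned arousal rating series.
performed_iois = np.array([0.50, 0.55, 0.62, 0.58, 0.49, 0.45])   # seconds
mechanical_ioi = 0.50                                             # constant pulse
timing_fluct = performed_iois - mechanical_ioi                    # expressive deviation

arousal = np.array([3.1, 3.4, 4.0, 3.7, 3.0, 2.8])                # rating per event

r = np.corrcoef(timing_fluct, arousal)[0, 1]
print(f"timing-arousal correlation: r = {r:.2f}")
```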

5.
Relationship of skin temperature changes to the emotions accompanying music
One hundred introductory psychology students were given tasks that caused their skin temperatures to either fall or rise. Then they listened to two musical selections, one of which they rated as evoking arousing, negative emotions while the other was rated as evoking calm, positive emotions. During the first musical selection that was presented, the arousing, negative emotion music terminated skin temperature increases and perpetuated skin temperature decreases, whereas the calm, positive emotion selection terminated skin temperature decreases and perpetuated skin temperature increases. During the second musical selection, skin temperature tended to increase whichever music was played; however, the increases were significant only during the calm, positive emotion music. It was concluded that music initially affects skin temperature in ways that can be predicted from affective rating scales, although the effect of some selections may depend upon what, if any, music had been previously heard. A portion of the research reported in this paper was presented at the annual meeting of the Biofeedback Society of California, Asilomar, California, 1983.

6.
A set of computerized tasks was used to investigate sex differences in the speed and accuracy of emotion recognition in 62 men and women of reproductive age. Evolutionary theories have posited that female superiority in the perception of emotion might arise from women's near-universal responsibility for child-rearing. Two variants of the child-rearing hypothesis predict either across-the-board female superiority in the discrimination of emotional expressions (“attachment promotion” hypothesis) or a female superiority that is restricted to expressions of negative emotion (“fitness threat” hypothesis). Therefore, we sought to evaluate whether the expression of the sex difference is influenced by the valence of the emotional signal (Positive or Negative). The results showed that women were faster than men at recognizing both positive and negative emotions from facial cues, supporting the attachment promotion hypothesis. Support for the fitness threat hypothesis also was found, in that the sex difference was accentuated for negative emotions. There was no evidence that the female superiority was learned through previous childcare experience or that it was derived from a sex difference in simple perceptual speed. The results suggest that evolved mechanisms, not domain-general learning, underlie the sex difference in recognition of facial emotions.

7.
Speech processing inherently relies on the perception of specific, rapidly changing spectral and temporal acoustic features. Advanced acoustic perception is also integral to musical expertise, and accordingly several studies have demonstrated a significant relationship between musical training and superior processing of various aspects of speech. Speech and music appear to overlap in spectral and temporal features; however, it remains unclear which of these acoustic features, crucial for speech processing, are most closely associated with musical training. The present study examined the perceptual acuity of musicians to the acoustic components of speech necessary for intra-phonemic discrimination of synthetic syllables. We compared musicians and non-musicians on discrimination thresholds for three synthetic speech syllable continua that varied in their spectral and temporal discrimination demands, specifically voice onset time (VOT) and amplitude envelope cues in the temporal domain. Musicians demonstrated superior discrimination only for syllables that required resolution of temporal cues. Furthermore, performance on the temporal syllable continua correlated positively with the length and intensity of musical training. These findings support one potential mechanism by which musical training may selectively enhance speech perception, namely the reinforcement of temporal acuity and/or the perception of amplitude rise time, and they carry implications for the translation of musical training into long-term linguistic abilities.
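Discrimination thresholds along such syllable continua are commonly estimated by fitting a psychometric function to categorization responses. The sketch below fits a logistic function to hypothetical responses on a VOT continuum; the study's actual procedure may differ:

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, x0, k):
    """Psychometric function: probability of hearing the 'long-VOT' category."""
    return 1.0 / (1.0 + np.exp(-k * (x - x0)))

# Hypothetical responses along a synthetic /ba/-/pa/ VOT continuum (ms):
vot = np.array([0, 10, 20, 30, 40, 50, 60], dtype=float)
p_long = np.array([0.02, 0.05, 0.20, 0.55, 0.85, 0.96, 0.99])

(x0, k), _ = curve_fit(logistic, vot, p_long, p0=[30.0, 0.2])
print(f"category boundary ~{x0:.1f} ms VOT, slope k = {k:.2f}")
# A discrimination threshold can then be read off as the VOT change needed
# to move response probability from, e.g., 0.5 to 0.75 around the boundary.
```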

8.
Children using unilateral cochlear implants abnormally rely on tempo rather than mode cues to distinguish whether a musical piece is happy or sad. This led us to question how this judgment is affected by the type of experience in early auditory development. We hypothesized that judgments of the emotional content of music would vary with the type and duration of access to sound in early life (deafness), with altered perception of musical cues through new ways of using auditory prostheses bilaterally, and with formal music training during childhood. Seventy-five participants completed the Montreal Emotion Identification Test. Thirty-three had normal hearing (aged 6.6 to 40.0 years) and 42 children had hearing loss and used bilateral auditory prostheses (31 bilaterally implanted and 11 unilaterally implanted with contralateral hearing aid use). Reaction time and accuracy were measured. Accurate judgment of emotion in music was achieved across ages and musical experience. Musical training accentuated the reliance on mode cues, which developed with age in the normal hearing group. Degrading pitch cues through cochlear implant-mediated hearing induced greater reliance on tempo cues, but mode cues grew in salience when at least partial acoustic information was available through some residual hearing in the contralateral ear. Finally, when pitch cues were experimentally distorted to represent cochlear implant hearing, individuals with normal hearing (including those with musical training) switched to an abnormal dependence on tempo cues. The data indicate that, in a Western culture, access to acoustic hearing in early life promotes a preference for mode rather than tempo cues, which is enhanced by musical training. The challenge to these preferred strategies during cochlear implant hearing (simulated and real), regardless of musical training, suggests that access to pitch cues for children with hearing loss must be improved by preservation of residual hearing and improvements in cochlear implant technology.

9.
CSS Tloskov is a social pediatric care center and a leading institution in the Czech Republic. Sixty-five percent of its clients are diagnosed with autism spectrum disorder (ASD) and usually receive music therapy as a main constituent of individually designed pedagogical and therapeutic programs. In contrast to numerous music therapeutic concepts that are based on musical improvisation, the Tloskov model advocates a complex approach involving favorite songs, instrumental improvisation, and body-oriented modalities such as muscle relaxation and breathing techniques.

Clinical analyses allow us to distinguish typical psychiatric exacerbations in our ASD clients. These “autistic crises” comprise an “onset phase,” a “gradation phase,” a “culmination phase,” and a “subsiding phase,” and can be partly controlled by music therapeutic interventions. On the basis of Grounded Theory, we used qualitative methods to examine the compatibility between clinical data and the four-phase autism crisis theory and to generate hypotheses about the mechanisms of successful music therapy.

Outcomes involve five main principles: identification and avoidance of specific stimuli and cues that trigger autism crises; direct musical “sedation”; acquisition of music-behavioral skills to “auto-regulate” pathological developments; a sort of music-therapeutic emotional re-balancing; and consolidation of an inner equilibrium. The “right moment” of intervention and adjustment of musical experiences within a narrow range of the client's aesthetic-emotional intensity tolerance are critical to therapeutic outcomes. Possible music-therapeutic contraindications have to be taken into consideration.

10.
This paper points to a convergence of formal and rhetorical features in ancient Chinese cosmobiological theory, within which is developed a view of the inner life of human emotions. Inasmuch as there is an extensive classical tradition considering the emotions in conjunction with music, one can justify a structural analysis of medical texts treating disorder in emotional life, since emotions, musical interpretation and structural analysis all deal with systems interrelated in a transformational space largely independent of objective reference and propositional coordination. Following a section of ethnolinguistic sketches to provide grounds in some phenomenological worlds recognized by Chinese people, there is a textual analysis of a classical medical source for the treatment of emotional distress. Through close examination of the compositional schema of this text, it can be demonstrated that the standard categories of correlative cosmology are arrayed within a more comprehensive structural order.

11.
A prevalent conceptual metaphor is the association of the concepts of good and evil with brightness and darkness, respectively. Music cognition, like metaphor, is possibly embodied, yet no study has addressed the question of whether musical emotion can modulate brightness judgment in a metaphor-consistent fashion. In three separate experiments, participants judged the brightness of a grey square that was presented after a short excerpt of emotional music. The results of Experiment 1 showed that short musical excerpts are effective emotional primes that cross-modally influence brightness judgment of visual stimuli. Grey squares were consistently judged as brighter after listening to music with a positive valence than after music with a negative valence. The results of Experiment 2 revealed that the bias in brightness judgment does not require an active evaluation of the emotional content of the music. By applying a different experimental procedure in Experiment 3, we showed that this brightness judgment bias is indeed a robust effect. Altogether, our findings demonstrate that musical emotion powerfully biases brightness judgment, and that this bias is aligned with the metaphor viewpoint.
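The core comparison (brightness judgments after positive- versus negative-valence excerpts, within participants) can be illustrated with a paired t-test; the numbers below are invented for illustration only:

```python
import numpy as np
from scipy import stats

# Per-participant mean brightness judgments of the grey square after
# positive- vs negative-valence musical primes (hypothetical values):
bright_after_pos = np.array([6.1, 5.8, 6.4, 5.9, 6.2, 6.0])
bright_after_neg = np.array([5.5, 5.6, 5.9, 5.4, 5.7, 5.6])

t, p = stats.ttest_rel(bright_after_pos, bright_after_neg)
print(f"paired t = {t:.2f}, p = {p:.3f}")   # higher judged brightness after positive music
```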

12.
We present an EEG study of two music improvisation experiments. Professional musicians with a high level of improvisation skill were asked to perform music either according to notes (composed music) or in improvisation. Each piece of music was performed in two different modes: strict mode and “let-go” mode. Synchronized EEG data were measured from both musicians and listeners. We used conditional Mutual Information from Mixed Embedding (MIME), one of the most reliable causality measures, to analyze directed correlations between different EEG channels, and combined it with network theory to construct both intra-brain and cross-brain networks. Differences were identified in intra-brain neural networks between composed music and improvisation and between strict mode and “let-go” mode. Particular brain regions, such as frontal, parietal and temporal regions, were found to play a key role in differentiating brain activities between playing conditions. By comparing degree centralities in the intra-brain neural networks, we found that musicians and listeners responded differently when the playing conditions were compared.
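Turning a channel-by-channel causality matrix (such as MIME values) into a directed network and computing degree centralities can be sketched as follows; the threshold and data here are placeholders, not the study's:

```python
import numpy as np

def degree_centrality(causality, threshold):
    """Threshold a channel-by-channel causality matrix (rows drive columns)
    into a directed graph and return normalised in- and out-degree
    centrality per node."""
    adj = (causality > threshold).astype(int)
    np.fill_diagonal(adj, 0)                 # ignore self-connections
    n = adj.shape[0]
    out_deg = adj.sum(axis=1) / (n - 1)      # how many channels a node drives
    in_deg = adj.sum(axis=0) / (n - 1)       # how many channels drive the node
    return in_deg, out_deg

# Placeholder 4-channel "causality" matrix standing in for MIME values:
rng = np.random.default_rng(0)
C = rng.random((4, 4))
print(degree_centrality(C, threshold=0.5))
```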

13.
The development of musical skills by musicians results in specific structural and functional modifications in the brain. Surprisingly, no functional magnetic resonance imaging (fMRI) study has investigated the impact of musical training on brain function during long-term memory retrieval, a faculty particularly important in music. Thus, using fMRI, we examined this process for the first time during a musical familiarity task (i.e., semantic memory for music). Musical expertise induced supplementary activations in the hippocampus, medial frontal gyrus, and superior temporal areas on both sides, suggesting a constant interaction between episodic and semantic memory during this task in musicians. In addition, a voxel-based morphometry (VBM) investigation was performed within these areas and revealed that gray matter density of the hippocampus was higher in musicians than in nonmusicians. Our data indicate that musical expertise critically modifies long-term memory processes and induces structural and functional plasticity in the hippocampus.

14.
Cerebral mechanisms of musical abilities were explored in musically gifted children. For this purpose, psychophysiological characteristics of the perception of emotional speech information were studied experimentally in samples of gifted and ordinary children. Forty-six schoolchildren and forty-eight musicians in three age groups (7-10, 11-13 and 14-17 years old) participated in the study. In each experimental session, a test sentence was presented to a subject through headphones with two emotional intonations (joy and anger) and without emotional expression; the subject had to recognize the type of emotion, and the answers were recorded. Analysis of variance revealed age- and gender-related features of emotion recognition: boy musicians were 4-6 years ahead of schoolchildren of the same age in the development of emotion-recognition mechanisms, whereas girl musicians were 1-3 years ahead. In girls, musical education induced a shift of the predominant activity for emotional perception toward the left hemisphere; in boys, by contrast, the initially distinct dominance of the left hemisphere was not retained over the course of further education.

15.
The current study explored the influence of musical expertise, and specifically training in improvisation, on creativity, using the framework of the twofold model, according to which creativity involves a process of idea generation and idea evaluation. Based on the hypothesis that a strict evaluation phase may have an inhibiting effect on the generation phase, we predicted that training in improvisation may have a “releasing effect” on the evaluation system, leading to greater creativity. To examine this hypothesis, we compared performance among three groups (musicians trained in improvisation, musicians not trained in improvisation, and non-musicians) on divergent thinking tasks and on their evaluation of creativity. The improvisation group scored higher on fluency and originality than the other two groups. Among the musicians, evaluation of creativity mediated how experience in improvisation was related to originality and fluency scores. It is concluded that deliberate practice of improvisation may have a “releasing effect” on creativity.

16.
The existing empirical literature suggests that during difficult situations, the concurrent experience of positive and negative affects may be ideal for ensuring successful adaptation and well-being. However, different patterns of mixed emotions may have different adaptive consequences. The present research tested the proposition that experiencing a pattern of secondary mixed emotion (i.e., secondary emotions that embrace both positive and negative affects) promotes adaptive coping more than two other patterns of mixed emotional experience: simultaneous (i.e., two emotions of opposing affects taking place at the same time) and sequential (i.e., two emotions of opposing affects switching back and forth). Support for this hypothesis was obtained from two experiments (Studies 1 and 2) and a longitudinal survey (Study 3). The results revealed that secondary mixed emotions predominate over sequential and simultaneous mixed emotional experiences in promoting adaptive coping by fostering the motivational and informative functions of emotions: they afford solution-oriented actions rather than avoidance and faster decisions regarding coping strategies (Study 1), as well as easier access to self-knowledge and better narrative organization (Study 2). Furthermore, individuals characterized as being prone to feeling secondary mixed emotions were more resilient to stress caused by transitions than those characterized as being prone to feeling opposing emotions separately (Study 3). Taken together, the preliminary results indicate that the pattern of secondary mixed emotion provides individuals with a higher capacity to handle adversity than the other two patterns of mixed emotional experience.

17.
How does music induce or evoke feeling states in listeners? A number of mechanisms have been proposed for how sounds induce emotions, including innate auditory responses, learned associations and mirror neuron processes. Inspired by ethology, it is suggested that the ethological concepts of signals, cues and indices offer additional analytic tools for better understanding induced affect. It is proposed that ethological concepts help explain why music is able to induce only certain emotions, why some induced emotions are similar to the displayed emotion (whereas other induced emotions differ considerably from the displayed emotion), why listeners often report feeling mixed emotions and why only some musical expressions evoke similar responses across cultures.

18.
The corpus callosum (CC) is a brain structure composed of axon fibres linking the right and left hemispheres. Musical training is associated with a larger midsagittal cross-sectional area of the CC, suggesting that interhemispheric communication may be faster in musicians. Here we compared interhemispheric transmission times (ITTs) for musicians and non-musicians. ITT was measured by comparing simple reaction times to stimuli presented to the same hemisphere that controlled a button-press response (uncrossed reaction time), or to the contralateral hemisphere (crossed reaction time). Both visual and auditory stimuli were tested. We predicted that the crossed-uncrossed difference (CUD) for musicians would be smaller than for non-musicians as a result of faster interhemispheric transfer times. We did not expect a difference in CUDs between the visual and auditory modalities for either musicians or non-musicians, as previous work indicates that interhemispheric transfer may happen through the genu of the CC, which contains motor fibres rather than sensory fibres. There were no significant differences in CUDs between musicians and non-musicians. However, auditory CUDs were significantly smaller than visual CUDs. Although this auditory-visual difference was larger in musicians than in non-musicians, the interaction between modality and musical training was not significant. Therefore, although musical training does not significantly affect ITT, the crossing of auditory information between hemispheres appears to be faster than that of visual information, perhaps because subcortical pathways play a greater role in auditory interhemispheric transfer.
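The CUD itself is a simple contrast of mean reaction times in this Poffenberger-style paradigm. A minimal sketch with hypothetical data:

```python
import numpy as np

def crossed_uncrossed_difference(crossed_rts, uncrossed_rts):
    """Estimate of interhemispheric transfer time: mean RT when the stimulus
    is contralateral to the responding hand (crossed) minus mean RT when
    stimulus and responding hand are on the same side (uncrossed)."""
    return np.mean(crossed_rts) - np.mean(uncrossed_rts)

# Hypothetical reaction times in milliseconds:
crossed = np.array([312, 305, 298, 321, 309])     # stimulus contralateral to response hand
uncrossed = np.array([308, 301, 296, 317, 304])   # stimulus ipsilateral to response hand
print(f"CUD = {crossed_uncrossed_difference(crossed, uncrossed):.1f} ms")   # 3.8 ms
```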
