Similar Literature
20 matching documents found.
1.
Several studies have investigated the encoding and perception of emotional expressivity in music performance. A relevant question concerns how the ability to communicate emotions in music performance is acquired. In accordance with recent theories on the embodiment of emotion, we suggest here that both the expression and recognition of emotion in music might at least in part rely on knowledge about the sounds of expressive body movements. We test this hypothesis by drawing parallels between musical expression of emotions and expression of emotions in sounds associated with a non-musical motor activity: walking. In a combined production-perception design, two experiments were conducted, and expressive acoustical features were compared across modalities. An initial performance experiment tested for similar feature use in walking sounds and music performance, and revealed that strong similarities exist. Features related to sound intensity, tempo and tempo regularity were identified as being used similarly in both domains. Participants in a subsequent perception experiment were able to recognize both non-emotional and emotional properties of the sound-generating walkers. An analysis of the acoustical correlates of behavioral data revealed that variations in sound intensity, tempo, and tempo regularity were likely used to recognize expressed emotions. Taken together, these results lend support to the motor origin hypothesis for the musical expression of emotions.

2.
Rhythmic entrainment, or beat synchronization, provides an opportunity to understand how multiple systems operate together to integrate sensory-motor information. Synchronization is also an essential component of musical performance that may be enhanced through musical training. Investigations of rhythmic entrainment have revealed a developmental trajectory across the lifespan, showing that synchronization improves with age and musical experience. Here, we explore the development and maintenance of synchronization from childhood through older adulthood in a large cohort of participants (N = 145), and also ask how it may be altered by musical experience. We employed a uniform assessment of beat synchronization for all participants and compared performance developmentally and between individuals with and without musical experience. We show that the ability to consistently tap along to a beat improves with age into adulthood, yet in older adulthood tapping performance becomes more variable. Also, from childhood into young adulthood, individuals are able to tap increasingly close to the beat (i.e., asynchronies decline with age); however, this trend reverses from younger into older adulthood. There is a positive association between the proportion of life spent playing music and tapping performance, which suggests a link between musical experience and auditory-motor integration. These results are broadly consistent with previous investigations into the development of beat synchronization across the lifespan, and thus complement existing studies and present new insights offered by a different, large cross-sectional sample.
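The two tapping measures discussed above, consistency and asynchrony, are straightforward to compute from tap and beat times. Below is a minimal sketch, not the study's actual analysis pipeline; the nearest-beat pairing rule and the simulated data are assumptions.

```python
import numpy as np

def tapping_metrics(tap_times, beat_times):
    """Mean asynchrony and variability for beat-synchronized tapping.

    Each tap is paired with its nearest beat; asynchrony is tap time minus
    beat time, so negative values mean the tap anticipated the beat.
    """
    taps = np.asarray(tap_times, dtype=float)
    beats = np.asarray(beat_times, dtype=float)
    nearest = beats[np.argmin(np.abs(taps[:, None] - beats), axis=1)]
    asynchronies = taps - nearest
    return {
        "mean_asynchrony": asynchronies.mean(),     # closeness to the beat
        "asynchrony_sd": asynchronies.std(ddof=1),  # tapping consistency
    }

# Example: taps slightly anticipating a 120-bpm (0.5 s period) metronome.
rng = np.random.default_rng(0)
beats = np.arange(0.0, 10.0, 0.5)
taps = beats + rng.normal(-0.02, 0.015, size=beats.size)
print(tapping_metrics(taps, beats))
```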

3.
At around 7 months of age, human infants begin to reliably produce well-formed syllables containing both consonants and vowels, a behavior called canonical babbling. Over subsequent months, the frequency of canonical babbling continues to increase. How the infant's nervous system supports the acquisition of this ability is unknown. Here we present a computational model that combines a spiking neural network, reinforcement-modulated spike-timing-dependent plasticity, and a human-like vocal tract to simulate the acquisition of canonical babbling. As in human infants, the model's frequency of canonical babbling gradually increases. The model is rewarded when it produces a sound that is more auditorily salient than sounds it has previously produced. This is consistent with data from human infants indicating that contingent adult responses shape infant behavior, and with data from deaf and tracheostomized infants indicating that hearing, including hearing one's own vocalizations, is critical for canonical babbling development. Reward receipt increases the level of dopamine in the neural network. The neural network contains a reservoir with recurrent connections and two motor neuron groups, one agonist and one antagonist, which control the masseter and orbicularis oris muscles, promoting or inhibiting mouth closure. The model learns to increase the number of salient, syllabic sounds it produces by adjusting the muscles' baseline activation levels and increasing their range of activity. Our results support the possibility that, through dopamine-modulated spike-timing-dependent plasticity, the motor cortex learns to harness its natural oscillations in activity in order to produce syllabic sounds. The model thus suggests that learning to produce rhythmic mouth movements for speech production may be supported by general cortical learning mechanisms. It makes several testable predictions and has implications for our understanding not only of how syllabic vocalizations develop in infancy but also of how they may have evolved.
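The learning rule named above, reward- (dopamine-) modulated spike-timing-dependent plasticity, is typically implemented with an eligibility trace: spike pairings mark a synapse as eligible, and the weight actually changes only while dopamine is elevated. The sketch below follows that general scheme, in the style of Izhikevich's 2007 formulation; all names, time constants and gains are illustrative assumptions, not the paper's parameters.

```python
import math

# Illustrative constants (assumptions, not the model's values).
TAU_C, TAU_D = 1.0, 0.2        # decay of eligibility trace and dopamine (s)
TAU_STDP = 0.02                # STDP pairing window (s)
A_PLUS, A_MINUS = 0.01, 0.012  # potentiation / depression amplitudes
LEARN_RATE = 0.5

def rstdp_step(w, c, d, dt, pairing_dt=None, reward=0.0):
    """One Euler step for weight w, eligibility trace c and dopamine d.

    pairing_dt: post-spike time minus pre-spike time for a pairing in this
    step (None if no pairing); reward: dopamine pulse for a salient sound.
    """
    if pairing_dt is not None:
        if pairing_dt > 0:     # pre before post: mark for potentiation
            c += A_PLUS * math.exp(-pairing_dt / TAU_STDP)
        else:                  # post before pre: mark for depression
            c -= A_MINUS * math.exp(pairing_dt / TAU_STDP)
    d += reward                # salient vocalization -> dopamine pulse
    w += LEARN_RATE * c * d * dt   # plasticity is gated by dopamine
    c -= (c / TAU_C) * dt          # both traces decay toward zero
    d -= (d / TAU_D) * dt
    return w, c, d
```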

4.
Theories of music evolution agree that human music has an affective influence on listeners. Tests of non-humans have provided little evidence of preferences for human music. However, prosodic features of speech ('motherese') influence the affective behaviour of non-verbal infants as well as domestic animals, suggesting that features of music can influence the behaviour of non-human species. We incorporated acoustical characteristics of tamarin affiliation vocalizations and tamarin threat vocalizations into corresponding pieces of music. We compared music composed for tamarins with that composed for humans. Tamarins were generally indifferent to playbacks of human music, but responded with increased arousal to music based on tamarin threat vocalizations, and with decreased activity and increased calm behaviour to music based on tamarin affiliation vocalizations. Affective components in human music may have evolutionary origins in the structure of calls of non-human animals. In addition, animal signals may have evolved to manage the behaviour of listeners by influencing their affective state.

5.
Children using unilateral cochlear implants abnormally rely on tempo rather than mode cues to distinguish whether a musical piece is happy or sad. This led us to question how this judgment is affected by the type of experience in early auditory development. We hypothesized that judgments of the emotional content of music would vary by the type and duration of access to sound in early life due to deafness, altered perception of musical cues through new ways of using auditory prostheses bilaterally, and formal music training during childhood. Seventy-five participants completed the Montreal Emotion Identification Test. Thirty-three had normal hearing (aged 6.6 to 40.0 years) and 42 children had hearing loss and used bilateral auditory prostheses (31 bilaterally implanted and 11 unilaterally implanted with contralateral hearing aid use). Reaction time and accuracy were measured. Accurate judgment of emotion in music was achieved across ages and musical experience. Musical training accentuated the reliance on mode cues which developed with age in the normal hearing group. Degrading pitch cues through cochlear implant-mediated hearing induced greater reliance on tempo cues, but mode cues grew in salience when at least partial acoustic information was available through some residual hearing in the contralateral ear. Finally, when pitch cues were experimentally distorted to represent cochlear implant hearing, individuals with normal hearing (including those with musical training) switched to an abnormal dependence on tempo cues. The data indicate that, in a western culture, access to acoustic hearing in early life promotes a preference for mode rather than tempo cues which is enhanced by musical training. The challenge to these preferred strategies during cochlear implant hearing (simulated and real), regardless of musical training, suggests that access to pitch cues for children with hearing loss must be improved by preservation of residual hearing and improvements in cochlear implant technology.

6.
In this study we explore how music can entrain human walkers to synchronise with the musical beat without being instructed to do so. For this, we use an interactive music player, called D-Jogger, that senses the user's walking tempo and phase. D-Jogger aligns the music by manipulating the timing difference between beats and footfalls. Experiments are reported that led to the development and optimisation of four alignment strategies. The first strategy matched the music's tempo continuously to the runner's pace. The second strategy matched the music's tempo at the beginning of a song to the runner's pace, keeping the tempo constant for the remainder of the song. The third strategy starts a song in perfect phase synchrony and continues to adjust the tempo to match the runner's pace. The fourth and last strategy additionally adjusts the phase of the music so that each beat matches a footfall. The first two strategies resulted in a minor increase in steps in phase synchrony with the main beat compared with a random playlist; the last two strategies resulted in a strong increase in synchronised steps. These results may be explained in terms of phase-error correction mechanisms and motor prediction schemes. Finding the phase-lock is difficult due to fluctuations in the interaction, whereas strategies that automatically align the phase between movement and music solve this problem. Moreover, the data show that once the phase-lock is found, alignment can be easily maintained, suggesting that less entrainment effort is needed to keep the phase-lock than to find it. The different alignment strategies of D-Jogger can be applied in domains such as sports, physical rehabilitation and assistive technologies for movement performance.
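The fourth strategy described above combines tempo matching with phase-error correction. A minimal sketch of such a controller follows; the function name, the unit-interval phase convention and the gain value are assumptions, not D-Jogger's published implementation.

```python
def align_playback(step_period, step_phase, music_period, music_phase,
                   phase_gain=0.3):
    """Playback-rate controller sketch: pull beats onto footfalls.

    Phases are in [0, 1). The phase error is wrapped into [-0.5, 0.5) so
    the correction always takes the shortest path around the cycle.
    """
    # Tempo matching: at this rate the effective beat period equals the
    # step period (music_period / rate == step_period).
    rate = music_period / step_period
    # Phase-error correction: if footfalls lead the beat, speed up a little;
    # if they lag, slow down. The gain trades convergence speed for stability.
    error = (step_phase - music_phase + 0.5) % 1.0 - 0.5
    return rate * (1.0 + phase_gain * error)

# Example: 130-bpm music (beat period 60/130 s) for a walker stepping every
# 0.5 s, with the beat currently an eighth of a cycle behind the footfall.
print(align_playback(0.5, 0.125, 60 / 130, 0.0))
```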

7.
Parkinson's disease (PD) results in movement and sensory impairments that can be reduced by familiar music. At present, it is unclear whether the beneficial effects of music are limited to lessening the bradykinesia of whole-body movement or whether they also extend to skilled movements of PD subjects. This question was addressed in the present study, in which control and PD subjects were given a skilled reaching task that was performed with and without accompanying preferred musical pieces. Eye movements and limb use were monitored with biomechanical measures, and limb movements were additionally assessed using a previously described movement element scoring system. Preferred musical pieces did not lessen limb and hand movement impairments as assessed with either the biomechanical measures or movement element scoring. Nevertheless, the PD patients with more severe motor symptoms, as assessed by Hoehn and Yahr (HY) scores, displayed abnormally enhanced visual engagement of the target, and this impairment was reduced during trials performed with accompanying preferred musical pieces. The results are discussed in relation to the idea that preferred musical pieces, although not generally beneficial in lessening skilled reaching impairments, may normalize the balance between visual and proprioceptive guidance of skilled reaching.

8.
Music or other background sounds are often played in barns as environmental enrichment for animals on farms or to mask sudden disruptive noises. Previous studies looking at the effects of this practice on non-human animal well-being and productivity have found contradictory results. However, there is still a lack of discussion on whether piglets have the ability to distinguish different types of music. In this study, we exposed piglets to different music conditions to investigate whether the piglets preferred certain music types, in which case those types would have the potential to be used as environmental enrichment. In total, 30 piglets were tested for music type preference to determine whether growing pigs respond differently to different types of music. We used music from two families of instruments (S: string, W: wind) and with two tempos (S: slow, 65 beats/min (bpm); F: fast, 200 bpm), providing four music-type combinations (SS: string-slow; SF: string-fast; WS: wind-slow; WF: wind-fast). The piglets were given a choice between two chambers, one with no music and the other with one of the four types of music, and their behaviour was observed. The results showed that SS and WF music significantly increased residence time (P<0.01) compared with the other music conditions. Compared with the control group (with no music), the different music conditions led to different behavioural responses, where SS music significantly increased lying (P<0.01) and exploration behaviour (P<0.01); SF music significantly increased tail-wagging behaviour (P<0.01); WS music significantly increased exploration (P<0.01); and WF music significantly increased walking, lying, standing and exploration (all P<0.01). The results also showed that musical instruments and tempo had little effect on most of the behaviours. Fast-tempo music significantly increased walking (P=0.02), standing (P<0.01) and tail wagging (P=0.04) compared with slow-tempo music. In conclusion, the results of this experiment show that piglets are more sensitive to tempo than to musical instruments in their response to musical stimulation and seem to prefer SS and WF music to the other two types. The results also suggest a need for further research on the effect of music types on animals.

9.
The evolutionary origins of music are much debated. One theory holds that the ability to produce complex musical sounds might reflect qualities that are relevant in mate choice contexts and hence, that music is functionally analogous to the sexually-selected acoustic displays of some animals. If so, women may be expected to show heightened preferences for more complex music when they are most fertile. Here, we used computer-generated musical pieces and ovulation predictor kits to test this hypothesis. Our results indicate that women prefer more complex music in general; however, we found no evidence that their preference for more complex music increased around ovulation. Consequently, our findings are not consistent with the hypothesis that a heightened preference/bias in women for more complex music around ovulation could have played a role in the evolution of music. We go on to suggest future studies that could further investigate whether sexual selection played a role in the evolution of this universal aspect of human culture.

10.
Inspired by a theory of embodied music cognition, we investigate whether music can entrain the speed of beat-synchronized walking. If human walking is in synchrony with the beat and all musical stimuli have the same duration and the same tempo, then differences in walking speed can only be the result of music-induced differences in stride length, thus reflecting the vigor or physical strength of the movement. Participants walked in an open field in synchrony with the beat of 52 different musical stimuli, all having a tempo of 130 beats per minute and a meter of 4 beats. Walking speed was measured as the distance walked during a 30-second interval. The results reveal that some music is 'activating' in the sense that it increases the speed, and some music is 'relaxing' in the sense that it decreases the speed, compared with the spontaneous walking speed in response to metronome stimuli. Participants are consistent in their observation of qualitative differences between the relaxing and activating musical stimuli. Using regression analysis, it was possible to set up a predictive model using only four sonic features that explain 60% of the variance. The sonic features capture variation in loudness and pitch patterns at periods of three, four and six beats, suggesting that expressive patterns in music are responsible for the effect. The mechanism may be attributed to an attentional shift, a subliminal audio-motor entrainment mechanism, or an arousal effect, but further study is needed to distinguish among these. Overall, the study supports the hypothesis that recurrent patterns of fluctuation affecting the binary meter strength of the music may entrain the vigor of the movement. The study opens up new perspectives for understanding the relationship between entrainment and expressiveness, with the possibility of developing applications for domains such as sports and physical rehabilitation.
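For readers who want the shape of the regression analysis mentioned above, the sketch below fits an ordinary least-squares model from four sonic features to walking speed and reports the variance explained. It is illustrative only: the feature values are random placeholders, and the paper's actual predictors (loudness and pitch patterns at three-, four- and six-beat periods) and coefficients are not reproduced here.

```python
import numpy as np

def fit_speed_model(features, speed):
    """Ordinary least squares: speed ~ intercept + sonic features.

    features: (n_stimuli, 4) array of per-stimulus sonic descriptors.
    speed:    (n_stimuli,) mean walking speed per stimulus.
    Returns the coefficients and the fraction of variance explained (R^2).
    """
    X = np.column_stack([np.ones(len(features)), features])
    coefs, *_ = np.linalg.lstsq(X, speed, rcond=None)
    residuals = speed - X @ coefs
    r2 = 1.0 - residuals.var() / speed.var()
    return coefs, r2

# Toy usage with random data (52 stimuli, 4 features, as in the design above).
rng = np.random.default_rng(0)
X = rng.normal(size=(52, 4))
y = X @ [0.10, -0.05, 0.08, 0.02] + rng.normal(scale=0.05, size=52) + 1.4
print(fit_speed_model(X, y))
```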

11.
Many studies have revealed the influences of music, and particularly its tempo, on the autonomic nervous system (ANS) and respiration patterns. Since there is an interaction between the ANS and the respiratory system, namely sympatho-respiratory coupling, it is possible that the effect of musical tempo on the ANS is modulated by the respiratory system. Therefore, we investigated the effects of the relationship between musical tempo and respiratory rate on the ANS. Fifty-two healthy people aged 18–35 years participated in this study. Their respiratory rates were controlled by using a silent electronic metronome, and they listened to simple drum sounds with a constant tempo. We varied the respiratory rate–acoustic tempo combination: the respiratory rate was controlled at 15 or 20 cycles per minute (CPM), and the acoustic tempo was 60 or 80 beats per minute (BPM) or the environment was silent. Electrocardiograms and an elastic chest band were used to measure the heart rate and respiratory rate, respectively. The mean heart rate and heart rate variability (HRV) were regarded as indices of ANS activity. We observed a significant increase in the mean heart rate and in the ratio of low-frequency (0.04–0.15 Hz) to high-frequency (0.15–0.40 Hz) power of HRV only when the respiratory rate was controlled at 20 CPM and the acoustic tempo was 80 BPM. We suggest that the effect of acoustic tempo on sympathetic tone is modulated by the respiratory system.
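The HRV index referred to above, the ratio of low-frequency (0.04–0.15 Hz) to high-frequency (0.15–0.40 Hz) spectral power, can be computed along the following lines. The band edges come from the abstract; the resampling rate, the cubic interpolation and the Welch settings are conventional choices, not necessarily those of the study.

```python
import numpy as np
from scipy.signal import welch
from scipy.interpolate import interp1d

def lf_hf_ratio(rr_intervals, fs=4.0):
    """LF/HF ratio from successive RR intervals (in seconds)."""
    rr = np.asarray(rr_intervals, dtype=float)
    t = np.cumsum(rr)                          # time of each beat
    # Resample the irregularly sampled RR series onto an even grid so a
    # standard PSD estimator can be applied.
    grid = np.arange(t[0], t[-1], 1.0 / fs)
    rr_even = interp1d(t, rr, kind="cubic")(grid)
    freqs, psd = welch(rr_even - rr_even.mean(), fs=fs,
                       nperseg=min(256, grid.size))
    lf_band = (freqs >= 0.04) & (freqs < 0.15)   # sympathetic + vagal
    hf_band = (freqs >= 0.15) & (freqs <= 0.40)  # mostly vagal/respiratory
    lf = np.trapz(psd[lf_band], freqs[lf_band])
    hf = np.trapz(psd[hf_band], freqs[hf_band])
    return lf / hf
```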

12.
Evidence regarding visually guided limb movements suggests that the motor system learns and maintains neural maps between motor commands and sensory feedback. Such systems are hypothesized to be used in a feed-forward control strategy that permits precision and stability without the delays of direct feedback control. Human vocalizations involve precise control over vocal and respiratory muscles. However, little is known about the sensorimotor representations underlying speech production. Here, we manipulated the heard fundamental frequency of the voice during speech to demonstrate learning of auditory-motor maps. Mandarin speakers repeatedly produced words with specific pitch patterns (tone categories). On each successive utterance, the frequency of their auditory feedback was increased by 1/100 of a semitone until they heard their feedback one full semitone above their true pitch. Subjects automatically compensated for these changes by lowering their vocal pitch. When feedback was unexpectedly returned to normal, speakers significantly increased the pitch of their productions beyond their initial baseline frequency. This adaptation was found to generalize to the production of another tone category. However, results indicate that a more robust adaptation was produced for the tone that was spoken during feedback alteration. The immediate aftereffects suggest a global remapping of the auditory-motor relationship after an extremely brief training period. However, this learning does not represent a complete transformation of the mapping; rather, it is in part target-dependent.
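The feedback manipulation above is easy to express numerically: a shift of n cents multiplies frequency by 2^(n/1200), so each utterance's 1/100-semitone step compounds to one full semitone after 100 utterances. A short check, using standard equal-temperament arithmetic:

```python
CENTS_PER_SEMITONE = 100

def shift_ratio(cents):
    """Frequency ratio for a pitch shift of the given size in cents."""
    return 2.0 ** (cents / (12 * CENTS_PER_SEMITONE))

step = shift_ratio(1)    # one step: ~1.000578 (1/100 of a semitone)
full = shift_ratio(100)  # ~1.0595 (one full semitone)
assert abs(step ** 100 - full) < 1e-12  # 100 compounded steps = 1 semitone
print(step, full)
```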

13.
Musical behaviours such as dancing, singing and music production, which require the ability to entrain to a rhythmic beat, encourage high levels of interpersonal coordination. Such coordination has been associated with increased group cohesion and social bonding between group members. Previously, we demonstrated that this association influences even the social behaviour of 14-month-old infants. Infants were significantly more likely to display helpfulness towards an adult experimenter following synchronous bouncing compared with asynchronous bouncing to music. The present experiment was designed to determine whether interpersonal synchrony acts as a cue for 14-month-olds to direct their prosocial behaviours to specific individuals with whom they have experienced synchronous movement, or whether it acts as a social prime, increasing prosocial behaviour in general. Consistent with the previous results, infants were significantly more likely to help an experimenter following synchronous versus asynchronous movement with this person. Furthermore, this manipulation did not affect the infants' behaviour towards a neutral stranger who was not involved in any movement experience. This indicates that synchronous bouncing acts as a social cue for directing prosociality. These results have implications for how musical engagement and rhythmic synchrony affect social behaviour very early in development.

14.

Background

There is a lack of neuroscientific studies investigating music processing with naturalistic stimuli, and brain responses to real music are thus largely unknown.

Methodology/Principal Findings

This study investigates event-related brain potentials (ERPs), skin conductance responses (SCRs) and heart rate (HR) elicited by unexpected chords of piano sonatas as they were originally arranged by composers, and as they were played by professional pianists. From the musical excerpts played by the pianists (with emotional expression), we also created versions without variations in tempo and loudness (without musical expression) to investigate effects of musical expression on ERPs and SCRs. Compared to expected chords, unexpected chords elicited an early right anterior negativity (ERAN, reflecting music-syntactic processing) and an N5 (reflecting processing of meaning information) in the ERPs, as well as clear changes in the SCRs (reflecting that unexpected chords also elicited emotional responses). The ERAN was not influenced by emotional expression, whereas N5 potentials elicited by chords in general (regardless of their chord function) differed between the expressive and the non-expressive condition.

Conclusions/Significance

These results show that the neural mechanisms of music-syntactic processing operate independently of the emotional qualities of a stimulus, justifying the use of stimuli without emotional expression to investigate the cognitive processing of musical structure. Moreover, the data indicate that musical expression affects the neural mechanisms underlying the processing of musical meaning. Our data are the first to reveal influences of musical performance on ERPs and SCRs, and to show physiological responses to unexpected chords in naturalistic music.

15.
Stewart L, Walsh V. Current Biology 2005, 15(21): R882–R884.
When it comes to listening to music, infants literally have a more open mind than their parents. Studies which investigate listening behaviour of babies and adults have shown that, as we learn to discriminate the musical sounds in our own environment, we become less sensitive to those of other cultures.

16.
The influence of tonal modulation in pieces of music on EEG parameters was studied. An EEG was recorded while subjects listened to two series of fragments containing modulations: controlled harmonic progressions and fragments of classical musical compositions. Each series included modulations to the subdominant, the dominant, and the ascending minor sixth. The highly controlled and artistically impoverished harmonic progressions of the first series contrasted with the real music excerpts of the second series, which differed in tempo, rhythm, tessitura, duration, and style. Listening to both harmonic progressions and musical fragments produced event-related synchronization in the α frequency band. Real musical fragments with modulation to the dominant generated lower synchronization in the α band than the other modulations. A smaller post-listening decrease in α-band synchronization was observed for fragments of classical music than for harmonic progressions.

17.
Journal of Genetics and Genomics 2022, 49(1): 40–53.
The developing human and mouse teeth constitute an ideal model system for studying the regulatory mechanisms underlying organ growth control, since the teeth of the two species share highly conserved and well-characterized developmental processes while differing notably in developmental tempo. In the current study, we performed heterogeneous recombination between human and mouse dental tissues and demonstrated that the dental mesenchyme dominates the tooth developmental tempo and that FGF8 could be a critical player in this developmental process. Forced activation of FGF8 signaling in the dental mesenchyme of mice promoted cell proliferation, prevented cell apoptosis via p38 and perhaps PI3K-Akt intracellular signaling, and impelled the transition of the cell cycle from G1- to S-phase in the tooth germ, resulting in a slowdown of the tooth developmental pace. Our results provide compelling evidence that extrinsic signals can profoundly affect tooth developmental tempo, and that dental mesenchymal FGF8 could be a pivotal factor controlling the developmental pace in a non-cell-autonomous manner during mammalian odontogenesis.

18.
Relationship of skin temperature changes to the emotions accompanying music
One hundred introductory psychology students were given tasks that caused their skin temperatures to either fall or rise. Then they listened to two musical selections, one of which they rated as evoking arousing, negative emotions while the other was rated as evoking calm, positive emotions. During the first musical selection that was presented, the arousing, negative emotion music terminated skin temperature increases and perpetuated skin temperature decreases, whereas the calm, positive emotion selection terminated skin temperature decreases and perpetuated skin temperature increases. During the second musical selection, skin temperature tended to increase whichever music was played; however, the increases were significant only during the calm, positive emotion music. It was concluded that music initially affects skin temperature in ways that can be predicted from affective rating scales, although the effect of some selections may depend upon what, if any, music had been previously heard.

19.
Musical behaviours are universal across human populations and, at the same time, highly diverse in their structures, roles and cultural interpretations. Although laboratory studies of isolated listeners and music-makers have yielded important insights into sensorimotor and cognitive skills and their neural underpinnings, they have revealed little about the broader significance of music for individuals, peer groups and communities. This review presents a sampling of musical forms and coordinated musical activity across cultures, with the aim of highlighting key similarities and differences. The focus is on scholarly and everyday ideas about music—what it is and where it originates—as well as the antiquity of music and the contribution of musical behaviour to ritual activity, social organization, caregiving and group cohesion. Synchronous arousal, action synchrony and imitative behaviours are among the means by which music facilitates social bonding. The commonalities and differences in musical forms and functions across cultures suggest new directions for ethnomusicology, music cognition and neuroscience, and a pivot away from the predominant scientific focus on instrumental music in the Western European tradition.

20.
Convergent evidence demonstrates that adult humans possess numerical representations that are independent of language [1–6]. Human infants and nonhuman animals can also make purely numerical discriminations, implicating both developmental and evolutionary bases for adult humans' language-independent representations of number [7, 8]. Recent evidence suggests that the nonverbal representations of number held by human adults are not constrained by the sensory modality in which they were perceived [9]. Previous studies, however, have yielded conflicting results concerning whether the number representations held by nonhuman animals and human infants are tied to the modality in which they were established [10–15]. Here, we report that untrained monkeys preferentially looked at a dynamic video display depicting the number of conspecifics that matched the number of vocalizations they heard. These findings suggest that number representations held by monkeys, like those held by adult humans, are unfettered by stimulus modality.
