Similar Documents
20 similar documents found (search time: 15 ms)
1.
Musical imagery is a relatively unexplored area, partly because of deficiencies in existing experimental paradigms, which are often difficult, unreliable, or do not provide objective measures of performance. Here we describe a novel protocol, the Pitch Imagery Arrow Task (PIAT), which induces and trains pitch imagery in both musicians and non-musicians. Given a tonal context and an initial pitch sequence, arrows are displayed to elicit a scale-step sequence of imagined pitches, and participants indicate whether the final imagined tone matches an audible probe. The task uses a staircase design that accommodates individual differences in musical experience and imagery ability. This new protocol was used to investigate the roles that musical expertise, self-reported auditory vividness and mental control play in imagery performance. Performance on the task was significantly better for participants who employed a musical imagery strategy than for participants who used an alternative cognitive strategy, and it correlated positively with scores on the Control subscale of the Bucknell Auditory Imagery Scale (BAIS). Multiple regression analysis revealed that imagery performance accuracy was best predicted by a combination of strategy use and scores on the Vividness subscale of the BAIS. These results confirm that competent performance on the PIAT requires active musical imagery and is very difficult to achieve using alternative cognitive strategies. Auditory vividness and mental control were more important than musical experience for the ability to manipulate pitch imagery.
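
As an illustration of the staircase logic described above, here is a minimal, hypothetical sketch of a 2-down/1-up adaptive procedure that adjusts the length of the imagined scale-step sequence. The step rule, trial count, and simulated listener are assumptions for illustration only, not the published PIAT parameters.

```python
import random

def simulate_piat_staircase(p_correct_per_step=0.9, n_trials=40, start_len=2,
                            min_len=1, max_len=10, seed=0):
    """Toy 2-down/1-up staircase over imagined-sequence length.

    Assumptions (not from the paper): difficulty is the number of arrows
    (imagined scale steps), and a simulated listener tracks each imagined
    step correctly with probability p_correct_per_step.
    """
    rng = random.Random(seed)
    length, consecutive_correct, track = start_len, 0, []
    for _ in range(n_trials):
        # A trial counts as correct only if every imagined step is tracked correctly.
        correct = all(rng.random() < p_correct_per_step for _ in range(length))
        track.append((length, correct))
        if correct:
            consecutive_correct += 1
            if consecutive_correct == 2:       # 2-down: harder after two correct trials
                length = min(max_len, length + 1)
                consecutive_correct = 0
        else:
            consecutive_correct = 0
            length = max(min_len, length - 1)  # 1-up: easier after an error
    return track

if __name__ == "__main__":
    history = simulate_piat_staircase()
    print("final sequence length:", history[-1][0])
```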

2.
The development of musical skills by musicians results in specific structural and functional modifications in the brain. Surprisingly, no functional magnetic resonance imaging (fMRI) study has investigated the impact of musical training on brain function during long-term memory retrieval, a faculty particularly important in music. Thus, using fMRI, we examined this process for the first time during a musical familiarity task (i.e., semantic memory for music). Musical expertise induced supplementary activations in the hippocampus, medial frontal gyrus, and superior temporal areas on both sides, suggesting a constant interaction between episodic and semantic memory during this task in musicians. In addition, a voxel-based morphometry (VBM) investigation was performed within these areas and revealed that gray matter density of the hippocampus was higher in musicians than in nonmusicians. Our data indicate that musical expertise critically modifies long-term memory processes and induces structural and functional plasticity in the hippocampus.

3.
Apart from its natural relevance to cognition, music provides a window into the intimate relationships between production, perception, experience, and emotion. Here, emotional responses and neural activity were observed as they evolved together with stimulus parameters over several minutes. Participants listened to a skilled music performance that included the natural fluctuations in timing and sound intensity that musicians use to evoke emotional responses. A mechanical performance of the same piece served as a control. Before and after fMRI scanning, participants reported real-time emotional responses on a 2-dimensional rating scale (arousal and valence) as they listened to each performance. During fMRI scanning, participants listened without reporting emotional responses. Limbic and paralimbic brain areas responded to the expressive dynamics of human music performance, and both emotion and reward related activations during music listening were dependent upon musical training. Moreover, dynamic changes in timing predicted ratings of emotional arousal, as well as real-time changes in neural activity. BOLD signal changes correlated with expressive timing fluctuations in cortical and subcortical motor areas consistent with pulse perception, and in a network consistent with the human mirror neuron system. These findings show that expressive music performance evokes emotion and reward related neural activations, and that music's affective impact on the brains of listeners is altered by musical training. Our observations are consistent with the idea that music performance evokes an emotional response through a form of empathy that is based, at least in part, on the perception of movement and on violations of pulse-based temporal expectancies.

4.
Recently, musical sounds from pre-recorded orchestra sample libraries (OSL) have become indispensable in music production for the stage or popular charts. Surprisingly, it is unknown whether human listeners can identify sounds as stemming from real orchestras or OSLs. Thus, an internet-based experiment was conducted to investigate whether a classic orchestral work, produced with sounds from a state-of-the-art OSL, could be reliably discerned from a live orchestra recording of the piece. The full sample of listeners (N = 602) identified the correct sound source at an average rate of 72.5%. This rate slightly exceeded Alan Turing's well-known upper threshold of 70% for a convincing, simulated performance. However, while sound experts tended to correctly identify the sound source, participants with lower listening expertise, who resembled the majority of music consumers, only achieved 68.6%. As non-expert listeners in the experiment were virtually unable to tell the real-life and OSL sounds apart, it is assumed that OSLs will become more common in music production for economic reasons.
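
To make the comparison against Turing's 70% criterion concrete, the following sketch runs a one-sided binomial test on counts reconstructed from the reported figures (72.5% of N = 602). Treating the average identification rate as a binomial count of listeners is a simplification, and the exact count is an assumption recovered from the rounded percentage.

```python
from scipy.stats import binomtest

# Approximate the reported result: 72.5% correct out of N = 602 listeners.
# The exact count is an assumption reconstructed from the rounded percentage.
n_listeners = 602
n_correct = round(0.725 * n_listeners)   # ~436

# One-sided test: does the identification rate exceed Turing's 70% criterion?
result = binomtest(n_correct, n_listeners, p=0.70, alternative="greater")
print(f"observed rate = {n_correct / n_listeners:.3f}, p = {result.pvalue:.3f}")
```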

5.
The perception of a regular beat is fundamental to music processing. Here we examine whether the detection of a regular beat is pre-attentive for metrically simple, acoustically varying stimuli using the mismatch negativity (MMN), an ERP response elicited by violations of acoustic regularity irrespective of whether subjects are attending to the stimuli. Both musicians and non-musicians were presented with a varying rhythm with a clear accent structure in which occasionally a sound was omitted. We compared the MMN response to the omission of identical sounds in different metrical positions. Most importantly, we found that omissions in strong metrical positions, on the beat, elicited higher amplitude MMN responses than omissions in weak metrical positions, not on the beat. This suggests that the detection of a beat is pre-attentive when highly beat inducing stimuli are used. No effects of musical expertise were found. Our results suggest that for metrically simple rhythms with clear accents beat processing does not require attention or musical expertise. In addition, we discuss how the use of acoustically varying stimuli may influence ERP results when studying beat processing.
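
A minimal sketch of how such an MMN amplitude comparison between strong and weak metrical positions could be computed from baseline-corrected single-trial epochs: average the amplitude in a latency window per subject and condition, then run a paired test. The latency window, subject count, and synthetic data are illustrative assumptions, not the study's recordings or analysis pipeline.

```python
import numpy as np
from scipy.stats import ttest_rel

def mean_amplitude(epochs, times, window=(0.10, 0.20)):
    """Average ERP amplitude in a latency window.

    epochs: (n_trials, n_samples) array of baseline-corrected single trials.
    times:  (n_samples,) array of epoch time points in seconds.
    """
    mask = (times >= window[0]) & (times <= window[1])
    return epochs[:, mask].mean(axis=1).mean()

if __name__ == "__main__":
    # Synthetic example data (placeholders, not the study's recordings).
    # The MMN is a negative deflection, so a "larger" MMN means a more negative mean.
    rng = np.random.default_rng(1)
    times = np.linspace(-0.1, 0.4, 256)
    bump = np.exp(-((times - 0.15) / 0.03) ** 2)
    strong = [rng.normal(size=(60, 256)) - 2.0 * bump for _ in range(12)]  # on the beat
    weak = [rng.normal(size=(60, 256)) - 1.0 * bump for _ in range(12)]    # off the beat
    amp_strong = [mean_amplitude(e, times) for e in strong]
    amp_weak = [mean_amplitude(e, times) for e in weak]
    t, p = ttest_rel(amp_strong, amp_weak)
    print(f"t = {t:.2f}, p = {p:.4f}")
```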

6.
Musical expertise is associated with structural and functional changes in the brain that underlie facilitated auditory perception. We investigated whether the phase locking (PL) and amplitude modulations (AM) of neuronal oscillations in response to musical chords are correlated with musical expertise and whether they reflect the prototypicality of chords in Western tonal music. To this aim, we recorded magnetoencephalography (MEG) while musicians and non-musicians were presented with common prototypical major and minor chords, and with uncommon, non-prototypical dissonant and mistuned chords, while watching a silenced movie. We then analyzed the PL and AM of ongoing oscillations in the theta (4–8 Hz), alpha (8–14 Hz), beta (14–30 Hz) and gamma (30–80 Hz) bands to these chords. We found that musical expertise was associated with strengthened PL of ongoing oscillations to chords over a wide frequency range during the first 300 ms from stimulus onset, as opposed to increased alpha-band AM to chords over temporal MEG channels. In musicians, the gamma-band PL was strongest to non-prototypical compared to other chords, while in non-musicians PL was strongest to minor chords. In both musicians and non-musicians the long-latency (> 200 ms) gamma-band PL was also sensitive to chord identity, and particularly to the amplitude modulations (beats) of the dissonant chord. These findings suggest that musical expertise modulates oscillation PL to musical chords and that the strength of these modulations is dependent on chord prototypicality.
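
The phase-locking measure can be illustrated with a standard inter-trial phase-locking value (PLV) computed from band-pass filtered, Hilbert-transformed epochs. The band limits below follow the abstract, but the sampling rate, filter order, and toy data are assumptions, not the study's MEG recordings or exact analysis.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

BANDS = {"theta": (4, 8), "alpha": (8, 14), "beta": (14, 30), "gamma": (30, 80)}

def phase_locking_value(epochs, fs, band):
    """Inter-trial phase locking of band-limited oscillations.

    epochs: (n_trials, n_samples) array for one channel, time-locked to chord onset.
    Returns a (n_samples,) PLV time course in [0, 1].
    """
    low, high = band
    b, a = butter(4, [low / (fs / 2), high / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, epochs, axis=1)
    phase = np.angle(hilbert(filtered, axis=1))
    return np.abs(np.exp(1j * phase).mean(axis=0))

if __name__ == "__main__":
    fs = 600.0                               # assumed MEG sampling rate
    rng = np.random.default_rng(0)
    t = np.arange(int(0.5 * fs)) / fs
    # Toy data: 100 trials of noise plus a weakly phase-locked 10 Hz component.
    epochs = rng.normal(size=(100, t.size)) + 0.5 * np.sin(2 * np.pi * 10 * t)
    for name, band in BANDS.items():
        plv = phase_locking_value(epochs, fs, band)
        print(f"{name:6s} mean PLV over first 300 ms: {plv[: int(0.3 * fs)].mean():.2f}")
```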

7.
Musical training leads to sensory and motor neuroplastic changes in the human brain. Motivated by findings on enlarged corpus callosum in musicians and asymmetric somatomotor representation in string players, we investigated the relationship between musical training, callosal anatomy, and interhemispheric functional symmetry during music listening. Functional symmetry was increased in musicians compared to nonmusicians, and in keyboardists compared to string players. This increased functional symmetry was prominent in visual and motor brain networks. Callosal size did not significantly differ between groups except for the posterior callosum in musicians compared to nonmusicians. We conclude that the distinctive postural and kinematic symmetry in instrument playing cross-modally shapes information processing in sensory-motor cortical areas during music listening. This cross-modal plasticity suggests that motor training affects music perception.

8.
Using magnetoencephalography (MEG), we investigated the influence of long-term musical training on the processing of partly imagined tone patterns (imagery condition) compared to the same perceived patterns (perceptual condition). The magnetic counterpart of the mismatch negativity (MMNm) was recorded and compared between musicians and non-musicians in order to assess the effect of musical training on the detection of deviants to tone patterns. The results indicated a clear MMNm in the perceptual condition as well as in a simple pitch oddball (control) condition in both groups. However, there was no significant mismatch response in either group in the imagery condition, despite above-chance behavioral performance in the task of detecting deviant tones. The latency and the laterality of the MMNm in the perceptual condition differed significantly between groups, with an earlier MMNm in musicians, especially in the left hemisphere. In contrast, the MMNm amplitudes did not differ significantly between groups. The behavioral results revealed a clear effect of long-term musical training in both experimental conditions. The obtained results represent new evidence that the processing of tone patterns is faster and more strongly lateralized in musically trained subjects, consistent with findings from other paradigms of enhanced auditory neural functioning due to long-term musical training.

9.
The simplest and likeliest assumption concerning the cognitive bases of absolute pitch (AP) is that at its origin there is a particularly skilled function which matches the height of the perceived pitch to the verbal label of the musical tone. Since there is no difference in sound frequency resolution between AP and non-AP (NAP) musicians, the hypothesis of the present study is that the failure of NAP musicians in pitch identification lies mainly in an inability to retrieve the correct verbal label to be assigned to the perceived musical note. The primary hypothesis is that, when asked to identify tones, NAP musicians confuse the verbal labels to be attached to the stimulus on the basis of their phonetic content. Data from two AP tests are reported, in which subjects had to respond in the presence or in the absence of visually presented verbal note labels (fixed Do solmization). Results show that NAP musicians more frequently confuse notes whose labels contain a similar vowel; for example, they tend to confuse a 261 Hz tone (Do) with Sol more often than with La. As a second goal, we wondered whether this effect is lateralized, i.e. whether one hemisphere is more responsible than the other for the confusion of notes with similar labels. This question was addressed by observing pitch identification during dichotic listening. Results showed that there is a right hemispheric disadvantage, in NAP but not AP musicians, in the retrieval of the verbal label to be assigned to the perceived pitch. The present results indicate that absolute pitch has strong verbal bases, at least from a cognitive point of view.

10.
Functional Magnetic Resonance Imaging (fMRI) was used to study the activation of cerebral motor networks during auditory perception of music in professional keyboard musicians (n = 12). In the activation paradigm, subjects listened to two-part polyphonic music while either critically appraising the performance or imagining they were performing it themselves. Two-part polyphonic audition and bimanual motor imagery circumvented a hemisphere bias associated with the convention of playing the melody with the right hand. Both tasks activated ventral premotor and auditory cortices, bilaterally, and the right anterior parietal cortex, when contrasted to 12 musically unskilled controls. Although left ventral premotor activation was increased during imagery (compared to judgment), bilateral dorsal premotor and right posterior-superior parietal activations were quite unique to motor imagery. The latter suggests that musicians not only recruited their manual motor repertoire but also performed a spatial transformation from the vertically perceived pitch axis (high and low sound) to the horizontal axis of the keyboard. Imagery-specific activations in controls were seen in left dorsal parietal-premotor and supplementary motor cortices. Although these activations were weaker than in musicians, this overlapping distribution indicated the recruitment of a general ‘mirror-neuron’ circuitry. These two levels of sensori-motor transformations point towards common principles by which the brain organizes audition-driven music performance and visually guided task performance.

11.
Speech processing inherently relies on the perception of specific, rapidly changing spectral and temporal acoustic features. Advanced acoustic perception is also integral to musical expertise, and accordingly several studies have demonstrated a significant relationship between musical training and superior processing of various aspects of speech. Speech and music appear to overlap in spectral and temporal features; however, it remains unclear which of these acoustic features, crucial for speech processing, are most closely associated with musical training. The present study examined the perceptual acuity of musicians to the acoustic components of speech necessary for intra-phonemic discrimination of synthetic syllables. We compared musicians and non-musicians on discrimination thresholds of three synthetic speech syllable continua that varied in their spectral and temporal discrimination demands, specifically voice onset time (VOT) and amplitude envelope cues in the temporal domain. Musicians demonstrated superior discrimination only for syllables that required resolution of temporal cues. Furthermore, performance on the temporal syllable continua positively correlated with the length and intensity of musical training. These findings support one potential mechanism by which musical training may selectively enhance speech perception, namely by reinforcing temporal acuity and/or perception of amplitude rise time, and they have implications for the translation of musical training to long-term linguistic abilities.
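
Discrimination thresholds along a synthetic syllable continuum are commonly estimated by fitting a psychometric function. The sketch below fits a two-parameter logistic to hypothetical proportions of "different" responses across VOT values and reads off a 75% point; the data and the threshold criterion are illustrative assumptions, not the study's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, x0, k):
    """Two-parameter logistic psychometric function (floor 0, ceiling 1)."""
    return 1.0 / (1.0 + np.exp(-k * (x - x0)))

# Hypothetical data: proportion of "different" responses along a VOT continuum
# (values are placeholders, not taken from the study).
vot_ms = np.array([0, 10, 20, 30, 40, 50, 60], dtype=float)
p_different = np.array([0.05, 0.10, 0.30, 0.55, 0.80, 0.92, 0.97])

params, _ = curve_fit(logistic, vot_ms, p_different, p0=[30.0, 0.2])
x0, k = params
# Threshold defined here as the 75% point on the fitted curve.
threshold_75 = x0 + np.log(0.75 / 0.25) / k
print(f"midpoint = {x0:.1f} ms, 75% threshold = {threshold_75:.1f} ms")
```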

12.
Psychophysiological evidence suggests that music and language are intimately coupled such that experience/training in one domain can influence processing required in the other domain. While the influence of music on language processing is now well-documented, evidence of language-to-music effects has yet to be firmly established. Here, using a cross-sectional design, we compared the performance of musicians to that of tone-language (Cantonese) speakers on tasks of auditory pitch acuity, music perception, and general cognitive ability (e.g., fluid intelligence, working memory). While musicians demonstrated superior performance on all auditory measures, comparable perceptual enhancements were observed for Cantonese participants, relative to English-speaking nonmusicians. These results provide evidence that tone-language background is associated with higher auditory perceptual performance for music listening. Musicians and Cantonese speakers also showed superior working memory capacity relative to nonmusician controls, suggesting that in addition to basic perceptual enhancements, tone-language background and music training might also be associated with enhanced general cognitive abilities. Our findings support the notion that tone-language speakers and musically trained individuals have higher performance than English-speaking listeners for the perceptual-cognitive processing necessary for basic auditory as well as complex music perception. These results illustrate bidirectional influences between the domains of music and language.

13.
Perception of complex sound is a process carried out in everyday life situations and contributes to the way one perceives reality. Attempting to explain sound perception and how it affects human beings is complicated. The physics of simple sound can be described as a function of frequency, amplitude and phase. The psychology of sound, also termed psychoacoustics, has its own distinct elements of pitch, intensity and timbre. An interconnection exists between the physics and the psychology of hearing. Music, being a complex sound, contributes to communication and conveys information with semantic and emotional elements. These elements indicate the involvement of the central nervous system through processes of integration and interpretation together with peripheral auditory processing. The effects of sound and music on human psychology and physiology are complicated. Psychological influences of listening to different types of music are based on the different characteristics of basic musical sounds. Attempting to explain music perception can be simpler if music is broken down to its basic auditory signals. Perception of auditory signals is analyzed by the science of psychoacoustics. Differences in complex sound perception have been found between normal subjects and psychiatric patients and between different types of psychopathologies.

14.
One of the primary functions of music is to convey emotion, yet how music accomplishes this task remains unclear. For example, simple correlations between mode (major vs. minor) and emotion (happy vs. sad) do not adequately explain the enormous range, subtlety or complexity of musically induced emotions. In this study, we examined the structural features of unconstrained musical improvisations generated by jazz pianists in response to emotional cues. We hypothesized that musicians would not utilize any universal rules to convey emotions, but would instead combine heterogeneous musical elements in order to depict positive and negative emotions. Our findings demonstrate a lack of simple correspondence between emotions and musical features of spontaneous musical improvisation. While improvisations in response to positive emotional cues were more likely to be in major keys and to have faster tempos, higher key press velocities and more staccato notes when compared to negative improvisations, there was a wide distribution for each emotion with components that directly violated these primary associations. The finding that musicians often combine disparate features in order to convey emotion during improvisation suggests that structural diversity may be an essential feature of the ability of music to express a wide range of emotion.
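
As a rough illustration of the kinds of features compared across improvisations (tempo, key press velocity, articulation), here is a toy extraction from a handful of hypothetical note events. The event list and feature definitions are assumptions for illustration, not the study's corpus or analysis pipeline.

```python
import statistics

# Hypothetical note events from one improvisation: (onset_s, duration_s, midi_velocity).
# These values are illustrative placeholders, not data from the study.
notes = [
    (0.00, 0.15, 96), (0.25, 0.10, 104), (0.50, 0.12, 99),
    (0.75, 0.20, 91), (1.00, 0.10, 108), (1.25, 0.11, 102),
]

onsets = [n[0] for n in notes]
iois = [b - a for a, b in zip(onsets, onsets[1:])]         # inter-onset intervals
mean_ioi = statistics.mean(iois)

features = {
    "tempo_bpm_proxy": 60.0 / mean_ioi,                     # event rate as a tempo proxy
    "mean_velocity": statistics.mean(n[2] for n in notes),  # key press velocity
    "articulation": statistics.mean(                        # duration / IOI; lower = more staccato
        n[1] / ioi for n, ioi in zip(notes, iois)),
}
print(features)
```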

15.
Absolute pitch (AP) is the ability to recognize a pitch without an external reference. By surveying more than 600 musicians in music conservatories, training programs, and orchestras, we have attempted to dissect the influences of early musical training and genetics on the development of this ability. Early musical training appears to be necessary but not sufficient for the development of AP. Forty percent of musicians who had begun training at ≤ 4 years of age reported AP, whereas only 3% of those who had initiated training at ≥ 9 years of age did so. Self-reported AP possessors were four times more likely to report another AP possessor in their families than were non-AP possessors. These data suggest that both early musical training and genetic predisposition are needed for the development of AP. We developed a simple computer-based acoustical test that has allowed us to subdivide AP possessors into distinct groups, on the basis of their performance. Investigation of individuals who performed extremely well on this test has already led us to identify several families that will be suitable for studies of the genetic basis of AP.
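
The familial-aggregation finding can be illustrated with a simple 2x2 association test. The cell counts below are invented so that the odds ratio comes out near the reported factor of four; they are not the survey data.

```python
from scipy.stats import fisher_exact

# Hypothetical 2x2 table (illustrative counts, not the published survey data):
#                     family member with AP   no family member with AP
table = [[40,  60],    # self-reported AP possessors
         [20, 120]]    # non-AP possessors

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.4f}")
```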

16.
The diagnosis of tinnitus relies on self-report. Psychoacoustic measurements of tinnitus pitch and loudness are essential for assessing claims and discriminating true from false ones. For this reason, the quantification of tinnitus remains a challenging research goal. We aimed to: (1) assess the precision of a new tinnitus likeness rating procedure with a continuous-pitch presentation method, controlling for music training, and (2) test whether tinnitus psychoacoustic measurements have the sensitivity and specificity required to detect people faking tinnitus. Musicians and non-musicians with tinnitus, as well as simulated malingerers without tinnitus, were tested. Most were retested several weeks later. Tinnitus pitch matching was first assessed using the likeness rating method: pure tones from 0.25 to 16 kHz were presented randomly to participants, who had to rate the likeness of each tone to their tinnitus, and to adjust its level from 0 to 100 dB SPL. Tinnitus pitch matching was then assessed with a continuous-pitch method: participants had to match the pitch of their tinnitus to an external tone by moving their finger across a touch-sensitive strip, which generated a continuous pure tone from 0.5 to 20 kHz in 1-Hz steps. The predominant tinnitus pitch was consistent across both methods for both musicians and non-musicians, although musicians displayed better external tone pitch matching abilities. Simulated malingerers rated loudness much higher than did the other groups with a high degree of specificity (94.4%) and were unreliable in loudness (not pitch) matching from one session to the other. Retest data showed similar pitch matching responses for both methods for all participants. In conclusion, tinnitus pitch and loudness reliably correspond to the tinnitus percept, and psychoacoustic loudness matches are sensitive and specific to the presence of tinnitus.
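
A small sketch of the continuous-pitch presentation idea: map a normalized finger position on the strip to a frequency between 0.5 and 20 kHz and synthesize the corresponding pure tone. The logarithmic mapping, sampling rate, and tone duration are assumptions; the original device's mapping may well differ.

```python
import numpy as np

def strip_position_to_frequency(pos, f_min=500.0, f_max=20000.0):
    """Map a normalized finger position (0..1) on the strip to a frequency in Hz.

    A logarithmic mapping is assumed here so equal finger movements give roughly
    equal pitch steps; this is an illustrative choice, not the device's spec.
    """
    return f_min * (f_max / f_min) ** pos

def pure_tone(freq_hz, duration_s=0.5, fs=48000, level=0.5):
    """Generate a sine tone at the matched frequency (amplitude in 0..1)."""
    t = np.arange(int(duration_s * fs)) / fs
    return level * np.sin(2 * np.pi * freq_hz * t)

if __name__ == "__main__":
    for pos in (0.0, 0.5, 1.0):
        f = strip_position_to_frequency(pos)
        tone = pure_tone(f)
        print(f"position {pos:.1f} -> {f:7.0f} Hz, {tone.size} samples")
```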

17.
The influence of tonal modulation in pieces of music on EEG parameters was studied. An EEG was recorded while subjects were listening to two series of fragments with modulations: controlled harmonic progressions and fragments of classical musical compositions. Each series included modulations to the subdominant, the dominant, and the ascending minor sixth. The highly controlled and artistically impoverished harmonic progressions of the first series contrasted with the real music excerpts in the second series, which differed in tempo, rhythm, tessitura, duration, and style. Listening to harmonic progressions and musical fragments produced event-related synchronization in the α frequency band. Real musical fragments with modulation to the dominant generated lower α-band synchronization than fragments with the other modulations. A smaller post-listening decrease in α-band synchronization was observed for the fragments of classical music than for the harmonic progressions.
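
Event-related synchronization in the α band is conventionally quantified as the percentage change in band-limited power relative to a baseline interval (the classic ERD/ERS measure). The sketch below applies that definition to a toy single-channel signal; the band limits, filter settings, and intervals are generic assumptions, not the study's analysis parameters.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def alpha_ers(eeg, fs, baseline_s, test_s, band=(8.0, 12.0)):
    """Event-related synchronization: % change in alpha power vs. baseline.

    eeg: 1-D signal for one channel; baseline_s and test_s are (start, end)
    intervals in seconds. Positive values indicate synchronization (power increase).
    """
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    power = filtfilt(b, a, eeg) ** 2

    def mean_power(interval):
        i0, i1 = (int(s * fs) for s in interval)
        return power[i0:i1].mean()

    base, test = mean_power(baseline_s), mean_power(test_s)
    return 100.0 * (test - base) / base

if __name__ == "__main__":
    fs = 250.0
    rng = np.random.default_rng(2)
    t = np.arange(int(20 * fs)) / fs
    # Toy signal: noise plus a 10 Hz rhythm that strengthens during "listening" (5-15 s).
    alpha_amp = np.where((t > 5) & (t < 15), 1.0, 0.3)
    eeg = rng.normal(size=t.size) + alpha_amp * np.sin(2 * np.pi * 10 * t)
    print(f"alpha ERS = {alpha_ers(eeg, fs, (0, 5), (5, 15)):.1f} %")
```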

18.
Several studies have investigated the encoding and perception of emotional expressivity in music performance. A relevant question concerns how the ability to communicate emotions in music performance is acquired. In accordance with recent theories on the embodiment of emotion, we suggest here that both the expression and recognition of emotion in music might at least in part rely on knowledge about the sounds of expressive body movements. We test this hypothesis by drawing parallels between the musical expression of emotions and the expression of emotions in sounds associated with a non-musical motor activity: walking. In a combined production-perception design, two experiments were conducted, and expressive acoustical features were compared across modalities. An initial performance experiment tested for similar feature use in walking sounds and music performance, and revealed that strong similarities exist. Features related to sound intensity, tempo and tempo regularity were identified as being used similarly in both domains. Participants in a subsequent perception experiment were able to recognize both non-emotional and emotional properties of the sound-generating walkers. An analysis of the acoustical correlates of behavioral data revealed that variations in sound intensity, tempo, and tempo regularity were likely used to recognize expressed emotions. Taken together, these results lend support to the motor origin hypothesis for the musical expression of emotions.

19.
Musical competence may confer cognitive advantages that extend beyond processing of familiar musical sounds. Behavioural evidence indicates a general enhancement of both working memory and attention in musicians. It is possible that musicians, due to their training, are better able to maintain focus on task-relevant stimuli, a skill which is crucial to working memory. We measured the blood oxygenation-level dependent (BOLD) activation signal in musicians and non-musicians during working memory of musical sounds to determine the relation among performance, musical competence and generally enhanced cognition. All participants easily distinguished the stimuli. We tested the hypothesis that musicians nonetheless would perform better, and that differential brain activity would mainly be present in cortical areas involved in cognitive control such as the lateral prefrontal cortex. The musicians performed better as reflected in reaction times and error rates. Musicians also had larger BOLD responses than non-musicians in neuronal networks that sustain attention and cognitive control, including regions of the lateral prefrontal cortex, lateral parietal cortex, insula, and putamen in the right hemisphere, and bilaterally in the posterior dorsal prefrontal cortex and anterior cingulate gyrus. The relationship between task performance and the magnitude of the BOLD response was more positive in musicians than in non-musicians, particularly during the most difficult working memory task. The results confirm previous findings that neural activity increases during enhanced working memory performance. The results also suggest that superior working memory task performance in musicians relies on an enhanced ability to exert sustained cognitive control. This cognitive benefit in musicians may be a consequence of focused musical training.

20.
Much of our daily communication occurs in the presence of background noise, compromising our ability to hear. While understanding speech in noise is a challenge for everyone, it becomes increasingly difficult as we age. Although aging is generally accompanied by hearing loss, this perceptual decline cannot fully account for the difficulties experienced by older adults for hearing in noise. Decreased cognitive skills concurrent with reduced perceptual acuity are thought to contribute to the difficulty older adults experience understanding speech in noise. Given that musical experience positively impacts speech perception in noise in young adults (ages 18-30), we asked whether musical experience benefits an older cohort of musicians (ages 45-65), potentially offsetting the age-related decline in speech-in-noise perceptual abilities and associated cognitive function (i.e., working memory). Consistent with performance in young adults, older musicians demonstrated enhanced speech-in-noise perception relative to nonmusicians along with greater auditory, but not visual, working memory capacity. By demonstrating that speech-in-noise perception and related cognitive function are enhanced in older musicians, our results imply that musical training may reduce the impact of age-related auditory decline.
