Similar Articles
20 similar articles found.
1.
Music has a pervasive tendency to rhythmically engage our body. In contrast, synchronization with speech is rare. Music’s superiority over speech in driving movement probably results from the isochrony of musical beats, as opposed to irregular speech stresses. Moreover, the presence of regular patterns of embedded periodicities (i.e., meter) may be critical in making music particularly conducive to movement. We investigated these possibilities by asking participants to synchronize with isochronous auditory stimuli (target) while music and speech distractors were presented at one of various phase relationships with respect to the target. In Exp. 1, familiar musical excerpts and fragments of children’s poetry were used as distractors. The stimuli were manipulated in terms of beat/stress isochrony and average pitch to achieve maximum comparability. In Exp. 2, the distractors were well-known songs performed with lyrics, sung on a reiterated syllable, or spoken, all having the same meter. Music perturbed synchronization with the target stimuli more than speech fragments did. However, music’s superiority over speech disappeared when the distractors shared isochrony and the same meter. Music’s peculiar and regular temporal structure is likely the main factor fostering tight coupling between sound and movement.

2.
Inspired by a theory of embodied music cognition, we investigate whether music can entrain the speed of beat-synchronized walking. If human walking is in synchrony with the beat and all musical stimuli have the same duration and the same tempo, then differences in walking speed can only be the result of music-induced differences in stride length, thus reflecting the vigor or physical strength of the movement. Participants walked in an open field in synchrony with the beat of 52 different musical stimuli, all having a tempo of 130 beats per minute and a meter of 4 beats. Walking speed was measured as the distance covered in a 30-second interval. The results reveal that some music is ‘activating’ in the sense that it increases the speed, and some music is ‘relaxing’ in the sense that it decreases the speed, compared to the spontaneous walking speed in response to metronome stimuli. Participants were consistent in their judgments of the qualitative differences between the relaxing and activating musical stimuli. Using regression analysis, it was possible to set up a predictive model using only four sonic features that together explain 60% of the variance. The sonic features capture variation in loudness and pitch patterns at periods of three, four, and six beats, suggesting that expressive patterns in music are responsible for the effect. The mechanism may be attributed to an attentional shift, a subliminal audio-motor entrainment mechanism, or an arousal effect, but further study is needed to disentangle these possibilities. Overall, the study supports the hypothesis that recurrent patterns of fluctuation affecting the binary meter strength of the music may entrain the vigor of the movement. The study opens up new perspectives for understanding the relationship between entrainment and expressiveness, with the possibility of developing applications in domains such as sports and physical rehabilitation.
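A minimal sketch of the kind of predictive model described above, assuming entirely hypothetical feature values and weights: the four columns stand in for the study’s sonic features (loudness/pitch fluctuation at periods of three, four, and six beats), but none of the numbers come from the paper.

```python
# Sketch of a four-feature linear regression predicting relative walking
# speed. Feature values and weights are hypothetical placeholders, not
# the study's actual sonic descriptors or fitted coefficients.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n_stimuli = 52  # one row per musical stimulus, as in the study

X = rng.normal(size=(n_stimuli, 4))  # hypothetical sonic features
# Hypothetical outcome: walking speed relative to the metronome baseline.
y = 1.0 + X @ np.array([0.05, -0.03, 0.04, 0.02]) + rng.normal(0.0, 0.02, n_stimuli)

model = LinearRegression().fit(X, y)
print(f"R^2: {model.score(X, y):.2f}")  # the study reports about 0.60
print("feature weights:", model.coef_)
```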

3.
We live in a dynamic and changing environment, which necessitates that we adapt to and efficiently respond to changes of stimulus form (‘what’) and stimulus occurrence (‘when’). Consequently, behaviour is optimal when we can anticipate both the ‘what’ and ‘when’ dimensions of a stimulus. For example, to perceive a temporally expected stimulus, a listener needs to establish a fairly precise internal representation of its external temporal structure, a function ascribed to classical sensorimotor areas such as the cerebellum. Here we investigated how patients with cerebellar lesions and healthy matched controls exploit temporal regularity during auditory deviance processing. We expected modulations of the N2b and P3b components of the event-related potential in response to deviant tones, and also a stronger P3b response when deviant tones are embedded in temporally regular compared to irregular tone sequences. We further tested to what degree structural damage to the cerebellar temporal processing system affects the N2b and P3b responses associated with voluntary attention to change detection and the predictive adaptation of a mental model of the environment, respectively. Results revealed that healthy controls and cerebellar patients display an increased N2b response to deviant tones independent of temporal context. However, while healthy controls showed the expected enhanced P3b response to deviant tones in temporally regular sequences, the P3b response in cerebellar patients was significantly smaller in these sequences. The current data provide evidence that structural damage to the cerebellum affects the predictive adaptation to the temporal structure of events and the updating of a mental model of the environment under voluntary attention.

4.
We investigated how audience members’ physiological reactions differ as a function of listening context (live versus recorded music). Thirty-seven audience members were each assigned to one of seven pianists and listened to his or her live performance of six pieces (fast and slow pieces by Bach, Schumann, and Debussy). Approximately 10 weeks after the live performance, each audience member returned to the same room and listened to recordings of the same pianist’s performances via speakers. We recorded the audience members’ electrocardiograms while they listened in both conditions, and analyzed their heart rates and the spectral features of heart-rate variability (i.e., HF/TF and LF/HF). Results showed that heart rate was higher for the faster than for the slower pieces only in the live condition. Compared with the recorded condition, sympathovagal balance (LF/HF) was lower and vagal activity (HF/TF) higher in the live condition, which suggests that sharing the ongoing musical moment with the pianist reduces the audience’s physiological stress. The results are discussed in terms of the audience’s heightened attention and temporal entrainment to live performance.
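For readers unfamiliar with the heart-rate-variability indices above, the sketch below shows one conventional way to compute LF/HF and HF/TF from R-R intervals: resample the irregular tachogram onto a uniform grid, estimate a power spectrum, and integrate the standard bands. The R-R series is simulated, and the band limits (0.04–0.15 Hz for LF, 0.15–0.40 Hz for HF) are the conventional definitions, assumed rather than taken from the paper.

```python
# Frequency-domain HRV indices (LF/HF and HF/TF) from R-R intervals.
# The R-R series is simulated; a real analysis would start from
# detected ECG R-peaks.
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(1)
rr = 0.8 + 0.05 * rng.standard_normal(300)  # R-R intervals in seconds
t = np.cumsum(rr)                           # beat times

# Resample the irregular tachogram onto a uniform 4 Hz grid for the PSD.
fs = 4.0
t_uniform = np.arange(t[0], t[-1], 1.0 / fs)
rr_uniform = np.interp(t_uniform, t, rr)

f, psd = welch(rr_uniform - rr_uniform.mean(), fs=fs, nperseg=256)

def band_power(f, psd, lo, hi):
    mask = (f >= lo) & (f < hi)
    return psd[mask].sum() * (f[1] - f[0])

lf = band_power(f, psd, 0.04, 0.15)  # low-frequency power
hf = band_power(f, psd, 0.15, 0.40)  # high-frequency (vagal) power
tf = band_power(f, psd, 0.00, 0.40)  # total power

print("LF/HF (sympathovagal balance):", lf / hf)
print("HF/TF (vagal contribution):  ", hf / tf)
```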

5.
We investigated the electrophysiological response to matched two-formant vowels and two-note musical intervals, with the goal of examining whether music is processed differently from language in early cortical responses. Using magnetoencephalography (MEG), we compared the mismatch response (MMN/MMF, an early, pre-attentive difference detector occurring approximately 200 ms post-onset) to musical intervals and vowels composed of matched frequencies. Participants heard blocks of two stimuli in a passive oddball paradigm in one of three conditions: sine waves, piano tones, and vowels. In each condition, participants heard two-formant vowels or musical intervals whose frequencies were 11, 12, or 24 semitones apart. In music, 12 semitones and 24 semitones are perceived as highly similar intervals (one and two octaves, respectively), while in speech, formant separations of 12 semitones and 11 semitones are perceived as highly similar (both variants of the vowel in ‘cut’). Our results indicate that the MMN response mirrors the perceptual one: larger MMNs were elicited for the 12–11 pairing in the music conditions than in the language condition; conversely, larger MMNs were elicited for the 12–24 pairing in the language condition than in the music conditions. This suggests that within 250 ms of hearing complex auditory stimuli, the neural computation of similarity, like the behavioral one, differs significantly depending on whether the context is music or speech.
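A quick aside on the interval arithmetic above: in equal temperament, a step of n semitones multiplies frequency by 2^(n/12), so 12 and 24 semitones are exact octaves (ratios 2 and 4), whereas 11 semitones is not a simple ratio. A tiny sketch with an arbitrary reference frequency (not a value from the study):

```python
# Equal-tempered interval frequencies: f = f0 * 2**(n / 12).
# The reference frequency is arbitrary, chosen only for illustration.
f0 = 220.0  # Hz
for n in (11, 12, 24):
    f = f0 * 2 ** (n / 12)
    print(f"{n:2d} semitones above {f0:.0f} Hz: {f:7.2f} Hz (ratio {f / f0:.3f})")
```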

6.
Temporal predictability is thought to affect stimulus processing by facilitating the allocation of attentional resources. Recent studies have shown that periodicity of a tonal sequence results in a decreased peak latency and a larger amplitude of the P3b compared with temporally random, i.e., aperiodic sequences. We investigated whether this also applies to sequences of linguistic stimuli (syllables), although speech is usually aperiodic. We compared aperiodic syllable sequences with two temporally regular conditions. In one condition, the interval between syllable onsets was fixed, whereas in a second condition the interval between the syllables’ perceptual centers (p-centers) was kept constant. Event-related potentials were assessed in 30 adults who were instructed to detect irregularities in the stimulus sequences. We found larger P3b amplitudes for both temporally predictable conditions as compared to the aperiodic condition, and a shorter P3b latency in the p-center condition than in both other conditions. These findings demonstrate that even in acoustically more complex sequences such as syllable streams, temporal predictability facilitates the processing of deviant stimuli. Furthermore, we provide the first electrophysiological evidence for the relevance of the p-center concept in linguistic stimulus processing.

7.
The perceived emotional value of stimuli, and consequently the subjective emotional experience of them, can be affected by context-dependent styles of processing. The investigation of the neural correlates of emotional experience therefore requires accounting for this variable, which poses an experimental challenge. Closing the eyes affects the style of attending to auditory stimuli by modifying the perceptual relationship with the environment without changing the stimulus itself. In the current study, we used fMRI to characterize the neural mediators of this modification on the experience of emotionality in music. We assumed that the closed-eyes position would reveal an interplay between different levels of neural processing of emotions. More specifically, we focused on the amygdala as a central node of the limbic system, and on its co-activation with the Locus Ceruleus (LC) and the Ventral Prefrontal Cortex (VPFC), regions involved in processing ‘low’ (visceral) and ‘high’ (cognitive) values of emotional stimuli, respectively. Fifteen healthy subjects listened to negative and neutral music excerpts with eyes closed or open. As expected, behavioral results showed that closing the eyes while listening to emotional music resulted in enhanced ratings of emotionality, specifically for negative music. Correspondingly, fMRI results showed greater activation in the amygdala when subjects listened to the emotional music with eyes closed relative to eyes open. Moreover, using voxel-based correlation and dynamic causal modeling analyses, we demonstrated that increased amygdala activation to negative music with eyes closed led to increased activations in the LC and VPFC. This finding supports a system-based model of perceived emotionality in which the amygdala has a central role in mediating the effect of context-based processing style by recruiting neural operations involved in both visceral (i.e., ‘low’) and cognitive (i.e., ‘high’) processing of emotions.

8.
The increasing number of casting shows and talent contests in the media over the past years suggests a public interest in rating the quality of vocal performances. In many of these formats, laymen act as judges alongside music experts. Whereas experts’ judgments are considered objective and reliable when it comes to evaluating the singing voice, little is known about laymen’s ability to evaluate their peers. On the one hand, lay listeners, who by definition have had no formal training or regular musical practice, are known to have internalized the musical rules on which singing accuracy is based. On the other hand, lay listeners’ judgments of their own vocal skills are highly inaccurate, and their competence in pitch perception has proven limited compared with that of music experts. The present study investigates laypersons’ ability to objectively evaluate melodies performed by untrained singers. For this purpose, lay listeners were asked to judge sung melodies, and the results were compared with those of music experts who had performed the same task in a previous study. Interestingly, the findings show high objectivity and reliability in lay listeners. Whereas the laymen’s and experts’ definitions of pitch accuracy overlap, differences in the musical criteria employed in the rating task were evident. The findings suggest that the effect of expertise is circumscribed and limited, and they support the view that laypersons make trustworthy judges when evaluating the pitch accuracy of untrained singers.

9.
Note onsets in music are acoustic landmarks providing auditory cues that underlie the perception of more complex phenomena such as beat, rhythm, and meter. For naturalistic ongoing sounds a detailed view on the neural representation of onset structure is hard to obtain, since, typically, stimulus-related EEG signatures are derived by averaging a high number of identical stimulus presentations. Here, we propose a novel multivariate regression-based method extracting onset-related brain responses from the ongoing EEG. We analyse EEG recordings of nine subjects who passively listened to stimuli from various sound categories encompassing simple tone sequences, full-length romantic piano pieces and natural (non-music) soundscapes. The regression approach reduces the 61-channel EEG to one time course optimally reflecting note onsets. The neural signatures derived by this procedure indeed resemble canonical onset-related ERPs, such as the N1-P2 complex. This EEG projection was then utilized to determine the Cortico-Acoustic Correlation (CACor), a measure of synchronization between EEG signal and stimulus. We demonstrate that a significant CACor (i) can be detected in an individual listener’s EEG of a single presentation of a full-length complex naturalistic music stimulus, and (ii) it co-varies with the stimuli’s average magnitudes of sharpness, spectral centroid, and rhythmic complexity. In particular, the subset of stimuli eliciting a strong CACor also produces strongly coordinated tension ratings obtained from an independent listener group in a separate behavioral experiment. Thus musical features that lead to a marked physiological reflection of tone onsets also contribute to perceived tension in music.
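A toy version of the regression-plus-correlation logic described above, on simulated signals: a least-squares spatial filter projects a multichannel ‘EEG’ matrix onto the single time course that best tracks a note-onset function, and the correlation between projection and onsets plays the role of the CACor. Everything below is simulated; this is not the paper’s pipeline, only its core idea.

```python
# Least-squares spatial filter: learn the channel weighting whose
# projection best approximates an onset time course, then correlate
# projection and onsets (a stand-in for the paper's CACor).
import numpy as np

rng = np.random.default_rng(2)
n_samples, n_channels = 5000, 61

onsets = (rng.random(n_samples) < 0.01).astype(float)  # sparse note onsets
# Simulated EEG: a few channels weakly carry the onset response plus noise.
mixing = rng.normal(size=n_channels) * (rng.random(n_channels) < 0.2)
eeg = np.outer(onsets, mixing) + rng.normal(size=(n_samples, n_channels))

# Solve for the 61-channel weight vector in the least-squares sense.
w, *_ = np.linalg.lstsq(eeg, onsets, rcond=None)
projection = eeg @ w

cacor = np.corrcoef(projection, onsets)[0, 1]
print("Cortico-Acoustic Correlation (simulated):", round(cacor, 3))
```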

10.
Dance and music often co-occur, as evidenced when viewing choreographed dances or singers moving while performing. This study investigated how the viewing of dance motions shapes sound perception. Previous research has shown that dance reflects the temporal structure of its accompanying music, communicating musical meter (i.e., a hierarchical organization of beats) via coordinated movement patterns that indicate where strong and weak beats occur. The experiments here investigated the effects of dance cues on meter perception, hypothesizing that dance could embody the musical meter, thereby shaping participant reaction times (RTs) to sound targets occurring at different metrical positions. In experiment 1, participants viewed a video with dance choreography indicating 4/4 meter (dance condition) or a series of color changes repeated in sequences of four to indicate 4/4 meter (picture condition). A soundtrack accompanied these videos, and participants reacted to timbre targets at different metrical positions. Participants had the slowest RTs at the strongest beats in the dance condition only. In experiment 2, participants viewed the choreography of the horse-riding dance from Psy’s “Gangnam Style” in order to examine how a familiar dance might affect meter perception. Moreover, participants in this experiment were divided into a group with experience dancing this choreography and a group without experience. Results again showed slower RTs at stronger metrical positions, and the group with experience demonstrated a more refined perception of the metrical hierarchy. The results likely stem from the temporally selective division of attention between the auditory and visual domains. This study has implications for understanding (1) the impact of splitting attention among different sensory modalities and (2) the impact of embodiment on the perception of musical meter. Viewing dance may interfere with sound processing, particularly at critical metrical positions, but embodied familiarity with dance choreography may facilitate meter awareness. The results shed light on the processing of multimedia environments.

11.
A meaningful set of stimuli, such as a sequence of frames from a movie, triggers a set of different experiences. By contrast, a meaningless set of stimuli, such as a sequence of ‘TV noise’ frames, always triggers the same experience (that of seeing ‘TV noise’), even though the stimuli themselves are as different from each other as the movie frames. We reasoned that the differentiation of cortical responses underlying the subject’s experiences, as measured by the Lempel-Ziv complexity (incompressibility) of functional MRI images, should reflect the overall meaningfulness of a set of stimuli for the subject, rather than differences among the stimuli. We tested this hypothesis by quantifying the differentiation of brain activity patterns in response to a movie sequence, to the same movie scrambled in time, and to ‘TV noise’, where the pixels from each movie frame were scrambled in space. While overall cortical activation was strong and widespread in all conditions, the differentiation (Lempel-Ziv complexity) of brain activation patterns was correlated with the meaningfulness of the stimulus set, being highest in the movie condition, intermediate in the scrambled movie condition, and minimal for ‘TV noise’. Stimulus set meaningfulness was also associated with higher information integration among cortical regions. These results suggest that the differentiation of neural responses can be used to assess the meaningfulness of a given set of stimuli for a given subject, without the need to identify the features and categories that are relevant to the subject, or the precise location of selective neural responses.
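A sketch of the differentiation measure named above: Lempel-Ziv complexity counts the distinct phrases needed to parse a binarized signal. The parsing below is the simple LZ78-style dictionary variant, which may differ in detail from the paper’s implementation, and the binary inputs are simulated stand-ins for thresholded fMRI activity patterns.

```python
# Lempel-Ziv complexity via a simple LZ78-style phrase count. A repetitive
# (undifferentiated) pattern parses into few phrases; a varied
# (differentiated) pattern needs many. Inputs are simulated stand-ins.
import numpy as np

def lz_complexity(bits):
    """Count distinct phrases in a left-to-right Lempel-Ziv parsing."""
    phrases, phrase = set(), ""
    for b in bits:
        phrase += "1" if b else "0"
        if phrase not in phrases:
            phrases.add(phrase)  # new phrase: store it and start a new one
            phrase = ""
    return len(phrases) + (1 if phrase else 0)

rng = np.random.default_rng(3)
repetitive = np.tile([1, 0, 0, 1], 250)  # always-the-same response pattern
varied = rng.random(1000) > 0.5          # differentiated response pattern

print("repetitive pattern:", lz_complexity(repetitive))
print("varied pattern:    ", lz_complexity(varied))
```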

12.
Whether music was an evolutionary adaptation that conferred survival advantages or a cultural creation has generated much debate. Consistent with an evolutionary hypothesis, music is unique to humans, emerges early in development and is universal across societies. However, the adaptive benefit of music is far from obvious. Music is highly flexible, generative and changes rapidly over time, consistent with a cultural creation hypothesis. In this paper, it is proposed that much of musical pitch and timing structure adapted to preexisting features of auditory processing that evolved for auditory scene analysis (ASA). Thus, music may have emerged initially as a cultural creation made possible by preexisting adaptations for ASA. However, some aspects of music, such as its emotional and social power, may have subsequently proved beneficial for survival and led to adaptations that enhanced musical behaviour. Ontogenetic and phylogenetic evidence is considered in this regard. In particular, enhanced auditory–motor pathways in humans that enable movement entrainment to music and consequent increases in social cohesion, and pathways enabling music to affect reward centres in the brain should be investigated as possible musical adaptations. It is concluded that the origins of music are complex and probably involved exaptation, cultural creation and evolutionary adaptation.

13.

Background

There is a lack of neuroscientific studies investigating music processing with naturalistic stimuli; brain responses to real music are thus largely unknown.

Methodology/Principal Findings

This study investigates event-related brain potentials (ERPs), skin conductance responses (SCRs) and heart rate (HR) elicited by unexpected chords of piano sonatas as they were originally arranged by composers, and as they were played by professional pianists. From the musical excerpts played by the pianists (with emotional expression), we also created versions without variations in tempo and loudness (without musical expression) to investigate effects of musical expression on ERPs and SCRs. Compared to expected chords, unexpected chords elicited an early right anterior negativity (ERAN, reflecting music-syntactic processing) and an N5 (reflecting processing of meaning information) in the ERPs, as well as clear changes in the SCRs (reflecting that unexpected chords also elicited emotional responses). The ERAN was not influenced by emotional expression, whereas N5 potentials elicited by chords in general (regardless of their chord function) differed between the expressive and the non-expressive condition.

Conclusions/Significance

These results show that the neural mechanisms of music-syntactic processing operate independently of the emotional qualities of a stimulus, justifying the use of stimuli without emotional expression to investigate the cognitive processing of musical structure. Moreover, the data indicate that musical expression affects the neural mechanisms underlying the processing of musical meaning. Our data are the first to reveal influences of musical performance on ERPs and SCRs, and to show physiological responses to unexpected chords in naturalistic music.

14.
Humans possess an ability to perceive and synchronize movements to the beat in music (‘beat perception and synchronization’), and recent neuroscientific data have offered new insights into this beat-finding capacity at multiple neural levels. Here, we review and compare behavioural and neural data on temporal and sequential processing during beat perception and entrainment tasks in macaques (including direct neural recording and local field potential (LFP)) and humans (including fMRI, EEG and MEG). These abilities rest upon a distributed set of circuits that include the motor cortico-basal-ganglia–thalamo-cortical (mCBGT) circuit, where the supplementary motor area (SMA) and the putamen are critical cortical and subcortical nodes, respectively. In addition, a cortical loop between motor and auditory areas, connected through delta and beta oscillatory activity, is deeply involved in these behaviours, with motor regions providing the predictive timing needed for the perception of, and entrainment to, musical rhythms. The neural discharge rate and the LFP oscillatory activity in the gamma- and beta-bands in the putamen and SMA of monkeys are tuned to the duration of intervals produced during a beat synchronization–continuation task (SCT). Hence, the tempo during beat synchronization is represented by different interval-tuned cells that are activated depending on the produced interval. In addition, cells in these areas are tuned to the serial-order elements of the SCT. Thus, the underpinnings of beat synchronization are intrinsically linked to the dynamics of cell populations tuned for duration and serial order throughout the mCBGT. We suggest that a cross-species comparison of behaviours and the neural circuits supporting them sets the stage for a new generation of neurally grounded computational models for beat perception and synchronization.

15.
Empathy covers a wide range of phenomena varying in the degree of cognitive complexity involved, ranging from emotional contagion, defined as the sharing of others’ emotional states, to sympathetic concern, which requires animals to appraise the other’s situation and to show concern-like behaviors. While most studies have investigated how animals react to conspecifics’ distress, dogs have so far mainly been studied for cross-species empathic responses. To investigate whether dogs would also respond with empathy-like behavior to conspecifics, we adopted a playback method using conspecifics’ vocalizations (whines) recorded during a distressful event, as well as control sounds. Our subjects were first exposed to a playback phase in which they heard either a control sound, a familiar whine (from their familiar partner), or a stranger whine (from an unfamiliar dog), followed by a reunion phase in which the familiar partner entered the room. When exposed to whines, dogs showed higher behavioral alertness and exhibited more stress-related behaviors than when exposed to acoustically similar control sounds. Moreover, they demonstrated more comfort-offering behaviors toward their familiar partners following whine playbacks than after control stimuli. Furthermore, in the first session, this comfort offering was biased toward the familiar partner when subjects had previously been exposed to familiar rather than stranger whines. Finally, familiar whine stimuli tended to maintain higher cortisol levels, while stranger whines did not. To our knowledge, these results are the first to suggest that dogs can experience and demonstrate “empathic-like” responses to conspecifics’ distress calls.

16.
Variations in the temporal structure of an interval can lead to remarkable differences in perceived duration. For example, it has previously been shown that isochronous intervals, that is, intervals filled with temporally regular stimuli, are perceived to last longer than intervals left empty or filled with randomly timed stimuli. Characterizing the extent of such distortions is crucial to understanding how duration perception works. One account to explain effects of temporal structure is a non-linear accumulator-counter mechanism reset at the beginning of every subinterval. An alternative explanation based on entrainment to regular stimulation posits that the neural response to each filler stimulus in an isochronous sequence is amplified and a higher neural response may lead to an overestimation of duration. If entrainment is the key that generates response amplification and the distortions in perceived duration, then any form of predictability in the temporal structure of interval fillers should lead to the perception of an interval that lasts longer than a randomly filled one. The present experiments confirm that intervals filled with fully predictable rhythmically grouped stimuli lead to longer perceived duration than anisochronous intervals. No general over- or underestimation is registered for rhythmically grouped compared to isochronous intervals. However, we find that the number of stimuli in each group composing the rhythm also influences perceived duration. Implications of these findings for a non-linear clock model as well as a neural response magnitude account of perceived duration are discussed.

17.
Apart from its natural relevance to cognition, music provides a window into the intimate relationships between production, perception, experience, and emotion. Here, emotional responses and neural activity were observed as they evolved together with stimulus parameters over several minutes. Participants listened to a skilled music performance that included the natural fluctuations in timing and sound intensity that musicians use to evoke emotional responses. A mechanical performance of the same piece served as a control. Before and after fMRI scanning, participants reported real-time emotional responses on a 2-dimensional rating scale (arousal and valence) as they listened to each performance. During fMRI scanning, participants listened without reporting emotional responses. Limbic and paralimbic brain areas responded to the expressive dynamics of human music performance, and both emotion and reward related activations during music listening were dependent upon musical training. Moreover, dynamic changes in timing predicted ratings of emotional arousal, as well as real-time changes in neural activity. BOLD signal changes correlated with expressive timing fluctuations in cortical and subcortical motor areas consistent with pulse perception, and in a network consistent with the human mirror neuron system. These findings show that expressive music performance evokes emotion and reward related neural activations, and that music's affective impact on the brains of listeners is altered by musical training. Our observations are consistent with the idea that music performance evokes an emotional response through a form of empathy that is based, at least in part, on the perception of movement and on violations of pulse-based temporal expectancies.

18.
It was recently shown that rhythmic entrainment, long considered a human-specific mechanism, can be demonstrated in a selected group of bird species, and, somewhat surprisingly, not in more closely related species such as nonhuman primates. This observation supports the vocal learning hypothesis, which suggests that rhythmic entrainment is a by-product of the vocal learning mechanisms that are shared by several bird and mammal species, including humans, but that are only weakly developed, or missing entirely, in nonhuman primates. To test this hypothesis, we measured auditory event-related potentials (ERPs) in two rhesus monkeys (Macaca mulatta), probing a component well documented in humans, the mismatch negativity (MMN), to study rhythmic expectation. We demonstrate for the first time in rhesus monkeys that, in response to infrequent pitch deviants presented in a continuous sound stream using an oddball paradigm, a comparable ERP component can be detected, with negative deflections at early latencies (Experiment 1). Subsequently, we tested whether rhesus monkeys can detect gaps (omissions at random positions in the sound stream; Experiment 2) and, using more complex stimuli, also the beat (omissions at the first position of a musical unit, i.e. the ‘downbeat’; Experiment 3). In contrast to what has been shown in human adults and newborns (using identical stimuli and an identical experimental paradigm), the results suggest that rhesus monkeys are not able to detect the beat in music. These findings support the hypothesis that beat induction (the cognitive mechanism that supports the perception of a regular pulse from a varying rhythm) is species-specific and absent in nonhuman primates. In addition, the findings support the auditory timing dissociation hypothesis, with rhesus monkeys being sensitive to rhythmic grouping (detecting the start of a rhythmic group), but not to the induced beat (detecting a regularity from a varying rhythm).

19.
Performing music is a multimodal experience involving the visual, auditory, and somatosensory modalities as well as the motor system. Therefore, musical training is an excellent model for studying multimodal brain plasticity. Indeed, we have previously shown that short-term piano practice increases the magnetoencephalographic (MEG) response to melodic material in novice players. Here we investigate the impact of piano training using a rhythm-focused exercise on responses to rhythmic musical material. Musical training with non-musicians was conducted over a period of two weeks. One group (sensorimotor-auditory, SA) learned to play a piano sequence with a distinct musical rhythm, while another group (auditory, A) listened to and evaluated the rhythmic accuracy of the SA group’s performances. Training-induced cortical plasticity was evaluated using MEG, comparing the mismatch negativity (MMN) in response to occasional rhythmic deviants in a repeating rhythm pattern before and after training. The SA group showed a significantly greater enlargement of the MMN and P2 to deviants after training compared to the A group. The training-induced increase of the rhythm MMN was bilaterally expressed, in contrast to our previous finding that the MMN for deviants in the pitch domain showed a larger right- than left-hemisphere increase. The results indicate that when auditory experience is strictly controlled during training, involvement of the sensorimotor system, and perhaps the increased attentional resources needed to produce rhythms, leads to more robust plastic changes in the auditory cortex than when rhythms are simply attended to in the auditory domain without motor production.

20.
Tapping or clapping to an auditory beat, an easy task for most individuals, reveals precise temporal synchronization with auditory patterns such as music, even in the presence of temporal fluctuations. Most models of beat-tracking rely on the theoretical concept of pulse: a perceived regular beat generated by an internal oscillation that forms the foundation of entrainment abilities. Although tapping to the beat is a natural sensorimotor activity for most individuals, not everyone can track an auditory beat. Recently, the case of Mathieu was documented (Phillips-Silver et al. 2011 Neuropsychologia 49, 961–969. (doi:10.1016/j.neuropsychologia.2011.02.002)). Mathieu presented himself as having difficulty following a beat and exhibited synchronization failures. We examined beat-tracking in normal control participants, Mathieu, and a second beat-deaf individual, who tapped with an auditory metronome in which unpredictable perturbations were introduced to disrupt entrainment. Both beat-deaf cases exhibited failures in error correction in response to the perturbation task while exhibiting normal spontaneous motor tempi (in the absence of an auditory stimulus), supporting a deficit specific to perception–action coupling. A damped harmonic oscillator model was applied to the temporal adaptation responses; the model’s parameters of relaxation time and endogenous frequency accounted for differences between the beat-deaf cases as well as the control group individuals.
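The damped-harmonic-oscillator account mentioned above can be sketched as follows: tap asynchrony after a metronome perturbation relaxes like a damped oscillator governed by exactly the two parameters named in the abstract, an endogenous frequency and a relaxation time. The parameter values and the 50 ms initial asynchrony below are illustrative assumptions, not values fitted in the paper.

```python
# Damped harmonic oscillator as a model of post-perturbation tap
# asynchrony: x'' = -2*zeta*omega*x' - omega**2 * x, integrated with a
# simple Euler scheme. Parameter values are illustrative assumptions.
import numpy as np

omega = 2 * np.pi * 0.5      # endogenous frequency in rad/s (assumed)
tau = 1.5                    # relaxation time in seconds (assumed)
zeta = 1.0 / (tau * omega)   # damping ratio implied by the relaxation time

dt, T = 0.01, 10.0
x, v = 0.05, 0.0             # 50 ms asynchrony right after the perturbation
trace = []
for _ in range(int(T / dt)):
    a = -2 * zeta * omega * v - omega**2 * x  # oscillator acceleration
    v += a * dt
    x += v * dt
    trace.append(x)

# The asynchrony decays toward zero with time constant tau (~exp(-t/tau)).
print("asynchrony after 1 s:", round(trace[int(1 / dt)], 4), "s")
print("asynchrony after 5 s:", round(trace[int(5 / dt)], 4), "s")
```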
