Similar Articles
20 similar articles found (search time: 31 ms)
1.
The perception of a regular beat is fundamental to music processing. Here we examine whether the detection of a regular beat is pre-attentive for metrically simple, acoustically varying stimuli using the mismatch negativity (MMN), an ERP response elicited by violations of acoustic regularity irrespective of whether subjects are attending to the stimuli. Both musicians and non-musicians were presented with a varying rhythm with a clear accent structure in which occasionally a sound was omitted. We compared the MMN response to the omission of identical sounds in different metrical positions. Most importantly, we found that omissions in strong metrical positions (on the beat) elicited higher-amplitude MMN responses than omissions in weak metrical positions (not on the beat). This suggests that the detection of a beat is pre-attentive when highly beat-inducing stimuli are used. No effects of musical expertise were found. Our results suggest that for metrically simple rhythms with clear accents, beat processing does not require attention or musical expertise. In addition, we discuss how the use of acoustically varying stimuli may influence ERP results when studying beat processing.

2.
Long-range correlated temporal fluctuations in the beats of musical rhythms are an inevitable consequence of human action. According to recent studies, such fluctuations also lead to a favored listening experience. The scaling laws of amplitude variations in rhythms, however, remain largely unknown. Here we use highly sensitive onset detection and time-series analysis to study the amplitude and temporal fluctuations of Jeff Porcaro’s one-handed hi-hat pattern in “I Keep Forgettin’”, one of the most renowned 16th-note patterns in modern drumming. We show that fluctuations of hi-hat amplitudes and interbeat intervals (times between hits) have clear long-range correlations and short-range anticorrelations separated by a characteristic time scale. In addition, we detect subtle features in Porcaro’s drumming, such as small drifts in the 16th-note pulse and non-trivial periodic two-bar patterns in both hi-hat amplitudes and intervals. Through this investigation we take a step towards statistical studies of 20th- and 21st-century music recordings in the framework of complex systems. Our analysis has direct applications to the development of drum machines and to drumming pedagogy.
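The abstract above does not name the estimator behind its scaling analysis; detrended fluctuation analysis (DFA) is a common choice for quantifying long-range correlations in interbeat intervals. The Python sketch below only illustrates that generic approach on synthetic data; the window sizes and the interval series are illustrative assumptions, not the authors' code or measurements.

```python
import numpy as np

def dfa(x, scales):
    """Detrended fluctuation analysis: RMS fluctuation F(s) for each window size s."""
    y = np.cumsum(x - np.mean(x))                 # integrated (profile) series
    fluct = []
    for s in scales:
        n_win = len(y) // s
        rms = []
        for i in range(n_win):
            seg = y[i * s:(i + 1) * s]
            t = np.arange(s)
            coef = np.polyfit(t, seg, 1)          # local linear detrending
            rms.append(np.sqrt(np.mean((seg - np.polyval(coef, t)) ** 2)))
        fluct.append(np.mean(rms))
    return np.array(fluct)

rng = np.random.default_rng(0)
ibi = 0.125 + 0.005 * rng.standard_normal(2000)   # synthetic 16th-note interbeat intervals (s)
scales = np.unique(np.logspace(0.6, 2.5, 20).astype(int))
F = dfa(ibi, scales)
alpha = np.polyfit(np.log(scales), np.log(F), 1)[0]
print(f"DFA exponent alpha = {alpha:.2f}")        # ~0.5 for white noise, >0.5 for long-range correlation
```

On a real interbeat series, a crossover between short-range anticorrelation and long-range correlation would appear as a change of slope in log F(s) versus log s around the characteristic time scale.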

3.
Performing music is a multimodal experience involving the visual, auditory, and somatosensory modalities as well as the motor system. Musical training is therefore an excellent model for studying multimodal brain plasticity. Indeed, we have previously shown that short-term piano practice increases the magnetoencephalographic (MEG) response to melodic material in novice players. Here we investigate the impact of piano training using a rhythm-focused exercise on responses to rhythmic musical material. Musical training with non-musicians was conducted over a period of two weeks. One group (sensorimotor-auditory, SA) learned to play a piano sequence with a distinct musical rhythm; another group (auditory, A) listened to, and evaluated the rhythmic accuracy of, the performances of the SA group. Training-induced cortical plasticity was evaluated using MEG, comparing the mismatch negativity (MMN) in response to occasional rhythmic deviants in a repeating rhythm pattern before and after training. The SA group showed a significantly greater enlargement of the MMN and P2 to deviants after training compared to the A group. The training-induced increase of the rhythm MMN was bilaterally expressed, in contrast to our previous finding that the MMN for deviants in the pitch domain showed a larger right than left increase. The results indicate that when auditory experience is strictly controlled during training, the involvement of the sensorimotor system, and perhaps the increased attentional resources needed to produce rhythms, leads to more robust plastic changes in the auditory cortex than when rhythms are simply attended to in the auditory domain without motor production.

4.
Sounds in our environment, such as voices, animal calls, or musical instruments, are easily recognized by human listeners. Understanding the key features underlying this robust sound recognition is an important question in auditory science. Here, we studied the recognition by human listeners of new classes of sounds: acoustic and auditory sketches, sounds that are severely impoverished but still recognizable. Starting from a time-frequency representation, a sketch is obtained by keeping only sparse elements of the original signal, here by means of a simple peak-picking algorithm. Two time-frequency representations were compared: a biologically grounded one, the auditory spectrogram, which simulates peripheral auditory filtering, and a simple acoustic spectrogram based on a Fourier transform. Three degrees of sparsity were also investigated. Listeners were asked to recognize the category to which a sketch sound belongs: singing voices, bird calls, musical instruments, or vehicle engine noises. Results showed that, with the exception of voice sounds, very sparse representations of sounds (10 features, or energy peaks, per second) could be recognized above chance. No clear differences were observed between the acoustic and the auditory sketches. For the voice sounds, however, a completely different pattern of results emerged, with at-chance or even below-chance recognition performance, suggesting that the important features of the voice, whatever they are, were removed by the sketch process. Overall, these perceptual results were well correlated with a model of auditory distances based on spectro-temporal excitation patterns (STEPs). This study confirms the potential of these new classes of sounds, acoustic and auditory sketches, for studying sound recognition.
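As a rough illustration of the sketching idea, the snippet below keeps only the strongest local maxima of a plain Fourier spectrogram at a target density of peaks per second. The window sizes, the naive peak picker, and the placeholder signal are assumptions for illustration; the study's actual algorithm, and its auditory-spectrogram variant, are not reproduced here.

```python
import numpy as np
from scipy.signal import spectrogram

def sketch(signal, fs, peaks_per_second=10):
    """Keep only the largest local maxima of a spectrogram (a crude 'acoustic sketch')."""
    f, t, S = spectrogram(signal, fs=fs, nperseg=1024, noverlap=512)
    # candidate peaks: local maxima along the frequency axis within each time frame
    cand = np.zeros_like(S, dtype=bool)
    cand[1:-1, :] = (S[1:-1, :] > S[:-2, :]) & (S[1:-1, :] > S[2:, :])
    n_keep = int(peaks_per_second * (len(signal) / fs))
    idx = np.argsort(S[cand])[-n_keep:]          # strongest candidates only
    rows, cols = np.where(cand)
    mask = np.zeros_like(S, dtype=bool)
    mask[rows[idx], cols[idx]] = True
    return f, t, np.where(mask, S, 0.0)           # sparse spectrogram

fs = 16000
x = np.random.default_rng(1).standard_normal(fs * 2)   # placeholder 2-second signal
f, t, sparse_S = sketch(x, fs)
print("kept peaks:", int((sparse_S > 0).sum()))
```

Resynthesizing audio from such a sparse representation is a further step not shown here; the point is only how few time-frequency elements survive at 10 peaks per second.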

5.
Rhythmic entrainment, or beat synchronization, provides an opportunity to understand how multiple systems operate together to integrate sensory-motor information. Synchronization is also an essential component of musical performance that may be enhanced through musical training. Investigations of rhythmic entrainment have revealed a developmental trajectory across the lifespan, showing that synchronization improves with age and musical experience. Here, we explore the development and maintenance of synchronization from childhood through older adulthood in a large cohort of participants (N = 145), and also ask how it may be altered by musical experience. We employed a uniform assessment of beat synchronization for all participants and compared performance developmentally and between individuals with and without musical experience. We show that the ability to tap consistently along to a beat improves with age into adulthood, yet in older adulthood tapping performance becomes more variable. From childhood into young adulthood, individuals also tap increasingly close to the beat (i.e., asynchronies decline with age); however, this trend reverses from younger into older adulthood. There is a positive association between the proportion of life spent playing music and tapping performance, which suggests a link between musical experience and auditory-motor integration. These results are broadly consistent with previous investigations of the development of beat synchronization across the lifespan, and thus complement existing studies and offer new insights from a different, large cross-sectional sample.

6.
Rhythms, or patterns in time, play a vital role in both speech and music. Proficiency in a number of rhythm skills has been linked to language ability, suggesting that certain rhythmic processes in music and language rely on overlapping resources. However, a lack of understanding about how rhythm skills relate to each other has impeded progress in understanding how language relies on rhythm processing. In particular, it is unknown whether all rhythm skills are linked together, forming a single broad rhythmic competence, or whether there are multiple dissociable rhythm skills. We hypothesized that beat tapping and rhythm memory/sequencing form two separate clusters of rhythm skills. This hypothesis was tested with a battery of two beat-tapping and two rhythm-memory tests. Here we show that tapping to a metronome and the ability to adjust to a changing tempo while tapping to a metronome are related skills. The ability to remember rhythms and to drum along to repeating rhythmic sequences are also related. However, we found no relationship between beat-tapping skills and rhythm-memory skills. Thus, beat tapping and rhythm memory are dissociable rhythmic aptitudes. This discovery may inform future research disambiguating how distinct rhythm competencies track with specific language functions.

7.
Dancing and singing to music involve auditory-motor coordination and have been essential to human culture since ancient times. Although scholars have been trying to understand the evolutionary and developmental origin of music, early human developmental manifestations of auditory-motor interactions in music have not been fully investigated. Here we report limb movements and vocalizations in three- to four-month-old infants while they listened to music and while they were in silence. In the group analysis, we found no significant increase in the amount of movement or in the relative power spectral density around the musical tempo in the music condition compared to the silent condition. Intriguingly, however, two infants demonstrated striking increases in rhythmic movements, via kicking or arm-waving, around the musical tempo while listening to music. Monte Carlo statistics with phase-randomized surrogate data revealed that the limb movements of these individuals were significantly synchronized to the musical beat. Moreover, we found a clear increase in the formant variability of vocalizations in the group during music perception. These results suggest that infants at this age are already primed with their bodies to interact with music via limb movements and vocalizations.
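The surrogate-data logic mentioned above can be sketched as follows: a movement signal's locking to the beat is compared against phase-randomized copies that share its power spectrum but not its timing. The locking statistic, filter band, sampling rate, and synthetic data below are all illustrative assumptions rather than the study's analysis pipeline.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def phase_randomize(x, rng):
    """Surrogate with the same power spectrum but randomized Fourier phases."""
    X = np.fft.rfft(x - np.mean(x))
    phases = rng.uniform(0, 2 * np.pi, len(X))
    phases[0] = 0.0
    if len(x) % 2 == 0:
        phases[-1] = 0.0                          # keep the Nyquist bin real
    return np.fft.irfft(np.abs(X) * np.exp(1j * phases), n=len(x))

def beat_locking(x, fs, beat_times):
    """Mean resultant length of the movement phase sampled at the beat onsets."""
    b, a = butter(2, [1.0, 4.0], btype="band", fs=fs)   # band around the musical tempo
    phase = np.angle(hilbert(filtfilt(b, a, x)))
    idx = np.clip((beat_times * fs).astype(int), 0, len(x) - 1)
    return np.abs(np.mean(np.exp(1j * phase[idx])))

rng = np.random.default_rng(0)
fs, tempo_hz, dur = 50.0, 2.0, 30.0               # assumed 50 Hz motion signal, 120 bpm, 30 s
t = np.arange(0, dur, 1 / fs)
movement = np.sin(2 * np.pi * tempo_hz * t) + rng.standard_normal(t.size)  # toy limb signal
beats = np.arange(0, dur, 1 / tempo_hz)
observed = beat_locking(movement, fs, beats)
surrogate = [beat_locking(phase_randomize(movement, rng), fs, beats) for _ in range(999)]
p = (1 + sum(s >= observed for s in surrogate)) / (1 + len(surrogate))
print(f"locking = {observed:.2f}, Monte-Carlo p = {p:.3f}")
```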

8.
It was recently shown that rhythmic entrainment, long considered a human-specific mechanism, can be demonstrated in a selected group of bird species and, somewhat surprisingly, not in more closely related species such as nonhuman primates. This observation supports the vocal learning hypothesis, which suggests that rhythmic entrainment is a by-product of the vocal learning mechanisms shared by several bird and mammal species, including humans, but only weakly developed, or missing entirely, in nonhuman primates. To test this hypothesis, we measured auditory event-related potentials (ERPs) in two rhesus monkeys (Macaca mulatta), probing a component well documented in humans, the mismatch negativity (MMN), to study rhythmic expectation. We demonstrate for the first time in rhesus monkeys that a comparable ERP component, with negative deflections at early latencies, can be detected in response to infrequent pitch deviants presented in a continuous sound stream using an oddball paradigm (Experiment 1). Subsequently, we tested whether rhesus monkeys can detect gaps (omissions at random positions in the sound stream; Experiment 2) and, using more complex stimuli, the beat (omissions at the first position of a musical unit, i.e. the ‘downbeat’; Experiment 3). In contrast to what has been shown in human adults and newborns (using identical stimuli and an identical experimental paradigm), the results suggest that rhesus monkeys are not able to detect the beat in music. These findings support the hypothesis that beat induction (the cognitive mechanism that supports the perception of a regular pulse from a varying rhythm) is species-specific and absent in nonhuman primates. In addition, the findings support the auditory timing dissociation hypothesis, with rhesus monkeys being sensitive to rhythmic grouping (detecting the start of a rhythmic group) but not to the induced beat (detecting a regularity in a varying rhythm).

9.

Background

Performance of externally paced rhythmic movements requires brain and behavioral integration of sensory stimuli with motor commands. The brain mechanisms underlying the beat-synchronized rhythms and polyrhythms that musicians readily perform may differ. Given their known roles in time perception and repetitive movement, we hypothesized that basal ganglia and cerebellar structures would show greater activation for polyrhythms than for on-the-beat rhythms.

Methodology/Principal Findings

Using functional MRI, we investigated the brain networks engaged when rhythmic movements are paced by auditory cues. Musically trained participants performed rhythmic movements at 2 and 3 Hz, either with a 1:1 (on-the-beat) structure or with a 3:2 or 2:3 stimulus-movement structure. Owing to their prior musical experience, participants performed the 3:2 and 2:3 rhythmic movements automatically. Both the isorhythmic 1:1 and the polyrhythmic 3:2 or 2:3 movements yielded the expected activation in contralateral primary motor cortex and related motor areas and in the ipsilateral cerebellum. Direct comparison of functional MRI signals obtained during 3:2 or 2:3 versus on-the-beat rhythms indicated activation differences bilaterally in the supplementary motor area, ipsilaterally in the supramarginal gyrus and caudate-putamen, and contralaterally in the cerebellum.

Conclusions/Significance

The activated brain areas suggest the existence of an interconnected brain network specific to complex sensorimotor rhythmic integration, one that may also be specific to the elaboration of musical abilities.

10.
Beat gestures—spontaneously produced biphasic movements of the hand—are among the most frequently encountered co-speech gestures in human communication. They are closely temporally aligned to the prosodic characteristics of the speech signal, typically occurring on lexically stressed syllables. Despite their prevalence across speakers of the world's languages, how beat gestures impact spoken word recognition is unclear. Can these simple ‘flicks of the hand’ influence speech perception? Across a range of experiments, we demonstrate that beat gestures influence the explicit and implicit perception of lexical stress (e.g. distinguishing OBject from obJECT), and in turn can influence what vowels listeners hear. Thus, we provide converging evidence for a manual McGurk effect: relatively simple and widely occurring hand movements influence which speech sounds we hear.

11.
Musical meters vary considerably across cultures, yet relatively little is known about how culture-specific experience influences metrical processing. In Experiment 1, we compared American and Indian listeners' synchronous tapping to slow sequences. Inter-tone intervals contained silence or to-be-ignored rhythms that were designed to induce a simple meter (familiar to Americans and Indians) or a complex meter (familiar only to Indians). A subset of trials contained an abrupt switch from one rhythm to another to assess the disruptive effects of contradicting the initially implied meter. In the unfilled condition, both groups tapped earlier than the target and showed large tap-tone asynchronies (measured in relative phase). When inter-tone intervals were filled with simple-meter rhythms, American listeners tapped later than targets, but their asynchronies were smaller and declined more rapidly. Likewise, asynchronies rose sharply following a switch away from a simple-meter rhythm but not from a complex-meter rhythm. By contrast, Indian listeners performed similarly across all rhythm types, with asynchronies rapidly declining over the course of complex- and simple-meter trials. For these listeners, a switch from either simple or complex meter increased asynchronies. Experiment 2 tested American listeners but doubled the duration of the synchronization phase before (and after) the switch. Here, compared with simple meters, complex-meter rhythms elicited larger asynchronies that declined at a slower rate; however, asynchronies increased after the switch in all conditions. Our results provide evidence that ease of meter processing depends to a great extent on the amount of experience with specific meters.
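Tap-tone asynchrony expressed in relative phase, as reported above, can be computed as the signed offset of each tap from the nearest tone divided by the inter-onset interval. The snippet below is a generic illustration with made-up tap and tone times, not the study's scoring procedure.

```python
import numpy as np

def relative_phase(taps, tones):
    """Signed tap-tone asynchrony as a fraction of the inter-onset interval (roughly -0.5..0.5)."""
    ioi = np.median(np.diff(tones))
    nearest = tones[np.argmin(np.abs(taps[:, None] - tones[None, :]), axis=1)]
    return (taps - nearest) / ioi

tones = np.arange(0.0, 20.0, 0.8)                                # metronome tones every 800 ms
rng = np.random.default_rng(2)
taps = tones - 0.05 + 0.02 * rng.standard_normal(tones.size)    # slightly anticipatory tapping
phi = relative_phase(taps, tones)
print(f"mean asynchrony = {phi.mean() * 100:.1f}% of IOI, SD = {phi.std() * 100:.1f}%")
```

A negative mean relative phase corresponds to tapping ahead of the target tones, as both listener groups did in the unfilled condition.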

12.
E. Sejdić, Y. Fu, A. Pak, J. A. Fairley, T. Chau. PLoS ONE, 2012, 7(8): e43104
Walking is a complex, rhythmic task performed by the locomotor system. However, natural gait rhythms can be influenced by metronomic auditory stimuli, a phenomenon of particular interest in neurological rehabilitation. In this paper, we examined the effects of aural, visual and tactile rhythmic cues on the temporal dynamics of human gait. Data were collected from fifteen healthy adults in two sessions. Each session consisted of five 15-minute trials. In the first trial of each session, participants walked at their preferred walking speed. In subsequent trials, participants were asked to walk to a metronomic beat provided through visual, auditory, tactile, or all three cues (simultaneously and in sync), the pace of which was set to the preferred walking speed of the first trial. From the collected data, we extracted several parameters: gait speed, mean stride interval, stride-interval variability, scaling exponent, and maximum Lyapunov exponent. The extracted parameters showed that rhythmic sensory cues affect the temporal dynamics of human gait. The auditory rhythmic cue had the greatest influence on the gait parameters, while the visual cue had no statistically significant effect on the scaling exponent. These results demonstrate that visual rhythmic cues could be considered as an alternative cueing modality in rehabilitation without concern of adversely altering the statistical persistence of walking.

13.
14.
This study investigated a potential auditory illusion in duration perception induced by rhythmic temporal contexts. Listeners with or without musical training performed a duration discrimination task for a silent period in a rhythmic auditory sequence. The critical temporal interval was presented either within a perceptual group or between two perceptual groups. We report the just-noticeable difference (difference limen, DL) for temporal intervals and the point of subjective equality (PSE), derived from individual psychometric functions based on performance in a two-alternative forced-choice task. In musically untrained individuals, equal temporal intervals were perceived as significantly longer when presented between perceptual groups than within a perceptual group (109.25% versus 102.5% of the standard duration). Only the perceived duration of the between-group interval was significantly longer than its objective duration. Musically trained individuals did not show this effect. However, in both musically trained and untrained individuals, the relative difference limens for discriminating the comparison interval from the standard interval were larger in the between-groups condition than in the within-group condition (7.3% vs. 5.6% of the standard duration). Thus, rhythmic grouping affected sensitivity to duration changes in all listeners, with duration differences being harder to detect at the boundaries of rhythm groups than within rhythm groups. Our results show for the first time that temporal Gestalt induces auditory duration illusions in typical listeners, but that musical experts are not susceptible to this effect of rhythmic grouping.
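The PSE and DL reported above are standard read-outs of a psychometric function fitted to two-alternative forced-choice responses. The sketch below fits a cumulative Gaussian to simulated response proportions; the data values and the choice of the 75% point for the DL are illustrative assumptions, not the authors' fitting procedure.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def psychometric(x, pse, sigma):
    """Probability of judging the comparison interval longer than the standard."""
    return norm.cdf(x, loc=pse, scale=sigma)

# simulated 2AFC data: comparison durations (% of standard) and proportion of "longer" responses
durations = np.array([85, 90, 95, 100, 105, 110, 115], dtype=float)
p_longer = np.array([0.05, 0.10, 0.25, 0.45, 0.70, 0.90, 0.97])

(pse, sigma), _ = curve_fit(psychometric, durations, p_longer, p0=[100.0, 5.0])
dl = sigma * norm.ppf(0.75)              # difference limen: distance from the 50% to the 75% point
print(f"PSE = {pse:.1f}% of standard, DL = {dl:.1f}%")
```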

15.
Complex rhythms over the course of a year (rhythms with more than one peak and one trough) are reviewed for two species, cats and human beings. The quantitative variation during the year of grooming reflexes in the cat represents a systematic change in the expression of integrative action, a concept first formulated by Sherrington. The evaluation of these changes as systematic rather than random is addressed by a simple simulation and by the application of a periodic regression analysis. The synchrony among individuals within a group, and the synchrony among groups studied during different years, establish that environmental factors control and regulate the complex rhythms. Two hypotheses are presented: (1) the complex rhythms are driven by an environmental variable with the same complex pattern; and (2) the complex rhythms are generated by a complex photoperiodic response curve. These hypotheses are described and examples of the supporting evidence are presented. The review of the literature establishes that complex multi-modal rhythms exist over the course of a year; that they may be considered adaptive in providing more than one “window” for producing offspring; and that a basic multi-modal variation in the physiological substrate is a common and ubiquitous aspect of temporal order and may be the rhythmic source of periodic diseases.

17.
Timbre is the attribute of sound that allows humans and other animals to distinguish among different sound sources. Studies based on psychophysical judgments of musical timbre, ecological analyses of sound's physical characteristics, and machine-learning approaches have all suggested that timbre is a multifaceted attribute invoking both spectral and temporal sound features. Here, we explored the neural underpinnings of musical timbre. We used a neuro-computational framework based on spectro-temporal receptive fields, recorded from over a thousand neurons in the mammalian primary auditory cortex as well as from simulated cortical neurons, augmented with a nonlinear classifier. The model was able to perform robust instrument classification irrespective of pitch and playing style, with an accuracy of 98.7%. Using the same front end, the model was also able to reproduce perceptual distance judgments between timbres as perceived by human listeners. The study demonstrates that joint spectro-temporal features, such as those observed in the mammalian primary auditory cortex, are critical for providing a representation rich enough to account for perceptual judgments of timbre by human listeners, as well as for the recognition of musical instruments.
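The study itself used spectro-temporal receptive fields measured from cortical neurons together with a nonlinear classifier; as a much simpler stand-in for the idea of joint spectro-temporal features, the sketch below computes a crude modulation spectrum (a 2-D FFT of the log spectrogram) and compares two synthetic sounds. All parameters and signals are illustrative assumptions.

```python
import numpy as np
from scipy.signal import spectrogram

def modulation_features(x, fs):
    """Crude joint spectro-temporal features: 2-D FFT magnitude of the log spectrogram."""
    _, _, S = spectrogram(x, fs=fs, nperseg=512, noverlap=256)
    logS = np.log(S + 1e-10)
    M = np.abs(np.fft.fft2(logS - logS.mean()))
    return M[:8, :8].ravel()             # keep only slow spectral/temporal modulations

fs = 16000
t = np.arange(0, 2.0, 1 / fs)
tone = np.sin(2 * np.pi * 440 * t) * np.exp(-t)              # plucked-string-like decay
noise = np.random.default_rng(3).standard_normal(t.size)     # noisy, drum-like excitation
d = np.linalg.norm(modulation_features(tone, fs) - modulation_features(noise, fs))
print(f"modulation-spectrum distance between the two sounds: {d:.1f}")
```

Sounds with similar temporal envelopes and spectral shapes end up close in this feature space, which is the intuition behind using joint spectro-temporal representations for timbre.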

18.
Simulation of Rhythmic Tree Growth under Constant Conditions
The observed rhythmic growth of trees under relatively uniform environmental conditions has been ascribed by some authors to endogenous factors and by others to slight fluctuations of environmental factors. A model for the simulation of rhythmic growth was developed based on the assumption that endogenous rhythms can result from feedback interaction between two potentially continuous processes, such as shoot and root growth, if the slower process is rate-limiting for the faster one. Rhythmic growth in trees would then be the consequence of feedback mechanisms needed to maintain a constant shoot:root ratio. The period length of the rhythms depends on the rates of the growth processes involved. Environmental factors modify period length by affecting growth rates. Growth patterns predicted by the model compare well with growth measurements of tropical trees. The transition from intermittent to continuous growth, as observed under certain conditions, can be simulated by varying a single parameter in the model.
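The feedback mechanism described above, in which the slower of two continuous processes limits the faster one, can be caricatured with a pair of coupled difference equations in which shoot growth switches off when the shoot:root ratio gets too high and resumes once the roots catch up. The thresholds, rates, and hysteresis rule below are illustrative assumptions, not the published model.

```python
import numpy as np

def grow(steps=600, dt=0.1, on_ratio=0.9, off_ratio=1.4, k_shoot=0.6, k_root=0.08):
    """Toy shoot/root feedback: shoot growth switches on and off with the shoot:root ratio."""
    shoot, root, growing = 1.0, 1.0, True
    shoot_hist = []
    for _ in range(steps):
        ratio = shoot / root
        if growing and ratio > off_ratio:
            growing = False                       # shoot has outrun the roots: flush pauses
        elif not growing and ratio < on_ratio:
            growing = True                        # roots have caught up: new flush begins
        shoot += dt * (k_shoot * shoot if growing else 0.0)
        root += dt * k_root * root * ratio        # root growth scales with relative shoot size
        shoot_hist.append(shoot)
    return np.array(shoot_hist)

shoot = grow()
flushes = np.diff(shoot) > 1e-6
print("fraction of time steps with active shoot growth:", round(float(flushes.mean()), 2))
```

With these toy settings the shoot grows in intermittent flushes separated by pauses; raising the root growth rate relative to the shoot rate shortens the pauses and, past a point, yields effectively continuous growth, echoing the single-parameter transition described in the abstract.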

19.
Pulse is often understood as a feature of a (quasi-) isochronous event sequence that is picked up by an entrained subject. However, entrainment does not only occur between quasi-periodic rhythms. This paper demonstrates the expression of pulse by subjects listening to non-periodic musical stimuli and investigates the processes behind this behaviour. The stimuli are extracts from the introductory sections of North Indian (Hindustani) classical music performances (alap, jor and jhala). The first of three experiments demonstrates regular motor responses to both irregular alap and more regular jor sections: responses to alap appear related to individual spontaneous tempi, while for jor they relate to the stimulus event rate. A second experiment investigated whether subjects respond to average periodicities of the alap section, and whether their responses show phase alignment to the musical events. In the third experiment we investigated responses to a broader sample of performances, testing their relationship to spontaneous tempo, and the effect of prior experience with this music. Our results suggest an entrainment model in which pulse is understood as the experience of one’s internal periodicity: it is not necessarily linked to temporally regular, structured sensory input streams; it can arise spontaneously through the performance of repetitive motor actions, or on exposure to event sequences with rather irregular temporal structures. Greater regularity in the external event sequence leads to entrainment between motor responses and stimulus sequence, modifying subjects’ internal periodicities in such a way that they are either identical or harmonically related to each other. This can be considered as the basis for shared (rhythmic) experience and may be an important process supporting ‘social’ effects of temporally regular music.

20.
A common but none the less remarkable human faculty is the ability to recognize and reproduce familiar pieces of music. No two performances of a given piece will ever be acoustically identical, but a listener can perceive, in both, the same rhythmic and tonal relationships, and can judge whether a particular note or phrase was played out of time or out of tune. The problem considered in this lecture is that of describing the conceptual structures by which we represent Western classical music and the processes by which these structures are created. Some new hypotheses about the perception of rhythm and tonality have been cast in the form of a computer program which will transcribe a live keyboard performance of a classical melody into the equivalent of standard musical notation.
