Similar Articles
20 similar articles found
1.
Speech processing inherently relies on the perception of specific, rapidly changing spectral and temporal acoustic features. Advanced acoustic perception is also integral to musical expertise, and accordingly several studies have demonstrated a significant relationship between musical training and superior processing of various aspects of speech. Speech and music appear to overlap in spectral and temporal features; however, it remains unclear which of these acoustic features, crucial for speech processing, are most closely associated with musical training. The present study examined the perceptual acuity of musicians to the acoustic components of speech necessary for intra-phonemic discrimination of synthetic syllables. We compared musicians and non-musicians on discrimination thresholds of three synthetic speech syllable continua that varied in their spectral and temporal discrimination demands, specifically voice onset time (VOT) and amplitude envelope cues in the temporal domain. Musicians demonstrated superior discrimination only for syllables that required resolution of temporal cues. Furthermore, performance on the temporal syllable continua positively correlated with the length and intensity of musical training. These findings support one potential mechanism by which musical training may selectively enhance speech perception, namely by reinforcing temporal acuity and/or perception of amplitude rise time, and they carry implications for the translation of musical training into long-term linguistic abilities.

2.
Much of our daily communication occurs in the presence of background noise, compromising our ability to hear. While understanding speech in noise is a challenge for everyone, it becomes increasingly difficult as we age. Although aging is generally accompanied by hearing loss, this perceptual decline cannot fully account for the difficulties experienced by older adults for hearing in noise. Decreased cognitive skills concurrent with reduced perceptual acuity are thought to contribute to the difficulty older adults experience understanding speech in noise. Given that musical experience positively impacts speech perception in noise in young adults (ages 18-30), we asked whether musical experience benefits an older cohort of musicians (ages 45-65), potentially offsetting the age-related decline in speech-in-noise perceptual abilities and associated cognitive function (i.e., working memory). Consistent with performance in young adults, older musicians demonstrated enhanced speech-in-noise perception relative to nonmusicians along with greater auditory, but not visual, working memory capacity. By demonstrating that speech-in-noise perception and related cognitive function are enhanced in older musicians, our results imply that musical training may reduce the impact of age-related auditory decline.

3.
The corpus callosum (CC) is a brain structure composed of axon fibres linking the right and left hemispheres. Musical training is associated with larger midsagittal cross-sectional area of the CC, suggesting that interhemispheric communication may be faster in musicians. Here we compared interhemispheric transmission times (ITTs) for musicians and non-musicians. ITT was measured by comparing simple reaction times to stimuli presented to the same hemisphere that controlled a button-press response (uncrossed reaction time), or to the contralateral hemisphere (crossed reaction time). Both visual and auditory stimuli were tested. We predicted that the crossed-uncrossed difference (CUD) for musicians would be smaller than for non-musicians as a result of faster interhemispheric transfer times. We did not expect a difference in CUDs between the visual and auditory modalities for either musicians or non-musicians, as previous work indicates that interhemispheric transfer may happen through the genu of the CC, which contains motor fibres rather than sensory fibres. There were no significant differences in CUDs between musicians and non-musicians. However, auditory CUDs were significantly smaller than visual CUDs. Although this auditory-visual difference was larger in musicians than non-musicians, the interaction between modality and musical training was not significant. Therefore, although musical training does not significantly affect ITT, the crossing of auditory information between hemispheres appears to be faster than that of visual information, perhaps because subcortical pathways play a greater role for auditory interhemispheric transfer.
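As a reminder of how the dependent measure in this study is defined, the sketch below computes a crossed-uncrossed difference (CUD) from per-trial reaction times. The data values are hypothetical and the snippet only illustrates the definition; it is not the authors' analysis code.

```python
import numpy as np

# Hypothetical reaction times (ms) for one participant and one modality.
uncrossed_rt = np.array([312, 298, 305, 301, 310])  # stimulus on the same side as the responding hand
crossed_rt = np.array([316, 303, 309, 306, 313])    # stimulus in the opposite hemifield

# The crossed-uncrossed difference estimates interhemispheric transmission time:
# mean crossed RT minus mean uncrossed RT.
cud = crossed_rt.mean() - uncrossed_rt.mean()
print(f"CUD = {cud:.1f} ms")  # a smaller CUD is taken to indicate faster interhemispheric transfer
```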

4.
Musical competence may confer cognitive advantages that extend beyond processing of familiar musical sounds. Behavioural evidence indicates a general enhancement of both working memory and attention in musicians. It is possible that musicians, due to their training, are better able to maintain focus on task-relevant stimuli, a skill which is crucial to working memory. We measured the blood oxygenation-level dependent (BOLD) activation signal in musicians and non-musicians during working memory of musical sounds to determine the relation among performance, musical competence and generally enhanced cognition. All participants easily distinguished the stimuli. We tested the hypothesis that musicians nonetheless would perform better, and that differential brain activity would mainly be present in cortical areas involved in cognitive control such as the lateral prefrontal cortex. The musicians performed better as reflected in reaction times and error rates. Musicians also had larger BOLD responses than non-musicians in neuronal networks that sustain attention and cognitive control, including regions of the lateral prefrontal cortex, lateral parietal cortex, insula, and putamen in the right hemisphere, and bilaterally in the posterior dorsal prefrontal cortex and anterior cingulate gyrus. The relationship between the task performance and the magnitude of the BOLD response was more positive in musicians than in non-musicians, particularly during the most difficult working memory task. The results confirm previous findings that neural activity increases during enhanced working memory performance. The results also suggest that superior working memory task performance in musicians relies on an enhanced ability to exert sustained cognitive control. This cognitive benefit in musicians may be a consequence of focused musical training.

5.
This study investigated whether musical training enhances auditory selective attention based on pitch and on spatial location, and examined the neural mechanisms underlying training-related auditory plasticity. In an auditory perception experiment, participants selected one of two simultaneously presented spoken digits on the basis of either a pitch difference or a difference in spatial location. In an auditory cognition experiment, complex tones differing in frequency resolvability were presented in quiet and in noise while auditory brainstem frequency-following responses (FFRs) were recorded. Four FFR analysis methods are proposed: the short-time phase-locking value of the envelope-related FFR (FFR_ENV), instantaneous phase-difference polar plots, the mean phase-difference vector, and the amplitude-spectrum signal-to-noise ratio (SNR) of the temporal-fine-structure-related FFR (FFR_TFS). The results showed that musically trained participants were more accurate and responded faster on the pitch-based task. Background noise did not affect neural phase locking at the fundamental frequency (F0) in either group, but it significantly reduced phase locking at the harmonics. Musically trained participants showed stronger phase locking at F0 and better noise resistance at the harmonics, and their FFR_TFS amplitude-spectrum SNR correlated positively with behavioural accuracy on the pitch-based task. The improvement in pitch-based selective attention in musically trained listeners therefore depends on enhanced neurocognitive processing: with musical training, the phase locking of FFR_ENV at F0, the noise robustness and sustained phase locking of FFR_TFS at the harmonics, and the FFR_TFS amplitude-spectrum SNR were all markedly enhanced. Musical training thus exerts a pronounced plasticity effect on auditory selective attention.
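Two of the FFR measures listed above, the phase-locking value of FFR_ENV at F0 and the amplitude-spectrum SNR of FFR_TFS at the harmonics, can be illustrated with a short sketch. The array layout, parameter values and the exact SNR definition are assumptions made for illustration; the paper's actual processing pipeline is not reproduced here.

```python
import numpy as np

def plv_at_frequency(trials, fs, freq):
    """Phase-locking value across trials at one frequency (e.g., the stimulus F0).

    trials: (n_trials, n_samples) array of FFR sweeps (a hypothetical data layout),
    fs: sampling rate in Hz, freq: frequency of interest in Hz.
    """
    n = trials.shape[1]
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    spectra = np.fft.rfft(trials, axis=1)
    phases = np.angle(spectra[:, np.argmin(np.abs(freqs - freq))])
    # Length of the mean unit phase vector: 1 = perfect phase locking, 0 = random phase.
    return np.abs(np.mean(np.exp(1j * phases)))

def harmonic_snr_db(avg_response, fs, f0, harmonics=(2, 3, 4), noise_halfwidth=40.0):
    """Amplitude-spectrum SNR (dB) at selected harmonics of F0, measured against
    neighbouring non-harmonic bins (an illustrative definition)."""
    n = len(avg_response)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    amp = np.abs(np.fft.rfft(avg_response))
    snrs = []
    for k in harmonics:
        target = amp[np.argmin(np.abs(freqs - k * f0))]
        neighbours = (np.abs(freqs - k * f0) > 5.0) & (np.abs(freqs - k * f0) < noise_halfwidth)
        snrs.append(20.0 * np.log10(target / amp[neighbours].mean()))
    return float(np.mean(snrs))
```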

6.
Using magnetoencephalography (MEG), we investigated the influence of long-term musical training on the processing of partly imagined tone patterns (imagery condition) compared to the same perceived patterns (perceptual condition). The magnetic counterpart of the mismatch negativity (MMNm) was recorded and compared between musicians and non-musicians in order to assess the effect of musical training on the detection of deviants to tone patterns. The results indicated a clear MMNm in the perceptual condition as well as in a simple pitch oddball (control) condition in both groups. However, there was no significant mismatch response in either group in the imagery condition despite above-chance behavioral performance in the task of detecting deviant tones. The latency and the laterality of the MMNm in the perceptual condition differed significantly between groups, with an earlier MMNm in musicians, especially in the left hemisphere. In contrast, the MMNm amplitudes did not differ significantly between groups. The behavioral results revealed a clear effect of long-term musical training in both experimental conditions. The obtained results represent new evidence that the processing of tone patterns is faster and more strongly lateralized in musically trained subjects, which is consistent with findings from other paradigms showing enhanced auditory neural functioning as a result of long-term musical training.

7.
The diagnosis of tinnitus relies on self-report. Psychoacoustic measurements of tinnitus pitch and loudness are essential for assessing claims and discriminating true from false ones. For this reason, the quantification of tinnitus remains a challenging research goal. We aimed to: (1) assess the precision of a new tinnitus likeness rating procedure with a continuous-pitch presentation method, controlling for music training, and (2) test whether tinnitus psychoacoustic measurements have the sensitivity and specificity required to detect people faking tinnitus. Musicians and non-musicians with tinnitus, as well as simulated malingerers without tinnitus, were tested. Most were retested several weeks later. Tinnitus pitch matching was first assessed using the likeness rating method: pure tones from 0.25 to 16 kHz were presented randomly to participants, who had to rate the likeness of each tone to their tinnitus, and to adjust its level from 0 to 100 dB SPL. Tinnitus pitch matching was then assessed with a continuous-pitch method: participants had to match the pitch of their tinnitus to an external tone by moving their finger across a touch-sensitive strip, which generated a continuous pure tone from 0.5 to 20 kHz in 1-Hz steps. The predominant tinnitus pitch was consistent across both methods for both musicians and non-musicians, although musicians displayed better external tone pitch matching abilities. Simulated malingerers rated loudness much higher than did the other groups with a high degree of specificity (94.4%) and were unreliable in loudness (not pitch) matching from one session to the other. Retest data showed similar pitch matching responses for both methods for all participants. In conclusion, tinnitus pitch and loudness reliably correspond to the tinnitus percept, and psychoacoustic loudness matches are sensitive and specific to the presence of tinnitus.
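As a reminder of how the reported 94.4% specificity is defined, the snippet below computes sensitivity and specificity for a malingering-detection rule. The counts are invented for illustration and are not the study's data.

```python
# Hypothetical confusion counts for a rule that flags implausibly high and unstable loudness matches.
true_positives = 15   # simulated malingerers correctly flagged
false_negatives = 3   # simulated malingerers missed
true_negatives = 34   # genuine tinnitus participants correctly not flagged
false_positives = 2   # genuine tinnitus participants incorrectly flagged

sensitivity = true_positives / (true_positives + false_negatives)
specificity = true_negatives / (true_negatives + false_positives)
print(f"sensitivity = {sensitivity:.1%}, specificity = {specificity:.1%}")
```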

8.
The current study investigates whether long-term music training and practice are associated with enhancement of general cognitive abilities in late middle-aged to older adults. Professional musicians and non-musicians who were matched on age, education, vocabulary, and general health were compared on a near-transfer task involving auditory processing and on far-transfer tasks that measured spatial span and aspects of cognitive control. Musicians outperformed non-musicians on the near-transfer task, on most but not all of the far-transfer tasks, and on a composite measure of cognitive control. The results suggest that sustained music training or involvement is associated with improved aspects of cognitive functioning in older adults.

9.
We measured the characteristics of evoked potentials (EPs) elicited by the presentation of significant tonal acoustic stimuli in subjects systematically engaged in music training (n = 7) and in those having no corresponding experience (n = 10). The peak latencies of the P3 component in the left hemisphere of musicians were significantly shorter than those in non-musicians (on average, 279.9 and 310.2 msec, respectively). Musicians demonstrated no interhemispheric differences in the latencies of components N2, P3, and N3, while a trend toward asymmetry was obvious in non-musicians (the above components were generated somewhat later in the left hemisphere). The amplitudes of the EP components demonstrated no significant intergroup differences, but the amplitude of the P3 wave was higher in the left hemisphere of non-musicians than in the right hemisphere. Possible neurophysiological correlates of the observed specificity of EPs in the examined groups are discussed.

10.
Perfect pitch, also known as absolute pitch (AP), refers to the rare ability to identify or produce a musical tone correctly without the benefit of an external reference. AP is often considered to reflect musical giftedness, but it has also been associated with certain disabilities due to increased prevalence of AP in individuals with sensory and developmental disorders. Here, we determine whether individual autistic traits are present in people with AP. We quantified subclinical levels of autism traits using the Autism-Spectrum Quotient (AQ) in three matched groups of subjects: 16 musicians with AP (APs), 18 musicians without AP (non-APs), and 16 non-musicians. In addition, we measured AP ability by a pitch identification test with sine wave tones and piano tones. We found a significantly higher degree of autism traits in APs than in non-APs and non-musicians, and autism scores were significantly correlated with pitch identification scores (r = .46, p = .003). However, our results showed that APs did not differ from non-APs on diagnostically crucial social and communicative domain scores and their total AQ scores were well below clinical thresholds for autism. Group differences emerged on the imagination and attention switching subscales of the AQ. Thus, whilst these findings do link AP with autism, they also show that AP ability is most strongly associated with personality traits that vary widely within the normal population.

11.

Background

The ability to separate two interleaved melodies is an important factor in music appreciation. This ability is greatly reduced in people with hearing impairment, contributing to difficulties in music appreciation. The aim of this study was to assess whether visual cues, musical training or musical context could have an effect on this ability, and potentially improve music appreciation for the hearing impaired.

Methods

Musicians (N = 18) and non-musicians (N = 19) were asked to rate the difficulty of segregating a four-note repeating melody from interleaved random distracter notes. Visual cues were provided on half the blocks, and two musical contexts were tested, with the overlap between melody and distracter notes either gradually increasing or decreasing.

Conclusions

Visual cues, musical training, and musical context all affected the difficulty of extracting the melody from a background of interleaved random distracter notes. Visual cues were effective in reducing the difficulty of segregating the melody from distracter notes, even in individuals with no musical training. These results are consistent with theories that indicate an important role for central (top-down) processes in auditory streaming mechanisms, and suggest that visual cues may help the hearing-impaired enjoy music.

12.
Musical expertise is associated with structural and functional changes in the brain that underlie facilitated auditory perception. We investigated whether the phase locking (PL) and amplitude modulations (AM) of neuronal oscillations in response to musical chords are correlated with musical expertise and whether they reflect the prototypicality of chords in Western tonal music. To this aim, we recorded magnetoencephalography (MEG) while musicians and non-musicians were presented with common prototypical major and minor chords, and with uncommon, non-prototypical dissonant and mistuned chords, while watching a silenced movie. We then analyzed the PL and AM of ongoing oscillations in the theta (4–8 Hz), alpha (8–14 Hz), beta (14–30 Hz) and gamma (30–80 Hz) bands in response to these chords. We found that musical expertise was associated with strengthened PL of ongoing oscillations to chords over a wide frequency range during the first 300 ms from stimulus onset, as opposed to increased alpha-band AM to chords over temporal MEG channels. In musicians, the gamma-band PL was strongest for non-prototypical chords compared to the other chords, while in non-musicians PL was strongest for minor chords. In both musicians and non-musicians the long-latency (> 200 ms) gamma-band PL was also sensitive to chord identity, and particularly to the amplitude modulations (beats) of the dissonant chord. These findings suggest that musical expertise modulates oscillation PL to musical chords and that the strength of these modulations is dependent on chord prototypicality.

13.
Musical imagery is a relatively unexplored area, partly because of deficiencies in existing experimental paradigms, which are often difficult, unreliable, or do not provide objective measures of performance. Here we describe a novel protocol, the Pitch Imagery Arrow Task (PIAT), which induces and trains pitch imagery in both musicians and non-musicians. Given a tonal context and an initial pitch sequence, arrows are displayed to elicit a scale-step sequence of imagined pitches, and participants indicate whether the final imagined tone matches an audible probe. The task uses a staircase design that accommodates individual differences in musical experience and imagery ability. This new protocol was used to investigate the roles that musical expertise, self-reported auditory vividness and mental control play in imagery performance. Performance on the task was significantly better for participants who employed a musical imagery strategy compared to participants who used an alternative cognitive strategy, and it positively correlated with scores on the Control subscale of the Bucknell Auditory Imagery Scale (BAIS). Multiple regression analysis revealed that imagery performance accuracy was best predicted by a combination of strategy use and scores on the Vividness subscale of the BAIS. These results confirm that competent performance on the PIAT requires active musical imagery and is very difficult to achieve using alternative cognitive strategies. Auditory vividness and mental control were more important than musical experience in the ability to perform manipulation of pitch imagery.

14.
Whereas the use of discrete pitch intervals is characteristic of most musical traditions, the size of the intervals and the way in which they are used is culturally specific. Here we examine the hypothesis that these differences arise because of a link between the tonal characteristics of a culture's music and its speech. We tested this idea by comparing pitch intervals in the traditional music of three tone language cultures (Chinese, Thai and Vietnamese) and three non-tone language cultures (American, French and German) with pitch intervals between voiced speech segments. Changes in pitch direction occur more frequently and pitch intervals are larger in the music of tone compared to non-tone language cultures. More frequent changes in pitch direction and larger pitch intervals are also apparent in the speech of tone compared to non-tone language cultures. These observations suggest that the different tonal preferences apparent in music across cultures are closely related to the differences in the tonal characteristics of voiced speech.
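The comparison above rests on expressing the jump between successive fundamental-frequency (F0) values on a logarithmic, musical scale. A minimal sketch with hypothetical F0 values shows the usual semitone conversion and a simple count of pitch-direction changes; it is not the authors' analysis code.

```python
import numpy as np

def interval_semitones(f1, f2):
    """Signed pitch interval between two frequencies, in equal-tempered semitones."""
    return 12 * np.log2(f2 / f1)

# Hypothetical F0 values (Hz) for successive voiced segments or melody notes.
f0_track = np.array([220.0, 247.0, 220.0, 196.0, 262.0])
intervals = interval_semitones(f0_track[:-1], f0_track[1:])

print(np.round(intervals, 2))                                      # interval sizes in semitones
print("direction changes:", int(np.sum(np.diff(np.sign(intervals)) != 0)))
```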

15.
Numerous speech processing techniques have been applied to assist hearing-impaired subjects with extreme high-frequency hearing losses who can be helped only to a limited degree with conventional hearing aids. The results of providing this class of deaf subjects with a speech encoding hearing aid, which is able to reproduce intelligible speech for their particular needs, have generally been disappointing. There are at least four problems related to bandwidth compression applied to the voiced portion of speech: (1) the problem of pitch extraction in real time; (2) pitch extraction under realistic listening conditions, i.e. when competing speech and noise sources are present; (3) an insufficient data base for successful compression of voiced speech; and (4) the introduction of undesirable spectral energies in the bandwidth-compressed signal, due to the compression process itself. Experiments seem to indicate that voiced speech segments bandwidth limited to f = 1000 Hz, even at a loss of higher formant frequencies, are in most instances superior in intelligibility compared to bandwidth-compressed voiced speech segments of the same bandwidth, even if pitch can be extracted with no error. With the added complexity of real-time pitch extraction which has to function in actual listening conditions, it is doubtful that a speech encoding hearing aid, based on bandwidth compression on the voiced portion of speech, could be successfully implemented. However, if bandwidth compression is applied to the unvoiced portions of speech only, the above limitations can be overcome (1).
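For concreteness, the sketch below band-limits a synthetic voiced signal to roughly 1000 Hz with an ordinary low-pass filter, as a stand-in for the bandwidth-limited comparison condition described above. The filter type and parameters are assumptions, not the processing used in the cited experiments.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def lowpass_1khz(signal, fs, cutoff=1000.0, order=8):
    """Band-limit a signal to roughly 0-1 kHz (illustrative stand-in for the
    'bandwidth limited to f = 1000 Hz' condition)."""
    sos = butter(order, cutoff, btype="low", fs=fs, output="sos")
    return sosfiltfilt(sos, signal)

# Synthetic "voiced" segment: F0 at 120 Hz plus harmonics up to about 3 kHz.
fs = 16000
t = np.arange(0, 0.5, 1.0 / fs)
voiced = sum(np.sin(2 * np.pi * 120 * k * t) / k for k in range(1, 26))
band_limited = lowpass_1khz(voiced, fs)
```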

16.
Most perceived parameters of sound (e.g. pitch, duration, timbre) can also be imagined in the absence of sound. These parameters are imagined more veridically by expert musicians than non-experts. Evidence for whether loudness is imagined, however, is conflicting. In music, the question of whether loudness is imagined is particularly relevant due to its role as a principal parameter of performance expression. This study addressed the hypothesis that the veridicality of imagined loudness improves with increasing musical expertise. Experts, novices and non-musicians imagined short passages of well-known classical music under two counterbalanced conditions: 1) while adjusting a slider to indicate imagined loudness of the music and 2) while tapping out the rhythm to indicate imagined timing. Subtests assessed music listening abilities and working memory span to determine whether these factors, also hypothesised to improve with increasing musical expertise, could account for imagery task performance. Similarity between each participant’s imagined and listening loudness profiles and reference recording intensity profiles was assessed using time series analysis and dynamic time warping. The results suggest a widespread ability to imagine the loudness of familiar music. The veridicality of imagined loudness tended to be greatest for the expert musicians, supporting the predicted relationship between musical expertise and musical imagery ability.
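Dynamic time warping, named above as the similarity measure, can be illustrated with a small self-contained sketch that compares a hypothetical imagined-loudness slider trace with a reference intensity contour. The data and the simple absolute-difference cost are choices made for illustration; the study's actual time-series pipeline is not described in the abstract.

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic-time-warping distance between two 1-D profiles
    (e.g., an imagined-loudness trace and a reference intensity contour)."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return cost[n, m]

# Hypothetical normalized loudness profiles (0-1), not the study's data.
imagined = np.array([0.2, 0.4, 0.7, 0.9, 0.6, 0.3])
reference = np.array([0.2, 0.3, 0.5, 0.8, 0.9, 0.5, 0.3])
print(f"DTW distance = {dtw_distance(imagined, reference):.2f}")
```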

17.
Luo C, Guo ZW, Lai YX, Liao W, Liu Q, Kendrick KM, Yao DZ, Li H. PLoS ONE 2012, 7(5): e36568
A number of previous studies have examined music-related plasticity in terms of multi-sensory and motor integration, but little is known about the functional and effective connectivity patterns of spontaneous intrinsic activity in these systems during the resting state in musicians. Using functional connectivity and Granger causal analysis, functional and effective connectivity among the motor and multi-sensory (visual, auditory and somatosensory) cortices were evaluated using resting-state functional magnetic resonance imaging (fMRI) in musicians and non-musicians. The results revealed that functional connectivity was significantly increased in the motor and multi-sensory cortices of musicians. Moreover, the Granger causality results demonstrated a significant increase in outflow-inflow degree in the auditory cortex, with the strongest causal outflow pattern of effective connectivity being found in musicians. These resting-state fMRI findings indicate enhanced functional integration among the lower-level perceptual and motor networks in musicians, and may reflect functional consolidation (plasticity) resulting from long-term musical training, involving both multi-sensory and motor functional integration.
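A minimal sketch of the two analysis ideas named here, functional connectivity as zero-lag correlation between regional time courses and effective connectivity via Granger causality, using hypothetical ROI signals and the grangercausalitytests function from statsmodels. It illustrates the general approach only, not the authors' fMRI pipeline.

```python
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(0)

# Hypothetical ROI time courses (e.g., auditory and motor cortex); here the
# "auditory" series drives the "motor" series with a one-sample lag.
n = 240
auditory = rng.standard_normal(n)
motor = 0.6 * np.roll(auditory, 1) + 0.4 * rng.standard_normal(n)

# Functional connectivity: zero-lag Pearson correlation between the two ROIs.
fc = np.corrcoef(auditory, motor)[0, 1]
print(f"functional connectivity r = {fc:.2f}")

# Effective connectivity: does the auditory series Granger-cause the motor series?
# grangercausalitytests checks whether the second column helps predict the first.
ts = np.column_stack([motor, auditory])
result = grangercausalitytests(ts, maxlag=2)
f_stat, p_value = result[1][0]["ssr_ftest"][:2]
print(f"Granger causality (lag 1): F = {f_stat:.1f}, p = {p_value:.3g}")
```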

18.

Background

Recent behavioral studies report correlational evidence to suggest that non-musicians with good pitch discrimination sing more accurately than those with poorer auditory skills. However, other studies have reported a dissociation between perceptual and vocal production skills. In order to elucidate the relationship between auditory discrimination skills and vocal accuracy, we administered an auditory-discrimination training paradigm to a group of non-musicians to determine whether training-enhanced auditory discrimination would specifically result in improved vocal accuracy.

Methodology/Principal Findings

We utilized micromelodies (i.e., melodies with seven different interval scales, each smaller than a semitone) as the main stimuli for auditory discrimination training and testing, and we used single-note and melodic singing tasks to assess vocal accuracy in two groups of non-musicians (experimental and control). To determine if any training-induced improvements in vocal accuracy would be accompanied by related modulations in cortical activity during singing, the experimental group of non-musicians also performed the singing tasks while undergoing functional magnetic resonance imaging (fMRI). Following training, the experimental group exhibited significant enhancements in micromelody discrimination compared to controls. However, we did not observe a correlated improvement in vocal accuracy during single-note or melodic singing, nor did we detect any training-induced changes in activity within brain regions associated with singing.

Conclusions/Significance

Given the observations from our auditory training regimen, we therefore conclude that perceptual discrimination training alone is not sufficient to improve vocal accuracy in non-musicians, supporting the suggested dissociation between auditory perception and vocal production.

19.
Executive functions (EF) are cognitive capacities that allow for planned, controlled behavior and strongly correlate with academic abilities. Several extracurricular activities have been shown to improve EF; however, the relationship between musical training and EF remains unclear due to methodological limitations in previous studies. To explore this further, two experiments were performed using a standardized EF battery: one with 30 adults with and without musical training, and one with 27 musically trained and untrained children (matched for general cognitive abilities and socioeconomic variables). Furthermore, the neural correlates of EF skills in musically trained and untrained children were investigated using fMRI. Adult musicians compared to non-musicians showed enhanced performance on measures of cognitive flexibility, working memory, and verbal fluency. Musically trained children showed enhanced performance on measures of verbal fluency and processing speed, and significantly greater activation in pre-SMA/SMA and right VLPFC during rule representation and task-switching compared to musically untrained children. Overall, musicians show enhanced performance on several constructs of EF, and musically trained children further show heightened brain activation in traditional EF regions during task-switching. These results support the working hypothesis that musical training may promote the development and maintenance of certain EF skills, which could mediate the previously reported links between musical training and enhanced cognitive skills and academic achievement.

20.
The musician's brain is considered a good model of brain plasticity, as musical training is known to modify auditory perception and related cortical organization. Here, we show that music-related modifications can also extend beyond motor and auditory processing and generalize (transfer) to speech processing. Previous studies have shown that adults and newborns can segment a continuous stream of linguistic and non-linguistic stimuli based only on the probabilities of occurrence between adjacent syllables, tones or timbres. The paradigm classically used in these studies consists of a passive exposure phase followed by a testing phase. By using both behavioural and electrophysiological measures, we recently showed that adult musicians and musically trained children outperform nonmusicians in the test following brief exposure to an artificial sung language. However, the behavioural test does not allow for studying the learning process per se but rather the result of the learning. In the present study, we analyze the electrophysiological learning curves, that is, the ongoing brain dynamics recorded as the learning takes place. While musicians show an inverted U-shaped learning curve, nonmusicians show a linear learning curve. Analyses of Event-Related Potentials (ERPs) allow for a greater understanding of how and when musical training can improve speech segmentation. These results provide evidence of enhanced neural sensitivity to statistical regularities in musicians and support the hypothesis of a positive transfer of training effect from music to sound stream segmentation in general.
