Similar Literature (20 records)
1.
This study investigated whether musical training enhances auditory selective attention based on pitch and on spatial location, and the neural mechanisms underlying training-related auditory plasticity. In the auditory perception experiment, subjects selected one of two simultaneously presented digits according to either a pitch difference or a spatial-location difference. In the auditory cognition experiment, complex tones of differing frequency resolvability were presented in quiet and in noise while auditory brainstem frequency-following responses (FFRs) were recorded. Four FFR analysis methods are proposed: the short-time phase-locking value of the envelope-related frequency-following response (FFR_ENV), instantaneous phase-difference polar plots, the mean phase-difference vector, and the amplitude-spectrum signal-to-noise ratio (SNR) of the temporal-fine-structure-related frequency-following response (FFR_TFS). Results showed that musically trained subjects were more accurate and faster on the pitch-based task. External noise did not affect neural phase locking at the fundamental frequency (F0) in either group, but significantly degraded phase locking at the harmonics. Musically trained subjects showed stronger phase locking at F0 and greater noise resistance at the harmonics, and their FFR_TFS amplitude-spectrum SNR correlated positively with pitch-based behavioral accuracy. Thus the improved pitch-based selective attention of musically trained listeners depends on enhanced neurocognitive ability: after musical training, FFR_ENV phase locking at F0, the noise resistance and sustained phase locking of FFR_TFS at the harmonics, and the FFR_TFS amplitude-spectrum SNR all increased markedly. Musical training therefore confers substantial plasticity on auditory selective attention.
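The short-time phase-locking analysis named above can be sketched as follows. This is a minimal illustration of a PLV computation on simulated single-trial FFRs, not the study's pipeline; all signal parameters (sampling rate, F0, trial count, noise level, window length) are invented.

```python
import numpy as np

fs = 16000          # sampling rate (Hz), illustrative
f0 = 220.0          # stimulus fundamental frequency (Hz), illustrative
n_trials = 50
t = np.arange(0, 0.2, 1 / fs)

rng = np.random.default_rng(0)
# Simulated single-trial FFRs: a phase-locked F0 component plus noise.
trials = np.sin(2 * np.pi * f0 * t) + 0.5 * rng.standard_normal((n_trials, t.size))

# Per-trial phase at F0 from the Fourier phase in a short analysis window.
window = trials[:, : int(0.05 * fs)]          # 50 ms window
k = int(round(f0 * window.shape[1] / fs))     # FFT bin nearest F0
phases = np.angle(np.fft.rfft(window, axis=1)[:, k])

# PLV: length of the mean resultant vector of the per-trial phases
# (1 = perfect phase locking across trials, 0 = random phases).
plv = np.abs(np.mean(np.exp(1j * phases)))
print(f"PLV at F0: {plv:.2f}")
```

With a strongly phase-locked component the PLV sits near 1; lowering the component's amplitude relative to the noise drives it toward 0.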

2.
Growing evidence indicates that syntax and semantics are basic aspects of music. After the onset of a chord, initial music-syntactic processing can be observed at about 150-400 ms and processing of musical semantics at about 300-500 ms. Processing of musical syntax activates inferior frontolateral cortex, ventrolateral premotor cortex and presumably the anterior part of the superior temporal gyrus. These brain structures have been implicated in sequencing of complex auditory information, identification of structural relationships, and serial prediction. Processing of musical semantics appears to activate posterior temporal regions. The processes and brain structures involved in the perception of syntax and semantics in music have considerable overlap with those involved in language perception, underlining intimate links between music and language in the human brain.

3.
Apart from its natural relevance to cognition, music provides a window into the intimate relationships between production, perception, experience, and emotion. Here, emotional responses and neural activity were observed as they evolved together with stimulus parameters over several minutes. Participants listened to a skilled music performance that included the natural fluctuations in timing and sound intensity that musicians use to evoke emotional responses. A mechanical performance of the same piece served as a control. Before and after fMRI scanning, participants reported real-time emotional responses on a 2-dimensional rating scale (arousal and valence) as they listened to each performance. During fMRI scanning, participants listened without reporting emotional responses. Limbic and paralimbic brain areas responded to the expressive dynamics of human music performance, and both emotion and reward related activations during music listening were dependent upon musical training. Moreover, dynamic changes in timing predicted ratings of emotional arousal, as well as real-time changes in neural activity. BOLD signal changes correlated with expressive timing fluctuations in cortical and subcortical motor areas consistent with pulse perception, and in a network consistent with the human mirror neuron system. These findings show that expressive music performance evokes emotion and reward related neural activations, and that music's affective impact on the brains of listeners is altered by musical training. Our observations are consistent with the idea that music performance evokes an emotional response through a form of empathy that is based, at least in part, on the perception of movement and on violations of pulse-based temporal expectancies.

4.
A common but none the less remarkable human faculty is the ability to recognize and reproduce familiar pieces of music. No two performances of a given piece will ever be acoustically identical, but a listener can perceive, in both, the same rhythmic and tonal relationships, and can judge whether a particular note or phrase was played out of time or out of tune. The problem considered in this lecture is that of describing the conceptual structures by which we represent Western classical music and the processes by which these structures are created. Some new hypotheses about the perception of rhythm and tonality have been cast in the form of a computer program which will transcribe a live keyboard performance of a classical melody into the equivalent of standard musical notation.

5.
This study examined whether rapid temporal auditory processing, verbal working memory capacity, non-verbal intelligence, executive functioning, musical ability and prior foreign language experience predicted how well native English speakers (N = 120) discriminated Norwegian tonal and vowel contrasts as well as a non-speech analogue of the tonal contrast and a native vowel contrast presented over noise. Results confirmed a male advantage for temporal and tonal processing, and also revealed that temporal processing was associated with both non-verbal intelligence and speech processing. In contrast, effects of musical ability on non-native speech-sound processing and of inhibitory control on vowel discrimination were not mediated by temporal processing. These results suggest that individual differences in non-native speech-sound processing are to some extent determined by temporal auditory processing ability, in which males perform better, but are also determined by a host of other abilities that are deployed flexibly depending on the characteristics of the target sounds.

6.
Some combinations of musical tones sound pleasing to Western listeners, and are termed consonant, while others sound discordant, and are termed dissonant. The perceptual phenomenon of consonance has been traced to the acoustic property of harmonicity. It has been repeatedly shown that neural correlates of consonance can be found as early as the auditory brainstem as reflected in the harmonicity of the scalp-recorded frequency-following response (FFR). “Neural Pitch Salience” (NPS) measured from FFRs—essentially a time-domain equivalent of the classic pattern recognition models of pitch—has been found to correlate with behavioral judgments of consonance for synthetic stimuli. Following the idea that the auditory system has evolved to process behaviorally relevant natural sounds, and in order to test the generalizability of this finding made with synthetic tones, we recorded FFRs for consonant and dissonant intervals composed of synthetic and natural stimuli. We found that NPS correlated with behavioral judgments of consonance and dissonance for synthetic but not for naturalistic sounds. These results suggest that while some form of harmonicity can be computed from the auditory brainstem response, the general percept of consonance and dissonance is not captured by this measure. It might either be represented in the brainstem in a different code (such as place code) or arise at higher levels of the auditory pathway. Our findings further illustrate the importance of using natural sounds, as a complementary tool to fully-controlled synthetic sounds, when probing auditory perception.
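A rough sketch of how an NPS-style, time-domain pitch measure can be computed, assuming a simple normalised autocorrelation read out at the candidate pitch period; the stimulus and all parameters are invented, and this is not the authors' exact metric.

```python
import numpy as np

fs = 10000
f0 = 100.0
t = np.arange(0, 0.2, 1 / fs)
# Harmonic complex (harmonics 2-4, missing fundamental) standing in for
# a phase-locked brainstem response; real FFRs would be noisier.
x = sum(np.sin(2 * np.pi * h * f0 * t) for h in (2, 3, 4))

# Autocorrelation function, normalised by the zero-lag energy.
acf = np.correlate(x, x, mode="full")[x.size - 1:]
acf /= acf[0]

# "Salience" of a candidate pitch: ACF height at that pitch's period.
lag = int(round(fs / f0))
salience = acf[lag]
print(f"salience at the {f0:.0f} Hz period: {salience:.2f}")
```

A harmonic (consonant-like) signal yields a high peak at its common period; inharmonic mixtures spread that energy across lags and score lower.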

7.
Pitch perception is important for understanding speech prosody, music perception, recognizing tones in tonal languages, and perceiving speech in noisy environments. The two principal pitch perception theories consider the place of maximum neural excitation along the auditory nerve and the temporal pattern of the auditory neurons’ action potentials (spikes) as pitch cues. This paper describes a biophysical mechanism by which fine-structure temporal information can be extracted from the spikes generated at the auditory periphery. Deriving meaningful pitch-related information from spike times requires neural structures specialized in capturing synchronous or correlated activity from amongst neural events. The emergence of such pitch-processing neural mechanisms is described through a computational model of auditory processing. Simulation results show that a correlation-based, unsupervised, spike-based form of Hebbian learning can explain the development of neural structures required for recognizing the pitch of simple and complex tones, with or without the fundamental frequency. The temporal code is robust to variations in the spectral shape of the signal and thus can explain the phenomenon of pitch constancy.
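The temporal-code idea, that pitch (including the missing fundamental) is recoverable from spike timing alone, can be illustrated with a toy threshold-crossing model. This is not the paper's Hebbian network; the stimulus, threshold and parameters are invented.

```python
import numpy as np

fs = 40000
f0 = 200.0
t = np.arange(0, 0.5, 1 / fs)
# Missing-fundamental complex: harmonics 3-5 of 200 Hz, no energy at F0.
x = sum(np.sin(2 * np.pi * h * f0 * t) for h in (3, 4, 5))

# Toy "fiber": a spike at each upward crossing of a high threshold.
# The waveform's largest peak recurs once per fundamental period.
thr = 0.8 * x.max()
above = x > thr
spikes = np.flatnonzero(above[1:] & ~above[:-1]) / fs   # spike times (s)

# Dominant inter-spike interval recovers the 5 ms period of the
# absent fundamental, i.e. the perceived pitch.
isis = np.diff(spikes)
period = np.median(isis)
print(f"estimated pitch: {1 / period:.0f} Hz")
```

The same readout is unchanged if the harmonic numbers are varied, which is the intuition behind pitch constancy under spectral-shape changes.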

8.
Timbre is the attribute of sound that allows humans and other animals to distinguish among different sound sources. Studies based on psychophysical judgments of musical timbre, ecological analyses of sound's physical characteristics as well as machine learning approaches have all suggested that timbre is a multifaceted attribute that invokes both spectral and temporal sound features. Here, we explored the neural underpinnings of musical timbre. We used a neuro-computational framework based on spectro-temporal receptive fields, recorded from over a thousand neurons in the mammalian primary auditory cortex as well as from simulated cortical neurons, augmented with a nonlinear classifier. The model was able to perform robust instrument classification irrespective of pitch and playing style, with an accuracy of 98.7%. Using the same front end, the model was also able to reproduce perceptual distance judgments between timbres as perceived by human listeners. The study demonstrates that joint spectro-temporal features, such as those observed in the mammalian primary auditory cortex, are critical to provide the rich-enough representation necessary to account for perceptual judgments of timbre by human listeners, as well as recognition of musical instruments.
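As a hedged illustration of why joint spectral and temporal features help timbre recognition, the sketch below classifies two synthetic "instruments" by spectral centroid plus attack time with a nearest-centroid rule. It is far simpler than the paper's cortical model, and every name and parameter is invented.

```python
import numpy as np

fs = 8000
t = np.arange(0, 0.5, 1 / fs)

def tone(f0, attack, brightness):
    """Harmonic tone: 6 harmonics with spectral slope `brightness`,
    linear attack ramp of `attack` seconds."""
    env = np.minimum(t / attack, 1.0)
    return env * sum(brightness ** h * np.sin(2 * np.pi * (h + 1) * f0 * t)
                     for h in range(6))

def features(x):
    """One spectral feature (centroid, kHz) and one temporal feature
    (time to half of peak amplitude, in units of 10 ms)."""
    spec = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(x.size, 1 / fs)
    centroid = (freqs * spec).sum() / spec.sum()
    attack = np.argmax(np.abs(x) > 0.5 * np.abs(x).max()) / fs
    return np.array([centroid / 1000.0, attack * 100.0])

# "Dull": slow attack, steep roll-off. "Bright": fast attack, shallow
# roll-off. Train at one pitch, test at another (pitch invariance).
train = {"dull": features(tone(200, 0.2, 0.3)),
         "bright": features(tone(200, 0.01, 0.9))}
probe = features(tone(300, 0.012, 0.85))      # bright-ish, unseen pitch

label = min(train, key=lambda k: np.linalg.norm(train[k] - probe))
print("classified as:", label)
```

Neither feature alone separates all instrument pairs; it is their joint use, as in spectro-temporal receptive fields, that makes the representation rich enough.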

9.
C Jiang, JP Hamm, VK Lim, IJ Kirk, X Chen, Y Yang. PLoS ONE 2012, 7(7): e41411
Pitch processing is a critical ability on which humans' tonal musical experience depends, and which is also of paramount importance for decoding prosody in speech. Congenital amusia refers to deficits in the ability to properly process musical pitch, and recent evidence has suggested that this musical pitch disorder may impact upon the processing of speech sounds. Here we present the first electrophysiological evidence demonstrating that individuals with amusia who speak Mandarin Chinese are impaired in classifying prosody as appropriate or inappropriate during a speech comprehension task. Inappropriate prosody elicited a larger P600 and a smaller N100 in control participants relative to the appropriate condition. In contrast, amusics did not show significant differences between the appropriate and inappropriate conditions in either the N100 or the P600 component. This provides further evidence that the pitch perception deficits associated with amusia may also affect intonation processing during speech comprehension in those who speak a tonal language such as Mandarin, and suggests music and language share some cognitive and neural resources.

10.
Subcortical auditory nuclei were traditionally viewed as non-plastic in adulthood so that acoustic information could be stably conveyed to higher auditory areas. Studies in a variety of species, including humans, now suggest that prolonged acoustic training can drive long-lasting brainstem plasticity. The neurobiological mechanisms for such changes are not well understood in natural behavioral contexts due to a relative dearth of in vivo animal models in which to study this. Here, we demonstrate in a mouse model that a natural life experience with increased demands on the auditory system – motherhood – is associated with improved temporal processing in the subcortical auditory pathway. We measured the auditory brainstem response to test whether mothers and pup-naïve virgin mice differed in temporal responses to both broadband and tone stimuli, including ultrasonic frequencies found in mouse pup vocalizations. Mothers had shorter latencies for early ABR peaks, indicating plasticity in the auditory nerve and the cochlear nucleus. Shorter interpeak latency between waves IV and V also suggests plasticity in the inferior colliculus. Hormone manipulations revealed that these changes cannot be explained solely by estrogen levels experienced during pregnancy and parturition in mothers. In contrast, we found that pup-care experience, independent of pregnancy and parturition, contributes to shortening auditory brainstem response latencies. These results suggest that acoustic experience in the maternal context imparts plasticity on early auditory processing that lasts beyond pup weaning. In addition to establishing an animal model for exploring adult auditory brainstem plasticity in a neuroethological context, our results have broader implications for models of perceptual, behavioral and neural changes that arise during maternity, where subcortical sensorineural plasticity has not previously been considered.
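ABR peak and interpeak latency extraction of the kind described above can be sketched as simple peak picking on an averaged waveform. The waveform and latencies below are synthetic stand-ins with textbook-like spacing, not mouse data.

```python
import numpy as np

fs = 20000                               # 50 us resolution, illustrative
t = np.arange(0, 0.008, 1 / fs)          # 8 ms post-stimulus window

def gauss(mu, sigma, amp):
    return amp * np.exp(-0.5 * ((t - mu) / sigma) ** 2)

# Five positive waves (I-V) at invented latencies in ms.
lat_ms = [1.5, 2.5, 3.5, 4.5, 5.5]
abr = sum(gauss(m / 1000, 0.00015, 1.0) for m in lat_ms)

# Peak picking: local maxima above a fraction of the global maximum.
is_peak = (abr[1:-1] > abr[:-2]) & (abr[1:-1] > abr[2:]) \
          & (abr[1:-1] > 0.3 * abr.max())
peaks = np.flatnonzero(is_peak) + 1
latencies = t[peaks] * 1000              # latencies in ms

iv_v_interval = latencies[4] - latencies[3]
print("latencies (ms):", np.round(latencies, 2))
print(f"IV-V interpeak interval: {iv_v_interval:.2f} ms")
```

Group comparisons such as mothers versus virgins would then be run on the extracted latencies and interpeak intervals, not on the raw waveforms.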

11.
The present study investigated the possible effects of the electromagnetic field (EMF) emitted by an ordinary GSM mobile phone (902.4 MHz pulsed at 217 Hz) on brainstem auditory processing. Auditory brainstem responses (ABR) were recorded in 17 healthy young adults, without a mobile phone at baseline, and then with a mobile phone on the ear under EMF-off and EMF-on conditions. The amplitudes, latencies, and interwave intervals of the main ABR components (waves I, III, V) were compared among the three conditions. ABR waveforms showed no significant differences due to exposure, suggesting that short-term exposure to mobile phone EMF did not affect the transmission of sensory stimuli from the cochlea up to the midbrain along the auditory nerve and brainstem auditory pathways. Bioelectromagnetics 31:48–55, 2010. © 2009 Wiley-Liss, Inc.

12.
Differences in auditory perception between species are influenced by phylogenetic origin and the perceptual challenges imposed by the natural environment, such as detecting prey- or predator-generated sounds and communication signals. Bats are well suited for comparative studies on auditory perception since they predominantly rely on echolocation to perceive the world, while their social calls and most environmental sounds have low frequencies. We tested if hearing sensitivity and stimulus level coding in bats differ between high- and low-frequency ranges by measuring auditory brainstem responses (ABRs) of 86 bats belonging to 11 species. In most species, auditory sensitivity was equally good at both high- and low-frequency ranges, while amplitude was more finely coded for higher frequency ranges. Additionally, we conducted a phylogenetic comparative analysis by combining our ABR data with published data on 27 species. Species-specific peaks in hearing sensitivity correlated with peak frequencies of echolocation calls and pup isolation calls, suggesting that changes in hearing sensitivity evolved in response to frequency changes of echolocation and social calls. Overall, our study provides the most comprehensive comparative assessment of bat hearing capacities to date and highlights the evolutionary pressures acting on their sensory perception.

13.
Hearing dysfunction has been associated with Alzheimer's disease (AD) in humans, but there is little data on the auditory function of mouse models of AD. Furthermore, characterization of hearing ability in mouse models is needed to ensure that tests of cognition that use auditory stimuli are not confounded by hearing dysfunction. Therefore, we assessed acoustic startle response and pre-pulse inhibition in the double transgenic 5xFAD mouse model of AD from 3–4 to 16 months of age. The 5xFAD mice showed an age-related decline in acoustic startle as early as 3–4 months of age. We subsequently tested auditory brainstem response (ABR) thresholds at 4 and 13–14 months of age using tone bursts at frequencies of 2–32 kHz. The 5xFAD mice showed increased ABR thresholds for tone bursts between 8 and 32 kHz at 13–14 months of age. Finally, cochleae were extracted and basilar membranes were dissected to count hair cell loss across the cochlea. The 5xFAD mice showed significantly greater loss of both inner and outer hair cells at the apical and basal ends of the basilar membrane than wild-type mice at 15–16 months of age. These results indicate that the 5xFAD mouse model of AD shows age-related decreases in acoustic startle responses, which are at least partially due to age-related peripheral hearing loss. Therefore, we caution against the use of cognitive tests that rely on audition in 5xFAD mice over 3–4 months of age, without first confirming that performance is not confounded by hearing dysfunction.

14.
Auditory evoked potentials to speech (speech auditory brainstem response [Speech ABR]) are a non-invasive way to investigate neurophysiological activity at the level of the brainstem. The precise neurophysiological generators of the Speech ABR remain poorly defined; however, its latencies and low-pass spectrum both suggest that these generators lie in the upper brainstem (roughly between the cochlear nucleus and the inferior colliculus). Taking into account the functional properties of cells along the auditory pathway, specific stimuli were synthesized to probe the acoustic sensitivity of the Speech ABR components. On this basis, hypotheses were made about the neurophysiological generators most likely to elicit the two Speech ABR components: the onset response and the frequency following response (FFR). Speech ABRs were recorded in response to pure tones, harmonic complex tones, /ba/ and /pa/ syllables, and their analogues (each computed as a sum of five weighted sine waves at the formant frequencies and amplitudes, modulated by the syllable's temporal envelope). In addition, the Auditory Image Model (Patterson et al., 1995 [17]), which simulates neural activity at the auditory periphery, i.e. the input to the inferior colliculus, suggests that the analogues and the syllables elicit the same amount of energy, in contrast with the recorded FFRs. This discrepancy implies that the neurophysiological signal processing giving rise to the FFR takes place beyond the auditory periphery. Indeed, FFR synchronization on F0 appears to result from processing of the whole stimulus spectrum. This behaviour recalls the functional characteristics of disc-shaped cells in the inferior colliculus, as described in a previous study of physiological periodicity coding (periodicity analysis network; Voutsas et al., 2005 [42]).

15.
Ward LM, MacLean SE, Kirschner A. PLoS ONE 2010, 5(12): e14371
Neural synchronization is a mechanism whereby functionally specific brain regions establish transient networks for perception, cognition, and action. Direct addition of weak noise (fast random fluctuations) to various neural systems enhances synchronization through the mechanism of stochastic resonance (SR). Moreover, SR also occurs in human perception, cognition, and action. Perception, cognition, and action are closely correlated with, and may depend upon, synchronized oscillations within specialized brain networks. We tested the hypothesis that SR-mediated neural synchronization occurs within and between functionally relevant brain areas and thus could be responsible for behavioral SR. We measured the 40-Hz transient response of the human auditory cortex to brief pure tones. This response arises when the ongoing, random-phase, 40-Hz activity of a group of tuned neurons in the auditory cortex becomes synchronized in response to the onset of an above-threshold sound at its "preferred" frequency. We presented a stream of near-threshold standard sounds in various levels of added broadband noise and measured subjects' 40-Hz response to the standards in a deviant-detection paradigm using high-density EEG. We used independent component analysis and dipole fitting to locate neural sources of the 40-Hz response in bilateral auditory cortex, left posterior cingulate cortex and left superior frontal gyrus. We found that added noise enhanced the 40-Hz response in all these areas. Moreover, added noise also increased the synchronization between these regions in alpha and gamma frequency bands both during and after the 40-Hz response. Our results demonstrate neural SR in several functionally specific brain regions, including areas not traditionally thought to contribute to the auditory 40-Hz transient response. In addition, we demonstrated SR in the synchronization between these brain regions. Thus, both intra- and inter-regional synchronization of neural activity are facilitated by the addition of moderate amounts of random noise. Because the noise levels in the brain fluctuate with arousal system activity, particularly across sleep-wake cycles, optimal neural noise levels, and thus SR, could be involved in optimizing the formation of task-relevant brain networks at several scales under normal conditions.
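The behavioral face of stochastic resonance is easy to demonstrate in a toy threshold detector: a subthreshold signal is transmitted best at an intermediate noise level, and poorly with too little or too much noise. All parameters below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 50000
phase = 2 * np.pi * np.arange(n) * 0.001     # slow sinusoidal "stimulus"
signal = 0.5 * np.sin(phase)                 # peak 0.5 < threshold 1.0
threshold = 1.0

def coherence(noise_sd):
    """Mean stimulus phase value at threshold crossings (0 if none):
    how well the detector's firing tracks the subthreshold signal."""
    noise = noise_sd * rng.standard_normal(n)
    fired = (signal + noise) > threshold
    return float(np.sin(phase[fired]).mean()) if fired.any() else 0.0

scores = {"none": coherence(0.0),
          "moderate": coherence(0.5),
          "high": coherence(5.0)}
print({k: round(v, 3) for k, v in scores.items()})
```

With no noise the detector never fires at all; with moderate noise, crossings cluster at the signal's peaks; with strong noise, crossings become nearly uniform and the signal is washed out again, giving the classic inverted-U SR curve.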

16.
The musician's brain is considered as a good model of brain plasticity as musical training is known to modify auditory perception and related cortical organization. Here, we show that music-related modifications can also extend beyond motor and auditory processing and generalize (transfer) to speech processing. Previous studies have shown that adults and newborns can segment a continuous stream of linguistic and non-linguistic stimuli based only on probabilities of occurrence between adjacent syllables, tones or timbres. The paradigm classically used in these studies consists of a passive exposure phase followed by a testing phase. By using both behavioural and electrophysiological measures, we recently showed that adult musicians and musically trained children outperform nonmusicians in the test following brief exposure to an artificial sung language. However, the behavioural test does not allow for studying the learning process per se but rather the result of the learning. In the present study, we analyze the electrophysiological learning curves, that is, the ongoing brain dynamics recorded as the learning is taking place. While musicians show an inverted U-shaped learning curve, nonmusicians show a linear learning curve. Analyses of event-related potentials (ERPs) allow for a greater understanding of how and when musical training can improve speech segmentation. These results provide evidence of enhanced neural sensitivity to statistical regularities in musicians and support the hypothesis of a positive transfer of the training effect from music to sound stream segmentation in general.
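The statistical-learning paradigm referred to above rests on transitional probabilities between adjacent syllables: TP is high within words and dips at word boundaries. A minimal sketch, with invented words and stream length:

```python
import random
from collections import Counter

random.seed(0)
words = ["tudaro", "pigola", "bikemu"]       # invented trisyllabic words

def syllables(word):
    return [word[i:i + 2] for i in range(0, len(word), 2)]

# Continuous stream: 300 randomly ordered words, no pauses between them.
stream = []
for _ in range(300):
    stream += syllables(random.choice(words))

# Transitional probability TP(b | a) = count(a followed by b) / count(a).
pair_counts = Counter(zip(stream, stream[1:]))
first_counts = Counter(stream[:-1])
tp = {(a, b): c / first_counts[a] for (a, b), c in pair_counts.items()}

# Within-word transitions have TP = 1 here, while between-word TPs sit
# near 1/3, so a learner can place word boundaries at the TP dips.
within = sorted(p for p, prob in tp.items() if prob >= 0.9)
print("within-word transitions:", within)
```

In the experiments, listeners are exposed to such a stream passively and then tested on whether high-TP "words" sound more familiar than low-TP part-words spanning a boundary.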

17.
Perception of our environment is a multisensory experience; information from different sensory systems like the auditory, visual and tactile is constantly integrated. Complex tasks that require high temporal and spatial precision of multisensory integration put strong demands on the underlying networks but it is largely unknown how task experience shapes multisensory processing. Long-term musical training is an excellent model for brain plasticity because it shapes the human brain at functional and structural levels, affecting a network of brain areas. In the present study we used magnetoencephalography (MEG) to investigate how audio-tactile perception is integrated in the human brain and if musicians show enhancement of the corresponding activation compared to non-musicians. Using a paradigm that allowed the investigation of combined and separate auditory and tactile processing, we found a multisensory incongruency response, generated in frontal, cingulate and cerebellar regions, an auditory mismatch response generated mainly in the auditory cortex and a tactile mismatch response generated in frontal and cerebellar regions. The influence of musical training was seen in the audio-tactile as well as in the auditory condition, indicating enhanced higher-order processing in musicians, while the sources of the tactile MMN were not influenced by long-term musical training. Consistent with the predictive coding model, more basic, bottom-up sensory processing was relatively stable and less affected by expertise, whereas areas for top-down models of multisensory expectancies were modulated by training.

18.

Background

Individuals with the rare genetic disorder Williams-Beuren syndrome (WS) are known for their characteristic auditory phenotype including strong affinity to music and sounds. In this work we attempted to pinpoint a neural substrate for the characteristic musicality in WS individuals by studying the structure-function relationship of their auditory cortex. Since WS subjects had only minor musical training due to psychomotor constraints we hypothesized that any changes compared to the control group would reflect the contribution of genetic factors to auditory processing and musicality.

Methodology/Principal Findings

Using psychoacoustics, magnetoencephalography and magnetic resonance imaging, we show that WS individuals exhibit extreme and almost exclusive holistic sound perception, which stands in marked contrast to the even distribution of this trait in the general population. Functionally, this was reflected by increased amplitudes of left auditory evoked fields. On the structural level, volume of the left auditory cortex was 2.2-fold increased in WS subjects as compared to control subjects. Equivalent volumes of the auditory cortex have been previously reported for professional musicians.

Conclusions/Significance

There has been an ongoing debate in the neuroscience community as to whether increased gray matter of the auditory cortex in musicians is attributable to the amount of training or innate disposition. In this study musical education of WS subjects was negligible and control subjects were carefully matched for this parameter. Therefore our results not only unravel the neural substrate for this particular auditory phenotype, but in addition propose WS as a unique genetic model for training-independent auditory system properties.

19.
The matched filter hypothesis proposes that the auditory sensitivity of receivers should match the spectral energy distribution of the senders’ signals. If so, receivers should be able to distinguish between species-specific and hetero-specific signals. We tested the matched filter hypothesis in two sympatric species, Chiromantis doriae and Feihyla vittata, whose calls exhibit similar frequency characteristics and which overlap in breeding season and microenvironment. For both species, we recorded male calls and measured the auditory sensitivity of both sexes using the auditory brainstem response (ABR). We compared the auditory sensitivity with the spectral energy distribution of the calls of each species and found that (1) auditory sensitivity matched the signal spectrogram in C. doriae and F. vittata; and (2) the match was better for the conspecific signal than for the hetero-specific signal. In addition, our results show that species differences are larger than sex differences for ABR audiograms.
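A hedged numerical sketch of the matched-filter comparison: correlating a made-up sensitivity profile with conspecific versus hetero-specific call spectra. The curves are illustrative Gaussians, not the recorded audiograms or calls.

```python
import numpy as np

freqs = np.linspace(0.5, 8.0, 200)           # frequency axis in kHz

def band(center, width):
    """Gaussian spectral bump centred at `center` kHz."""
    return np.exp(-0.5 * ((freqs - center) / width) ** 2)

sensitivity_A = band(2.0, 0.6)               # species A "audiogram"
call_A = band(2.1, 0.5)                      # species A call spectrum
call_B = band(5.0, 0.5)                      # species B call spectrum

def match(sens, call):
    """Pearson correlation between sensitivity and call spectrum."""
    return float(np.corrcoef(sens, call)[0, 1])

print("A vs own call:  ", round(match(sensitivity_A, call_A), 2))
print("A vs other call:", round(match(sensitivity_A, call_B), 2))
```

A matched receiver scores near 1 against its own species' call and much lower against the hetero-specific one, which is the pattern the ABR audiogram comparison tests for.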

20.
Sanes DH, Woolley SM. Neuron 2011, 72(6): 912-929
The auditory CNS is influenced profoundly by sounds heard during development. Auditory deprivation and augmented sound exposure can each perturb the maturation of neural computations as well as their underlying synaptic properties. However, we have learned little about the emergence of perceptual skills in these same model systems, and especially how perception is influenced by early acoustic experience. Here, we argue that developmental studies must take greater advantage of behavioral benchmarks. We discuss quantitative measures of perceptual development and suggest how they can play a much larger role in guiding experimental design. Most importantly, including behavioral measures will allow us to establish empirical connections among environment, neural development, and perception.

