Similar articles
20 similar records found (search time: 93 ms)
1.
Musical aptitude is commonly measured using tasks that involve discrimination of different types of musical auditory stimuli. Performance on these different discrimination tasks correlates positively across tasks and with intelligence. However, no study to date has explored these associations using a genetically informative sample to estimate underlying genetic and environmental influences. In the present study, a large sample of Swedish twins (N = 10,500) was used to investigate the genetic architecture of the associations between intelligence and performance on three musical auditory discrimination tasks (rhythm, melody and pitch). Phenotypic correlations between the tasks ranged between 0.23 and 0.42 (Pearson r values). Genetic modelling showed that the covariation between the variables could be explained by shared genetic influences. Neither shared nor non-shared environment had a significant effect on the associations. Good fit was obtained with a two-factor model in which one underlying shared genetic factor explained all the covariation between the musical discrimination tasks and IQ, and a second genetic factor explained variance exclusively shared among the discrimination tasks. The results suggest that positive correlations among musical aptitudes result from both genes with broad effects on cognition, and genes with potentially more specific influences on auditory functions.
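The phenotypic correlations above are plain Pearson r values. As a minimal illustration of that statistic (the scores below are invented for the example, not the Swedish twin data):

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical rhythm and melody discrimination scores for five participants
rhythm = [12, 15, 11, 18, 14]
melody = [10, 16, 9, 17, 13]
print(round(pearson_r(rhythm, melody), 3))  # 0.955
```

In the study itself these correlations were then decomposed into genetic and environmental components with twin modelling, which goes well beyond this sketch.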

2.
A subset of neurons in the cochlear nucleus (CN) of the auditory brainstem has the ability to enhance the auditory nerve's temporal representation of stimulating sounds. These neurons reside in the ventral region of the CN (VCN) and are usually known as highly synchronized, or high-sync, neurons. Most published reports about the existence and properties of high-sync neurons are based on recordings performed on a VCN output tract—not the VCN itself—of cats. In other species, comprehensive studies detailing the properties of high-sync neurons, or even acknowledging their existence, are missing. Examination of the responses of a population of VCN neurons in chinchillas revealed that a subset of those neurons have temporal properties similar to high-sync neurons in the cat. Phase locking and entrainment—the ability of a neuron to fire action potentials at a certain stimulus phase and at almost every stimulus period, respectively—have similar maximum values in cats and chinchillas. Ranges of characteristic frequencies for high-sync neurons in chinchillas and cats extend up to 600 and 1000 Hz, respectively. Enhancement of temporal processing relative to auditory nerve fibers (ANFs), which has been shown previously in cats using tonal and white-noise stimuli, is also demonstrated here in the responses of VCN neurons to synthetic and spoken vowel sounds. Along with the large amount of phase locking displayed by some VCN neurons there occurs a deterioration in the spectral representation of the stimuli (tones or vowels). High-sync neurons exhibit a greater distortion in their responses to tones or vowels than do other types of VCN neurons and auditory nerve fibers. Standard deviations of first-spike latency measured in responses of high-sync neurons are lower than similar values measured in ANFs' responses. This might indicate a role of high-sync neurons in other tasks beyond sound localization.
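Phase locking of the kind described here is conventionally quantified with the vector strength (synchronization index): each spike time is mapped onto a phase of the stimulus cycle and the resulting unit vectors are averaged. A minimal sketch with invented spike times (not the chinchilla recordings):

```python
import cmath
import math

def vector_strength(spike_times, freq):
    """Vector strength: 1.0 = perfect phase locking, near 0 = none."""
    vectors = [cmath.exp(1j * 2 * math.pi * freq * t) for t in spike_times]
    return abs(sum(vectors)) / len(vectors)

f = 100.0  # stimulus frequency, Hz
# Spikes at the same phase of every cycle -> vector strength ~1
locked = [n / f for n in range(10)]
# Spikes spread evenly across the cycle -> vector strength ~0
spread = [n / f + (n % 4) / (4 * f) for n in range(8)]
print(vector_strength(locked, f))
print(vector_strength(spread, f))
```

Entrainment, by contrast, is usually reported as the fraction of stimulus periods that contain at least one spike.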

3.
Circulating adult testosterone levels, digit ratio (length of the second finger relative to the fourth finger), and directional asymmetry in digit ratio are considered sexually dimorphic traits in humans. These have been related to spatial abilities in men and women, and because similar brain structures appear to be involved in both spatial and musical abilities, neuroendocrine function may be related to musical as well as spatial cognition. To evaluate relationships between testosterone and musical ability in men and women, saliva samples were collected, testosterone concentrations assessed, and digit ratios calculated using standardized protocols in a sample of university students (N = 61), including both music and non-music majors. Results of Spearman correlations suggest that digit ratio and testosterone levels are statistically related to musical aptitude and performance only within the female sample: A) those females with greater self-reported history of exposure to music (p = 0.016) and instrument proficiency (p = 0.040) scored higher on the Advanced Measures of Music Audiation test, B) those females with higher left hand digit ratio (and perhaps lower fetal testosterone levels) were more highly ranked (p = 0.007) in the orchestra, C) female music students exhibited a trend (p = 0.082) towards higher testosterone levels compared to female non-music students, and D) female music students with higher rank in the orchestra/band had higher testosterone levels (p = 0.003) than lower ranked students. None of these relationships were significant in the male sample, although a lack of statistical power may be one cause. The effects of testosterone are likely a small part of a poorly understood system of biological and environmental stimuli that contribute to musical aptitude. Hormones may play some role in modulating the phenotype of musical ability, and this may be the case for females more so than males.

4.

Background

The ability to separate two interleaved melodies is an important factor in music appreciation. This ability is greatly reduced in people with hearing impairment, contributing to difficulties in music appreciation. The aim of this study was to assess whether visual cues, musical training or musical context could have an effect on this ability, and potentially improve music appreciation for the hearing impaired.

Methods

Musicians (N = 18) and non-musicians (N = 19) were asked to rate the difficulty of segregating a four-note repeating melody from interleaved random distracter notes. Visual cues were provided on half the blocks, and two musical contexts were tested, with the overlap between melody and distracter notes either gradually increasing or decreasing.

Conclusions

Visual cues, musical training, and musical context all affected the difficulty of extracting the melody from a background of interleaved random distracter notes. Visual cues were effective in reducing the difficulty of segregating the melody from distracter notes, even in individuals with no musical training. These results are consistent with theories that indicate an important role for central (top-down) processes in auditory streaming mechanisms, and suggest that visual cues may help the hearing-impaired enjoy music.

5.
Tonal relationships are foundational in music, providing the basis upon which musical structures, such as melodies, are constructed and perceived. A recent dynamic theory of musical tonality predicts that networks of auditory neurons resonate nonlinearly to musical stimuli. Nonlinear resonance leads to stability and attraction relationships among neural frequencies, and these neural dynamics give rise to the perception of relationships among tones that we collectively refer to as tonal cognition. Because this model describes the dynamics of neural populations, it makes specific predictions about human auditory neurophysiology. Here, we show how predictions about the auditory brainstem response (ABR) are derived from the model. To illustrate, we derive a prediction about population responses to musical intervals that has been observed in the human brainstem. Our modeled ABR shows qualitative agreement with important features of the human ABR. This provides a source of evidence that fundamental principles of auditory neurodynamics might underlie the perception of tonal relationships, and forces reevaluation of the role of learning and enculturation in tonal cognition.

6.
A common approach for determining musical competence is to rely on information about individuals’ extent of musical training, but relying on musicianship status fails to identify musically untrained individuals with musical skill, as well as those who, despite extensive musical training, may not be as skilled. To counteract this limitation, we developed a new test battery (Profile of Music Perception Skills; PROMS) that measures perceptual musical skills across multiple domains: tonal (melody, pitch), qualitative (timbre, tuning), temporal (rhythm, rhythm-to-melody, accent, tempo), and dynamic (loudness). The PROMS has satisfactory psychometric properties for the composite score (internal consistency and test-retest r > .85) and fair to good coefficients for the individual subtests (.56 to .85). Convergent validity was established with the relevant dimensions of Gordon’s Advanced Measures of Music Audiation and Musical Aptitude Profile (melody, rhythm, tempo), the Musical Ear Test (rhythm), and sample instrumental sounds (timbre). Criterion validity was evidenced by consistently sizeable and significant relationships between test performance and external musical proficiency indicators in all three studies (.38 to .62, p < .05 to p < .01). An absence of correlations between test scores and a nonmusical auditory discrimination task supports the battery’s discriminant validity (−.05, ns). The interrelationships among the various subtests could be accounted for by two higher-order factors, sequential and sensory music processing. A brief version of the full PROMS is introduced as a time-efficient approximation of the full version of the battery.
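Internal-consistency coefficients like the composite figure above are typically Cronbach's alpha. A minimal sketch of the computation, using invented subtest scores rather than PROMS data:

```python
def variance(xs):
    """Sample variance (n - 1 denominator)."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def cronbach_alpha(items):
    """items: list of per-subtest score lists, aligned by test-taker."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]
    item_var = sum(variance(it) for it in items)
    return k / (k - 1) * (1 - item_var / variance(totals))

# Hypothetical scores of five test-takers on three subtests
subtests = [
    [4, 5, 3, 5, 4],
    [3, 5, 2, 5, 4],
    [4, 4, 3, 5, 3],
]
print(round(cronbach_alpha(subtests), 3))  # 0.896
```

Alpha rises when subtests covary strongly relative to their individual variances, which is what a coherent composite score requires.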

7.

Objectives

Few studies have prospectively investigated associations of child cognitive ability and behavioural difficulties with later eating attitudes. We investigated associations of intelligence quotient (IQ), academic performance and behavioural difficulties at 6.5 years with eating attitudes five years later.

Methods

We conducted an observational cohort study nested within the Promotion of Breastfeeding Intervention Trial, Belarus. Of 17,046 infants enrolled at birth, 13,751 (80.7%) completed the Children's Eating Attitude Test (ChEAT) at 11.5 years, most with information on IQ (n = 12,667), academic performance (n = 9,954) and behavioural difficulties (n = 11,098) at 6.5 years. The main outcome was a ChEAT score ≥85th percentile, indicative of problematic eating attitudes.

Results

Boys with higher IQ at 6.5 years reported fewer problematic eating attitudes, as assessed by ChEAT scores ≥85th percentile, at 11.5 years (OR per SD increase in full-scale IQ = 0.87; 0.79, 0.94). No such association was observed in girls (1.01; 0.93, 1.10) (p for sex-interaction = 0.016). In both boys and girls, teacher-assessed academic performance in non-verbal subjects was inversely associated with high ChEAT scores five years later (OR per unit increase in mathematics ability = 0.88; 0.82, 0.94; and OR per unit increase in ability for other non-verbal subjects = 0.86; 0.79, 0.94). Behavioural difficulties were positively associated with high ChEAT scores five years later (OR per SD increase in teacher-assessed rating = 1.13; 1.07, 1.19).
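Effect sizes reported as "OR per SD increase" come directly from a logistic regression coefficient: if β is the change in log-odds per raw unit of the predictor, the odds ratio for a one-SD increase is exp(β·SD). A minimal sketch with invented numbers chosen only to land near the reported 0.87 (the β and SD here are assumptions, not the trial's estimates):

```python
import math

def or_per_sd(beta_per_unit, sd):
    """Odds ratio for a one-standard-deviation increase in the predictor."""
    return math.exp(beta_per_unit * sd)

# Hypothetical: log-odds of a high ChEAT score fall by 0.0093 per IQ point,
# and the SD of full-scale IQ is 15 points
print(round(or_per_sd(-0.0093, 15.0), 2))  # 0.87
```

An OR below 1 per SD, as here, means the outcome becomes less likely as the predictor rises.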

Conclusion

Lower IQ, worse non-verbal academic performance and behavioural problems at early school age are associated with an increased risk of problematic eating attitudes in early adolescence.

8.
C Jiang, JP Hamm, VK Lim, IJ Kirk, X Chen, Y Yang. PLoS One. 2012;7(7):e41411
Pitch processing is a critical ability on which humans' tonal musical experience depends, and which is also of paramount importance for decoding prosody in speech. Congenital amusia refers to deficits in the ability to properly process musical pitch, and recent evidence has suggested that this musical pitch disorder may impact upon the processing of speech sounds. Here we present the first electrophysiological evidence demonstrating that individuals with amusia who speak Mandarin Chinese are impaired in classifying prosody as appropriate or inappropriate during a speech comprehension task. In control participants, inappropriate prosody stimuli elicited a larger P600 and a smaller N100 relative to the appropriate condition. In contrast, amusics did not show significant differences between the appropriate and inappropriate conditions in either the N100 or the P600 component. This provides further evidence that the pitch perception deficits associated with amusia may also affect intonation processing during speech comprehension in those who speak a tonal language such as Mandarin, and suggests music and language share some cognitive and neural resources.

9.
Musical expertise is associated with structural and functional changes in the brain that underlie facilitated auditory perception. We investigated whether the phase locking (PL) and amplitude modulations (AM) of neuronal oscillations in response to musical chords are correlated with musical expertise and whether they reflect the prototypicality of chords in Western tonal music. To this aim, we recorded magnetoencephalography (MEG) while musicians and non-musicians were presented with common prototypical major and minor chords, and with uncommon, non-prototypical dissonant and mistuned chords, while watching a silenced movie. We then analyzed the PL and AM of ongoing oscillations in the theta (4–8 Hz), alpha (8–14 Hz), beta (14–30 Hz) and gamma (30–80 Hz) bands to these chords. We found that musical expertise was associated with strengthened PL of ongoing oscillations to chords over a wide frequency range during the first 300 ms from stimulus onset, as opposed to increased alpha-band AM to chords over temporal MEG channels. In musicians, the gamma-band PL was strongest to non-prototypical compared to other chords, while in non-musicians PL was strongest to minor chords. In both musicians and non-musicians the long-latency (> 200 ms) gamma-band PL was also sensitive to chord identity, and particularly to the amplitude modulations (beats) of the dissonant chord. These findings suggest that musical expertise modulates oscillation PL to musical chords and that the strength of these modulations is dependent on chord prototypicality.

10.
Human-machine interface (HMI) designs offer the possibility of improving quality of life for patient populations as well as augmenting normal user function. Despite pragmatic benefits, auditory feedback for HMI control remains underutilized, in part due to observed limitations in effectiveness. The goal of this study was to determine the extent to which categorical speech perception could be used to improve an auditory HMI. Using surface electromyography, 24 healthy speakers of American English participated in 4 sessions to learn to control an HMI using auditory feedback (provided via vowel synthesis). Participants trained on 3 targets in sessions 1–3 and were tested on 3 novel targets in session 4. An “established categories with text cues” group of eight participants was trained and tested on auditory targets corresponding to standard American English vowels using auditory and text target cues. An “established categories without text cues” group of eight participants was trained and tested on the same targets using only auditory cuing of target vowel identity. A “new categories” group of eight participants was trained and tested on targets that corresponded to vowel-like sounds not part of American English. Analyses of user performance revealed significant effects of session and group (established categories groups and the new categories group), and a trend for an interaction between session and group. Results suggest that auditory feedback can be effectively used for HMI operation when paired with established categorical (native vowel) targets with an unambiguous cue.

11.

Background

Auditory laterality is suggested to be characterized by a left hemisphere dominance for the processing of conspecific communication. Nevertheless, there are indications that auditory laterality can also be affected by communicative significance, emotional valence and social recognition.

Methodology/Principal Findings

In order to gain insight into the effects of caller characteristics on auditory laterality in the early primate brain, 17 gray mouse lemurs were tested in a head turn paradigm. The head turn paradigm was established to examine potential functional hemispheric asymmetries on the behavioral level. Subjects were presented with playbacks of two conspecific call types (tsak calls and trill calls) from senders differing in familiarity (unfamiliar vs. familiar) and sex (same sex vs. other sex). Based on the head turn direction towards these calls, evidence was found for a right ear/left hemisphere dominance for the processing of calls of the other sex (Binomial test: p = 0.021, N = 10). Familiarity had no effect on the orientation biases.

Conclusions/Significance

The findings in this study support the growing consensus that auditory laterality is not only determined by the acoustic processing of conspecific communication, but also by other factors like the sex of the sender.

12.
Findings on song perception and song production have increasingly suggested that common but partially distinct neural networks exist for processing lyrics and melody. However, the neural substrates of song recognition remain to be investigated. The purpose of this study was to use positron emission tomography (PET) to examine the neural substrates involved in accessing the “song lexicon”, a representational system that might provide links between the musical and phonological lexicons. We exposed participants to auditory stimuli consisting of familiar and unfamiliar songs presented in three ways: sung lyrics (song), sung lyrics on a single pitch (lyrics), and the sung syllable ‘la’ on original pitches (melody). The auditory stimuli were designed to have equivalent familiarity to participants, and they were recorded at exactly the same tempo. Eleven right-handed nonmusicians participated in four conditions: three familiarity decision tasks using song, lyrics, and melody, and a sound type decision task (control) that was designed to engage perceptual and prelexical processing but not lexical processing. The contrasts (familiarity decision tasks versus control) showed no common areas of activation between lyrics and melody. This result indicates that essentially separate neural networks exist in semantic memory for the verbal and melodic processing of familiar songs. Verbal lexical processing recruited the left fusiform gyrus and the left inferior occipital gyrus, whereas melodic lexical processing engaged the right middle temporal sulcus and the bilateral temporo-occipital cortices. Moreover, we found that song specifically activated the left posterior inferior temporal cortex, which may serve as an interface between verbal and musical representations in order to facilitate song recognition.

13.
The acquisition of letter-speech sound associations is one of the basic requirements for fluent reading acquisition, and its failure may contribute to reading difficulties in developmental dyslexia. Here we investigated event-related potential (ERP) measures of letter-speech sound integration in 9-year-old typical and dyslexic readers and specifically tested their relation to individual differences in reading fluency. We employed an audiovisual oddball paradigm in typical readers (n = 20), dysfluent (n = 18) and severely dysfluent (n = 18) dyslexic children. In one auditory and two audiovisual conditions the Dutch spoken vowels /a/ and /o/ were presented as standard and deviant stimuli. In audiovisual blocks, the letter ‘a’ was presented either simultaneously (AV0), or 200 ms before (AV200), vowel sound onset. Across the three groups of children, vowel deviancy in auditory blocks elicited comparable mismatch negativity (MMN) and late negativity (LN) responses. In typical readers, both audiovisual conditions (AV0 and AV200) led to enhanced MMN and LN amplitudes. In both dyslexic groups, the audiovisual LN effects were mildly reduced. Most interestingly, individual differences in reading fluency were correlated with MMN latency in the AV0 condition. A further analysis revealed that this effect was driven by a short-lived MMN effect encompassing only the N1 window in severely dysfluent dyslexics versus a longer MMN effect encompassing both the N1 and P2 windows in the other two groups. Our results confirm and extend previous findings in dyslexic children by demonstrating a deficient pattern of letter-speech sound integration depending on the level of reading dysfluency. These findings underscore the importance of considering individual differences across the entire spectrum of reading skills in addition to group differences between typical and dyslexic readers.

14.
Musical imagery is a relatively unexplored area, partly because of deficiencies in existing experimental paradigms, which are often difficult, unreliable, or do not provide objective measures of performance. Here we describe a novel protocol, the Pitch Imagery Arrow Task (PIAT), which induces and trains pitch imagery in both musicians and non-musicians. Given a tonal context and an initial pitch sequence, arrows are displayed to elicit a scale-step sequence of imagined pitches, and participants indicate whether the final imagined tone matches an audible probe. It is a staircase design that accommodates individual differences in musical experience and imagery ability. This new protocol was used to investigate the roles that musical expertise, self-reported auditory vividness and mental control play in imagery performance. Performance on the task was significantly better for participants who employed a musical imagery strategy compared to participants who used an alternative cognitive strategy, and was positively correlated with scores on the Control subscale of the Bucknell Auditory Imagery Scale (BAIS). Multiple regression analysis revealed that imagery performance accuracy was best predicted by a combination of strategy use and scores on the Vividness subscale of the BAIS. These results confirm that competent performance on the PIAT requires active musical imagery and is very difficult to achieve using alternative cognitive strategies. Auditory vividness and mental control were more important than musical experience in the ability to perform manipulation of pitch imagery.

15.
Artistic creativity forms the basis of music culture and the music industry. Composing, improvising and arranging music are complex creative functions of the human brain, whose biological value remains unknown. We hypothesized that practicing music is a form of social communication that requires musical aptitude and even creativity in music. In order to understand the neurobiological basis of music in human evolution and communication we analyzed polymorphisms of the arginine vasopressin receptor 1A (AVPR1A), serotonin transporter (SLC6A4), catechol-O-methyltransferase (COMT), dopamine receptor D2 (DRD2) and tryptophan hydroxylase 1 (TPH1) genes, which are associated with social bonding and cognitive functions, in 19 Finnish families (n = 343 members) with professional musicians and/or active amateurs. All family members were tested for musical aptitude using the auditory structuring ability test (Karma Music test; KMT) and Carl Seashore's tests for pitch (SP) and for time (ST). Data on creativity in music (composing, improvising and/or arranging music) were surveyed using a web-based questionnaire. Here we show for the first time that creative functions in music have a strong genetic component (h2 = .84; composing h2 = .40; arranging h2 = .46; improvising h2 = .62) in Finnish multigenerational families. We also show that high music test scores are significantly associated with creative functions in music (p < .0001). We discovered an overall haplotype association with the AVPR1A gene (markers RS1 and RS3) and KMT (p = 0.0008; corrected p = 0.00002), SP (p = 0.0261; corrected p = 0.0072) and combined music test scores (COMB) (p = 0.0056; corrected p = 0.0006). The AVPR1A haplotype AVR+RS1 further suggested a positive association with ST (p = 0.0038; corrected p = 0.00184) and COMB (p = 0.0083; corrected p = 0.0040) using the haplotype-based association test HBAT.
The results suggest that the neurobiology of music perception and production is likely to be related to the pathways affecting intrinsic attachment behavior.

16.
Strelnikov K, Barone P. PLoS One. 2012;7(3):e33462
This article uses the ideas of neuroenergetic and neural field theories to detect stimulation-driven energy flows in the brain during face and auditory word processing. In this analysis, energy flows are thought to create the stable gradients of the fMRI weighted summary images. The sources from which activity spreads in the brain during face processing were detected in the occipital cortex. The following direction of energy flows in the frontal cortex was described: the right inferior frontal => the left inferior frontal => the triangular part of the left inferior frontal cortex => the left operculum. In the left operculum, a localized circuit was described. For auditory word processing, the sources of activity flows were detected bilaterally in the middle superior temporal regions; they were also detected in the left posterior superior temporal cortex. Thus, neuroenergetic assumptions may give a novel perspective for the analysis of neuroimaging data.

17.
Although infant speech perception is often studied in isolated modalities, infants' experience with speech is largely multimodal (i.e., speech sounds they hear are accompanied by articulating faces). Across two experiments, we tested infants’ sensitivity to the relationship between the auditory and visual components of audiovisual speech in their native (English) and non-native (Spanish) language. In Experiment 1, infants’ looking times were measured during a preferential looking task in which they saw two simultaneous visual speech streams articulating a story, one in English and the other in Spanish, while they heard either the English or the Spanish version of the story. In Experiment 2, looking times from another group of infants were measured as they watched single displays of congruent and incongruent combinations of English and Spanish audio and visual speech streams. Findings demonstrated an age-related increase in looking towards the native relative to the non-native visual speech stream when accompanied by the corresponding (native) auditory speech. This increase in native language preference did not appear to be driven by a difference in preference for native vs. non-native audiovisual congruence, as we observed no difference in looking times at the audiovisual streams in Experiment 2.

18.
A common hypothesis to explain the effect of litter mixing is based on the difference in litter N content between mixed species. Although many studies have shown that litter of invasive non-native plants typically has higher N content than that of native plants in the communities they invade, there has been surprisingly little study of mixing effects during plant invasions. We address this question in south China, where Mikania micrantha H.B.K., a non-native vine with high litter N content, has invaded many forested ecosystems. We were specifically interested in whether this invader accelerated decomposition and how the strength of the litter mixing effect changes with the degree of invasion and over time during litter decomposition. Using litterbags, we evaluated the effect of mixing litter of M. micrantha with the litter of 7 native resident plants at 3 ratios: M1 (1:4 exotic:native litter), M2 (1:1) and M3 (4:1 exotic:native litter), over three incubation periods. We compared mixed litter with unmixed litter of the native species to identify whether a non-additive effect of mixing litter existed. We found significant positive non-additive effects of litter mixing on both mass loss and nutrient release. These effects changed with native species identity, mixture ratio and decay time. Overall the greatest accelerations of mixture decay and N release tended to occur at the highest degree of invasion (mix ratio M3) and during the middle and final measured stages of decomposition. Contrary to expectations, the initial difference in litter N did not explain species differences in the effect of mixing, but overall it appears that invasion by M. micrantha is accelerating the decomposition of native species litter. This effect on a fundamental ecosystem process could contribute to higher rates of nutrient turnover in invaded ecosystems.
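The "non-additive effect" tested here is a deviation of the observed mixture decomposition from the ratio-weighted average of the component monocultures. A minimal sketch of that additive expectation, with invented mass-loss fractions (not the study's measurements):

```python
def expected_additive_loss(loss_exotic, loss_native, frac_exotic):
    """Ratio-weighted additive expectation for mixed-litter mass loss."""
    return frac_exotic * loss_exotic + (1 - frac_exotic) * loss_native

# Hypothetical monoculture mass-loss fractions after one incubation period
exotic, native = 0.60, 0.30
for label, frac in [("M1 (1:4)", 0.2), ("M2 (1:1)", 0.5), ("M3 (4:1)", 0.8)]:
    print(label, round(expected_additive_loss(exotic, native, frac), 2))
```

An observed mixture loss above this expectation is a positive (synergistic) non-additive effect, the pattern reported here for the most invaded mixtures.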

19.
Pitch perception is important for understanding speech prosody, music perception, recognizing tones in tonal languages, and perceiving speech in noisy environments. The two principal pitch perception theories consider the place of maximum neural excitation along the auditory nerve and the temporal pattern of the auditory neurons’ action potentials (spikes) as pitch cues. This paper describes a biophysical mechanism by which fine-structure temporal information can be extracted from the spikes generated at the auditory periphery. Deriving meaningful pitch-related information from spike times requires neural structures specialized in capturing synchronous or correlated activity from amongst neural events. The emergence of such pitch-processing neural mechanisms is described through a computational model of auditory processing. Simulation results show that a correlation-based, unsupervised, spike-based form of Hebbian learning can explain the development of neural structures required for recognizing the pitch of simple and complex tones, with or without the fundamental frequency. The temporal code is robust to variations in the spectral shape of the signal and thus can explain the phenomenon of pitch constancy.
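The correlation-based Hebbian rule described above can be caricatured in a few lines: synapses whose presynaptic spikes coincide with postsynaptic firing are strengthened, so inputs phase-locked to the postsynaptic cell come to dominate. A toy sketch with binary spike trains (an illustration of the general rule, not the paper's biophysical model):

```python
def hebbian_update(pre_trains, post_train, rate=0.1):
    """One pass of a simple Hebbian rule: dw_i = rate * pre_i[t] * post[t]."""
    weights = [0.0] * len(pre_trains)
    for t, p in enumerate(post_train):
        for i, pre in enumerate(pre_trains):
            weights[i] += rate * pre[t] * p
    return weights

# Input 0 fires in phase with the postsynaptic cell (period of 4 bins);
# input 1 fires at an unrelated phase and is never reinforced.
post = [1, 0, 0, 0] * 4
synced = [1, 0, 0, 0] * 4
offbeat = [0, 0, 1, 0] * 4
w = hebbian_update([synced, offbeat], post)
print(w)  # the synchronous input gains weight; the off-phase input stays at zero
```

Repeating such updates across many stimulus cycles is what lets coincidence-sensitive structures emerge without supervision.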

20.
Perception of our environment is a multisensory experience; information from different sensory systems like the auditory, visual and tactile is constantly integrated. Complex tasks that require high temporal and spatial precision of multisensory integration put strong demands on the underlying networks but it is largely unknown how task experience shapes multisensory processing. Long-term musical training is an excellent model for brain plasticity because it shapes the human brain at functional and structural levels, affecting a network of brain areas. In the present study we used magnetoencephalography (MEG) to investigate how audio-tactile perception is integrated in the human brain and if musicians show enhancement of the corresponding activation compared to non-musicians. Using a paradigm that allowed the investigation of combined and separate auditory and tactile processing, we found a multisensory incongruency response, generated in frontal, cingulate and cerebellar regions, an auditory mismatch response generated mainly in the auditory cortex and a tactile mismatch response generated in frontal and cerebellar regions. The influence of musical training was seen in the audio-tactile as well as in the auditory condition, indicating enhanced higher-order processing in musicians, while the sources of the tactile MMN were not influenced by long-term musical training. Consistent with the predictive coding model, more basic, bottom-up sensory processing was relatively stable and less affected by expertise, whereas areas for top-down models of multisensory expectancies were modulated by training.
