Similar Documents
20 similar documents retrieved.
1.
Language and music, two uniquely human cognitive abilities, are combined in song, rendering it an ecological model for comparing speech and music cognition. The present study was designed to determine whether words and melodies in song are processed interactively or independently, and to examine the influence of attention on the processing of words and melodies in song. Event-related brain potentials (ERPs) and behavioral data were recorded while non-musicians listened to pairs of sung words (prime and target) presented in four experimental conditions: same word, same melody; same word, different melody; different word, same melody; different word, different melody. Participants were asked to attend to either the words or the melody, and to perform a same/different task. In both attentional tasks, different word targets elicited an N400 component, as predicted based on previous results. Most interestingly, different melodies (sung with the same word) elicited an N400 component followed by a late positive component. Finally, ERP and behavioral data converged in showing interactions between the linguistic and melodic dimensions of sung words. The finding that the N400 effect, a well-established marker of semantic processing, was modulated by musical melody in song suggests that variations in musical features affect word processing in sung language. Implications of the interactions between words and melody are discussed in light of evidence for shared neural processing resources between the phonological/semantic aspects of language and the melodic/harmonic aspects of music.
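For readers who want to see how such a same/different N400 effect is typically quantified, the sketch below computes a different-minus-same difference wave in MNE-Python. It is a generic illustration, not the authors' pipeline: the file name, event codes, filter settings, and 300-500 ms window are all assumptions.

```python
# Generic sketch of an N400 difference-wave analysis in MNE-Python.
# File name, event codes, and time windows are illustrative assumptions.
import mne

raw = mne.io.read_raw_fif("sung_word_pairs_raw.fif", preload=True)  # hypothetical file
raw.filter(l_freq=0.1, h_freq=30.0)  # typical ERP band-pass

events = mne.find_events(raw)
event_id = {"target/same_word": 1, "target/different_word": 2}

# Epoch around target onset with a pre-stimulus baseline.
epochs = mne.Epochs(raw, events, event_id=event_id,
                    tmin=-0.2, tmax=1.0, baseline=(None, 0.0), preload=True)

# Average per condition, then form the different-minus-same difference wave;
# the N400 effect is its mean amplitude in roughly the 300-500 ms window.
evoked_same = epochs["target/same_word"].average()
evoked_diff = epochs["target/different_word"].average()
n400_effect = mne.combine_evoked([evoked_diff, evoked_same], weights=[1, -1])
print(n400_effect.copy().crop(0.3, 0.5).data.mean())
```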

2.

Background

There is a lack of neuroscientific studies investigating music processing with naturalistic stimuli; brain responses to real music are thus largely unknown.

Methodology/Principal Findings

This study investigates event-related brain potentials (ERPs), skin conductance responses (SCRs) and heart rate (HR) elicited by unexpected chords of piano sonatas as they were originally arranged by composers, and as they were played by professional pianists. From the musical excerpts played by the pianists (with emotional expression), we also created versions without variations in tempo and loudness (without musical expression) to investigate effects of musical expression on ERPs and SCRs. Compared to expected chords, unexpected chords elicited an early right anterior negativity (ERAN, reflecting music-syntactic processing) and an N5 (reflecting processing of meaning information) in the ERPs, as well as clear changes in the SCRs (reflecting that unexpected chords also elicited emotional responses). The ERAN was not influenced by emotional expression, whereas N5 potentials elicited by chords in general (regardless of their chord function) differed between the expressive and the non-expressive condition.

Conclusions/Significance

These results show that the neural mechanisms of music-syntactic processing operate independently of the emotional qualities of a stimulus, justifying the use of stimuli without emotional expression to investigate the cognitive processing of musical structure. Moreover, the data indicate that musical expression affects the neural mechanisms underlying the processing of musical meaning. Our data are the first to reveal influences of musical performance on ERPs and SCRs, and to show physiological responses to unexpected chords in naturalistic music.

3.

Background

Alexithymia, a condition characterized by deficits in interpreting and regulating feelings, is a risk factor for a variety of psychiatric conditions. Little is known about how alexithymia influences the processing of emotions in music and speech. Appreciation of such emotional qualities in auditory material is fundamental to human experience and has profound consequences for functioning in daily life. We investigated the neural signature of such emotional processing in alexithymia by means of event-related potentials.

Methodology

Affective music and speech prosody were presented as targets following affectively congruent or incongruent visual word primes in two conditions. In two further conditions, affective music and speech prosody served as primes and visually presented words with affective connotations were presented as targets. Thirty-two participants (16 male) judged the affective valence of the targets. We tested the influence of alexithymia on cross-modal affective priming and on N400 amplitudes, indicative of individual sensitivity to an affective mismatch between words, prosody, and music. Our results indicate that the affective priming effect for prosody targets tended to be reduced with increasing scores on alexithymia, while no behavioral differences were observed for music and word targets. At the electrophysiological level, alexithymia was associated with significantly smaller N400 amplitudes in response to affectively incongruent music and speech targets, but not to incongruent word targets.

Conclusions

Our results suggest reduced sensitivity to the emotional qualities of speech and music in alexithymia during affective categorization. This deficit becomes evident primarily in situations in which verbalization of emotional information is required.

4.
Growing evidence indicates that syntax and semantics are basic aspects of music. After the onset of a chord, initial music-syntactic processing can be observed at about 150-400 ms and processing of musical semantics at about 300-500 ms. Processing of musical syntax activates inferior frontolateral cortex, ventrolateral premotor cortex and presumably the anterior part of the superior temporal gyrus. These brain structures have been implicated in sequencing of complex auditory information, identification of structural relationships, and serial prediction. Processing of musical semantics appears to activate posterior temporal regions. The processes and brain structures involved in the perception of syntax and semantics in music have considerable overlap with those involved in language perception, underlining intimate links between music and language in the human brain.

5.
Odor context can affect the recognition of facial expressions, but there has been no evidence to date that odors can modulate the processing of emotional meaning conveyed by visually presented words. We combined an emotional word recognition task with event-related potential recordings: 49 adults were randomly assigned to one of three odor contexts (pleasant odor, unpleasant odor, or no odor) and judged the valence of emotional words (positive, negative, or neutral) while behavioral and electroencephalography (EEG) data were collected. Both the pleasant and unpleasant odor contexts shortened response times to emotional words. Negative words induced larger early posterior negativity (EPN) and late positive potential (LPP) amplitudes than positive and neutral words, whereas neutral words induced a larger N400 amplitude than positive and negative words. More importantly, the processing of emotional words was modulated by the external odor context. During the earlier (P2) processing stages, the pleasant and unpleasant odor contexts induced greater P2 amplitudes than the no-odor context, and in the unpleasant odor context, negative words matching the odor's valence induced greater P2 amplitudes than positive words. During the later (N400) stages, different brain regions showed different patterns: in the left and right frontal areas, positive words in a pleasant odor context elicited a smaller N400 amplitude than neutral words in the same context, while in the left and right central regions, emotional words whose valence matched the pleasant or unpleasant odor context elicited the smallest N400 amplitudes. Individuals are thus highly sensitive to emotional information; as processing deepens, different cognitive processes come into play, and these can be modulated by external odors. In both the early and late stages of word processing, pleasant and unpleasant odor contexts exerted an undifferentiated dominance effect and specifically modulated affectively congruent words.

6.
To determine when and how L2 learners start to process L2 words affectively and semantically, we conducted a longitudinal study on their interaction in adult L2 learners. In four test sessions spanning half a year of L2 learning, we monitored behavioral and ERP learning-related changes for one and the same set of words by means of a primed lexical-decision paradigm with L1 primes and L2 targets. Sensitivity rates, accuracy rates, RTs, and N400 amplitudes to L2 words and pseudowords improved significantly across sessions. A semantic priming effect (e.g., the prime “driver” facilitating the response to the target “street”) was found in accuracy rates and RTs when collapsing Sessions 1 to 4, while this effect modulated ERP amplitudes within the first 300 ms of L2 target processing. An overall affective priming effect (e.g., “sweet” facilitating “taste”) was also found in RTs and ERPs (posterior P1). Importantly, the ERPs showed an L2 valence effect across sessions (e.g., positive words were easier to process than neutral words), indicating that L2 learners were sensitive to L2 affective meaning. Semantic and affective priming interacted in the N400 time window only in Session 4, implying that they jointly affected meaning integration during L2 immersion. The results suggest that L1 and L2 are initially processed semantically and affectively via relatively separate channels that become increasingly linked contingent on L2 exposure.
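The sensitivity rates reported above are conventionally computed as signal-detection d′ from hits and false alarms in a lexical-decision task. The snippet below is a minimal sketch of that standard calculation, assuming raw response counts; the function name and the log-linear correction are illustrative choices, not necessarily the authors' exact procedure.

```python
# Generic sketch of computing sensitivity (d') for a lexical-decision
# session from hit and false-alarm counts. This is the standard
# signal-detection formula, not necessarily the measure used above.
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    # Log-linear correction avoids infinite z-scores at rates of 0 or 1.
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Example: a learner classifying 80 L2 words and 80 pseudowords.
print(d_prime(hits=60, misses=20, false_alarms=15, correct_rejections=65))
```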

7.
Hoekert M, Bais L, Kahn RS, Aleman A. PLoS ONE. 2008;3(5):e2244.
In verbal communication, not only the words themselves but also the tone of voice (prosody) conveys crucial information about the emotional state and intentions of others. Various studies have found that right frontal and right temporal regions play a role in emotional prosody perception. Here, we used triple-pulse repetitive transcranial magnetic stimulation (rTMS) to shed light on the precise time course of involvement of the right anterior superior temporal gyrus and the right fronto-parietal operculum. We hypothesized that information would be processed in the right anterior superior temporal gyrus before being processed in the right fronto-parietal operculum. Right-handed healthy subjects performed an emotional prosody task. While subjects listened to each sentence, a triplet of TMS pulses was applied to one of the regions at one of six time points (400-1900 ms). Results showed a significant main effect of Time for both the right anterior superior temporal gyrus and the right fronto-parietal operculum: the largest interference was observed half-way through the sentence, and this effect was stronger for withdrawal emotions than for the approach emotion. A further experiment that included an active control condition, TMS over the EEG site POz (midline parietal-occipital junction), revealed stronger effects at the fronto-parietal operculum and anterior superior temporal gyrus relative to the control condition. We found no evidence for sequential processing of emotional prosodic information from the right anterior superior temporal gyrus to the right fronto-parietal operculum; rather, the results indicated parallel processing. Our results suggest that both regions are critical for emotional prosody perception at a relatively late time period after sentence onset. This may reflect that emotional cues can still be ambiguous at the beginning of a sentence but become more apparent half-way through it.

8.
Forty-four right-handed volunteers listened to digitally recorded Italian five-letter words of different kinds, including non-words. Signals from 16 electrodes were averaged and displayed both as traces and as maps. When the same word was delivered monotonously to the subject, a positive component at 340 ms was recorded following the N100–P200 complex. This potential was automatic, phonologically driven, independent of habituation, specific to verbal material, and lateralized to the left side. By contrast, semantic tasks using an oddball paradigm with different kinds of target stimuli, including non-words, evoked a bilateral N400 whose duration was related to task complexity. The late positive component was time-locked to the end of the words; the N400 therefore reached its peak before word completion. At that time the probability of recognition was 60%, progressively reaching 100% by the time of the late positive component. Intra- and inter-individual variance was low. The findings indicate two different language processes: one, confined to the perisylvian regions of the left hemisphere in right-handed subjects, appears earlier and reflects phonological processing, whereas the other is bilateral and takes place while semantic judgments are being made. Event-related potentials during language processing appear to be a very useful tool, especially when EEG maps are displayed, providing information on both the temporal and the spatial dynamics of these events.
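Reading the N400 peak latency off the averaged waveform and comparing it with word offset, as described above, is straightforward with standard ERP tooling; the sketch below shows one way to do it in MNE-Python. The file name, search window, and assumed 550 ms word duration are illustrative assumptions, not values from the study.

```python
# Generic sketch of locating an N400 peak relative to word offset.
# File name, window, and word duration are illustrative assumptions.
import mne

evoked = mne.read_evokeds("semantic_oddball_ave.fif", condition=0)

# Find the most negative deflection in a 300-600 ms window over EEG
# channels, where the N400 is typically maximal.
ch_name, latency = evoked.copy().pick("eeg").get_peak(
    tmin=0.3, tmax=0.6, mode="neg")

word_offset = 0.55  # assumed mean spoken-word duration in seconds
rel = "before" if latency < word_offset else "after"
print(f"N400 peak on {ch_name} at {latency * 1000:.0f} ms ({rel} word offset)")
```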

9.
The initial process of identifying words from spoken language and the detection of more subtle regularities underlying their structure are mandatory processes for language acquisition. Little is known about the cognitive mechanisms that allow us to extract these two types of information and their specific time-course of acquisition following initial contact with a new language. We report time-related electrophysiological changes that occurred while participants learned an artificial language. These changes strongly correlated with the discovery of the structural rules embedded in the words. These changes were clearly different from those related to word learning and occurred during the first minutes of exposure. There is a functional distinction in the nature of the electrophysiological signals during acquisition: an increase in negativity (N400) in the central electrodes is related to word learning, and the development of a frontal positivity (P2) is related to rule learning. In addition, the results of an online implicit test and a post-learning test indicate that, once the rules of the language have been acquired, new words following the rule are processed as words of the language. By contrast, new words violating the rule induce syntax-related electrophysiological responses when inserted online in the stream (an early frontal negativity followed by a late posterior positivity) and clear lexical effects when presented in isolation (N400 modulation). The present study provides direct evidence suggesting that the mechanisms for extracting words and structural dependencies from continuous speech are functionally segregated. When these mechanisms are engaged, the electrophysiological marker associated with rule learning appears very quickly, during the earliest phases of exposure to a new language.

10.
How pictures and words are stored and processed in the human brain constitutes a long-standing question in cognitive psychology. Behavioral studies have yielded a large amount of data addressing this issue; generally speaking, these data show that there are interactions between the semantic processing of pictures and words. However, behavioral methods can provide only limited insight into certain findings. Fortunately, event-related potentials (ERPs) provide on-line cues about the temporal nature of cognitive processes and contribute to the exploration of their neural substrates, and they have been used to better understand the semantic processing of words and pictures. The main objective of this article is to offer an overview of the electrophysiological bases of the semantic processing of words and pictures. The studies presented here show that the processing of words is associated with an N400 component, whereas pictures elicit both N300 and N400 components. Topographical analysis of the N400 distribution over the scalp is compatible with the idea that both image-mediated concrete words and pictures access an amodal semantic system. However, given the distinctive N300 pattern, observed only during picture processing, it appears that picture and word processing rely upon distinct neuronal networks, even if they end up activating more or less similar semantic representations.

11.
This paper focuses on what electrical and magnetic recordings of human brain activity reveal about spoken language understanding. Based on the high temporal resolution of these recordings, a fine-grained temporal profile of different aspects of spoken language comprehension can be obtained. Crucial aspects of speech comprehension are lexical access, selection and semantic integration. Results show that for words spoken in context, there is no 'magic moment' when lexical selection ends and semantic integration begins. Irrespective of whether words have early or late recognition points, semantic integration processing is initiated before words can be identified on the basis of the acoustic information alone. Moreover, for one particular event-related brain potential (ERP) component (the N400), an equivalent impact of sentence- and discourse-semantic contexts is observed. This indicates that in comprehension, a spoken word is immediately evaluated relative to the widest interpretive domain available. In addition, this happens very quickly: findings show that an unfolding word can often be mapped onto discourse-level representations well before the end of the word. Overall, the time course of the ERP effects is compatible with the view that the different information types (lexical, syntactic, phonological, pragmatic) are processed in parallel and influence the interpretation process incrementally, that is, as soon as the relevant pieces of information are available. This is referred to as the immediacy principle.

12.
The processing of notes and chords that are harmonically incongruous with their context has been shown to elicit two distinct late ERP effects, which strongly resemble two effects associated with the processing of linguistic incongruities: a P600, resembling the typical response to syntactic incongruities in language, and an N500, evocative of the N400 that is typically elicited in response to semantic incongruities in language. Despite the robustness of these two patterns in the musical-incongruity literature, no consensus has yet been reached on why there are two distinct responses to harmonic incongruities. This study was the first to use behavioural and ERP data to test two possible explanations for the existence of these two patterns: the musicianship of listeners, and the resolved or unresolved nature of the harmonic incongruities. Results showed that harmonically incongruous notes and chords elicited a late positivity similar to the P600 when they were embedded within sequences that started and ended in the same key (harmonically resolved). The notes and chords indicating that there would be no return to the original key (leaving the piece harmonically unresolved) were associated with a further P600 in musicians, but with a negativity resembling the N500 in non-musicians. We suggest that the late positivity reflects the conscious perception of a specific element as being incongruous with its context and the efforts of musicians to integrate the harmonic incongruity into its local context as a result of their analytic listening style, while the late negativity reflects the detection of the absence of resolution by non-musicians as a result of their holistic listening style.

13.
There is some evidence for a role of music training in boosting phonological awareness, word segmentation, and working memory, as well as reading abilities, in children with typical development. Poor performance in tasks requiring temporal processing, rhythm perception and sensorimotor synchronization seems to be a crucial factor underlying dyslexia in children. Interestingly, children with dyslexia show deficits in temporal processing, both in language and in music. Within this framework, we test the hypothesis that music training, by improving temporal processing and rhythm abilities, improves phonological awareness and reading skills in children with dyslexia. The study is a prospective, multicenter, open randomized controlled trial, consisting of test, rehabilitation and re-test (ID NCT02316873). After rehabilitation, the music group (N = 24) performed better than the control group (N = 22) in tasks assessing rhythmic abilities, phonological awareness and reading skills. This is the first randomized controlled trial testing the effect of music training in enhancing phonological and reading abilities in children with dyslexia. The findings show that music training can modify reading and phonological abilities even when these skills are severely impaired. Through the enhancement of temporal processing and rhythmic skills, music might become an important tool in both remediation and early intervention programs.

Trial Registration

ClinicalTrials.gov NCT02316873

14.
A lexical decision task in an event-related potential experiment was used to determine the organization of the mental lexicon with regard to polymorphemic words: are they stored as unanalyzable items or as separate morphemes? The results indicate the latter: while monomorphemic words elicit an N400 component, usually related to lexical-semantic processing, prefixed words and prefixed pseudo-words elicit a left anterior negativity (LAN), usually related to grammatical (morphosyntactic) processes. These components indicate that speakers apply grammatical (i.e., word-formation) rules and combine morphemes in order to obtain the lexical meaning of a prefixed word.

15.
The effects of spatial selective attention upon ERPs associated with the processing of word stimuli were investigated. While subjects maintained central eye fixation, ERPs were recorded to words presented to the left and right visual fields. In each of six runs, subjects focused attention on alternate fields to perform a category-detection task. Pairs of semantically related and repeated words were embedded in the word lists presented to the attended and unattended visual fields. Consistent with prior studies, the P1-N1 visual ERP was larger when elicited by words in attended spatial locations. A large negative slow wave identified as the N400 was elicited by attended, but not unattended, words; for attended words, the N400 was smaller for semantically primed or repeated words. We concluded that spatial selective attention can modulate the degree to which words are processed, and that the cognitive processes associated with the N400 are not automatic.

16.
Jiang C, Hamm JP, Lim VK, Kirk IJ, Chen X, Yang Y. PLoS ONE. 2012;7(7):e41411.
Pitch processing is a critical ability on which humans' tonal musical experience depends, and it is also of paramount importance for decoding prosody in speech. Congenital amusia refers to deficits in the ability to properly process musical pitch, and recent evidence has suggested that this musical pitch disorder may impact upon the processing of speech sounds. Here we present the first electrophysiological evidence demonstrating that individuals with amusia who speak Mandarin Chinese are impaired in classifying prosody as appropriate or inappropriate during a speech comprehension task. In control participants, inappropriate prosody elicited a larger P600 and a smaller N100 relative to the appropriate condition. In contrast, amusics showed no significant differences between the appropriate and inappropriate conditions in either the N100 or the P600 component. This provides further evidence that the pitch perception deficits associated with amusia may also affect intonation processing during speech comprehension in those who speak a tonal language such as Mandarin, and suggests that music and language share some cognitive and neural resources.

17.
Neural overlap in processing music and speech, as measured by the co-activation of brain regions in neuroimaging studies, may suggest that parts of the neural circuitries established for language were recycled during evolution for musicality, or, vice versa, that musicality served as a springboard for the emergence of language. Such a perspective has important implications for several topics of general interest besides evolutionary origins. For instance, neural overlap is an important premise for the possibility that music training influences language acquisition and literacy. However, neural overlap in processing music and speech does not entail shared neural circuitry: neural separability between music and speech may hold even within overlapping brain regions. In this paper, we review the evidence, outline the issues faced in interpreting such neural data, and argue that converging evidence from several methodologies is needed before neural overlap is taken as evidence of sharing.

18.
Psychophysiological evidence suggests that music and language are intimately coupled, such that experience or training in one domain can influence processing in the other. While the influence of music on language processing is now well documented, evidence of language-to-music effects has yet to be firmly established. Here, using a cross-sectional design, we compared the performance of musicians to that of tone-language (Cantonese) speakers on tasks of auditory pitch acuity, music perception, and general cognitive ability (e.g., fluid intelligence, working memory). While musicians demonstrated superior performance on all auditory measures, comparable perceptual enhancements were observed for Cantonese participants relative to English-speaking nonmusicians. These results provide evidence that a tone-language background is associated with higher auditory perceptual performance in music listening. Musicians and Cantonese speakers also showed superior working memory capacity relative to nonmusician controls, suggesting that, in addition to basic perceptual enhancements, tone-language background and music training might also be associated with enhanced general cognitive abilities. Our findings support the notion that tone-language speakers and musically trained individuals outperform English-speaking listeners in the perceptual-cognitive processing necessary for basic auditory as well as complex music perception. These results illustrate bidirectional influences between the domains of music and language.

19.
Evidence is accruing that, in comprehending language, the human brain rapidly integrates a wealth of information sources, including the reader's or hearer's knowledge about the world and even his/her current mood. However, little is known to date about how language processing in the brain is affected by the hearer's knowledge about the speaker. Here, we investigated the impact of social attributions to the speaker by measuring event-related brain potentials while participants watched videos of three speakers uttering true or false statements pertaining to politics or general knowledge: a top political decision maker (the German Federal Minister of Finance at the time of the experiment), a well-known media personality, and an unidentifiable control speaker. False versus true statements engendered an N400 followed by a late positivity, with the N400 (150–450 ms) constituting the earliest observable response to message-level meaning. Crucially, however, the N400 was modulated by the combination of speaker and message: for false versus true political statements, an N400 effect was observable only for the politician, but not for either of the other two speakers; for false versus true general knowledge statements, an N400 was engendered by all three speakers. We interpret this result as demonstrating that the neurophysiological response to message-level meaning is immediately influenced by the social status of the speaker and whether he/she has the power to bring about the state of affairs described.

20.
Sound symbolism is the systematic and non-arbitrary link between word and meaning. Although a number of behavioral studies demonstrate that both children and adults are universally sensitive to sound symbolism in mimetic words, the neural mechanisms underlying this phenomenon have not yet been extensively investigated. The present study used functional magnetic resonance imaging to investigate how Japanese mimetic words are processed in the brain. In Experiment 1, we compared processing for motion mimetic words with that for non-sound symbolic motion verbs and adverbs. Mimetic words uniquely activated the right posterior superior temporal sulcus (STS). In Experiment 2, we further examined the generalizability of the findings from Experiment 1 by testing another domain: shape mimetics. Our results show that the right posterior STS was active when subjects processed both motion and shape mimetic words, thus suggesting that this area may be the primary structure for processing sound symbolism. Increased activity in the right posterior STS may also reflect how sound symbolic words function as both linguistic and non-linguistic iconic symbols.
