Similar Articles
20 similar articles found
1.
2.

Background

Studies demonstrating the involvement of motor brain structures in language processing typically focus on time windows beyond the latencies of lexical-semantic access. Consequently, such studies remain inconclusive regarding whether motor brain structures are recruited directly in language processing or through post-linguistic conceptual imagery. In the present study, we introduce a grip-force sensor that allows online measurements of language-induced motor activity during sentence listening. We use this tool to investigate whether language-induced motor activity remains constant or is modulated in negative, as opposed to affirmative, linguistic contexts.

Methodology/Principal Findings

Participants listened to spoken action target words in either affirmative or negative sentences while holding a sensor in a precision grip. The participants were asked to count the sentences containing the name of a country to ensure attention. The grip force signal was recorded continuously. The action words elicited an automatic and significant enhancement of the grip force starting at approximately 300 ms after target word onset in affirmative sentences; however, no comparable grip force modulation was observed when these action words occurred in negative contexts.

Conclusions/Significance

Our findings demonstrate that this simple experimental paradigm can be used to study the online crosstalk between language and the motor systems in an ecological and economical manner. Our data further confirm that the motor brain structures that can be called upon during action word processing are not mandatorily involved; the crosstalk is asymmetrically governed by the linguistic context and not vice versa.

3.
Current theoretical positions assume that action-related word meanings are established by functional connections between perisylvian language areas and the motor cortex (MC) according to Hebb's associative learning principle. To test this assumption, we probed the functional relevance of the left MC for learning of a novel action word vocabulary by disturbing neural plasticity in the MC with transcranial direct current stimulation (tDCS). In combination with tDCS, subjects learned a novel vocabulary of 76 concrete, body-related actions by means of an associative learning paradigm. Compared with a control condition with "sham" stimulation, cathodal tDCS reduced success rates in vocabulary acquisition, as shown by tests of novel action word translation into the native language. The analysis of learning behavior revealed a specific effect of cathodal tDCS on the ability to associatively couple actions with novel words. In contrast, we did not find these effects in control experiments, when tDCS was applied to the prefrontal cortex or when subjects learned object-related words. The present study lends direct evidence to the proposition that the left MC is causally involved in the acquisition of novel action-related words.

4.
Sound symbolism is the systematic and non-arbitrary link between word and meaning. Although a number of behavioral studies demonstrate that both children and adults are universally sensitive to sound symbolism in mimetic words, the neural mechanisms underlying this phenomenon have not yet been extensively investigated. The present study used functional magnetic resonance imaging to investigate how Japanese mimetic words are processed in the brain. In Experiment 1, we compared processing for motion mimetic words with that for non-sound symbolic motion verbs and adverbs. Mimetic words uniquely activated the right posterior superior temporal sulcus (STS). In Experiment 2, we further examined the generalizability of the findings from Experiment 1 by testing another domain: shape mimetics. Our results show that the right posterior STS was active when subjects processed both motion and shape mimetic words, thus suggesting that this area may be the primary structure for processing sound symbolism. Increased activity in the right posterior STS may also reflect how sound symbolic words function as both linguistic and non-linguistic iconic symbols.

5.
Despite the clear importance of language in our life, our vital ability to quickly and effectively learn new words and meanings is neurobiologically poorly understood. Conventional knowledge maintains that language learning—especially in adulthood—is slow and laborious. Furthermore, its structural basis remains unclear. Even though behavioural manifestations of learning are evident near instantly, previous neuroimaging work across a range of semantic categories has largely studied neural changes associated with months or years of practice. Here, we address rapid neuroanatomical plasticity accompanying new lexicon acquisition, specifically focussing on the learning of action-related language, which has been linked to the brain’s motor systems. Our results show that it is possible to measure and to externally modulate (using transcranial magnetic stimulation (TMS) of motor cortex) cortical microanatomic reorganisation after mere minutes of new word learning. Learning-induced microstructural changes, as measured by diffusion kurtosis imaging (DKI) and machine learning-based analysis, were evident in prefrontal, temporal, and parietal neocortical sites, likely reflecting integrative lexico-semantic processing and formation of new memory circuits immediately during the learning tasks. These results suggest a structural basis for the rapid neocortical word encoding mechanism and reveal the causally interactive relationship of modal and associative brain regions in supporting learning and word acquisition.

This combined neuroimaging and brain stimulation study reveals rapid and distributed microstructural plasticity after a single immersive language learning session, demonstrating the causal relevance of the motor cortex in encoding the meaning of novel action words.

6.
Embodied/modality-specific theories of semantic memory propose that sensorimotor representations play an important role in perception and action. A large body of evidence supports the notion that concepts involving human motor action (i.e., semantic-motor representations) are processed in both language and motor regions of the brain. However, most studies have focused on perceptual tasks, leaving unanswered questions about language-motor interaction during production tasks. Thus, we investigated the effects of shared semantic-motor representations on concurrent language and motor production tasks in healthy young adults, manipulating the semantic task (motor-related vs. nonmotor-related words) and the motor task (i.e., standing still and finger-tapping). In Experiment 1 (n = 20), we demonstrated that motor-related word generation was sufficient to affect postural control. In Experiment 2 (n = 40), we demonstrated that motor-related word generation was sufficient to facilitate word generation and finger tapping. We conclude that engaging semantic-motor representations can have a reciprocal influence on motor and language production. Our study provides additional support for functional language-motor interaction, as well as embodied/modality-specific theories.

7.
Recent evidence has shown that processing action-related language and motor action share common neural representations to a point that the two processes can interfere when performed concurrently. To support the assumption that language-induced motor activity contributes to action word understanding, the present study aimed at ruling out that this activity results from mental imagery of the movements depicted by the words. For this purpose, we examined cross-talk between action word processing and an arm reaching movement, using words that were presented too fast to be consciously perceived (subliminally). Electroencephalogram (EEG) and movement kinematics were recorded. EEG recordings of the "Readiness potential" ("RP", indicator of motor preparation) revealed that subliminal displays of action verbs during movement preparation reduced the RP and affected the subsequent reaching movement. The finding that motor processes were modulated by language processes despite the fact that words were not consciously perceived suggests that cortical structures that serve the preparation and execution of motor actions are indeed part of the (action) language processing network.

8.
Numerous previous neuroimaging studies suggest an involvement of cortical motor areas not only in action execution but also in action recognition and understanding. Motor areas of the human brain have also been found to activate during the processing of written and spoken action-related words and sentences. Even more strikingly, stimuli referring to different bodily effectors produced specific somatotopic activation patterns in the motor areas. However, metabolic neuroimaging results can be ambiguous with respect to the processing stage they reflect. This is a serious limitation when hypotheses concerning linguistic processes are tested, since in this case it is usually crucial to distinguish early lexico-semantic processing from strategic effects or mental imagery that may follow lexico-semantic information access. Timing information is therefore pivotal to determine the functional significance of motor areas in action recognition and action-word comprehension. Here, we review attempts to reveal the time course of these processes using neurophysiological methods (EEG, MEG and TMS), in visual and auditory domains. We will highlight the importance of the choice of appropriate paradigms in combination with the corresponding method for the extraction of timing information. The findings will be discussed in the general context of putative brain mechanisms of word and object recognition.

9.
The involvement of the sensorimotor system in language understanding has been widely demonstrated. However, the role of context in these studies has only recently started to be addressed. Though words are bearers of a semantic potential, meaning is the product of a pragmatic process. It needs to be situated in a context to be disambiguated. The aim of this study was to test the hypothesis that embodied simulation occurring during linguistic processing is contextually modulated to the extent that the same sentence, depending on the context of utterance, leads to the activation of different effector-specific brain motor areas. In order to test this hypothesis, we asked subjects to give a motor response with the hand or the foot to the presentation of ambiguous idioms containing action-related words when these are preceded by context sentences. The results directly support our hypothesis only in relation to the comprehension of hand-related action sentences.

10.
Complementary systems for understanding action intentions
How humans understand the intention of others' actions remains controversial. Some authors have suggested that intentions are recognized by means of a motor simulation of the observed action with the mirror-neuron system [1-3]. Others emphasize that intention recognition is an inferential process, often called "mentalizing" or employing a "theory of mind," which activates areas well outside the motor system [4-6]. Here, we assessed the contribution of brain regions involved in motor simulation and mentalizing for understanding action intentions via functional brain imaging. Results show that the inferior frontal gyrus (part of the mirror-neuron system) processes the intentionality of an observed action on the basis of the visual properties of the action, irrespective of whether the subject paid attention to the intention or not. Conversely, brain areas that are part of a "mentalizing" network become active when subjects reflect about the intentionality of an observed action, but they are largely insensitive to the visual properties of the observed action. This supports the hypothesis that motor simulation and mentalizing have distinct but complementary functions for the recognition of others' intentions.

11.

Objectives

Intonation may serve as a cue for facilitated recognition and processing of spoken words and it has been suggested that the pitch contour of spoken words is implicitly remembered. Thus, using the repetition suppression (RS) effect of BOLD-fMRI signals, we tested whether the same spoken words are differentially processed in language and auditory brain areas depending on whether or not they retain an arbitrary intonation pattern.

Experimental design

Words were presented repeatedly in three blocks for passive and active listening tasks. There were three prosodic conditions, each using a different set of words with specific task-irrelevant intonation changes applied: (i) all words were presented in a set, flat, monotonous pitch contour; (ii) each word had an arbitrary pitch contour that was kept constant throughout the three repetitions; (iii) each word had a different arbitrary pitch contour in each of its repetitions.

Principal findings

The repeated presentations of words with a set pitch contour resulted in robust behavioral priming effects as well as in significant RS of the BOLD signals in primary auditory cortex (BA 41), temporal areas (BA 21/22) bilaterally, and in Broca's area. However, changing the intonation of the same words on each successive repetition resulted in reduced behavioral priming and the abolition of RS effects.

Conclusions

Intonation patterns are retained in memory even when the intonation is task-irrelevant. Implicit memory traces for the pitch contour of spoken words were reflected in facilitated neuronal processing in auditory and language associated areas. Thus, the results lend support to the notion that prosody, and specifically pitch contour, is strongly associated with the memory representation of spoken words.
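The repetition-suppression logic used in this study can be illustrated with a minimal sketch. The `rs_index` helper and the amplitude values below are hypothetical illustrations, not the study's actual analysis pipeline: the index simply expresses how much the response to later presentations drops relative to the first one.

```python
def rs_index(responses):
    """Proportional drop from the first presentation to the mean of the
    later ones; responses are amplitudes across repetitions of one word."""
    first, rest = responses[0], responses[1:]
    return (first - sum(rest) / len(rest)) / first

# Hypothetical amplitudes: suppression when the pitch contour is retained,
# little or none when each repetition carries a new contour.
fixed_pitch = [1.00, 0.70, 0.62]
changed_pitch = [1.00, 0.97, 0.99]
print(rs_index(fixed_pitch) > rs_index(changed_pitch))
```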

12.
Humans can recognize spoken words with unmatched speed and accuracy. Hearing the initial portion of a word such as "formu…" is sufficient for the brain to identify "formula" from the thousands of other words that partially match. Two alternative computational accounts propose that partially matching words (1) inhibit each other until a single word is selected ("formula" inhibits "formal" by lexical competition) or (2) are used to predict upcoming speech sounds more accurately (segment prediction error is minimal after sequences like "formu…"). To distinguish these theories we taught participants novel words (e.g., "formubo") that sound like existing words ("formula") on two successive days. Computational simulations show that knowing "formubo" increases lexical competition when hearing "formu…", but reduces segment prediction error. Conversely, when the sounds in "formula" and "formubo" diverge, the reverse is observed. The time course of magnetoencephalographic brain responses in the superior temporal gyrus (STG) is uniquely consistent with a segment prediction account. We propose a predictive coding model of spoken word recognition in which STG neurons represent the difference between predicted and heard speech sounds. This prediction error signal explains the efficiency of human word recognition and simulates neural responses in auditory regions.
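The segment-prediction account described above can be sketched with a toy computation over a miniature lexicon (an illustration only; the study's actual simulations are far richer). Prediction error for a heard segment is one minus its probability given the prefix, estimated from lexicon counts: learning "formubo" adds support for the shared segments, lowering error there, while raising error at the point where the two words diverge.

```python
def prediction_error(lexicon, prefix, next_seg):
    """Error = 1 - P(next_seg | prefix), estimated from lexicon counts."""
    matches = [w for w in lexicon if w.startswith(prefix) and len(w) > len(prefix)]
    if not matches:
        return 1.0
    hits = sum(1 for w in matches if w[len(prefix)] == next_seg)
    return 1.0 - hits / len(matches)

before = {"formula", "formal"}
after = before | {"formubo"}  # lexicon after learning the novel word

# Shared prefix: "formubo" lends extra support to 'u' after "form",
# so prediction error drops ...
err_shared_before = prediction_error(before, "form", "u")   # 0.5
err_shared_after = prediction_error(after, "form", "u")     # ~0.33

# ... but at the divergence point "formu", 'l' is no longer certain,
# so prediction error rises.
err_div_before = prediction_error(before, "formu", "l")     # 0.0
err_div_after = prediction_error(after, "formu", "l")       # 0.5
```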

13.
A positivity advantage is known in emotional word recognition in that positive words are consistently processed faster and with fewer errors compared to emotionally neutral words. A similar advantage is not evident for negative words. Results of divided visual field studies, where stimuli are presented in either the left or right visual field and are initially processed by the contra-lateral brain hemisphere, point to a specificity of the language-dominant left hemisphere. The present study examined this effect by showing that the intake of caffeine further enhanced the recognition performance of positive, but not negative or neutral stimuli compared to a placebo control group. Because this effect was only present in the right visual field/left hemisphere condition, and based on the close link between caffeine intake and dopaminergic transmission, this result points to a dopaminergic explanation of the positivity advantage in emotional word recognition.

14.
Cognitive science has a rich history of interest in the ways that languages represent abstract and concrete concepts (e.g., idea vs. dog). Until recently, this focus has centered largely on aspects of word meaning and semantic representation. However, recent corpora analyses have demonstrated that abstract and concrete words are also marked by phonological, orthographic, and morphological differences. These regularities in sound-meaning correspondence potentially allow listeners to infer certain aspects of semantics directly from word form. We investigated this relationship between form and meaning in a series of four experiments. In Experiments 1-2 we examined the role of metalinguistic knowledge in semantic decision by asking participants to make semantic judgments for aurally presented nonwords selectively varied by specific acoustic and phonetic parameters. Participants consistently associated increased word length and diminished wordlikeness with abstract concepts. In Experiment 3, participants completed a semantic decision task (i.e., abstract or concrete) for real words varied by length and concreteness. Participants were more likely to misclassify longer, inflected words (e.g., "apartment") as abstract and shorter uninflected abstract words (e.g., "fate") as concrete. In Experiment 4, we used a multiple regression to predict trial level naming data from a large corpus of nouns which revealed significant interaction effects between concreteness and word form. Together these results provide converging evidence for the hypothesis that listeners map sound to meaning through a non-arbitrary process using prior knowledge about statistical regularities in the surface forms of words.
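The kind of interaction model described in Experiment 4 can be sketched on simulated data. Everything below is hypothetical (the variables, coefficients, and noise level are invented for illustration, not drawn from the study's corpus); the point is simply how a concreteness-by-word-form interaction term enters an ordinary least squares fit.

```python
import numpy as np

# Hypothetical trial-level data: naming latency (ms) as a function of
# word length and a concreteness flag (1 = concrete, 0 = abstract).
rng = np.random.default_rng(0)
n = 200
length = rng.integers(3, 11, size=n).astype(float)
concrete = rng.integers(0, 2, size=n).astype(float)
# Simulate: length slows naming, but less so for concrete words
# (i.e., a concreteness-by-length interaction).
latency = (500 + 20 * length - 30 * concrete
           - 10 * length * concrete + rng.normal(0, 15, size=n))

# Ordinary least squares with an explicit interaction column.
X = np.column_stack([np.ones(n), length, concrete, length * concrete])
beta, *_ = np.linalg.lstsq(X, latency, rcond=None)
intercept, b_len, b_conc, b_inter = beta
print(f"interaction coefficient ~ {b_inter:.1f}")  # near the simulated -10
```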

15.
Despite a growing number of studies, the neurophysiology of adult vocabulary acquisition is still poorly understood. One reason is that paradigms that can easily be combined with neuroscientific methods are rare. Here, we tested the efficiency of two paradigms for vocabulary (re-)acquisition, and compared the learning of novel words for actions and objects. Cortical networks involved in adult native-language word processing are widespread, with differences postulated between words for objects and actions. Words and what they stand for are supposed to be grounded in perceptual and sensorimotor brain circuits depending on their meaning. If there are specific brain representations for different word categories, we hypothesized behavioural differences in the learning of action-related and object-related words. Paradigm A, with the learning of novel words for body-related actions spread out over a number of days, revealed fast learning of these new action words, and stable retention up to 4 weeks after training. The single-session Paradigm B employed objects and actions. Performance during acquisition did not differ between action-related and object-related words (time × word category: p = 0.01), but the translation rate was clearly better for object-related (79%) than for action-related words (53%, p = 0.002). Both paradigms yielded robust associative learning of novel action-related words, as previously demonstrated for object-related words. Translation success differed for action- and object-related words, which may indicate different neural mechanisms. The paradigms tested here are well suited to investigate such differences with neuroscientific means. Given the stable retention and minimal requirements for conscious effort, these learning paradigms are promising for vocabulary re-learning in brain-lesioned people. In combination with neuroimaging, neuro-stimulation or pharmacological intervention, they may well advance the understanding of language learning to optimize therapeutic strategies.

16.
The notion of the phase structure of the speech act—or to be more precise, the special structure of the "inner speech" stage in utterance production—belongs to L. S. Vygotsky. Vygotsky conceptualized the process of speech production, the progress from thought to word to external speech, as follows: "from the motive that engenders a thought, to the formulation of that thought, its mediation by the inner word, and then by the meanings of external words, and finally, by words themselves."1 Elsewhere he said, "Thought is an internally mediated process. It moves from a vague desire to the mediated formulation of meaning, or rather, not the formulation, but the fulfillment of the thought in the word." And finally, "Thought is not something ready-made that needs to be expressed. Thought strives to fulfill some function or goal. This is achieved by moving from the sensation of a task—through construction of meaning—to the elaboration of the thought itself."2

17.

Background

It is well established that the left inferior frontal gyrus plays a key role in the cerebral cortical network that supports reading and visual word recognition. Less clear is when in time this contribution begins. We used magnetoencephalography (MEG), which has both good spatial and excellent temporal resolution, to address this question.

Methodology/Principal Findings

MEG data were recorded during a passive viewing paradigm, chosen to emphasize the stimulus-driven component of the cortical response, in which right-handed participants were presented with words, consonant strings, and unfamiliar faces in central vision. Time-frequency analyses showed a left-lateralized inferior frontal gyrus (pars opercularis) response to words between 100–250 ms in the beta frequency band that was significantly stronger than the response to consonant strings or faces. The left inferior frontal gyrus response to words peaked at ∼130 ms. This response was significantly later in time than the left middle occipital gyrus, which peaked at ∼115 ms, but not significantly different from the peak response in the left mid fusiform gyrus, which peaked at ∼140 ms, at a location coincident with the fMRI-defined visual word form area (VWFA). Significant responses were also detected to words in other parts of the reading network, including the anterior middle temporal gyrus, the left posterior middle temporal gyrus, the angular and supramarginal gyri, and the left superior temporal gyrus.

Conclusions/Significance

These findings suggest very early interactions between the vision and language domains during visual word recognition, with speech motor areas being activated at the same time as the orthographic word-form is being resolved within the fusiform gyrus. This challenges the conventional view of a temporally serial processing sequence for visual word recognition in which letter forms are initially decoded, interact with their phonological and semantic representations, and only then gain access to a speech code.

18.
de Lafuente V, Romo R. Neuron. 2004;41(2):178-180
A new exploration of the cortical network underlying our language abilities by Hauk et al., in this issue of Neuron, shows that the process of giving meaning to words differentially activates the motor cortex according to the semantic category of the word.

19.
Language and music, two of the most unique human cognitive abilities, are combined in song, rendering it an ecological model for comparing speech and music cognition. The present study was designed to determine whether words and melodies in song are processed interactively or independently, and to examine the influence of attention on the processing of words and melodies in song. Event-Related brain Potentials (ERPs) and behavioral data were recorded while non-musicians listened to pairs of sung words (prime and target) presented in four experimental conditions: same word, same melody; same word, different melody; different word, same melody; different word, different melody. Participants were asked to attend to either the words or the melody, and to perform a same/different task. In both attentional tasks, different word targets elicited an N400 component, as predicted based on previous results. Most interestingly, different melodies (sung with the same word) elicited an N400 component followed by a late positive component. Finally, ERP and behavioral data converged in showing interactions between the linguistic and melodic dimensions of sung words. The finding that the N400 effect, a well-established marker of semantic processing, was modulated by musical melody in song suggests that variations in musical features affect word processing in sung language. Implications of the interactions between words and melody are discussed in light of evidence for shared neural processing resources between the phonological/semantic aspects of language and the melodic/harmonic aspects of music.

20.
Opportunities for associationist learning of word meaning, where a word is heard or read contemporaneously with information being available on its meaning, are considered too infrequent to account for the rate of language acquisition in children. It has been suggested that additional learning could occur in a distributional mode, where information is gleaned from the distributional statistics (word co-occurrence etc.) of natural language. Such statistics are relevant to meaning because of the Distributional Principle that 'words of similar meaning tend to occur in similar contexts'. Computational systems, such as Latent Semantic Analysis, have substantiated the viability of distributional learning of word meaning, by showing that semantic similarities between words can be accurately estimated from analysis of the distributional statistics of a natural language corpus. We consider whether appearance similarities can also be learnt in a distributional mode. As grounds for such a mode we advance the Appearance Hypothesis that 'words with referents of similar appearance tend to occur in similar contexts'. We assess the viability of such learning by looking at the performance of a computer system that interpolates, on the basis of distributional and appearance similarity, from words that it has been explicitly taught the appearance of, in order to identify and name objects that it has not been taught about. Our experiment uses a test set of 660 simple concrete noun words. Appearance information on words is modelled using sets of images of examples of the word. Distributional similarity is computed from a standard natural language corpus. Our computation results support the viability of distributional learning of appearance.
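The Distributional Principle invoked above can be illustrated with a minimal co-occurrence model. This is a toy sketch on a three-sentence corpus, not the study's system; real approaches such as Latent Semantic Analysis apply dimensionality reduction to counts from large corpora. The idea is only that words used in similar contexts end up with similar context-count vectors.

```python
from collections import Counter
from math import sqrt

def cooc_vectors(sentences, window=2):
    """Count context words within a symmetric window around each token."""
    vecs = {}
    for sent in sentences:
        words = sent.split()
        for i, w in enumerate(words):
            ctx = vecs.setdefault(w, Counter())
            for j in range(max(0, i - window), min(len(words), i + window + 1)):
                if j != i:
                    ctx[words[j]] += 1
    return vecs

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[k] * b[k] for k in a if k in b)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

corpus = [
    "the cat sat on the mat",
    "the dog sat on the rug",
    "the idea seemed abstract and vague",
]
vecs = cooc_vectors(corpus)
# "cat" and "dog" occur in similar contexts, so their vectors align
# far more closely than those of "cat" and "idea".
print(cosine(vecs["cat"], vecs["dog"]) > cosine(vecs["cat"], vecs["idea"]))
```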


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号