Related Articles

20 related articles found.
1.
Embodied/modality-specific theories of semantic memory propose that sensorimotor representations play an important role in perception and action. A large body of evidence supports the notion that concepts involving human motor action (i.e., semantic-motor representations) are processed in both language and motor regions of the brain. However, most studies have focused on perceptual tasks, leaving unanswered questions about language-motor interaction during production tasks. Thus, we investigated the effects of shared semantic-motor representations on concurrent language and motor production tasks in healthy young adults, manipulating the semantic task (motor-related vs. nonmotor-related words) and the motor task (standing still vs. finger-tapping). In Experiment 1 (n = 20), we demonstrated that motor-related word generation was sufficient to affect postural control. In Experiment 2 (n = 40), we demonstrated that generating motor-related words facilitated both word generation and finger tapping. We conclude that engaging semantic-motor representations can have a reciprocal influence on motor and language production. Our study provides additional support for functional language-motor interaction, as well as for embodied/modality-specific theories.

2.

Background

Studies demonstrating the involvement of motor brain structures in language processing typically focus on time windows beyond the latencies of lexical-semantic access. Consequently, such studies remain inconclusive regarding whether motor brain structures are recruited directly in language processing or through post-linguistic conceptual imagery. In the present study, we introduce a grip-force sensor that allows online measurements of language-induced motor activity during sentence listening. We use this tool to investigate whether language-induced motor activity remains constant or is modulated in negative, as opposed to affirmative, linguistic contexts.

Methodology/Principal Findings

Participants listened to spoken action target words in either affirmative or negative sentences while holding a sensor in a precision grip. The participants were asked to count the sentences containing the name of a country to ensure attention. The grip force signal was recorded continuously. The action words elicited an automatic and significant enhancement of the grip force starting at approximately 300 ms after target word onset in affirmative sentences; however, no comparable grip force modulation was observed when these action words occurred in negative contexts.
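A minimal analysis sketch for this kind of data, assuming trial-wise force traces stored as NumPy arrays (the function name, sampling rate, and baseline window are illustrative assumptions, not the authors' pipeline):

```python
# Illustrative sketch, not the authors' pipeline: baseline-corrected mean
# grip-force difference between affirmative and negative sentence contexts.
import numpy as np

def condition_difference(affirmative, negative, fs=1000, baseline_ms=200):
    """affirmative/negative: (n_trials, n_samples) force arrays aligned to
    target-word onset, with each epoch starting baseline_ms before onset.
    Returns the baseline-corrected mean difference trace (affirmative - negative)."""
    n_base = int(baseline_ms * fs / 1000)

    def corrected_mean(trials):
        mean = trials.mean(axis=0)          # trial-average force trace
        return mean - mean[:n_base].mean()  # subtract pre-onset force level

    return corrected_mean(affirmative) - corrected_mean(negative)

# With traces like those described above, the difference should begin to
# deviate from zero roughly 300 ms after target-word onset.
```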

Conclusions/Significance

Our findings demonstrate that this simple experimental paradigm can be used to study the online crosstalk between language and the motor systems in an ecological and economical manner. Our data further confirm that the motor brain structures that can be called upon during action word processing are not mandatorily involved; the crosstalk is asymmetrically governed by the linguistic context and not vice versa.

3.
Motor actions and action verbs activate similar cortical brain regions. A functional interference can be taken as evidence that there is parallel treatment of these two types of information, and would argue for the biological grounding of language in action. A novel approach examining the relationship between language and grip force is presented. With eyes closed and arm extended, subjects listened to words relating (verbs) or not relating (nouns) to a manual action while holding a cylinder with an integrated force sensor. There was a change in grip force when subjects heard verbs that related to manual action. Grip force increased from about 100 ms after verb presentation, peaked at 380 ms and fell abruptly after 400 ms, signalling a possible inhibition of the motor simulation evoked by these words. These observations reveal the intimate relationship between language and grasp, and show that new aspects of sensorimotor interaction can be elucidated online.
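A rough sketch of how onset and peak latencies like these could be extracted from a single force trace; the threshold rule, sampling rate, and synthetic example are assumptions for illustration only:

```python
# Illustrative sketch (not the authors' code): extract onset and peak latency
# of a grip-force increase from a trace aligned to word onset.
import numpy as np

def force_response_metrics(force, fs=1000, baseline_ms=200, k=3.0):
    """force: 1-D array starting baseline_ms before word onset.
    Onset = first post-onset sample exceeding baseline mean + k*SD.
    Returns (onset_ms, peak_ms) relative to word onset."""
    n_base = int(baseline_ms * fs / 1000)
    baseline, post = force[:n_base], force[n_base:]
    thresh = baseline.mean() + k * baseline.std()
    above = np.flatnonzero(post > thresh)
    onset_ms = above[0] * 1000 / fs if above.size else None
    peak_ms = int(np.argmax(post)) * 1000 / fs
    return onset_ms, peak_ms

# Synthetic example: a response rising after ~100 ms and peaking near 380 ms.
t = np.arange(-200, 600) / 1000.0
trace = np.exp(-((t - 0.38) ** 2) / 0.02) * (t > 0.1) + np.random.normal(0, 0.02, t.size)
print(force_response_metrics(trace))
```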

4.
Recent evidence has shown that processing action-related language and motor action share common neural representations, to the point that the two processes can interfere when performed concurrently. To support the assumption that language-induced motor activity contributes to action word understanding, the present study aimed to rule out that this activity results from mental imagery of the movements depicted by the words. For this purpose, we examined cross-talk between action word processing and an arm reaching movement, using words that were presented too fast to be consciously perceived (subliminally). Electroencephalogram (EEG) and movement kinematics were recorded. EEG recordings of the readiness potential (RP, an indicator of motor preparation) revealed that subliminal displays of action verbs during movement preparation reduced the RP and affected the subsequent reaching movement. The finding that motor processes were modulated by language processes even though the words were not consciously perceived suggests that cortical structures that serve the preparation and execution of motor actions are indeed part of the (action) language processing network.

5.
Cognitive science has a rich history of interest in the ways that languages represent abstract and concrete concepts (e.g., idea vs. dog). Until recently, this focus has centered largely on aspects of word meaning and semantic representation. However, recent corpus analyses have demonstrated that abstract and concrete words are also marked by phonological, orthographic, and morphological differences. These regularities in sound-meaning correspondence potentially allow listeners to infer certain aspects of semantics directly from word form. We investigated this relationship between form and meaning in a series of four experiments. In Experiments 1-2 we examined the role of metalinguistic knowledge in semantic decision by asking participants to make semantic judgments for aurally presented nonwords selectively varied by specific acoustic and phonetic parameters. Participants consistently associated increased word length and diminished wordlikeness with abstract concepts. In Experiment 3, participants completed a semantic decision task (i.e., abstract or concrete) for real words varied by length and concreteness. Participants were more likely to misclassify longer, inflected words (e.g., "apartment") as abstract and shorter, uninflected abstract words (e.g., "fate") as concrete. In Experiment 4, we used multiple regression to predict trial-level naming data from a large corpus of nouns, which revealed significant interaction effects between concreteness and word form. Together these results provide converging evidence for the hypothesis that listeners map sound to meaning through a non-arbitrary process, using prior knowledge about statistical regularities in the surface forms of words.
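The Experiment 4 analysis lends itself to a compact illustration. A hedged sketch of a concreteness-by-word-form regression; the data frame, column names, and values are hypothetical, not the study's data:

```python
# Illustrative sketch: regress naming latency on concreteness, word length,
# and their interaction, in the spirit of Experiment 4 described above.
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical trial-level data: one row per naming response.
df = pd.DataFrame({
    "rt_ms":        [512, 640, 588, 701, 455, 673, 530, 662],
    "concreteness": [4.9, 1.8, 4.5, 1.5, 5.0, 2.1, 4.2, 1.9],  # 1 = abstract, 5 = concrete
    "length":       [4, 9, 5, 9, 3, 7, 6, 8],                  # letters
})

# The interaction term tests whether the effect of word length on latency
# differs for abstract vs. concrete words.
model = smf.ols("rt_ms ~ concreteness * length", data=df).fit()
print(model.summary())
```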

6.
The study of the production of co-speech gestures (CSGs), i.e., meaningful hand movements that often accompany speech during everyday discourse, provides an important opportunity to investigate the integration of language, action, and memory because of the semantic overlap between gesture movements and speech content. Behavioral studies of CSGs and speech suggest that they have a common base in memory and predict that overt production of both speech and CSGs would be preceded by neural activity related to memory processes. However, to date the neural correlates and timing of CSG production are still largely unknown. In the current study, we addressed these questions with magnetoencephalography and a semantic association paradigm in which participants overtly produced speech or gesture responses that were either meaningfully related to a stimulus or not. Using spectral and beamforming analyses to investigate the neural activity preceding the responses, we found a desynchronization in the beta band (15–25 Hz), which originated 900 ms prior to the onset of speech and was localized to motor and somatosensory regions in the cortex and cerebellum, as well as right inferior frontal gyrus. Beta desynchronization is often seen as an indicator of motor processing and thus reflects motor activity related to the hand movements that gestures add to speech. Furthermore, our results show oscillations in the high gamma band (50–90 Hz), which originated 400 ms prior to speech onset and were localized to the left medial temporal lobe. High gamma oscillations have previously been found to be involved in memory processes and we thus interpret them to be related to contextual association of semantic information in memory. The results of our study show that high gamma oscillations in medial temporal cortex play an important role in the binding of information in human memory during speech and CSG production.
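As a rough illustration of the band-power measure involved (not the authors' beamforming pipeline; the array layout, sampling rate, and baseline window are assumptions), beta-band desynchronization can be approximated from sensor epochs as follows:

```python
# Illustrative sketch: baseline-normalized beta-band (15-25 Hz) power.
# Values below 0 dB before response onset would indicate the kind of
# desynchronization described above.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def beta_desync(epochs, fs=1000, band=(15, 25), base=(0, 500)):
    """epochs: (n_trials, n_samples) array time-locked to response onset.
    Returns trial-averaged beta power in dB relative to the baseline
    window given in samples."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, epochs, axis=-1)           # band-pass per trial
    power = np.abs(hilbert(filtered, axis=-1)) ** 2      # instantaneous power
    mean_power = power.mean(axis=0)                      # average over trials
    baseline = mean_power[base[0]:base[1]].mean()
    return 10 * np.log10(mean_power / baseline)
```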

7.
The ability to anticipate others' actions is crucial for social interaction. It has been shown that this ability relies on motor areas of the human brain that are not only active during action execution and action observation, but also during anticipation of another person's action. Recording electroencephalograms during a triadic social interaction, we assessed whether activation of motor areas pertaining to the human mirror-neuron system prior to action observation depends on the social relationship between the actor and the observer. Anticipatory motor activation was stronger when participants expected an interaction partner to perform a particular action than when they anticipated that the same action would be performed by a third person they did not interact with. These results demonstrate that social interaction modulates action simulation.

8.
In recent years there has been increasing interest in building companion robots that interact with humans in a socially acceptable way. To interact meaningfully, a robot has to convey intentionality and emotions of some sort in order to increase believability. We suggest that human-robot interaction should be considered a specific form of inter-specific interaction, and that human–animal interaction can provide a useful biological model for designing social robots. Dogs are a promising biological model, since during domestication they adapted to the human environment and learned to participate in complex social interactions. In this observational study we propose to design emotionally expressive robot behaviour using the behaviour of dogs as inspiration, and to test these dog-inspired robots with humans in an inter-specific context. In two experiments (Wizard-of-Oz scenarios) we examined humans' ability to recognize two basic emotions and a secondary emotion expressed by a robot. In Experiment 1 we provided our companion robot with two kinds of emotional behaviour ("happiness" and "fear") and studied whether people attribute the appropriate emotion to the robot and interact with it accordingly. In Experiment 2 we investigated whether participants tend to attribute guilty behaviour to a robot in a relevant context, by examining whether human participants could rely on the robot's greeting behaviour to detect that it had transgressed a predetermined rule. Results of Experiment 1 showed that people readily attribute emotions to a social robot and interact with it in accordance with the expressed emotional behaviour. Results of Experiment 2 showed that people are able to recognize, on the basis of its greeting behaviour, whether the robot had transgressed. In summary, our findings showed that dog-inspired behaviour is a suitable medium for making people attribute emotional states to a non-humanoid robot.

9.
10.
Psycholinguistic studies of sign language processing provide valuable opportunities to assess whether language phenomena, which are primarily studied in spoken language, are fundamentally shaped by peripheral biology. For example, we know that when given a choice between two syntactically permissible ways to express the same proposition, speakers tend to choose structures that were recently used, a phenomenon known as syntactic priming. Here, we report two experiments testing syntactic priming of a noun phrase construction in American Sign Language (ASL). Experiment 1 shows that second language (L2) signers with normal hearing exhibit syntactic priming in ASL and that priming is stronger when the head noun is repeated between prime and target (the lexical boost effect). Experiment 2 shows that syntactic priming is equally strong among deaf native L1 signers, deaf late L1 learners, and hearing L2 signers. Experiment 2 also tested for, but did not find evidence of, phonological or semantic boosts to syntactic priming in ASL. These results show that despite the profound differences between spoken and signed languages in terms of how they are produced and perceived, the psychological representation of sentence structure (as assessed by syntactic priming) operates similarly in sign and speech.

11.
Visual crowding—the inability to see an object when it is surrounded by flankers in the periphery—does not block semantic activation: words rendered unrecognizable by visual crowding still generate robust semantic priming in subsequent lexical decision tasks. Building on this finding, the current study explored whether unrecognizable crowded words can be temporally integrated into a phrase. By showing one word at a time, we presented Chinese four-word idioms with either a congruent or incongruent ending word, in order to examine whether the three preceding crowded words can be temporally integrated to form a semantic context that affects the processing of the ending word. Results from both behavioral (Experiment 1) and event-related potential (Experiments 2 and 3) measures showed a congruency effect only in the non-crowded condition, which does not support the existence of unconscious multi-word integration. Beyond four-word idioms, we also found that two-word (modifier + adjective combination) integration—the simplest kind of temporal semantic integration—did not occur under visual crowding (Experiment 4). Our findings suggest that integration of temporally separated words might require conscious awareness, at least under the timing conditions tested in the current study.

12.
The role of embodied mechanisms in processing sentences with a first-person perspective is now widely accepted. However, whether embodied sentence processing within a third-person perspective also has motor behavioral significance remains unknown. Here, we developed a novel version of the Action-sentence Compatibility Effect (ACE) in which participants were asked to perform a movement that was either compatible or incompatible with the direction embedded in a sentence having a first-person (Experiment 1: You gave a pizza to Louis) or third-person perspective (Experiment 2: Lea gave a pizza to Louis). Results indicate that shifting perspective from first to third person was sufficient to prevent motor embodied mechanisms, abolishing the ACE. Critically, the ACE was restored in Experiment 3 by adding a virtual "body" that allowed participants to know "where" to put themselves in space when taking the third-person perspective, thus demonstrating that embodied motor processes are space-dependent. A fourth, control experiment, which dissociated the motor response from the transfer verb's direction, supported the conclusion that perspective-taking induces a significant ACE only when coupled with an adequate sentence-response mapping.
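For illustration, the ACE in designs like these is commonly quantified as the response-time difference between incompatible and compatible trials; a minimal sketch with hypothetical per-participant means (not the study's data):

```python
# Illustrative sketch: paired comparison of compatible vs. incompatible RTs,
# the simplest test of an action-sentence compatibility effect.
import numpy as np
from scipy.stats import ttest_rel

# Hypothetical per-participant mean RTs (ms) in each condition.
rt_compatible   = np.array([612, 655, 590, 640, 701, 623])
rt_incompatible = np.array([648, 690, 601, 688, 735, 650])

t, p = ttest_rel(rt_incompatible, rt_compatible)
print(f"ACE = {np.mean(rt_incompatible - rt_compatible):.0f} ms, t = {t:.2f}, p = {p:.3f}")
```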

13.
The embodied cognition hypothesis suggests that motor and premotor areas are automatically and necessarily involved in understanding action language, as word conceptual representations are embodied. This transcranial magnetic stimulation (TMS) study explores the role of the left primary motor cortex (M1) in action-verb processing. TMS-induced motor-evoked potentials from right-hand muscles were recorded as a measure of M1 activity, while participants were asked either to judge explicitly whether a verb was action-related (semantic task) or to decide on the number of syllables in a verb (syllabic task). TMS was applied in three different experiments at 170, 350 and 500 ms post-stimulus during both tasks, to identify when the enhancement of M1 activity occurred during word processing. The delays between stimulus onset and magnetic stimulation were consistent with electrophysiological studies suggesting that word recognition can be differentiated into early (within 200 ms) and late (within 400 ms) lexical-semantic stages, and post-conceptual stages. Reaction times and accuracy were recorded to measure the extent to which the participants' linguistic performance was affected by the interference of TMS with M1 activity. No enhancement of M1 activity specific to action verbs was found at 170 and 350 ms post-stimulus, when lexical-semantic processes are presumed to occur (Experiments 1–2). When TMS was applied at 500 ms post-stimulus (Experiment 3), processing action verbs, compared with non-action verbs, increased M1 activity in the semantic task and decreased it in the syllabic task. This effect was specific to hand-action verbs and was not observed for action verbs related to other body parts. Neither accuracy nor reaction times were affected by TMS. These findings suggest that the lexical-semantic processing of action verbs does not automatically activate M1. Rather, this area seems to be involved in post-conceptual processing that follows the retrieval of motor representations, its activity being modulated (facilitated or inhibited), in a top-down manner, by the specific demands of the task.

14.
While embodied approaches to cognition have proved successful in explaining concrete concepts and words, they have more difficulty accounting for abstract concepts and words, and several proposals have been put forward. This work aims to test the Words As Tools proposal, according to which both abstract and concrete concepts are grounded in perception, action and emotional systems, but linguistic information is more important for abstract than for concrete concept representation, owing to the different ways in which they are acquired: linguistic information may play some role in the acquisition of concrete concepts, but it is crucial for the acquisition of abstract ones. We investigated the acquisition of concrete and abstract concepts and words, and verified its impact on conceptual representation. In Experiment 1, participants explored and categorized novel concrete and abstract entities, and were taught a novel label for each category. Later they performed a categorical recognition task and an image-word matching task to verify a) whether and how the introduction of language changed the previously formed categories, b) whether language carried more weight for the representation of abstract than of concrete words, and c) whether this difference had consequences on bodily responses. The results confirm that, even though both concrete and abstract concepts are grounded, language facilitates the acquisition of abstract concepts and plays a greater role in their representation, resulting in faster responses with the mouth, which is typically associated with language production. Experiment 2 was a rating study verifying whether the findings of Experiment 1 were simply due to heterogeneity, i.e. to the members of abstract categories being more heterogeneous than those of concrete categories. The results confirmed the effectiveness of our operationalization, showing that abstract concepts are more associated with the mouth and concrete ones with the hand, independently of heterogeneity.

15.

Background

Behavioral studies have provided evidence for an action–sentence compatibility effect (ACE) that suggests a coupling of motor mechanisms and action-sentence comprehension. When both processes are concurrent, the action sentence primes the actual movement, and simultaneously, the action affects comprehension. The aim of the present study was to investigate brain markers of bidirectional impact of language comprehension and motor processes.

Methodology/Principal Findings

Participants listened to sentences describing an action that involved an open hand, a closed hand, or no manual action. Each participant was asked to press a button to indicate his/her understanding of the sentence. Each participant was assigned a hand-shape, either closed or open, which had to be used to activate the button. There were two groups (depending on the assigned hand-shape) and three categories (compatible, incompatible and neutral) defined according to the compatibility between the response and the sentence. ACEs were found in both groups. Brain markers of semantic processing exhibited an N400-like component around the Cz electrode position. This component distinguished between compatible and incompatible trials, with a greater negative deflection for incompatible trials. The motor response elicited a motor potential (MP) and a re-afferent potential (RAP), both of which were enhanced in the compatible condition.
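A minimal sketch of how an N400-like condition difference at Cz could be quantified, assuming epochs stored as NumPy arrays and a conventional 300-500 ms measurement window (array layout, sampling rate, and window are assumptions, not the study's pipeline):

```python
# Illustrative sketch: mean-amplitude N400 difference at Cz between
# incompatible and compatible trials; a more negative value for
# incompatible trials reflects the N400-like effect described above.
import numpy as np

def n400_effect(compatible, incompatible, fs=500, t0_ms=200):
    """compatible/incompatible: (n_trials, n_samples) arrays at Cz, with
    epochs starting t0_ms before the critical word. Returns the
    incompatible - compatible mean amplitude in the 300-500 ms window."""
    def window_mean(epochs):
        erp = epochs.mean(axis=0)                    # trial-average ERP
        i0 = int((t0_ms + 300) * fs / 1000)
        i1 = int((t0_ms + 500) * fs / 1000)
        return erp[i0:i1].mean()
    return window_mean(incompatible) - window_mean(compatible)
```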

Conclusions/Significance

The present findings provide the first ACE cortical measurements of semantic processing and the motor response. The N400-like effects suggest that incompatibility with motor processes interferes with sentence comprehension in a semantic fashion. Modulation of the motor potentials (MP and RAP) revealed a multimodal semantic facilitation of the motor response. Both results provide neural evidence of a bidirectional action-sentence relationship. Our results suggest that the ACE is not an epiphenomenal post-comprehension process; rather, motor-language integration occurring at verb onset supports a genuine, ongoing brain motor-language interaction.

16.
We review the evidence that an ability to achieve a precise balance between representing the self and representing other people is crucial in social interaction. This ability is required for imitation, perspective-taking, theory of mind and empathy; and disruption to this ability may contribute to the symptoms of clinical and sub-clinical conditions, including autism spectrum disorder and mirror-touch synaesthesia. Moving beyond correlational approaches, a recent intervention study demonstrated that training participants to control representations of the self and others improves their ability to control imitative behaviour, and to take another's visual perspective. However, it is unclear whether these effects apply to other areas of social interaction, such as the ability to empathize with others. We report original data showing that participants trained to increase self–other control in the motor domain demonstrated increased empathic corticospinal responses (Experiment 1) and self-reported empathy (Experiment 2), as well as an increased ability to control imitation. These results suggest that the ability to control self and other representations contributes to empathy as well as to other types of social interaction.

17.
Recent evidence suggests that lexical-semantic activation spread during language production can be dynamically shaped by contextual factors. In this study we investigated whether semantic processing modes can also affect lexical-semantic activation during word production. Specifically, we tested whether the processing of linguistic ambiguities, presented in the form of puns, influences the co-activation of unrelated meanings of homophones in a subsequent language production task. In a picture-word interference paradigm with word distractors that were semantically related or unrelated to the non-depicted meanings of homophones, we found facilitation induced by related words only when participants listened to puns before object naming, but not when they heard jokes with unambiguous linguistic stimuli. This finding suggests that a semantic processing mode of ambiguity perception can induce the co-activation of alternative homophone meanings during speech planning.

18.
Janssen N, Barber HA. PLoS ONE. 2012;7(3):e33202.
A classic debate in the psychology of language concerns the grain size of the linguistic information that is stored in memory. One view is that only morphologically simple forms are stored (e.g., 'car', 'red'), and that more complex forms of language such as multi-word phrases (e.g., 'red car') are generated on-line from the simple forms. In two experiments we tested this view. In Experiment 1, participants produced noun+adjective and noun+noun phrases elicited by experimental displays consisting of colored line drawings and two superimposed line drawings. In Experiment 2, participants produced noun+adjective and determiner+noun+adjective utterances elicited by colored line drawings. In both experiments, naming latencies decreased with increasing frequency of the multi-word phrase and were unaffected by the frequency of the object name in the utterance. These results suggest that the language system is sensitive to the distribution of linguistic information at grain sizes beyond individual words.

19.
Given ample evidence for shared cortical structures involved in encoding actions, whether or not they are subsequently executed, a still unsolved problem is the identification of the neural mechanisms of motor inhibition that prevent "covert actions" such as motor imagery from being performed, despite activation of the motor system. The principal aims of the present study were to evaluate: 1) the presence of putative motor inhibitory mechanisms in covert actions such as motor imagery; 2) their underlying cerebral sources; and 3) their differences or similarities with respect to the cerebral networks underpinning the inhibition of overt actions during a Go/NoGo task. For these purposes, we performed a high-density EEG study evaluating the cerebral microstates and their related sources elicited during two types of Go/NoGo tasks, requiring the execution or withholding of an overt or a covert imagined action, respectively. Our results show for the first time the engagement during motor imagery of key nodes of a putative inhibitory network (including the pre-supplementary motor area and the right inferior frontal gyrus) that partially overlap with those activated for the inhibition of an overt action during the overt NoGo condition. At the same time, different patterns of temporal recruitment in these shared neural inhibitory substrates are shown, in accord with the intended overt or covert modality of action performance. The evidence that apparently divergent mechanisms, such as the controlled inhibition of overt actions and the contingent automatic inhibition of covert actions, share partially overlapping neural substrates further challenges the rigid dichotomy between conscious, explicit, flexible and unconscious, implicit, inflexible forms of motor behavioral control.

20.
Movement formulas, engrams, kinesthetic images and internal models of the body in action are notions derived mostly from clinical observations of brain-damaged subjects. They also suggest that the prehensile geometry of an object is integrated in neural circuits encoding both the object's graspable characteristics and its semantic properties. To determine whether there is a conjoined representation of the graspable characteristics of an object in relation to actual grasping, it is necessary to separate the graspable (low-level) from the semantic (high-level) properties of the object. Right-handed subjects were asked to grasp and lift a smooth 300-g cylinder with one hand, before and after judging the level of difficulty of a "grasping for pouring" action involving a smaller cylinder and the opposite hand. The results showed that simulated grasps with the right hand exerted a direct influence on actual motor acts with the left hand. These observations add to the evidence that there is a conjoined representation of the graspable characteristics of an object and the biomechanical constraints of the arm.
