相似文献 (Similar Literature)
 20 similar documents retrieved.
1.
During text reading, parafoveal words are usually located between 2° and 5° from the point of fixation. Whether semantic information of parafoveal words can be processed during sentence reading is a critical and long-standing issue. Recently, studies using the RSVP-flanker paradigm have shown that an incongruent parafoveal word, presented as the right flanker, elicits a more negative N400 compared with a congruent parafoveal word. This suggests that the semantic information of parafoveal words can be extracted and integrated during sentence reading, because the N400 effect is a classical index of semantic integration. However, as most previous studies did not control the word-pair congruency of the parafoveal and foveal words presented in the critical triad, it is still unclear whether such integration happens at the sentence level or just at the word-pair level. The present study addressed this question by manipulating verbs in Chinese sentences to yield either a semantically congruent or a semantically incongruent context for the critical noun. In particular, the interval between the critical nouns and verbs was controlled to be 4 or 5 characters. Thus, to detect the incongruence of the parafoveal noun, participants had to integrate it with the global sentential context. The results revealed that the N400 time-locked to the critical triads was more negative in incongruent than in congruent sentences, suggesting that parafoveal semantic information can be integrated at the sentence level during Chinese reading.
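The N400 congruency effect reported in studies like this one is typically quantified as the difference in mean ERP amplitude between incongruent and congruent trials within a post-stimulus window (often roughly 300–500 ms). The sketch below is only a minimal NumPy illustration of that computation on synthetic single-electrode epochs; the array names, sampling rate, and window boundaries are illustrative assumptions, not the authors' pipeline.

```python
import numpy as np

def mean_amplitude(epochs, times, t_start=0.300, t_end=0.500):
    """Mean amplitude over a time window.

    epochs : (n_trials, n_samples) array of single-trial EEG at one electrode, in microvolts
    times  : (n_samples,) array of sample times in seconds, time-locked to the critical word
    """
    window = (times >= t_start) & (times <= t_end)
    return epochs[:, window].mean()          # average over trials and samples in the window

# Hypothetical epoched data: 40 trials x 700 samples, starting at -200 ms, sampled at 500 Hz
rng = np.random.default_rng(0)
times = np.arange(-0.2, 1.2, 0.002)
congruent   = rng.normal(0.0, 2.0, (40, times.size))
incongruent = rng.normal(-1.0, 2.0, (40, times.size))   # more negative on average

n400_effect = mean_amplitude(incongruent, times) - mean_amplitude(congruent, times)
print(f"N400 congruency effect (incongruent - congruent): {n400_effect:.2f} uV")
```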

2.
This study investigated whether semantic integration in discourse context could be influenced by topic structure using event-related brain potentials. Participants read discourses in which the last sentence contained a critical word that was either congruent or incongruent with the topic established in the first sentence. The intervening sentences between the first and the last sentence of the discourse either maintained or shifted the original topic. Results showed that incongruent words in topic-maintained discourses elicited an N400 effect that was broadly distributed over the scalp, while those in topic-shifted discourses elicited an N400 effect that was lateralized to the right hemisphere and localized over central and posterior areas. Moreover, a late positivity effect was elicited only by incongruent words in topic-shifted discourses, not in topic-maintained discourses. This suggests an important role for discourse structure in semantic integration, such that, compared with topic-maintained discourses, the complexity of discourse structure in the topic-shifted condition reduces the initial stage of semantic integration and enhances the later stage in which a mental representation is updated.

3.
Although several cognitive processes, including speech processing, have been studied during sleep, working memory (WM) had not previously been explored. Our study assessed the capacity of WM by testing speech perception when the level of background noise and the sentential semantic length (SSL; the amount of semantic information required to perceive the incongruence of a sentence) were modulated. Speech perception was explored with the N400 component of the event-related potentials recorded to sentence-final words (50% semantically congruent with the sentence, 50% semantically incongruent). During sleep stage 2 and paradoxical sleep: (1) without noise, a larger N400 was observed for (short and long SSL) sentences ending with a semantically incongruent word compared to a congruent word (i.e. an N400 effect); (2) with moderate noise, the N400 effect (observed at wake with short and long SSL sentences) was attenuated for long SSL sentences. Our results suggest that WM for linguistic information is partially preserved during sleep, with a smaller capacity compared to wake.

4.
Event-related potentials were used to investigate whether semantic integration in discourse is influenced by the number of intervening sentences between the endpoints of integration. Readers read discourses in which the last sentence contained a critical word that was either congruent or incongruent with the information introduced in the first sentence. Furthermore, in the short discourses the first and last sentences were separated by only one intervening sentence, while in the long discourses they were separated by three. We found that the incongruent words elicited an N400 effect for both the short and long discourses. However, a P600 effect was observed only for the long discourses, not for the short ones. These results suggest that although readers can successfully integrate upcoming words into the existing discourse representation, the effort required for this integration process is modulated by the number of intervening sentences. Thus, discourse distance as measured by the number of intervening sentences should be taken as an important factor for semantic integration in discourse.

5.
To assess the effects of normal aging and senile dementia of the Alzheimer's type (SDAT) on semantic analysis of words, we examined the N400 component of the event-related potential (ERP) elicited during the processing of highly constrained (opposites) and less constrained materials (category exemplars) in 12 young control subjects, 12 elderly control subjects and 12 patients with SDAT. We employed a priming paradigm in which a context phrase was spoken and a target word (congruent or incongruent) was presented visually. The N400 effect was reduced in amplitude and delayed in the elderly control group relative to that of the younger subjects, and was further attenuated in amplitude, delayed in latency and somewhat flatter in its distribution across the scalp in the SDAT patients. These findings are consistent with less efficient processing and integration of lexical items with semantic context in normal aging, which is further exacerbated by SDAT. Differences in the N400 range associated with the opposite and category conditions were observed only in the young subjects, suggesting less use of controlled attentional resources or perhaps weaker associative links with age.

6.
Wang K, Cheung EF, Gong QY, Chan RC. PLoS ONE. 2011;6(10):e25435

Background

Theoretically, semantic processing can be separated into early automatic semantic activation and late contextualization. Semantic processing deficits have been suggested in patients with schizophrenia; however, it is not clear which stage of semantic processing is impaired. We attempted to clarify this issue by conducting a meta-analysis of the N400 component.

Methods

Twenty-one studies met the inclusion criteria for the meta-analysis procedure. The Comprehensive Meta-Analysis software package was used to compute pooled effect sizes and homogeneity.

Results

Studies favoring early automatic activation produced a significant effect size of −0.41 for the N400 effect. Studies favoring late contextualization generated a significant effect size of −0.36 for the N400 effect, a significant effect size of −0.52 for N400 for congruent/related target words, and a significant effect size of 0.82 for the N400 peak latency.

Conclusion

These findings suggest that the automatic spreading activation process in patients with schizophrenia is very similar for closely related concepts and for weakly or remotely related concepts, while late contextualization may be associated with impairments in processing semantically congruent context accompanied by slow processing speed.
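As a concrete illustration of how pooled effect sizes and a homogeneity statistic of the kind reported above can be computed, here is a minimal inverse-variance (fixed-effect) sketch in Python. It is not the Comprehensive Meta-Analysis package's implementation, and the study-level effect sizes and variances are hypothetical placeholders, not values from this meta-analysis.

```python
import numpy as np

def pooled_effect_size(d, v):
    """Inverse-variance (fixed-effect) pooled effect size with Cochran's Q homogeneity statistic.

    d : per-study effect sizes (e.g. Cohen's d or Hedges' g)
    v : per-study sampling variances
    """
    d, v = np.asarray(d, float), np.asarray(v, float)
    w = 1.0 / v                          # inverse-variance weights
    pooled = np.sum(w * d) / np.sum(w)   # weighted mean effect size
    se = np.sqrt(1.0 / np.sum(w))        # standard error of the pooled estimate
    q = np.sum(w * (d - pooled) ** 2)    # Cochran's Q, df = len(d) - 1
    return pooled, se, q

# Hypothetical study-level data
d = [-0.35, -0.50, -0.28, -0.45]
v = [0.04, 0.06, 0.05, 0.03]
pooled, se, q = pooled_effect_size(d, v)
print(f"pooled d = {pooled:.2f} (SE {se:.2f}), Q = {q:.2f}")
```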

7.
8.
9.
The dual-route model of speech processing includes a dorsal stream that maps auditory to motor features at the sublexical level rather than at the lexico-semantic level. However, the literature on gesture is an invitation to revise this model because it suggests that the premotor cortex of the dorsal route is a major site of lexico-semantic interaction. Here we investigated lexico-semantic mapping using word-gesture pairs that were either congruent or incongruent. Using fMRI-adaptation in 28 subjects, we found that temporo-parietal and premotor activity during auditory processing of single action words was modulated by the prior audiovisual context in which the words had been repeated. The BOLD signal was suppressed following repetition of the auditory word alone, and further suppressed following repetition of the word accompanied by a congruent gesture (e.g. [“grasp” + grasping gesture]). Conversely, repetition suppression was not observed when the same action word was accompanied by an incongruent gesture (e.g. [“grasp” + sprinkle]). We propose a simple model to explain these results: auditory and visual information converge onto the premotor cortex, where they are represented in a comparable format to determine (in)congruence between speech and gesture. This ability of the dorsal route to detect audiovisual semantic (in)congruence suggests that its function is not restricted to the sublexical level.

10.
It has been proposed that actions are intrinsically linked to perception and that imagining, observing, preparing, or in any way representing an action excites the motor programs used to execute that same action. There is neurophysiological evidence that certain brain regions involved in executing actions are activated by the mere observation of action (the so-called "mirror system"). However, it is unknown whether this mirror system causes interference between observed and simultaneously executed movements. In this study we test the hypothesis that, because of the overlap between action observation and execution, observed actions should interfere with incongruous executed actions. Subjects made arm movements while observing either a robot or another human making the same or qualitatively different arm movements. Variance in the executed movement was measured as an index of interference to the movement. The results demonstrate that observing another human making incongruent movements has a significant interference effect on executed movements. However, we found no evidence that this interference effect occurred when subjects observed a robotic arm making incongruent movements. These results suggest that the simultaneous activation of the overlapping neural networks that process movement observation and execution incurs a measurable cost to motor control.
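The interference index described above is based on the across-trial variance of the executed movement. Below is a minimal sketch of one way such an index could be computed from recorded trajectories; the data, trial counts, and choice of axis are hypothetical assumptions, not the authors' exact measure.

```python
import numpy as np

def movement_variance(trajectories):
    """Across-trial variance of hand position, averaged over time.

    trajectories : (n_trials, n_samples) array of hand position along one axis
                   (e.g. the axis orthogonal to the instructed movement direction)
    """
    return np.var(trajectories, axis=0).mean()   # variance across trials at each time point, then averaged

# Hypothetical executed movements recorded while observing congruent vs incongruent movements
rng = np.random.default_rng(1)
congruent_trials   = rng.normal(0.0, 1.0, (20, 200))
incongruent_trials = rng.normal(0.0, 1.5, (20, 200))   # noisier trajectories, i.e. more interference

print("congruent variance:  ", round(movement_variance(congruent_trials), 2))
print("incongruent variance:", round(movement_variance(incongruent_trials), 2))
```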

11.
Neuroscience research during the past ten years has fundamentally changed the traditional view of the motor system. In monkeys, the finding that premotor neurons also discharge during visual stimulation (visuomotor neurons) raises new hypotheses about the putative role played by motor representations in perceptual functions. Among visuomotor neurons, mirror neurons might be involved in understanding the actions of others and might, therefore, be crucial in interindividual communication. Functional brain imaging studies enabled us to localize the human mirror system, but the demonstration that the motor cortex dynamically replicates observed actions, as if they were executed by the observer, can only be given by fast and focal measurements of cortical activity. Transcranial magnetic stimulation (TMS) enables us to instantaneously estimate corticospinal excitability, and has been used to study the human mirror system at work during the perception of actions performed by other individuals. In the past ten years several TMS experiments have investigated the involvement of the motor system during the observation of others' actions. The results suggest that when we observe another individual acting we strongly 'resonate' with his or her action. In other words, our motor system simulates the observed action sub-threshold, in a strictly congruent fashion. The involved muscles are the same as those used in the observed action, and their activation is temporally strictly coupled with the dynamics of the observed action.

12.
Using the event-related optical signal (EROS) technique, this study investigated the dynamics of semantic brain activation during sentence comprehension. Participants read sentences constituent-by-constituent and made a semantic judgment at the end of each sentence. EROS data were recorded simultaneously with ERPs and time-locked to expected or unexpected sentence-final target words. The unexpected words evoked a larger N400 and a late positivity than the expected ones. Critically, the EROS results revealed activations first in the left posterior middle temporal gyrus (LpMTG) between 128 and 192 ms, then in the left anterior inferior frontal gyrus (LaIFG), the left middle frontal gyrus (LMFG), and the LpMTG in the N400 time window, and finally in the left posterior inferior frontal gyrus (LpIFG) between 832 and 864 ms. Also, expected words elicited greater activation than unexpected words in the left anterior temporal lobe (LATL) between 192 and 256 ms. These results suggest that the early lexical-semantic retrieval reflected by the LpMTG activation is followed by two different semantic integration processes: a relatively rapid and transient integration in the LATL and a relatively slow but enduring integration in the LaIFG/LMFG and the LpMTG. The late activation in the LpIFG, however, may reflect cognitive control.

13.

Background

Alexithymia, a condition characterized by deficits in interpreting and regulating feelings, is a risk factor for a variety of psychiatric conditions. Little is known about how alexithymia influences the processing of emotions in music and speech. Appreciation of such emotional qualities in auditory material is fundamental to human experience and has profound consequences for functioning in daily life. We investigated the neural signature of such emotional processing in alexithymia by means of event-related potentials.

Methodology

Affective music and speech prosody were presented as targets following affectively congruent or incongruent visual word primes in two conditions. In two further conditions, affective music and speech prosody served as primes and visually presented words with affective connotations were presented as targets. Thirty-two participants (16 male) judged the affective valence of the targets. We tested the influence of alexithymia on cross-modal affective priming and on N400 amplitudes, indicative of individual sensitivity to an affective mismatch between words, prosody, and music. Our results indicate that the affective priming effect for prosody targets tended to be reduced with increasing scores on alexithymia, while no behavioral differences were observed for music and word targets. At the electrophysiological level, alexithymia was associated with significantly smaller N400 amplitudes in response to affectively incongruent music and speech targets, but not to incongruent word targets.

Conclusions

Our results suggest a reduced sensitivity for the emotional qualities of speech and music in alexithymia during affective categorization. This deficit becomes evident primarily in situations in which a verbalization of emotional information is required.

14.
A portion of Stroop interference is thought to arise from a failure to maintain goal-oriented behaviour (or goal neglect). The aim of the present study was to investigate whether goal-relevant primes could enhance goal maintenance and reduce the Stroop interference effect. Here it is shown that primes related to the goal of responding quickly in the Stroop task (e.g. fast, quick, hurry) substantially reduced Stroop interference by reducing reaction times to incongruent trials but increasing reaction times to congruent and neutral trials. No effects of the primes were observed on errors. The effects on incongruent, congruent and neutral trials are explained in terms of the influence of the primes on goal maintenance. The results show that goal priming can facilitate goal-oriented behaviour and indicate that automatic processing can modulate executive control.
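Stroop interference of the kind discussed above is conventionally computed as the mean reaction-time cost of incongruent relative to congruent trials. The short Python sketch below illustrates that arithmetic on hypothetical reaction times; the numbers are placeholders chosen only to mirror the qualitative pattern described (goal primes slowing congruent trials and speeding incongruent ones), not data from the study.

```python
import numpy as np

def stroop_interference(rt_incongruent, rt_congruent):
    """Stroop interference as the mean reaction-time cost of incongruent trials (ms)."""
    return np.mean(rt_incongruent) - np.mean(rt_congruent)

# Hypothetical reaction times (ms) under neutral vs goal-related ("fast", "quick", ...) priming
rt = {
    "neutral_prime": {"congruent": [620, 605, 640], "incongruent": [720, 735, 710]},
    "goal_prime":    {"congruent": [650, 660, 645], "incongruent": [690, 700, 685]},
}
for prime, cond in rt.items():
    effect = stroop_interference(cond["incongruent"], cond["congruent"])
    print(f"{prime}: interference = {effect:.0f} ms")
```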

15.
The idea that language can affect how we see the world continues to create controversy. A potentially important study in this field has shown that when an object is suppressed from visual awareness using continuous flash suppression (a form of binocular rivalry), detection of the object is differently affected by a preceding word prime depending on whether the prime matches or does not match the object. This may suggest that language can affect early stages of vision. We replicated this paradigm and further investigated whether colour terms likewise influence the detection of colours or colour-associated object images suppressed from visual awareness by continuous flash suppression. This method presents rapidly changing visual noise to one eye while the target stimulus is presented to the other. It has been shown to delay conscious perception of a target for up to several minutes. In Experiment 1 we presented greyscale photos of objects. They were either preceded by a congruent object label, an incongruent label, or white noise. Detection sensitivity (d’) and hit rates were significantly poorer for suppressed objects preceded by an incongruent label compared to a congruent label or noise. In Experiment 2, targets were coloured discs preceded by a colour term. Detection sensitivity was significantly worse for suppressed colour patches preceded by an incongruent colour term as compared to a congruent term or white noise. In Experiment 3 targets were suppressed greyscale object images preceded by an auditory presentation of a colour term. On congruent trials the colour term matched the object’s stereotypical colour and on incongruent trials the colour term mismatched. Detection sensitivity was significantly poorer on incongruent trials than congruent trials. Overall, these findings suggest that colour terms affect awareness of coloured stimuli and colour-associated objects, and provide new evidence for language-perception interaction in the brain.
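Detection sensitivity (d′) as used in these experiments is standardly computed as the difference between the z-transformed hit and false-alarm rates. Below is a small, self-contained Python sketch of that formula with a common correction for extreme rates; the trial counts are hypothetical and not taken from the study.

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Detection sensitivity d' = z(hit rate) - z(false-alarm rate).

    A log-linear correction is applied so that rates of 0 or 1 do not produce infinite z-scores.
    """
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# Hypothetical trial counts for congruent vs incongruent prime conditions
print("congruent d':  ", round(d_prime(hits=42, misses=8, false_alarms=6, correct_rejections=44), 2))
print("incongruent d':", round(d_prime(hits=31, misses=19, false_alarms=7, correct_rejections=43), 2))
```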

16.
Visuomotor interference occurs when the execution of an action is facilitated by the concurrent observation of the same action and hindered by the concurrent observation of a different action. There is evidence that visuomotor interference can be modulated top-down by higher cognitive functions, depending on whether own performed actions or observed actions are selectively attended. Here, we studied whether these effects of cognitive context on visuomotor interference are also dependent on the point-of-view of the observed action. We employed a delayed go/no-go task known to induce visuomotor interference. Static images of hand gestures in either egocentric or allocentric perspective were presented as “go” stimuli after participants were pre-cued to prepare either a matching (congruent) or non-matching (incongruent) action. Participants performed this task in two different cognitive contexts: In one, they focused on the visual image of the hand gesture shown as the go stimulus (image context), whereas in the other they focused on the hand gesture they performed (action context). We analyzed reaction times to initiate the prepared action upon presentation of the gesture image and found evidence of visuomotor interference in both contexts and for both perspectives. Strikingly, results show that the effect of cognitive context on visuomotor interference also depends on the perspective of observed actions. When focusing on their own actions, participants showed significantly less visuomotor interference for gesture images in allocentric perspective than in egocentric perspective; when focusing on observed actions, visuomotor interference was present regardless of the perspective of the gesture image. Overall these data suggest that visuomotor interference may be modulated by higher cognitive processes, so that when we are specifically attending to our own actions, images depicting others’ actions (allocentric perspective) exert much less interference on our own actions.

17.
How pictures and words are stored and processed in the human brain constitutes a long-standing question in cognitive psychology. Behavioral studies have yielded a large amount of data addressing this issue. Generally speaking, these data show that there are some interactions between the semantic processing of pictures and words. However, behavioral methods can provide only limited insight into certain findings. Fortunately, event-related potentials (ERPs) provide on-line cues about the temporal nature of cognitive processes and contribute to the exploration of their neural substrates. ERPs have been used in order to better understand semantic processing of words and pictures. The main objective of this article is to offer an overview of the electrophysiological bases of semantic processing of words and pictures. Studies presented in this article showed that the processing of words is associated with an N400 component, whereas pictures elicited both N300 and N400 components. Topographical analysis of the N400 distribution over the scalp is compatible with the idea that both image-mediated concrete words and pictures access an amodal semantic system. However, given the distinctive N300 patterns, observed only during picture processing, it appears that picture and word processing rely upon distinct neuronal networks, even if they end up activating more or less similar semantic representations.

18.
Li Y, Wang G, Long J, Yu Z, Huang B, Li X, Yu T, Liang C, Li Z, Sun P. PLoS ONE. 2011;6(6):e20801
One of the central questions in cognitive neuroscience is the precise neural representation, or brain pattern, associated with a semantic category. In this study, we explored the influence of audiovisual stimuli on the brain patterns of concepts or semantic categories through a functional magnetic resonance imaging (fMRI) experiment. We used a pattern search method to extract brain patterns corresponding to two semantic categories: "old people" and "young people." These brain patterns were elicited by semantically congruent audiovisual, semantically incongruent audiovisual, unimodal visual, and unimodal auditory stimuli belonging to the two semantic categories. We calculated the reproducibility index, which measures the similarity of the patterns within the same category. We also decoded the semantic categories from these brain patterns. The decoding accuracy reflects the discriminability of the brain patterns between the two categories. The results showed that both the reproducibility index of brain patterns and the decoding accuracy were significantly higher for semantically congruent audiovisual stimuli than for unimodal visual and unimodal auditory stimuli, while the semantically incongruent stimuli did not elicit brain patterns with a significantly higher reproducibility index or decoding accuracy. Thus, the semantically congruent audiovisual stimuli enhanced the within-class reproducibility and the between-class discriminability of brain patterns, facilitating neural representations of semantic categories or concepts. Furthermore, we analyzed the brain activity in the superior temporal sulcus and middle temporal gyrus (STS/MTG). The strength of the fMRI signal and the reproducibility index were enhanced by the semantically congruent audiovisual stimuli. Our results support the use of the reproducibility index as a potential tool to supplement the fMRI signal amplitude for evaluating multimodal integration.
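The reproducibility index and decoding accuracy described above can be illustrated with a simple sketch: one plausible formulation of within-class reproducibility is the mean pairwise correlation between run-wise patterns, and a basic leave-one-out nearest-centroid classifier yields a between-class decoding accuracy. The Python below uses synthetic voxel patterns and is only an assumption-laden illustration, not the authors' analysis pipeline.

```python
import numpy as np

def reproducibility_index(patterns):
    """Mean pairwise Pearson correlation between brain patterns of the same category.

    patterns : (n_runs, n_voxels) array, one pattern per run/measurement
    """
    r = np.corrcoef(patterns)                    # run-by-run correlation matrix
    return r[np.triu_indices_from(r, k=1)].mean()  # average of off-diagonal (pairwise) correlations

def decode_leave_one_out(patterns_a, patterns_b):
    """Leave-one-out nearest-centroid decoding accuracy for two categories (0 = a, 1 = b)."""
    correct, total = 0, 0
    for i in range(len(patterns_a)):             # test each pattern of category a
        c_a = np.delete(patterns_a, i, axis=0).mean(axis=0)
        c_b = patterns_b.mean(axis=0)
        pred = 0 if np.linalg.norm(patterns_a[i] - c_a) <= np.linalg.norm(patterns_a[i] - c_b) else 1
        correct += int(pred == 0)
        total += 1
    for i in range(len(patterns_b)):             # test each pattern of category b
        c_a = patterns_a.mean(axis=0)
        c_b = np.delete(patterns_b, i, axis=0).mean(axis=0)
        pred = 0 if np.linalg.norm(patterns_b[i] - c_a) <= np.linalg.norm(patterns_b[i] - c_b) else 1
        correct += int(pred == 1)
        total += 1
    return correct / total

# Hypothetical run-wise patterns (8 runs x 500 voxels): shared category pattern plus run noise
rng = np.random.default_rng(2)
old_people   = rng.normal(0, 1, (8, 500)) + rng.normal(0, 1, 500)
young_people = rng.normal(0, 1, (8, 500)) + rng.normal(0, 1, 500)

print("reproducibility (old): ", round(reproducibility_index(old_people), 2))
print("decoding accuracy:     ", round(decode_leave_one_out(old_people, young_people), 2))
```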

19.
20.
Visual crowding—the inability to see an object when it is surrounded by flankers in the periphery—does not block semantic activation: words that are unrecognizable due to visual crowding still generate robust semantic priming in subsequent lexical decision tasks. Based on this previous finding, the current study further explored whether unrecognizable crowded words can be temporally integrated into a phrase. By showing one word at a time, we presented Chinese four-word idioms with either a congruent or incongruent ending word in order to examine whether the three preceding crowded words can be temporally integrated to form a semantic context and thereby affect the processing of the ending word. Results from both behavioral (Experiment 1) and event-related potential (Experiments 2 and 3) measures showed a congruency effect only in the non-crowded condition, which does not support the existence of unconscious multi-word integration. Aside from four-word idioms, we also found that two-word (modifier + adjective combination) integration—the simplest kind of temporal semantic integration—did not occur under visual crowding (Experiment 4). Our findings suggest that integration of temporally separated words might require conscious awareness, at least under the timing conditions tested in the current study.

