Similar literature
20 similar documents were retrieved.
1.
Numerous studies have reported subliminal repetition and semantic priming in the visual modality. We transferred this paradigm to the auditory modality. Prime awareness was manipulated by a reduction of sound intensity level. Uncategorized prime words (according to a post-test) were followed by semantically related, unrelated, or repeated target words (presented without intensity reduction) and participants performed a lexical decision task (LDT). Participants with slower reaction times in the LDT showed semantic priming (faster reaction times for semantically related compared to unrelated targets) and negative repetition priming (slower reaction times for repeated compared to semantically related targets). This is the first report of semantic priming in the auditory modality without conscious categorization of the prime.

2.
Using functional magnetic resonance imaging during a primed visual lexical decision task, we investigated the neural and functional mechanisms underlying modulations of semantic word processing through hypnotic suggestions aimed at altering lexical processing of primes. The priming task was to discriminate between target words and pseudowords presented 200 ms after the prime word, which was semantically related or unrelated to the target. In a counterbalanced study design, each participant performed the task once at normal wakefulness and once after the administration of hypnotic suggestions to perceive the prime as a meaningless symbol of a foreign language. Neural correlates of priming were defined as significantly lower activations on semantically related compared to unrelated trials. We found significant reductions in neural priming induced by the suggestive treatment, albeit irrespective of the degree of suggestibility. Neural priming was attenuated under the suggestive treatment compared with normal wakefulness in brain regions supporting automatic (fusiform gyrus) and controlled semantic processing (superior and middle temporal gyri, pre- and postcentral gyri, and supplementary motor area). Hence, suggestions reduced semantic word processing by conjointly dampening both automatic and strategic semantic processes.

3.
When and how do infants develop a semantic system of words that are related to each other? We investigated word–word associations in early lexical development using an adaptation of the inter-modal preferential looking task where word pairs (as opposed to single target words) were used to direct infants’ attention towards a target picture. Two words (prime and target) were presented in quick succession after which infants were presented with a picture pair (target and distracter). Prime–target word pairs were either semantically and associatively related or unrelated; the targets were either named or unnamed. Experiment 1 demonstrated a lexical–semantic priming effect for 21-month olds but not for 18-month olds: unrelated prime words interfered with linguistic target identification for 21-month olds. Follow-up experiments confirmed the interfering effects of unrelated prime words and identified the existence of repetition priming effects as young as 18 months of age. The results of these experiments indicate that infants have begun to develop semantic–associative links between lexical items as early as 21 months of age.

4.
In this paper we present a novel theory of the cognitive and neural processes by which adults learn new spoken words. This proposal builds on neurocomputational accounts of lexical processing and spoken word recognition and complementary learning systems (CLS) models of memory. We review evidence from behavioural studies of word learning that, consistent with the CLS account, show two stages of lexical acquisition: rapid initial familiarization followed by slow lexical consolidation. These stages map broadly onto two systems involved in different aspects of word learning: (i) rapid, initial acquisition supported by medial temporal and hippocampal learning, (ii) slower neocortical learning achieved by offline consolidation of previously acquired information. We review behavioural and neuroscientific evidence consistent with this account, including a meta-analysis of PET and functional Magnetic Resonance Imaging (fMRI) studies that contrast responses to spoken words and pseudowords. From this meta-analysis we derive predictions for the location and direction of cortical response changes following familiarization with pseudowords. This allows us to assess evidence for learning-induced changes that convert pseudoword responses into real word responses. Results provide unique support for the CLS account since hippocampal responses change during initial learning, whereas cortical responses to pseudowords only become word-like if overnight consolidation follows initial learning.
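The two-stage CLS account lends itself to a toy illustration. The sketch below is a minimal, strongly simplified Python illustration, not the authors' model; the store/weight names, dimensions, and learning rate are all hypothetical. It only demonstrates the division of labour the abstract describes: a fast system that stores a new word form in one exposure, and a slow set of weights that only comes to resemble the word after offline replay.

```python
import numpy as np

rng = np.random.default_rng(1)
dim = 50                                     # toy lexical feature space (hypothetical)

hippocampal_store = []                       # fast system: one-shot episodic traces
cortical_weights = np.zeros(dim)             # slow system: gradually trained lexicon
CORTICAL_LR = 0.05                           # small learning rate -> slow integration

def familiarize(word_vector):
    """Initial exposure: the new word form is stored only in the fast system."""
    hippocampal_store.append(word_vector)

def consolidate_offline(n_replays=100):
    """Offline replay (e.g. overnight) gradually trains the slow cortical system."""
    global cortical_weights
    for _ in range(n_replays):
        trace = hippocampal_store[rng.integers(len(hippocampal_store))]
        cortical_weights += CORTICAL_LR * (trace - cortical_weights)

new_word = rng.normal(size=dim)
familiarize(new_word)                        # rapid initial familiarization
print("cortical mismatch before consolidation:",
      round(float(np.linalg.norm(cortical_weights - new_word)), 2))
consolidate_offline()                        # slow lexical consolidation
print("cortical mismatch after consolidation: ",
      round(float(np.linalg.norm(cortical_weights - new_word)), 2))
```

Before replay the slow weights carry no trace of the new item; after many replays they converge on it, mirroring the familiarization-then-consolidation sequence.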

5.
The results of research on the processing of morphologically complex words are consistent with a lexical system that activates both whole-word and constituent representations during word recognition. In this study, we focus on written production and examine whether semantically priming the first constituent of a compound influences the ease of producing a compound (as measured by typing latencies), and whether any such priming effect depends on the semantic transparency of the compound’s constituents. We found that semantic transparency of the constituents affects whether semantic priming results in changes to processing. However, it is not only the semantic transparency of the primed constituent that exerts an influence—for example, the semantic transparency of the head affects whether semantically priming the modifier results in a change in typing times. We discuss these effects in terms of competition among the various representations as the compound is output, such that overall performance is a combination of facilitation and inhibition that changes over the course of the output.

6.
Spatiotemporal dynamics of modality-specific and supramodal word processing
The ability of written and spoken words to access the same semantic meaning provides a test case for the multimodal convergence of information from sensory to associative areas. Using anatomically constrained magnetoencephalography (aMEG), the present study investigated the stages of word comprehension in real time in the auditory and visual modalities, as subjects participated in a semantic judgment task. Activity spread from the primary sensory areas along the respective ventral processing streams and converged in anterior temporal and inferior prefrontal regions, primarily on the left at around 400 ms. Comparison of response patterns during repetition priming between the two modalities suggests that they are initiated by modality-specific memory systems, but that they are eventually elaborated mainly in supramodal areas.

7.
Neural network models describe semantic priming effects by way of mechanisms of activation of neurons coding for words that rely strongly on synaptic efficacies between pairs of neurons. Biologically inspired Hebbian learning defines efficacy values as a function of the activity of pre- and post-synaptic neurons only. It generates only pair associations between words in the semantic network. However, the statistical analysis of large text databases points to the frequent occurrence not only of pairs of words (e.g., “the way”) but also of patterns of more than two words (e.g., “by the way”). The learning of these frequent patterns of words is not reducible to associations between pairs of words but must take into account the higher level of coding of three-word patterns. The processing and learning of patterns of words challenge classical Hebbian learning algorithms used in biologically inspired models of priming. The aim of the present study was to test the effects of patterns on the semantic processing of words and to investigate how an inter-synaptic learning algorithm succeeds at reproducing the experimental data. The experiment manipulates the frequency of occurrence of patterns of three words in a multiple-paradigm protocol. Results show for the first time that target words benefit from more priming when embedded in a pattern with the two primes than when only associated with each prime in pairs. A biologically inspired inter-synaptic learning algorithm is tested that potentiates synapses as a function of the activation of more than two pre- and post-synaptic neurons. Simulations show that the network can learn patterns of three words to reproduce the experimental results.
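The contrast between pairwise and higher-order learning can be made concrete with a short sketch. This is an illustration of the general idea only, not the published inter-synaptic algorithm; the word set, update rules, and learning rate are assumptions. A pairwise Hebbian rule strengthens every co-active word pair and therefore cannot tell a genuine three-word pattern from repeated pair co-occurrence, whereas a higher-order term that requires three co-active units can.

```python
import numpy as np

words = ["by", "the", "way"]
idx = {w: i for i, w in enumerate(words)}
n = len(words)

W_pair = np.zeros((n, n))        # classical Hebbian: pairwise co-activation only
W_triple = np.zeros((n, n, n))   # toy higher-order term: needs three co-active units

def present(pattern, lr=1.0):
    """Potentiate weights for the set of words active together in `pattern`."""
    x = np.zeros(n)
    x[[idx[w] for w in pattern]] = 1.0
    W_pair[:] += lr * (np.outer(x, x) - np.diag(x))   # strengthen every co-active pair
    for i in range(n):                                # hypothetical inter-synaptic term:
        for j in range(n):                            # the synapse i -> j is additionally
            for k in range(n):                        # gated by a third co-active unit k
                if len({i, j, k}) == 3:
                    W_triple[i, j, k] += lr * x[i] * x[j] * x[k]

present(["the", "way"])          # a frequent word pair
present(["by", "the", "way"])    # a frequent three-word pattern

print("pairwise weight the->way:   ", W_pair[idx["the"], idx["way"]])              # 2.0: blind to the pattern
print("triple weight (by,the,way): ", W_triple[idx["by"], idx["the"], idx["way"]])  # 1.0: pattern-specific
```

Only the third-order weight separates the full pattern from its embedded pair, which is the sense in which pattern learning is "not reducible to associations between pairs of words".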

8.
Multiple semantic priming processes between several related and/or unrelated words are at work during the processing of sequences of words. Multiple priming generates rich dynamics of effects depending on the relationship between the target word and the first and/or second prime previously presented. The experimental literature suggests that during the on-line processing of the primes, the activation can shift from associates of the first prime to associates of the second prime. Though the semantic priming shift is central to the on-line and rapid updating of word meanings in working memory, its precise dynamics are still poorly understood and it is still a challenge to model how it functions in the cerebral cortex. Four multiple priming experiments are proposed that cross-manipulate delays and association strength between the primes and the target. Results show for the first time that association strength determines complex dynamics of the semantic priming shift, ranging from an absence of a shift to a complete shift. A cortical network model of spike-frequency adaptive neuron populations is proposed to account for the non-continuous evolution of the priming shift over time. It allows linking the dynamics of the priming shift assessed at the behavioral level to the non-linear dynamics of the firing rates of neuron populations.
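As a purely illustrative sketch of the mechanism invoked here (not the authors' cortical network; the two-population setup, time constants, and gain are assumptions), spike-frequency adaptation can be captured by letting each population's rate be driven by its input minus a slowly accumulating adaptation current. A population driven early (associates of the first prime) adapts and decays, so a population driven later (associates of the second prime) can end up more active, which is the basic ingredient of a priming shift.

```python
import numpy as np

dt, T = 1.0, 400                      # simulation step and duration in ms (hypothetical)
tau_r, tau_a, g_a = 10.0, 100.0, 1.0  # rate / adaptation time constants and adaptation gain

def simulate(input_onsets):
    """Two populations; each receives a unit step input starting at its onset time (ms)."""
    r = np.zeros(2)                   # population firing rates
    a = np.zeros(2)                   # adaptation currents
    rates = []
    for t in np.arange(0.0, T, dt):
        I = np.array([1.0 if t >= onset else 0.0 for onset in input_onsets])
        r += dt / tau_r * (-r + np.maximum(I - g_a * a, 0.0))  # rectified rate dynamics
        a += dt / tau_a * (-a + r)                             # adaptation tracks the rate
        rates.append(r.copy())
    return np.array(rates)

# Population 0 ~ associates of the first prime (input from 0 ms),
# population 1 ~ associates of the second prime (input from 200 ms).
rates = simulate(input_onsets=(0.0, 200.0))
print("t= 30 ms:", np.round(rates[30], 2))   # pop 0 strongly active, little adaptation yet
print("t=180 ms:", np.round(rates[180], 2))  # pop 0 has adapted downward
print("t=230 ms:", np.round(rates[230], 2))  # recently driven pop 1 now exceeds the adapted pop 0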

9.
To determine when and how L2 learners start to process L2 words affectively and semantically, we conducted a longitudinal study on their interaction in adult L2 learners. In four test sessions, spanning half a year of L2 learning, we monitored behavioral and ERP learning-related changes for one and the same set of words by means of a primed lexical-decision paradigm with L1 primes and L2 targets. Sensitivity rates, accuracy rates, RTs, and N400 amplitude to L2 words and pseudowords improved significantly across sessions. A semantic priming effect (e.g., prime “driver” facilitating the response to target “street”) was found in accuracy rates and RTs when collapsing Sessions 1 to 4, while this effect modulated ERP amplitudes within the first 300 ms of L2 target processing. An overall affective priming effect (e.g., “sweet” facilitating “taste”) was also found in RTs and ERPs (posterior P1). Importantly, the ERPs showed an L2 valence effect across sessions (e.g., positive words were easier to process than neutral words), indicating that L2 learners were sensitive to L2 affective meaning. Semantic and affective priming interacted in the N400 time window only in Session 4, implying that they jointly affected meaning integration during L2 immersion. The results suggest that L1 and L2 are initially processed semantically and affectively via relatively separate channels that become more and more linked contingent on L2 exposure.

10.
During text reading, the parafoveal word is usually located between 2° and 5° from the point of fixation. Whether semantic information of parafoveal words can be processed during sentence reading is a critical and long-standing issue. Recently, studies using the RSVP-flanker paradigm have shown that an incongruent parafoveal word, presented as the right flanker, elicited a more negative N400 compared with a congruent parafoveal word. This suggests that the semantic information of parafoveal words can be extracted and integrated during sentence reading, because the N400 effect is a classical index of semantic integration. However, as most previous studies did not control the word-pair congruency of the parafoveal and the foveal words that were presented in the critical triad, it is still unclear whether such integration happened at the sentence level or just at the word-pair level. The present study addressed this question by manipulating verbs in Chinese sentences to yield either a semantically congruent or semantically incongruent context for the critical noun. In particular, the interval between the critical nouns and verbs was controlled to be 4 or 5 characters. Thus, to detect the incongruence of the parafoveal noun, participants had to integrate it with the global sentential context. The results revealed that the N400 time-locked to the critical triads was more negative in incongruent than in congruent sentences, suggesting that parafoveal semantic information can be integrated at the sentence level during Chinese reading.

11.
This paper focuses on what electrical and magnetic recordings of human brain activity reveal about spoken language understanding. Based on the high temporal resolution of these recordings, a fine-grained temporal profile of different aspects of spoken language comprehension can be obtained. Crucial aspects of speech comprehension are lexical access, selection and semantic integration. Results show that for words spoken in context, there is no 'magic moment' when lexical selection ends and semantic integration begins. Irrespective of whether words have early or late recognition points, semantic integration processing is initiated before words can be identified on the basis of the acoustic information alone. Moreover, for one particular event-related brain potential (ERP) component (the N400), equivalent impact of sentence- and discourse-semantic contexts is observed. This indicates that in comprehension, a spoken word is immediately evaluated relative to the widest interpretive domain available. In addition, this happens very quickly. Findings are discussed that show that often an unfolding word can be mapped onto discourse-level representations well before the end of the word. Overall, the time course of the ERP effects is compatible with the view that the different information types (lexical, syntactic, phonological, pragmatic) are processed in parallel and influence the interpretation process incrementally, that is as soon as the relevant pieces of information are available. This is referred to as the immediacy principle.

12.
Visual crowding—the inability to see an object when it is surrounded by flankers in the periphery—does not block semantic activation: unrecognizable words due to visual crowding still generated robust semantic priming in subsequent lexical decision tasks. Based on this finding, the current study further explored whether unrecognizable crowded words can be temporally integrated into a phrase. By showing one word at a time, we presented Chinese four-word idioms with either a congruent or incongruent ending word in order to examine whether the three preceding crowded words can be temporally integrated to form a semantic context so as to affect the processing of the ending word. Results from both behavioral (Experiment 1) and event-related potential (Experiments 2 and 3) measures showed a congruency effect only in the non-crowded condition, which does not support the existence of unconscious multi-word integration. Aside from four-word idioms, we also found that two-word (modifier + adjective combination) integration—the simplest kind of temporal semantic integration—did not occur under visual crowding (Experiment 4). Our findings suggest that integration of temporally separated words might require conscious awareness, at least under the timing conditions tested in the current study.

13.
I. Lerner, S. Bentin, O. Shriki. PLoS ONE, 2012, 7(7): e40663
One of the most pervasive findings in studies of schizophrenics with thought disorders is their peculiar pattern of semantic priming, which presumably reflects abnormal associative processes in the semantic system of these patients. Semantic priming is manifested by faster and more accurate recognition of a target word when preceded by a semantically related prime, relative to an unrelated prime condition. Compared to controls, semantic priming in schizophrenics is characterized by reduced priming effects at long prime-target Stimulus Onset Asynchrony (SOA) and, sometimes, augmented priming at short SOA. In addition, unlike controls, schizophrenics consistently show indirect (mediated) priming (such as from the prime 'wedding' to the target 'finger', mediated by 'ring'). In a previous study, we developed a novel attractor neural network model with synaptic adaptation mechanisms that could account for semantic priming patterns in healthy individuals. Here, we examine the consequences of introducing attractor instability to this network, which is hypothesized to arise from the dysfunctional synaptic transmission known to occur in schizophrenia. In two simulated experiments, we demonstrate how such instability speeds up the network's dynamics and, consequently, produces the full spectrum of priming effects previously reported in patients. The model also explains the inconsistency of augmented priming results at short SOAs using directly related pairs relative to the consistency of indirect priming. Further, we discuss how the same mechanism could account for other symptoms of the disease, such as derailment ('loose associations') or the commonly seen difficulty of patients in utilizing context. Finally, we show how the model can statistically implement the overly broad wave of spreading activation previously presumed to characterize thought disorders in schizophrenia.
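The destabilization argument can be illustrated with a minimal Hopfield-style sketch. This is a toy illustration only, not the published attractor model; the adaptation rule, gain values, and network size are assumptions. An adaptation variable that tracks each unit's activity opposes the current state, and the larger its gain, the sooner the network escapes the cued attractor, i.e. the faster its dynamics become.

```python
import numpy as np

rng = np.random.default_rng(0)
N, P = 200, 5
patterns = rng.choice([-1, 1], size=(P, N))       # word-like stored attractors
W = (patterns.T @ patterns) / N                   # Hebbian weight matrix
np.fill_diagonal(W, 0)

def steps_to_leave(adaptation_gain, max_steps=60):
    """Cue the network with pattern 0 and count update steps until it leaves that attractor."""
    s = patterns[0].astype(float).copy()
    a = np.zeros(N)                               # per-unit adaptation variable
    for t in range(max_steps):
        a += 0.2 * (s - a)                        # adaptation slowly tracks activity
        s = np.sign(W @ s - adaptation_gain * a)  # adaptation opposes the current state
        s[s == 0] = 1.0
        if patterns[0] @ s / N < 0.9:             # overlap with the cued pattern has dropped
            return t + 1
    return None                                   # the attractor remained stable

for g in (0.0, 1.1, 1.4):
    result = steps_to_leave(g)
    print(f"adaptation gain {g}: " +
          (f"leaves the cued attractor after {result} steps" if result else "no escape (stable)"))
```

Stronger adaptation makes the state leave attractors sooner, which is the sense in which instability "speeds up" the dynamics in the abstract above.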

14.
Psycholinguistic studies of sign language processing provide valuable opportunities to assess whether language phenomena, which are primarily studied in spoken language, are fundamentally shaped by peripheral biology. For example, we know that when given a choice between two syntactically permissible ways to express the same proposition, speakers tend to choose structures that were recently used, a phenomenon known as syntactic priming. Here, we report two experiments testing syntactic priming of a noun phrase construction in American Sign Language (ASL). Experiment 1 shows that second language (L2) signers with normal hearing exhibit syntactic priming in ASL and that priming is stronger when the head noun is repeated between prime and target (the lexical boost effect). Experiment 2 shows that syntactic priming is equally strong among deaf native L1 signers, deaf late L1 learners, and hearing L2 signers. Experiment 2 also tested for, but did not find evidence of, phonological or semantic boosts to syntactic priming in ASL. These results show that despite the profound differences between spoken and signed languages in terms of how they are produced and perceived, the psychological representation of sentence structure (as assessed by syntactic priming) operates similarly in sign and speech.

15.
Evidence indicates that adequate phonological abilities are necessary to develop proficient reading skills and that later in life phonology also has a role in the covert visual word recognition of expert readers. Impairments of acoustic perception, such as deafness, can lead to atypical phonological representations of written words and letters, which in turn can affect reading proficiency. Here, we report an experiment in which young adults with different levels of acoustic perception (i.e., hearing and deaf individuals) and different modes of communication (i.e., hearing individuals using spoken language, deaf individuals with a preference for sign language, and deaf individuals using the oral modality with less or no competence in sign language) performed a visual lexical decision task, which consisted of categorizing real words and consonant strings. The lexicality effect was restricted to deaf signers who responded faster to real words than consonant strings, showing over-reliance on whole word lexical processing of stimuli. No effect of stimulus type was found in deaf individuals using the oral modality or in hearing individuals. Thus, mode of communication modulates the lexicality effect. This suggests that learning a sign language during development shapes visuo-motor representations of words, which are tuned to the actions used to express them (phono-articulatory movements vs. hand movements) and to associated perceptions. As these visuo-motor representations are elicited during on-line linguistic processing and can overlap with the perceptual-motor processes required to execute the task, they can potentially produce interference or facilitation effects.

16.
The present study uses the N400 component of event-related potentials (ERPs) as a processing marker for single spoken words presented during sleep. Thirteen healthy volunteers participated in the study. The auditory ERPs were recorded in response to a semantic priming paradigm made up of pairs of words (50% related, 50% unrelated) presented in the waking state and during sleep stages II, III–IV and REM. The amplitude, latency and scalp distribution parameters of the negativity observed during stage II and the REM stage were contrasted with the results obtained in the waking state. The 'N400-like' effect elicited in these stages of sleep showed a significantly greater mean amplitude for unrelated than for related word pairs, together with an increase in latency. These results suggest that during these sleep stages a semantic priming effect is actively maintained, although lexical processing time increases.

17.
Objectives: To characterize the effects of normal aging on the amplitude, latency and scalp distribution of the N400 congruity effect. Methods: Event-related brain potentials (ERPs) were recorded from 72 adults (half of them men) between the ages of 20 and 80 years (12 per decade) as they performed a semantic categorization task. Participants listened to spoken phrases (e.g. 'a type of fruit' or 'the opposite of black') followed about 1 s later by a visually presented word that either did or did not fit with the sense of the preceding phrase; they reported the word they had read and whether or not it was appropriate. ERP measurements (mean amplitudes, peak amplitudes, peak latencies) were subjected to analysis of variance and linear regression analyses. Results: All participants, regardless of age, produced larger N400s to words that did not fit than to those that did. The N400 congruity effect (no-fit ERPs − fit ERPs) showed a reliable linear decrease in amplitude (0.05–0.09 μV per year, r=0.40) and a reliable linear increase in peak latency (1.5–2.1 ms/year, r=0.60) with age. Conclusions: In sum, the N400 semantic congruity effect at the scalp gets smaller, slower and more variable with age, consistent with a quantitative rather than qualitative change in semantic processing (integration) with normal aging.
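Read across the sampled age range (20 to 80 years), the reported slopes imply a sizeable cumulative change; this is simple arithmetic on the abstract's own figures, not an additional result:

```latex
\Delta_{\text{amplitude}} \approx -(0.05\ \text{to}\ 0.09\,\mu\mathrm{V/yr}) \times 60\,\mathrm{yr} \approx -3\ \text{to}\ -5.4\,\mu\mathrm{V},
\qquad
\Delta_{\text{latency}} \approx (1.5\ \text{to}\ 2.1\,\mathrm{ms/yr}) \times 60\,\mathrm{yr} \approx 90\ \text{to}\ 126\,\mathrm{ms}.
```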

18.
Three target words (T1, T2, and T3) were embedded in a rapid serial visual presentation (RSVP) stream of non-word distractors, and participants were required to report the targets at the end of each RSVP stream. T2 and T3 were semantically related words in half of the RSVP streams, and semantically unrelated words in the other half. Using an identical design, a recent study reported distinct reflections of the T2–T3 semantic relationship in the P2 and N400 components of event-related potentials (ERPs) time-locked to T3, suggesting an early, automatic, source of P2 semantic effects and a late, controlled, source of N400 semantic effects. Here, P2 and N400 semantic effects were examined by manipulating list-wide context. Relative to participants performing in a semantically unbiased context, participants over-exposed to filler RSVP streams that always included semantically related T2/T3 words showed a dilution of T3-locked P2 semantic effects and a magnification of T3-locked N400 semantic effects. These opposite effects of list-wide semantic context on the P2 and N400 ERP components are discussed in relation to recent proposals on the representational status of RSVP targets at processing stages prior to consolidation in visual short-term memory.

19.
The visual perception of words is known to activate the auditory representation of their spoken forms automatically. We examined the neural mechanism for this phonological activation using transcranial magnetic stimulation (TMS) with a masked priming paradigm. The stimulation sites (left superior temporal gyrus [L-STG] and inferior parietal lobe [L-IPL]), modality of targets (visual and auditory), and task (pronunciation and lexical decision) were manipulated independently. For both within- and cross-modal conditions, the repetition priming during pronunciation was eliminated when TMS was applied to the L-IPL, but not when applied to the L-STG, whereas the priming during lexical decision was eliminated when the L-STG, but not the L-IPL, was stimulated. The observed double dissociation suggests that the conscious task instruction modulates the stimulus-driven activation of the lateral temporal cortex for lexico-phonological activation and the inferior parietal cortex for spoken word production, and thereby engages a different neural network for generating the appropriate behavioral response.

20.
The cortical correlates of speech and music perception are essentially overlapping, and the specific effects of different types of training on these networks remain unknown. We compared two groups of vocally trained professionals for music and speech, singers and actors, using recited and sung rhyme sequences from German art songs with semantic and/or prosodic/melodic violations (i.e. violations of pitch) of the last word, in order to measure the evoked activation in a magnetoencephalographic (MEG) experiment. MEG data confirmed the existence of intertwined networks for the sung and spoken modality in an early time window after word violation. For this early response, higher activity was measured after melodic/prosodic than after semantic violations in predominantly right temporal areas. For singers as well as for actors, modality-specific effects were evident in predominantly left-lateralized temporal activity after semantic expectancy violations in the spoken modality, and in right-dominant temporal activity in response to melodic violations in the sung modality. As an indication of a special group-dependent audiation process, higher neuronal activity for singers appeared in a late time window in right temporal and left parietal areas, after both the recited and the sung sequences.
