Similar Literature
 Found 20 similar documents (search time: 31 ms)
1.
As we speak, we use not only the arbitrary form–meaning mappings of the speech channel but also motivated form–meaning correspondences, i.e. iconic gestures that accompany speech (e.g. an inverted V-shaped hand wiggling across gesture space to demonstrate walking). This article reviews what we know about processing of semantic information from speech and iconic gestures in spoken languages during comprehension of such composite utterances. Several studies have shown that comprehension of iconic gestures involves brain activations known to be involved in semantic processing of speech: i.e. modulation of the electrophysiological recording component N400, which is sensitive to the ease of semantic integration of a word into previous context, and recruitment of the left-lateralized frontal–posterior temporal network (left inferior frontal gyrus (IFG), middle temporal gyrus (MTG) and superior temporal gyrus/sulcus (STG/S)). Furthermore, we integrate the information coming from both channels, recruiting brain areas such as left IFG, posterior superior temporal sulcus (STS)/MTG and even motor cortex. Finally, this integration is flexible: the temporal synchrony between the iconic gesture and the speech segment, as well as the perceived communicative intent of the speaker, modulate the integration process. We discuss whether these findings are specific to gestures or shared with actions, other visual accompaniments to speech (e.g. lips) or other visual symbols such as pictures, as well as the implications for a multimodal view of language.

2.

Objective

Word finding depends on the processing of semantic and lexical information, and it involves an intermediate level for mapping semantic-to-lexical information which also subserves lexical-to-semantic mapping during word comprehension. However, the brain regions implementing these components are still controversial and have not been clarified via a comprehensive lesion model encompassing the whole range of language-related cortices. Primary progressive aphasia (PPA), for which anomia is thought to be the most common sign, provides such a model, but the exploration of cortical areas impacting naming in its three main variants and the underlying processing mechanisms is still lacking.

Methods

We addressed this double issue, related to language structure and PPA, with thirty patients (11 semantic, 12 logopenic, 7 agrammatic variant) using a picture-naming task and voxel-based morphometry for anatomo-functional correlation. First, we analyzed correlations for each of the three variants to identify the regions impacting naming in PPA and to disentangle the core regions of word finding. We then combined the three variants and correlation analyses for naming (semantic-to-lexical mapping) and single-word comprehension (lexical-to-semantic mapping), predicting an overlap zone corresponding to a bidirectional lexical-semantic hub.

Results and Conclusions

Our results showed that superior portions of the left temporal pole and left posterior temporal cortices impact semantic and lexical naming mechanisms in semantic and logopenic PPA, respectively. In agrammatic PPA, naming deficits were rare and did not correlate with any cortical region. Combined analyses revealed a cortical overlap zone in superior/middle mid-temporal cortices, distinct from the two former regions, impacting bidirectional binding of lexical and semantic information. Altogether, our findings indicate that lexical/semantic word processing depends on an anterior-posterior axis within lateral-temporal cortices, including an anatomically intermediate hub dedicated to lexical-semantic integration. Within this axis, our data reveal the underpinnings of anomia in the PPA variants, which is of relevance for both diagnosis and future therapy strategies.

3.
Lexical skills are a crucial component of language comprehension and production. This paper reviews evidence for lexical-level deficits in children and young people with developmental language impairment (LI). Across a range of tasks, LI is associated with reduced vocabulary knowledge in terms of both breadth and depth and difficulty with learning and retaining new words; evidence is emerging from on-line tasks to suggest that low levels of language skill are associated with differences in lexical competition in spoken word recognition. The role of lexical deficits in understanding the nature of LI is also discussed.

4.
Primary progressive aphasia (PPA) is a neurodegenerative syndrome characterized by an insidious onset and gradual progression of deficits that can involve any aspect of language, including word finding, object naming, fluency, syntax, phonology and word comprehension. The initial symptoms occur in the absence of major deficits in other cognitive domains, including episodic memory, visuospatial abilities and visuoconstruction. According to recent diagnostic guidelines, PPA is typically divided into three variants: nonfluent variant PPA (also termed progressive nonfluent aphasia), semantic variant PPA (also termed semantic dementia) and logopenic/phonological variant PPA (also termed logopenic progressive aphasia). The paper describes a 79-year-old man who presented with normal motor speech and production rate, impaired single-word retrieval, and phonemic errors in spontaneous speech and confrontation naming. Confrontation naming was strongly affected by lexical frequency. He was impaired on repetition of sentences and phrases. Reading was intact for regularly spelled words but not for irregular words (surface dyslexia). Comprehension was spared at the single-word level, but impaired for complex sentences. He performed within the normal range on the Dutch equivalent of the Pyramids and Palm Trees (PPT) Pictures Test, indicating that semantic processing was preserved. There was, however, a slight deficiency on the PPT Words Test, which appeals to semantic knowledge of verbal associations. His core deficit was interpreted as an inability to retrieve stored lexical-phonological information for spoken word production in spontaneous speech, confrontation naming, repetition and reading aloud.

5.
Humans can recognize spoken words with unmatched speed and accuracy. Hearing the initial portion of a word such as "formu…" is sufficient for the brain to identify "formula" from the thousands of other words that partially match. Two alternative computational accounts propose that partially matching words (1) inhibit each other until a single word is selected ("formula" inhibits "formal" by lexical competition) or (2) are used to predict upcoming speech sounds more accurately (segment prediction error is minimal after sequences like "formu…"). To distinguish these theories we taught participants novel words (e.g., "formubo") that sound like existing words ("formula") on two successive days. Computational simulations show that knowing "formubo" increases lexical competition when hearing "formu…", but reduces segment prediction error. Conversely, when the sounds in "formula" and "formubo" diverge, the reverse is observed. The time course of magnetoencephalographic brain responses in the superior temporal gyrus (STG) is uniquely consistent with a segment prediction account. We propose a predictive coding model of spoken word recognition in which STG neurons represent the difference between predicted and heard speech sounds. This prediction error signal explains the efficiency of human word recognition and simulates neural responses in auditory regions.
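The contrast between the two accounts can be sketched in a few lines of code. This is a toy illustration under our own assumptions (a three-word lexicon with uniform word probabilities), not the authors' simulations; only the words "formula", "formal" and the novel "formubo" come from the abstract.

```python
import math

LEXICON = ["formula", "formal", "formubo"]  # "formubo": the novel trained word

def cohort(prefix, lexicon):
    """Words consistent with the speech input heard so far."""
    return [w for w in lexicon if w.startswith(prefix)]

def competition(prefix, lexicon):
    """Competition account: more partially matching words -> more mutual inhibition."""
    return len(cohort(prefix, lexicon))

def next_segment_surprisal(prefix, segment, lexicon):
    """Prediction account: error = -log2 P(segment | cohort), treating every
    cohort member as equally likely (our simplification)."""
    c = cohort(prefix, lexicon)
    matches = [w for w in c if w[len(prefix):len(prefix) + 1] == segment]
    return -math.log2(len(matches) / len(c))

# Learning "formubo" raises competition after "formu..." (two candidates, not
# one), lowers prediction error for "u" after "form", and raises it at the
# divergence point ("l" after "formu") -- the dissociation the study exploits.
old = [w for w in LEXICON if w != "formubo"]
assert competition("formu", LEXICON) > competition("formu", old)
assert next_segment_surprisal("form", "u", LEXICON) < next_segment_surprisal("form", "u", old)
assert next_segment_surprisal("formu", "l", LEXICON) > next_segment_surprisal("formu", "l", old)
```

The asserts mirror the abstract's claims: knowing the new word increases competition but reduces segment prediction error on the shared portion, with the reverse at the point of divergence.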

6.
In this paper we present a novel theory of the cognitive and neural processes by which adults learn new spoken words. This proposal builds on neurocomputational accounts of lexical processing and spoken word recognition and complementary learning systems (CLS) models of memory. We review evidence from behavioural studies of word learning that, consistent with the CLS account, show two stages of lexical acquisition: rapid initial familiarization followed by slow lexical consolidation. These stages map broadly onto two systems involved in different aspects of word learning: (i) rapid, initial acquisition supported by medial temporal and hippocampal learning, (ii) slower neocortical learning achieved by offline consolidation of previously acquired information. We review behavioural and neuroscientific evidence consistent with this account, including a meta-analysis of PET and functional magnetic resonance imaging (fMRI) studies that contrast responses to spoken words and pseudowords. From this meta-analysis we derive predictions for the location and direction of cortical response changes following familiarization with pseudowords. This allows us to assess evidence for learning-induced changes that convert pseudoword responses into real word responses. Results provide unique support for the CLS account since hippocampal responses change during initial learning, whereas cortical responses to pseudowords only become word-like if overnight consolidation follows initial learning.

7.
Language and music, two of the most unique human cognitive abilities, are combined in song, rendering it an ecological model for comparing speech and music cognition. The present study was designed to determine whether words and melodies in song are processed interactively or independently, and to examine the influence of attention on the processing of words and melodies in song. Event-Related brain Potentials (ERPs) and behavioral data were recorded while non-musicians listened to pairs of sung words (prime and target) presented in four experimental conditions: same word, same melody; same word, different melody; different word, same melody; different word, different melody. Participants were asked to attend to either the words or the melody, and to perform a same/different task. In both attentional tasks, different word targets elicited an N400 component, as predicted based on previous results. Most interestingly, different melodies (sung with the same word) elicited an N400 component followed by a late positive component. Finally, ERP and behavioral data converged in showing interactions between the linguistic and melodic dimensions of sung words. The finding that the N400 effect, a well-established marker of semantic processing, was modulated by musical melody in song suggests that variations in musical features affect word processing in sung language. Implications of the interactions between words and melody are discussed in light of evidence for shared neural processing resources between the phonological/semantic aspects of language and the melodic/harmonic aspects of music.

8.
Spatiotemporal dynamics of modality-specific and supramodal word processing
The ability of written and spoken words to access the same semantic meaning provides a test case for the multimodal convergence of information from sensory to associative areas. Using anatomically constrained magnetoencephalography (aMEG), the present study investigated the stages of word comprehension in real time in the auditory and visual modalities, as subjects participated in a semantic judgment task. Activity spread from the primary sensory areas along the respective ventral processing streams and converged in anterior temporal and inferior prefrontal regions, primarily on the left at around 400 ms. Comparison of response patterns during repetition priming between the two modalities suggests that they are initiated by modality-specific memory systems, but that they are eventually elaborated mainly in supramodal areas.

9.
Memory traces for words are frequently conceptualized neurobiologically as networks of neurons interconnected via reciprocal links developed through associative learning in the process of language acquisition. Neurophysiological reflection of activation of such memory traces has been reported using the mismatch negativity brain potential (MMN), which demonstrates an enhanced response to meaningful words over meaningless items. This enhancement is believed to be generated by the activation of strongly intraconnected long-term memory circuits for words that can be automatically triggered by spoken linguistic input and that are absent for unfamiliar phonological stimuli. This conceptual framework critically predicts different amounts of activation depending on the strength of the word's lexical representation in the brain. The frequent use of words should lead to more strongly connected representations, whereas less frequent items would be associated with more weakly linked circuits. A word with higher frequency of occurrence in the subject's language should therefore lead to a more pronounced lexical MMN response than its low-frequency counterpart. We tested this prediction by comparing the event-related potentials elicited by low- and high-frequency words in a passive oddball paradigm; physical stimulus contrasts were kept identical. We found that, consistent with our prediction, presenting the high-frequency stimulus led to a significantly more pronounced MMN response relative to the low-frequency one, a finding that is highly similar to previously reported MMN enhancement to words over meaningless pseudowords. Furthermore, activation elicited by the higher-frequency word peaked earlier relative to the low-frequency one, suggesting more rapid access to frequently used lexical entries. These results lend further support to the above view of word memory traces as strongly connected assemblies of neurons. The speed and magnitude of their activation appear to be linked to the strength of internal connections in a memory circuit, which is in turn determined by the everyday use of language elements.

10.
To assess the effects of normal aging and senile dementia of the Alzheimer's type (SDAT) on semantic analysis of words, we examined the N400 component of the event-related potential (ERP) elicited during the processing of highly constrained (opposites) and less constrained materials (category-category exemplars) in 12 young control subjects, 12 elderly control subjects and 12 patients with SDAT. We employed a priming paradigm in which a context phrase was spoken and a target word (congruent or incongruent) was presented visually. The N400 effect was reduced in amplitude and delayed in the elderly control group relative to that of the younger subjects, and was further attenuated in amplitude, delayed in latency and somewhat flatter in its distribution across the scalp in the SDAT patients. These findings are consistent with less efficient processing and integration of lexical items with semantic context in normal aging, which is further exacerbated by SDAT. Differences in the N400 range associated with the opposite and category conditions were observed only in the young subjects, suggesting less use of controlled attentional resources or perhaps weaker associative links with age.

11.

Background

For word production, we may consciously pursue semantic or phonological search strategies, but it is uncertain whether we can retrieve the different aspects of lexical information independently from each other. We therefore studied the spread of semantic information into words produced under exclusively phonemic task demands.

Methods

Forty-two subjects performed a letter verbal fluency task requiring the production of as many s-words as possible in two minutes. Based on curve fits to the time course of word production, we identified output spurts (temporal clusters) considered to reflect rapid lexical retrieval driven by automatic activation spread. Semantic and phonemic word relatedness within versus between these clusters was assessed with respective scores (0 = no relation, 4 = maximum relation).
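The within- versus between-cluster comparison can be illustrated with a minimal sketch (stdlib Python; the word list and the prefix-based phonemic score are invented stand-ins for the study's scoring instrument):

```python
def mean(xs):
    return sum(xs) / len(xs)

def within_between(clusters, score):
    """clusters: lists of words in production order; score(a, b) -> 0..4.
    Returns mean relatedness of consecutive pairs within clusters and of
    the pairs that span cluster boundaries."""
    within = [score(a, b) for c in clusters for a, b in zip(c, c[1:])]
    between = [score(c1[-1], c2[0]) for c1, c2 in zip(clusters, clusters[1:])]
    return mean(within), mean(between)

def phon_score(a, b):
    """Toy phonemic relatedness: shared-prefix length beyond the initial
    letter, capped at 4 (an invented stand-in, not the study's score)."""
    n = 0
    for x, y in zip(a, b):
        if x != y:
            break
        n += 1
    return min(max(n - 1, 0), 4)

w, b = within_between([["sun", "sum", "summer"], ["soap", "sock"]], phon_score)
assert w > b  # higher relatedness within clusters than between them
```

Applied to real protocols with both a phonemic and a semantic score, this is the comparison that yields the within > between pattern reported under Results.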

Results

Subjects produced 27.5 (±9.4) words belonging to 6.7 (±2.4) clusters. Words were more related, both phonemically and semantically, within clusters than between clusters (phonemic: 0.33±0.22 vs. 0.19±0.17, p<.01; semantic: 0.65±0.29 vs. 0.37±0.29, p<.01). Whereas the extent of phonemic relatedness correlated with high task performance, the opposite was the case for the extent of semantic relatedness.

Conclusion

The results indicate that semantic information spread occurs even if the consciously pursued word search strategy is purely phonological. This, together with the negative correlation between semantic relatedness and verbal output, fits the idea of a semantic default mode of lexical search that acts against rapid task performance in the given scenario of phonemic verbal fluency. The simultaneity of enhanced semantic and phonemic word relatedness within the same temporal cluster boundaries suggests an interaction between content- and sound-related information whenever a new semantic field has been opened.

12.
Using functional magnetic resonance imaging during a primed visual lexical decision task, we investigated the neural and functional mechanisms underlying modulations of semantic word processing through hypnotic suggestions aimed at altering lexical processing of primes. The priming task was to discriminate between target words and pseudowords presented 200 ms after the prime word which was semantically related or unrelated to the target. In a counterbalanced study design, each participant performed the task once at normal wakefulness and once after the administration of hypnotic suggestions to perceive the prime as a meaningless symbol of a foreign language. Neural correlates of priming were defined as significantly lower activations upon semantically related compared to unrelated trials. We found significant suggestive treatment-induced reductions in neural priming, albeit irrespective of the degree of suggestibility. Neural priming was attenuated upon suggestive treatment compared with normal wakefulness in brain regions supporting automatic (fusiform gyrus) and controlled semantic processing (superior and middle temporal gyri, pre- and postcentral gyri, and supplementary motor area). Hence, suggestions reduced semantic word processing by conjointly dampening both automatic and strategic semantic processes.

13.
According to the complementary learning systems (CLS) account of word learning, novel words are rapidly acquired (learning system 1), but slowly integrated into the mental lexicon (learning system 2). This two-step learning process has been shown to apply to novel word forms. In this study, we investigated whether novel word meanings are also gradually integrated after acquisition by measuring the extent to which newly learned words were able to prime semantically related words at two different time points. In addition, we investigated whether modality at study modulates this integration process. Sixty-four adult participants studied novel words together with written or spoken definitions. These words did not prime semantically related words directly following study, but did so after a 24-hour delay. This significant increase in the magnitude of the priming effect suggests that semantic integration occurs over time. Overall, words that were studied with a written definition showed larger priming effects, suggesting greater integration for the written study modality. Although the process of integration, reflected as an increase in the priming effect over time, did not significantly differ between study modalities, words studied with a written definition showed the most prominent positive effect after a 24-hour delay. Our data suggest that semantic integration requires time, and that studying in written format benefits semantic integration more than studying in spoken format. These findings are discussed in light of the CLS theory of word learning.

14.
The aim of the present study was to investigate the reading mechanisms in adults (27 subjects; mean age, 19.5 ± 0.8 [SD] years) with different levels of written text comprehension using fMRI. The main objective was to analyze the basic brain mechanisms of perceiving verbal stimuli with and without a semantic component during reading discrimination tasks. BOLD signal changes during WORD and PSEUDOWORD reading compared to the GAZE FIXATION state were estimated using both whole-brain activation analysis and ROIs (structures connected with the brain system supporting reading) in two groups of subjects, “good” and “poor” readers. Activations during PSEUDOWORD reading were higher in “poor” readers than in “good” readers in the lingual gyrus, SMG and STG. We suppose that word and pseudoword recognition strategies differed between the two groups: “good” readers identified words or pseudowords already at the stage of visual analysis of word structure and attempted to decode pseudowords without activating lexical language zones; “poor” readers apparently tried to read pseudowords with the same lexicon-based strategy as words and, after failing, identified the pseudowords as meaningless items. In that case, activations of both lexical “language” zones and the visual word form area (VWFA) were observed.

15.
This paper presents a new method of analysis by which structural similarities between brain data and linguistic data can be assessed at the semantic level. It shows how to measure the strength of these structural similarities and so determine the relatively better fit of the brain data with one semantic model over another. The first model is derived from WordNet, a lexical database of English compiled by language experts. The second is given by the corpus-based statistical technique of latent semantic analysis (LSA), which detects relations between words that are latent or hidden in text. The brain data are drawn from experiments in which statements about the geography of Europe were presented auditorily to participants who were asked to determine their truth or falsity while electroencephalographic (EEG) recordings were made. The theoretical framework for the analysis of the brain and semantic data derives from axiomatizations of theories such as the theory of differences in utility preference. Using brain-data samples from individual trials time-locked to the presentation of each word, ordinal relations of similarity differences are computed for the brain data and for the linguistic data. In each case those relations that are invariant with respect to the brain and linguistic data, and are correlated with sufficient statistical strength, amount to structural similarities between the brain and linguistic data. Results show that many more statistically significant structural similarities can be found between the brain data and the WordNet-derived data than the LSA-derived data. The work reported here is placed within the context of other recent studies of semantics and the brain. The main contribution of this paper is the new method it presents for the study of semantics and the brain and the focus it permits on networks of relations detected in brain data and represented by a semantic model.
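The core idea of invariant ordinal relations can be sketched in a few lines (a toy illustration with made-up dissimilarity values, not the EEG data; the geography words are merely evocative of the stimuli):

```python
from itertools import combinations

def ordinal_agreement(d1, d2):
    """d1, d2: {word pair: dissimilarity}, same keys. Fraction of
    pair-of-pairs whose similarity-difference ordering agrees, i.e. the
    ordinal relations that are invariant across both data sources."""
    keys = sorted(d1)
    agree = total = 0
    for p, q in combinations(keys, 2):
        if d1[p] == d1[q] or d2[p] == d2[q]:
            continue  # skip ties
        total += 1
        agree += (d1[p] < d1[q]) == (d2[p] < d2[q])
    return agree / total

# Invented dissimilarities for three word pairs, from "brain" and "model".
brain = {("rome", "paris"): 0.2, ("rome", "oslo"): 0.7, ("paris", "oslo"): 0.6}
model = {("rome", "paris"): 0.1, ("rome", "oslo"): 0.9, ("paris", "oslo"): 0.5}
# here every ordinal relation agrees, so the agreement fraction is 1.0
```

Comparing this agreement fraction for a WordNet-derived versus an LSA-derived model is, in spirit, how one model can be judged a relatively better fit to the brain data.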

16.
The quantitative modeling of semantic representations in the brain plays a key role in understanding the neural basis of semantic processing. Previous studies have demonstrated that word vectors, which were originally developed for use in the field of natural language processing, provide a powerful tool for such quantitative modeling. However, whether semantic representations in the brain revealed by the word vector-based models actually capture our perception of semantic information remains unclear, as there has been no study explicitly examining the behavioral correlates of the modeled brain semantic representations. To address this issue, we compared the semantic structure of nouns and adjectives in the brain estimated from word vector-based brain models with that evaluated from human behavior. The brain models were constructed using voxelwise modeling to predict the functional magnetic resonance imaging (fMRI) response to natural movies from semantic contents in each movie scene through a word vector space. The semantic dissimilarity of brain word representations was then evaluated using the brain models. Meanwhile, data on human behavior reflecting the perception of semantic dissimilarity between words were collected in psychological experiments. We found a significant correlation between brain model- and behavior-derived semantic dissimilarities of words. This finding suggests that semantic representations in the brain modeled via word vectors appropriately capture our perception of word meanings.
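The final comparison step, correlating model-derived with behavior-derived word dissimilarities, can be sketched with stdlib Python (all vectors below are invented; the study used voxelwise fMRI models and psychological ratings rather than these toy spaces):

```python
import math

def _norm(v):
    return math.sqrt(sum(a * a for a in v))

def cosine_dissimilarity(u, v):
    return 1.0 - sum(a * b for a, b in zip(u, v)) / (_norm(u) * _norm(v))

def pairwise_dissim(vectors):
    """vectors: {word: vector}. Dissimilarities for all word pairs in a
    fixed (sorted) order, like a condensed distance matrix."""
    words = sorted(vectors)
    return [cosine_dissimilarity(vectors[a], vectors[b])
            for i, a in enumerate(words) for b in words[i + 1:]]

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Invented example: a "brain" space and a "behavior" space that mostly agree.
brain_d = pairwise_dissim({"cat": [1.0, 0.0], "dog": [0.9, 0.1], "car": [0.0, 1.0]})
behav_d = pairwise_dissim({"cat": [1.0, 0.1], "dog": [0.8, 0.2], "car": [0.1, 0.9]})
r = pearson(brain_d, behav_d)  # strongly positive for these toy spaces
```

A high correlation between the two condensed dissimilarity vectors is the kind of evidence the abstract reports for word vector-based brain models capturing perceived meaning.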

17.
The rate of lexical replacement estimates the diachronic stability of word forms on the basis of how frequently a proto-language word is replaced or retained in its daughter languages. Lexical replacement rate has been shown to be highly related to word class and word frequency. In this paper, we argue that content words and function words behave differently with respect to lexical replacement rate, and we show that semantic factors predict the lexical replacement rate of content words. For the 167 content items in the Swadesh list, data were gathered on the features of lexical replacement rate, word class, frequency, age of acquisition, synonyms, arousal, imageability and average mutual information, either from published databases or from corpora and lexica. A linear regression model shows that, in addition to frequency, synonyms, senses and imageability are significantly related to the lexical replacement rate of content words, in particular the number of synonyms a word has. The model shows no differences in lexical replacement rate between word classes, and outperforms a model with word class and word frequency predictors only.
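The kind of model reported here can be illustrated with a one-predictor ordinary-least-squares sketch (made-up numbers; the paper's actual model also includes frequency, senses and imageability as predictors):

```python
def ols(xs, ys):
    """Simple ordinary least squares: fit ys ~ a + b * xs, returning (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

# Hypothetical items: words with more synonyms get replaced faster,
# echoing the paper's key predictor (the numbers are invented).
n_synonyms = [1, 2, 3, 4, 5]
repl_rate = [0.1, 0.2, 0.3, 0.4, 0.5]
a, b = ols(n_synonyms, repl_rate)  # positive slope b: more synonyms, faster replacement
```

The full model would add the remaining predictors as further columns in a multiple regression; the sketch only shows the direction of the synonym effect.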

18.
How pictures and words are stored and processed in the human brain constitutes a long-standing question in cognitive psychology. Behavioral studies have yielded a large amount of data addressing this issue. Generally speaking, these data show that there are some interactions between the semantic processing of pictures and words. However, behavioral methods can provide only limited insight into certain findings. Fortunately, the Event-Related Potential (ERP) provides on-line cues about the temporal nature of cognitive processes and contributes to the exploration of their neural substrates. ERPs have been used in order to better understand semantic processing of words and pictures. The main objective of this article is to offer an overview of the electrophysiologic bases of semantic processing of words and pictures. Studies presented in this article showed that the processing of words is associated with an N400 component, whereas pictures elicited both N300 and N400 components. Topographical analysis of the N400 distribution over the scalp is compatible with the idea that both image-mediated concrete words and pictures access an amodal semantic system. However, given the distinctive N300 patterns, observed only during picture processing, it appears that picture and word processing rely upon distinct neuronal networks, even if they end up activating more or less similar semantic representations.

19.
Opportunities for associationist learning of word meaning, where a word is heard or read contemporaneously with information being available on its meaning, are considered too infrequent to account for the rate of language acquisition in children. It has been suggested that additional learning could occur in a distributional mode, where information is gleaned from the distributional statistics (word co-occurrence etc.) of natural language. Such statistics are relevant to meaning because of the Distributional Principle that ‘words of similar meaning tend to occur in similar contexts’. Computational systems, such as Latent Semantic Analysis, have substantiated the viability of distributional learning of word meaning, by showing that semantic similarities between words can be accurately estimated from analysis of the distributional statistics of a natural language corpus. We consider whether appearance similarities can also be learnt in a distributional mode. As grounds for such a mode we advance the Appearance Hypothesis that ‘words with referents of similar appearance tend to occur in similar contexts’. We assess the viability of such learning by looking at the performance of a computer system that interpolates, on the basis of distributional and appearance similarity, from words whose appearance it has been explicitly taught, in order to identify and name objects that it has not been taught about. Our experiment uses a set of 660 simple concrete nouns. Appearance information on words is modelled using sets of images of examples of the word. Distributional similarity is computed from a standard natural language corpus. Our computational results support the viability of distributional learning of appearance.
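The interpolation step can be sketched as follows. This is a hypothetical reduction of the system: the words, appearance vectors and distributional similarities below are invented for illustration; the real system used image sets and corpus statistics.

```python
def weighted_appearance(novel, taught, dist_sim):
    """Estimate the appearance vector of an untaught word as the
    distributional-similarity-weighted average of taught words' vectors.
    taught: {word: appearance vector}; dist_sim(w1, w2) -> weight >= 0."""
    weights = {w: dist_sim(novel, w) for w in taught}
    total = sum(weights.values())
    dims = len(next(iter(taught.values())))
    return [sum(weights[w] * taught[w][i] for w in taught) / total
            for i in range(dims)]

# Invented appearance vectors and distributional similarities: "kitten" is
# distributionally close to "cat", so its predicted appearance follows suit.
taught = {"cat": [1.0, 0.0], "car": [0.0, 1.0]}
sim = {("kitten", "cat"): 0.9, ("kitten", "car"): 0.1}
pred = weighted_appearance("kitten", taught, lambda a, b: sim[(a, b)])
# pred leans strongly toward the "cat" appearance vector
```

Matching a new image against such predicted appearance vectors is the gist of how the system can identify and name objects it was never explicitly taught.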

20.
Visual crowding, the inability to see an object when it is surrounded by flankers in the periphery, does not block semantic activation: words rendered unrecognizable by visual crowding still generated robust semantic priming in subsequent lexical decision tasks. Building on this finding, the current study explored whether unrecognizable crowded words can be temporally integrated into a phrase. By showing one word at a time, we presented Chinese four-word idioms with either a congruent or incongruent ending word in order to examine whether the three preceding crowded words can be temporally integrated to form a semantic context that affects the processing of the ending word. Results from both behavioral (Experiment 1) and event-related potential (Experiments 2 and 3) measures showed a congruency effect only in the non-crowded condition, which does not support the existence of unconscious multi-word integration. Aside from four-word idioms, we also found that two-word integration (modifier + adjective combinations), the simplest kind of temporal semantic integration, did not occur under visual crowding (Experiment 4). Our findings suggest that integration of temporally separated words might require conscious awareness, at least under the timing conditions tested in the current study.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司) | 京ICP备09084417号