Similar Documents
20 similar documents retrieved (search time: 31 ms)
1.
This study examined the fundamental question of whether verbal memory processing in hypnosis and in the waking state is mediated by a common neural system or by distinct cortical areas. Seven right-handed volunteers (mean age 25.4 years, SD 3.1) with high hypnotic-susceptibility scores were PET-scanned while encoding/retrieving word associations either in hypnosis or in the waking state. Word pairs were visually presented and highly imageable, but not semantically related (e.g. monkey-street). The presentation of pseudo-words served as a reference condition. An emission scan was recorded after each intravenous administration of O-15 water. Encoding under hypnosis was associated with more pronounced bilateral activations in the occipital cortex and the prefrontal areas compared to learning in the waking state. During memory retrieval of word pairs that had previously been learned under hypnosis, activations were found in the occipital lobe and the cerebellum. Under both experimental conditions the precuneus and prefrontal cortex showed a consistent bilateral activation, which was most distinct when the learning had taken place under hypnosis. To further analyze the effect of hypnosis on imagery-mediated learning, we administered sets of high-imagery word pairs and sets of abstract words. In the first experimental condition, word-pair associations were presented visually. In the second condition, highly hypnotisable persons recalled significantly more high-imagery words under hypnosis than low-hypnotisables, in both the visual and the auditory modality. Furthermore, high-imagery words were also better recalled by the highly hypnotisable subjects in the non-hypnotic condition. The memory effect was consistently present under both immediate and delayed recall conditions. Taken together, the findings advance our understanding of the neural representation that underlies hypnosis and of the neuropsychological correlates of hypnotic susceptibility.

2.
Memory traces for words are frequently conceptualized neurobiologically as networks of neurons interconnected via reciprocal links developed through associative learning in the process of language acquisition. Neurophysiological reflection of activation of such memory traces has been reported using the mismatch negativity brain potential (MMN), which demonstrates an enhanced response to meaningful words over meaningless items. This enhancement is believed to be generated by the activation of strongly intraconnected long-term memory circuits for words that can be automatically triggered by spoken linguistic input and that are absent for unfamiliar phonological stimuli. This conceptual framework critically predicts different amounts of activation depending on the strength of the word's lexical representation in the brain. The frequent use of words should lead to more strongly connected representations, whereas less frequent items would be associated with more weakly linked circuits. A word with a higher frequency of occurrence in the subject's language should therefore lead to a more pronounced lexical MMN response than its low-frequency counterpart. We tested this prediction by comparing the event-related potentials elicited by low- and high-frequency words in a passive oddball paradigm; physical stimulus contrasts were kept identical. We found that, consistent with our prediction, presenting the high-frequency stimulus led to a significantly more pronounced MMN response relative to the low-frequency one, a finding that is highly similar to previously reported MMN enhancement for words over meaningless pseudowords. Furthermore, activation elicited by the higher-frequency word peaked earlier relative to the low-frequency one, suggesting more rapid access to frequently used lexical entries. These results lend further support to the above view of word memory traces as strongly connected assemblies of neurons. The speed and magnitude of their activation appear to be linked to the strength of internal connections in a memory circuit, which is in turn determined by the everyday use of language elements.

3.
Cognitive science has a rich history of interest in the ways that languages represent abstract and concrete concepts (e.g., idea vs. dog). Until recently, this focus has centered largely on aspects of word meaning and semantic representation. However, recent corpora analyses have demonstrated that abstract and concrete words are also marked by phonological, orthographic, and morphological differences. These regularities in sound-meaning correspondence potentially allow listeners to infer certain aspects of semantics directly from word form. We investigated this relationship between form and meaning in a series of four experiments. In Experiments 1-2, we examined the role of metalinguistic knowledge in semantic decision by asking participants to make semantic judgments for aurally presented nonwords selectively varied by specific acoustic and phonetic parameters. Participants consistently associated increased word length and diminished wordlikeness with abstract concepts. In Experiment 3, participants completed a semantic decision task (i.e., abstract or concrete) for real words varied by length and concreteness. Participants were more likely to misclassify longer, inflected words (e.g., "apartment") as abstract and shorter, uninflected abstract words (e.g., "fate") as concrete. In Experiment 4, we used multiple regression to predict trial-level naming data from a large corpus of nouns, which revealed significant interaction effects between concreteness and word form. Together these results provide converging evidence for the hypothesis that listeners map sound to meaning through a non-arbitrary process using prior knowledge about statistical regularities in the surface forms of words.

4.

Background

Language impairment and behavioral symptoms are both common phenomena in dementia patients. In this study, we investigated the behavioral symptoms in dementia patients with different language backgrounds. Through this, we aimed to propose a possible connection between language and delusion.

Methods

We recruited 21 patients with Alzheimer’s disease (AD), diagnosed according to the DSM-IV and NINCDS-ADRDA criteria, from the memory clinic of the Cardinal Tien Hospital in Taipei, Taiwan. They were classified into two groups: 11 multilinguals who could speak Japanese, Taiwanese and Mandarin Chinese, and 10 bilinguals who spoke only Taiwanese and Mandarin Chinese. There were no differences between the two groups in age, education, disease duration, disease severity, environment or medical care. Comprehensive neuropsychological examinations, including the Clinical Dementia Rating (CDR), Mini-Mental State Examination (MMSE), Cognitive Abilities Screening Instrument (CASI), verbal fluency, the Chinese version of the Boston naming test (BNT) and the Behavioral Pathology in Alzheimer’s Disease Rating Scale (BEHAVE-AD), were administered.

Results

The multilingual group showed worse results on the Boston naming test. Other neuropsychological tests, including the MMSE, CASI and verbal fluency, did not differ significantly. More delusions were noted in the multilingual group. Three pairs of subjects were identified for further examination of their differences. These three cases illustrate the typical scenario in which language misunderstanding may cause delusions in multilingual dementia patients. Consequently, stronger emotional reactions and distorted ideas may be induced in the multilinguals compared with the MMSE-matched controls.

Conclusion

Inappropriate mixing of languages or conflict between cognition and emotion may cause more delusions in these multilingual patients. This reminds us that delusion is not a purely biological outcome of brain degeneration. Although cognitive performance was not significantly different between our groups, language may still affect the patients' delusions.

5.
Communicative interactions involve a kind of procedural knowledge that is used by the human brain for processing verbal and nonverbal inputs and for language production. Although considerable work has been done on modeling human language abilities, it has been difficult to bring them together into a comprehensive tabula rasa system compatible with current knowledge of how verbal information is processed in the brain. This work presents a cognitive system, entirely based on a large-scale neural architecture, which was developed to shed light on the procedural knowledge involved in language elaboration. The main component of this system is the central executive, which is a supervising system that coordinates the other components of the working memory. In our model, the central executive is a neural network that takes as input the neural activation states of the short-term memory and yields as output mental actions, which control the flow of information among the working memory components through neural gating mechanisms. The proposed system is capable of learning to communicate through natural language starting from a tabula rasa, without any a priori knowledge of the structure of phrases, the meaning of words, or the role of the different word classes, solely by interacting with a human through a text-based interface, using an open-ended incremental learning process. It is able to learn nouns, verbs, adjectives, pronouns and other word classes, and to use them in expressive language. The model was validated on a corpus of 1587 input sentences, based on the literature on early language assessment, at the level of about a 4-year-old child, and produced 521 output sentences expressing a broad range of language processing functionalities.

6.
K Matsuno, J Lu. Bio Systems, 1989, 22(4): 301-304
The capacity of lexical decision-making in the brain conforms to the indefiniteness latent in natural languages. The average number of different meanings per word of a natural language is measured to be 2.805 ± 0.005, irrespective of whether the language is Chinese, English or Japanese. If one can almost perfectly comprehend words and sentences written in a natural language in a context-dependent manner, the average number of different meanings per word would reduce to e (= 2.718281828459...), the base of natural or Napierian logarithms.
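For reference, the constant e invoked above is the base of the natural logarithm; its standard definition (general mathematical background, not taken from the article itself) is:

```latex
e \;=\; \lim_{n \to \infty}\left(1 + \frac{1}{n}\right)^{n}
  \;=\; \sum_{k=0}^{\infty} \frac{1}{k!}
  \;\approx\; 2.718281828459\ldots
```

The measured average of 2.805 ± 0.005 meanings per word therefore lies roughly 0.09 above this proposed limiting value.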

7.
Over the last million years, human language has emerged and evolved as a fundamental instrument of social communication and semiotic representation. People use language in part to convey emotional information, leading to the central and contingent questions: (1) What is the emotional spectrum of natural language? and (2) Are natural languages neutrally, positively, or negatively biased? Here, we report that the human-perceived positivity of over 10,000 of the most frequently used English words exhibits a clear positive bias. More deeply, we characterize and quantify distributions of word positivity for four large and distinct corpora, demonstrating that their form is broadly invariant with respect to frequency of word use.

8.
There is growing interest in automatically building opinion lexicons from sources such as product reviews. Most existing methods depend on abundant external resources such as WordNet, which limits their applicability. Unsupervised or semi-supervised learning offers an alternative route to multilingual opinion lexicon extraction. However, the available datasets are imbalanced across languages: for some languages, high-quality corpora are scarce or hard to obtain, which limits research progress. To address these problems, we explore a mutual-reinforcement label propagation framework. First, for each language, a label propagation algorithm is applied to a word relation graph; then a bilingual dictionary is used as a bridge to transfer information between the two languages. A key advantage of this model is its ability to let the two languages learn from and boost each other. The experimental results show that the proposed approach significantly outperforms the baseline.
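The abstract does not give implementation details, but a minimal sketch of single-language label propagation over a word relation graph, with a bilingual dictionary used to copy scores across languages, might look as follows (all function and variable names are illustrative assumptions, not the authors' code):

```python
import numpy as np

def label_propagation(adj, seed_scores, alpha=0.85, n_iter=50):
    """Propagate sentiment scores over a word relation graph.

    adj         : (n, n) nonnegative adjacency matrix of word similarities
    seed_scores : (n,) vector, +1/-1 for seed opinion words, 0 for unlabeled words
    alpha       : weight given to propagated neighbour scores versus the seeds
    """
    # Row-normalise the adjacency matrix so each row sums to 1.
    row_sums = adj.sum(axis=1, keepdims=True)
    row_sums[row_sums == 0] = 1.0
    P = adj / row_sums

    scores = seed_scores.astype(float)
    for _ in range(n_iter):
        scores = alpha * P.dot(scores) + (1 - alpha) * seed_scores
    return scores

def transfer_via_dictionary(scores_src, dictionary, n_tgt):
    """Use a bilingual dictionary, given as (src_idx, tgt_idx) pairs,
    to seed the target-language graph from source-language scores."""
    seeds_tgt = np.zeros(n_tgt)
    for src_idx, tgt_idx in dictionary:
        seeds_tgt[tgt_idx] = scores_src[src_idx]
    return seeds_tgt
```

In a mutual-reinforcement setting the two steps would alternate: propagate in language A, transfer to language B through the dictionary, propagate in B, and transfer back, until the scores stabilise.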

9.
The quantitative modeling of semantic representations in the brain plays a key role in understanding the neural basis of semantic processing. Previous studies have demonstrated that word vectors, which were originally developed for use in the field of natural language processing, provide a powerful tool for such quantitative modeling. However, whether semantic representations in the brain revealed by the word vector-based models actually capture our perception of semantic information remains unclear, as there has been no study explicitly examining the behavioral correlates of the modeled brain semantic representations. To address this issue, we compared the semantic structure of nouns and adjectives in the brain estimated from word vector-based brain models with that evaluated from human behavior. The brain models were constructed using voxelwise modeling to predict the functional magnetic resonance imaging (fMRI) response to natural movies from semantic contents in each movie scene through a word vector space. The semantic dissimilarity of brain word representations was then evaluated using the brain models. Meanwhile, data on human behavior reflecting the perception of semantic dissimilarity between words were collected in psychological experiments. We found a significant correlation between brain model- and behavior-derived semantic dissimilarities of words. This finding suggests that semantic representations in the brain modeled via word vectors appropriately capture our perception of word meanings.
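As a rough illustration of the comparison described above (a sketch under assumed data structures, not the authors' pipeline), model-derived and behavior-derived dissimilarities can be correlated like this:

```python
from itertools import combinations
from scipy.stats import spearmanr
from scipy.spatial.distance import cosine

# brain_word_vectors : dict mapping each word to its brain-model representation (1-D array)
# behavior_dissim    : dict mapping each word pair (w1, w2) to a behavioral dissimilarity rating
# (both are hypothetical inputs standing in for the study's actual data)

def correlate_model_and_behavior(brain_word_vectors, behavior_dissim):
    model_d, behav_d = [], []
    for w1, w2 in combinations(sorted(brain_word_vectors), 2):
        # Cosine distance between brain-model word representations
        model_d.append(cosine(brain_word_vectors[w1], brain_word_vectors[w2]))
        behav_d.append(behavior_dissim[(w1, w2)])
    rho, p = spearmanr(model_d, behav_d)
    return rho, p
```

A significant positive rho would indicate that the model-derived semantic structure tracks the behaviorally measured one, which is the pattern the abstract reports.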

10.
At the macrostructure level of language milestones, language acquisition follows a nearly identical course whether children grow up with one or with two languages. However, at the microstructure level, experimental research is revealing that the same proclivities and learning mechanisms that support language acquisition unfold somewhat differently in bilingual versus monolingual environments. This paper synthesizes recent findings in the area of early bilingualism by focusing on the question of how bilingual infants come to apply their phonetic sensitivities to word learning, as they must to learn minimal pair words (e.g. ‘cat’ and ‘mat’). To this end, the paper reviews antecedent achievements by bilinguals throughout infancy and early childhood in the following areas: language discrimination and separation, speech perception, phonetic and phonotactic development, word recognition, word learning and aspects of conceptual development that underlie word learning. Special consideration is given to the role of language dominance, and to the unique challenges to language acquisition posed by a bilingual environment.

11.

Objectives

Intonation may serve as a cue for facilitated recognition and processing of spoken words and it has been suggested that the pitch contour of spoken words is implicitly remembered. Thus, using the repetition suppression (RS) effect of BOLD-fMRI signals, we tested whether the same spoken words are differentially processed in language and auditory brain areas depending on whether or not they retain an arbitrary intonation pattern.

Experimental design

Words were presented repeatedly in three blocks in passive and active listening tasks. There were three prosodic conditions, in each of which a different set of words was used and specific task-irrelevant intonation changes were applied: (i) all words were presented with a set flat, monotonous pitch contour; (ii) each word had an arbitrary pitch contour that was kept constant across the three repetitions; (iii) each word had a different arbitrary pitch contour on each of its repetitions.

Principal findings

The repeated presentation of words with a set pitch contour resulted in robust behavioral priming effects as well as in significant RS of the BOLD signals in the primary auditory cortex (BA 41), in temporal areas (BA 21, 22) bilaterally, and in Broca's area. However, changing the intonation of the same words on each successive repetition resulted in reduced behavioral priming and the abolition of RS effects.

Conclusions

Intonation patterns are retained in memory even when the intonation is task-irrelevant. Implicit memory traces for the pitch contour of spoken words were reflected in facilitated neuronal processing in auditory and language-associated areas. Thus, the results lend support to the notion that prosody, and specifically pitch contour, is strongly associated with the memory representation of spoken words.

12.
With regard to numerical cognition and working memory, it is an open question whether numbers are stored in and retrieved from a central abstract representation or from separate notation-specific representations. This study seeks to help answer this question by utilizing the numeral modality effect (NME) in three experiments to explore how numbers are processed by the human brain. Participants were presented with numbers (1–9) as either Arabic digits or written number words (Arabic digits and dot matrices in Experiment 2) as the first (S1) and second (S2) stimuli. The participants’ task was to add the first two stimuli together and verify whether the answer (S3), presented simultaneously with S2, was correct. We hypothesized that if reaction time (RT) at S2/S3 depends on the modality of S1, then numbers are retrieved from modality-specific memory stores. Indeed, RT depended on the modality of S1 whenever S2 was an Arabic digit, which argues against the concept of numbers being stored in and retrieved from a central, abstract representation.

13.
The present study explored the effect of speaker prosody on the representation of words in memory. To this end, participants were presented with a series of words and asked to remember the words for a subsequent recognition test. During study, words were presented auditorily with an emotional or neutral prosody, whereas during test, words were presented visually. Recognition performance was comparable for words studied with emotional and neutral prosody. However, subsequent valence ratings indicated that study prosody changed the affective representation of words in memory. Compared to words with neutral prosody, words with sad prosody were later rated as more negative and words with happy prosody were later rated as more positive. Interestingly, the participants' ability to remember study prosody failed to predict this effect, suggesting that changes in word valence were implicit and associated with initial word processing rather than word retrieval. Taken together these results identify a mechanism by which speakers can have sustained effects on listener attitudes towards word referents.

14.
Europe is home to a vast array of indigenous languages, not to mention numerous immigrant languages. European Union (EU) acknowledgement of “national” languages as official languages results in a privileged status for these languages vis-à-vis the minority languages with which they cohabit. This support prevents hegemony by a single language such as English, yet the EU simultaneously undermines these national languages domestically by promoting their minority language competitors. This paradox can only be understood by examining the developing model for European identity whereby identity is viewed as variable and multi-faceted, rooted in multilingual facility and the absence of a single, monolithic source of identity. If the project of creating a European identity is viewed as nation-building, it is central to consider how the issue of language diversity is addressed at the European level. The paper begins by discussing the concept of national identity and the central role that language plays in its determination, as well as what modern conceptions of language planning bring to this process. After exploring the European language terrain, the paper considers whether the EU can even be said to have a language policy. The discussion focuses on multilingual education programs, the treatment of minority languages, and the issue of languages spoken by immigrant populations. Having presented these conceptual tools and policy surveys, an analytical framework is introduced that situates the nation-building process in relation to the creation of a common European identity.

15.
Planning to speak is a challenge for the brain, and the challenge varies between and within languages. Yet, little is known about how neural processes react to these variable challenges beyond the planning of individual words. Here, we examine how fundamental differences in syntax shape the time course of sentence planning. Most languages treat alike (i.e., align with each other) the 2 uses of a word like “gardener” in “the gardener crouched” and in “the gardener planted trees.” A minority keeps these formally distinct by adding special marking in 1 case, and some languages display both aligned and nonaligned expressions. Exploiting such a contrast in Hindi, we used electroencephalography (EEG) and eye tracking to suggest that this difference is associated with distinct patterns of neural processing and gaze behavior during early planning stages, preceding phonological word form preparation. Planning sentences with aligned expressions induces larger synchronization in the theta frequency band, suggesting higher working memory engagement, and more visual attention to agents than planning nonaligned sentences, suggesting delayed commitment to the relational details of the event. Furthermore, plain, unmarked expressions are associated with larger desynchronization in the alpha band than expressions with special markers, suggesting more engagement in information processing to keep overlapping structures distinct during planning. Our findings contrast with the observation that the form of aligned expressions is simpler, and they suggest that the global preference for alignment is driven not by its neurophysiological effect on sentence planning but by other sources, possibly by aspects of production flexibility and fluency or by sentence comprehension. This challenges current theories on how production and comprehension may affect the evolution and distribution of syntactic variants in the world’s languages.

Little is known about the neural processes involved in planning to speak. This study uses eye-tracking and EEG to show that speakers prepare sentence structures in different ways and rely on alpha and theta oscillations differently when planning sentences with and without agent case marking, challenging theories on how production and comprehension affect language evolution.

16.
We present data from 17 languages on the frequency with which a common set of words is used in everyday language. The languages are drawn from six language families representing 65 per cent of the world's 7000 languages. Our data were collected from linguistic corpora that record frequencies of use for the 200 meanings in the widely used Swadesh fundamental vocabulary. Our interest is to assess evidence for shared patterns of language use around the world, and for the relationship of language use to rates of lexical replacement, defined as the replacement of a word by a new unrelated or non-cognate word. Frequencies of use for words in the Swadesh list range from just a few per million words of speech to 191 000 or more. The average inter-correlation among languages in the frequency of use across the 200 words is 0.73 (p < 0.0001). The first principal component of these data accounts for 70 per cent of the variance in frequency of use. Elsewhere, we have shown that frequently used words in the Indo-European languages tend to be more conserved, and that this relationship holds separately for different parts of speech. A regression model combining the principal factor loadings derived from the worldwide sample along with their part of speech predicts 46 per cent of the variance in the rates of lexical replacement in the Indo-European languages. This suggests that Indo-European lexical replacement rates might be broadly representative of worldwide rates of change. Evidence for this speculation comes from using the same factor loadings and part-of-speech categories to predict a word's position in a list of 110 words ranked from the most slowly to the most rapidly evolving among 14 of the world's language families. This regression model accounts for 30 per cent of the variance. Our results point to a remarkable regularity in the way that human speakers use language, and hint that some of the words for this shared set of meanings have been evolving slowly, and others more rapidly, throughout human history.
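A simple way to picture the regression described above is an ordinary least-squares fit of lexical replacement rate on the frequency factor loading plus part-of-speech dummies. The sketch below uses made-up variable names and randomly generated placeholder data; the paper's actual model specification and data may differ.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data frame: one row per Swadesh meaning, with its loading on the
# first principal component of worldwide frequency of use, its part of speech,
# and an estimated lexical replacement rate.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "pc1_loading": rng.standard_normal(200),
    "pos": rng.choice(["noun", "verb", "adjective", "other"], size=200),
    "replacement_rate": rng.random(200),
})

# Replacement rate regressed on the frequency factor plus part-of-speech category.
model = smf.ols("replacement_rate ~ pc1_loading + C(pos)", data=df).fit()
print(model.rsquared)  # the paper reports ~46% variance explained for the real data
```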

17.
Evolution might have set the basic foundations for abstract mental representation long ago. Because of language, mental abilities would have reached different degrees of sophistication in mammals and in humans but would be, essentially, of the same nature. Thus, humans and animals might rely on the same basic mechanisms that could be masked in humans by the use of sophisticated strategies. In this paper, monkey and human abilities are compared in a variety of perceptual tasks including visual categorization to assess behavioural similarities and dissimilarities, and to determine the level of abstraction of monkeys' mental representations. The question of how these abstract representations might be encoded in the brain is then addressed. A comparative study of the neural processing underlying abstract cognitive operations in animals and humans might help to understand when abstraction emerged in the phylogenetic scale, and how it increased in complexity.

18.
This article describes the discovery of a set of biologically-driven semantic dimensions underlying the neural representation of concrete nouns, and then demonstrates how a resulting theory of noun representation can be used to identify simple thoughts through their fMRI patterns. We use factor analysis of fMRI brain imaging data to reveal the biological representation of individual concrete nouns like apple, in the absence of any pictorial stimuli. From this analysis emerge three main semantic factors underpinning the neural representation of nouns naming physical objects, which we label manipulation, shelter, and eating. Each factor is neurally represented in 3–4 different brain locations that correspond to a cortical network that co-activates in non-linguistic tasks, such as tool use pantomime for the manipulation factor. Several converging methods, such as the use of behavioral ratings of word meaning and text corpus characteristics, provide independent evidence of the centrality of these factors to the representations. The factors are then used with machine learning classifier techniques to show that the fMRI-measured brain representation of an individual concrete noun like apple can be identified with good accuracy from among 60 candidate words, using only the fMRI activity in the 16 locations associated with these factors. To further demonstrate the generativity of the proposed account, a theory-based model is developed to predict the brain activation patterns for words to which the algorithm has not been previously exposed. The methods, findings, and theory constitute a new approach of using brain activity for understanding how object concepts are represented in the mind.
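The identification step described above, matching an observed fMRI pattern against predicted patterns for 60 candidate words using only the factor-related locations, can be sketched as a nearest-neighbor comparison. The names and the similarity measure below are assumptions for illustration, not the authors' exact classifier.

```python
import numpy as np

def identify_word(observed, predicted_patterns):
    """Pick the candidate word whose predicted activation pattern, restricted to
    the factor-related locations, best matches the observed pattern.

    observed           : (16,) observed fMRI activity at the 16 factor locations
    predicted_patterns : dict word -> (16,) predicted activity for that word
    """
    def corr(a, b):
        # Pearson correlation as the match score between two activation patterns
        return np.corrcoef(a, b)[0, 1]

    return max(predicted_patterns, key=lambda w: corr(observed, predicted_patterns[w]))

# Usage (with hypothetical data): the returned word is the classifier's guess
# out of the 60 candidates.
# best_guess = identify_word(observed_pattern, {w: predicted[w] for w in candidate_words})
```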

19.
Traditionally, language processing has been attributed to a separate system in the brain, which supposedly works in an abstract propositional manner. However, there is increasing evidence suggesting that language processing is strongly interrelated with sensorimotor processing. Evidence for such an interrelation is typically drawn from interactions between language and perception or action. In the current study, the effect of words that refer to entities in the world with a typical location (e.g., sun, worm) on the planning of saccadic eye movements was investigated. Participants had to perform a lexical decision task on visually presented words and non-words. They responded by moving their eyes to a target in an upper (lower) screen position for a word (non-word) or vice versa. Eye movements were faster to locations compatible with the word's referent in the real world. These results provide evidence for the importance of linguistic stimuli in directing eye movements, even if the words do not directly transfer directional information.

20.
While embodied approaches to cognition have proved successful in explaining concrete concepts and words, they have more difficulty accounting for abstract concepts and words, and several proposals have been put forward. This work aims to test the Words As Tools proposal, according to which both abstract and concrete concepts are grounded in perception, action and emotional systems, but linguistic information is more important for abstract than for concrete concept representation, due to the different ways they are acquired: while for the acquisition of concrete concepts linguistic information might play a role, for the acquisition of abstract concepts it is crucial. We investigated the acquisition of concrete and abstract concepts and words, and verified its impact on conceptual representation. In Experiment 1, participants explored and categorized novel concrete and abstract entities, and were taught a novel label for each category. Later they performed a categorical recognition task and an image-word matching task to verify (a) whether and how the introduction of language changed the previously formed categories, (b) whether language had a greater weight for abstract than for concrete word representation, and (c) whether this difference had consequences for bodily responses. The results confirm that, even though both concrete and abstract concepts are grounded, language facilitates the acquisition of abstract concepts and plays a major role in their representation, resulting in faster responses with the mouth, which is typically associated with language production. Experiment 2 was a rating test aiming to verify whether the findings of Experiment 1 were simply due to heterogeneity, i.e. to the fact that the members of abstract categories were more heterogeneous than those of concrete categories. The results confirmed the effectiveness of our operationalization, showing that abstract concepts are more associated with the mouth and concrete ones with the hand, independently of heterogeneity.
