Similar documents
 20 similar documents retrieved (search time: 187 ms)
1.
The rate of lexical replacement estimates the diachronic stability of word forms on the basis of how frequently a proto-language word is replaced or retained in its daughter languages. Lexical replacement rate has been shown to be strongly related to word class and word frequency. In this paper, we argue that content words and function words behave differently with respect to lexical replacement rate, and we show that semantic factors predict the lexical replacement rate of content words. For the 167 content items in the Swadesh list, data were gathered on lexical replacement rate, word class, frequency, age of acquisition, number of synonyms, number of senses, arousal, imageability and average mutual information, either from published databases or from corpora and lexica. A linear regression model shows that, in addition to frequency, synonyms, senses and imageability are significantly related to the lexical replacement rate of content words, with the number of synonyms a word has being the strongest predictor. The model shows no differences in lexical replacement rate between word classes, and outperforms a model with word class and word frequency as the only predictors.
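To make the modelling approach concrete, here is a minimal sketch of an ordinary least-squares fit of replacement rate against a single predictor (number of synonyms), using the closed-form slope and intercept. The data points are invented for illustration and are not taken from the study, which fits a multivariate model over the full feature set.

```python
# Minimal sketch: univariate OLS fit of lexical replacement rate against
# synonym count. All data values below are hypothetical, for illustration only.

def ols_fit(xs, ys):
    """Return (slope, intercept) of the least-squares line y = a*x + b."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    a = cov / var
    return a, my - a * mx

# Hypothetical items: (synonym count, replacement rate per 10,000 years)
synonyms = [1, 2, 3, 4, 5, 6]
rates = [0.9, 1.3, 1.8, 2.1, 2.6, 3.0]

slope, intercept = ols_fit(synonyms, rates)
print(f"slope={slope:.3f}, intercept={intercept:.3f}")
```

A positive slope in such a fit would correspond to the paper's finding that words with more synonyms are replaced faster.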

2.
The lexical matrix is an integral part of the human language system. It provides the link between word form and word meaning. A simple lexical matrix is also at the center of any animal communication system, where it defines the associations between form and meaning of animal signals. We study the evolution and population dynamics of the lexical matrix. We assume that children learn the lexical matrix of their parents. This learning process is subject to mistakes: (i) children may not acquire all lexical items of their parents (incomplete learning); and (ii) children might acquire associations between word forms and word meanings that differ from their parents’ lexical items (incorrect learning). We derive an analytic framework that deals with incomplete learning. We calculate the maximum error rate that is compatible with a population maintaining a coherent lexical matrix of a given size. We calculate the equilibrium distribution of the number of lexical items known to individuals. Our analytic investigations are supplemented by numerical simulations that describe both incomplete and incorrect learning, and other extensions.
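The incomplete-learning dynamic can be illustrated with a drastically simplified sketch: if each child independently acquires each of its parent's lexical items with probability q, and missed items are lost to that lineage, the expected lexicon size decays geometrically across generations. The values of N, q, and g below are illustrative and are not parameters from the paper, which analyzes population-level maintenance rather than a single lineage.

```python
# Toy model of incomplete learning in a single lineage: each lexical item
# survives one generation of learning with probability q, so the expected
# lexicon size after g generations is N * q**g. Parameters are hypothetical.

def expected_lexicon(n_items, q, generations):
    """Expected number of lexical items surviving g generations."""
    return n_items * q ** generations

N, q = 100, 0.95
for g in (0, 1, 10, 50):
    print(g, round(expected_lexicon(N, q, g), 2))
```

The paper's "maximum error rate" result addresses exactly this tension: without a sufficiently accurate learning process (or population-level redundancy), a coherent lexical matrix of a given size cannot be maintained.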

3.
The initial process of identifying words from spoken language and the detection of more subtle regularities underlying their structure are mandatory processes for language acquisition. Little is known about the cognitive mechanisms that allow us to extract these two types of information and their specific time-course of acquisition following initial contact with a new language. We report time-related electrophysiological changes that occurred while participants learned an artificial language. These changes strongly correlated with the discovery of the structural rules embedded in the words. These changes were clearly different from those related to word learning and occurred during the first minutes of exposure. There is a functional distinction in the nature of the electrophysiological signals during acquisition: an increase in negativity (N400) in the central electrodes is related to word learning, and the development of a frontal positivity (P2) is related to rule learning. In addition, the results of an online implicit test and a post-learning test indicate that, once the rules of the language have been acquired, new words following the rule are processed as words of the language. By contrast, new words violating the rule induce syntax-related electrophysiological responses when inserted online in the stream (an early frontal negativity followed by a late posterior positivity) and clear lexical effects when presented in isolation (N400 modulation). The present study provides direct evidence suggesting that the mechanisms to extract words and structural dependencies from continuous speech are functionally segregated. When these mechanisms are engaged, the electrophysiological marker associated with rule learning appears very quickly, during the earliest phases of exposure to a new language.

4.
Filik R, Barber E. PLoS ONE. 2011;6(10):e25782
While reading silently, we often have the subjective experience of inner speech. However, there is currently little evidence regarding whether this inner voice resembles our own voice while we are speaking out loud. To investigate this issue, we compared reading behaviour of Northern and Southern English participants who have differing pronunciations for words like 'glass', in which the vowel duration is short in a Northern accent and long in a Southern accent. Participants' eye movements were monitored while they silently read limericks in which the end words of the first two lines (e.g., glass/class) would be pronounced differently by Northern and Southern participants. The final word of the limerick (e.g., mass/sparse) then either did or did not rhyme, depending on the reader's accent. Results showed disruption to eye movement behaviour when the final word did not rhyme, determined by the reader's accent, suggesting that inner speech resembles our own voice.

5.
Numerous studies have reported subliminal repetition and semantic priming in the visual modality. We transferred this paradigm to the auditory modality. Prime awareness was manipulated by a reduction of sound intensity level. Uncategorized prime words (according to a post-test) were followed by semantically related, unrelated, or repeated target words (presented without intensity reduction) and participants performed a lexical decision task (LDT). Participants with slower reaction times in the LDT showed semantic priming (faster reaction times for semantically related compared to unrelated targets) and negative repetition priming (slower reaction times for repeated compared to semantically related targets). This is the first report of semantic priming in the auditory modality without conscious categorization of the prime.

6.
Evidence indicates that adequate phonological abilities are necessary to develop proficient reading skills and that later in life phonology also has a role in the covert visual word recognition of expert readers. Impairments of acoustic perception, such as deafness, can lead to atypical phonological representations of written words and letters, which in turn can affect reading proficiency. Here, we report an experiment in which young adults with different levels of acoustic perception (i.e., hearing and deaf individuals) and different modes of communication (i.e., hearing individuals using spoken language, deaf individuals with a preference for sign language, and deaf individuals using the oral modality with less or no competence in sign language) performed a visual lexical decision task, which consisted of categorizing real words and consonant strings. The lexicality effect was restricted to deaf signers who responded faster to real words than consonant strings, showing over-reliance on whole word lexical processing of stimuli. No effect of stimulus type was found in deaf individuals using the oral modality or in hearing individuals. Thus, mode of communication modulates the lexicality effect. This suggests that learning a sign language during development shapes visuo-motor representations of words, which are tuned to the actions used to express them (phono-articulatory movements vs. hand movements) and to associated perceptions. As these visuo-motor representations are elicited during on-line linguistic processing and can overlap with the perceptual-motor processes required to execute the task, they can potentially produce interference or facilitation effects.

7.
The aim of this study was to investigate the effect of lexical context on the latency and the amplitude of the mismatch negativity (MMN) brain potential caused by perception of pseudowords. The event-related potentials were recorded according to the multideviant passive oddball paradigm using only pseudowords (control condition) or pseudowords mixed with Russian words of different lexical frequencies (lexical context). It was found that different MMN patterns were generated when the same pseudoword was presented in different contexts. Pseudoword presentation in a context with other pseudowords resulted in a relatively small amplitude and large latency of MMN. If the same pseudoword was presented in a context with words, it induced significantly increased amplitude and reduced latency of MMN varying in the range of 100–200 ms. It is supposed that the pseudoword presented in a context with words is perceived as a conceptually different stimulus, which leads to a significant increase in the MMN. Moreover, our findings support the hypothesis that the MMN is affected by lexical frequency. In particular, presentation of a high-frequency word induced a significantly more pronounced MMN response than a low-frequency one. High-frequency words also evoked an earlier response, which indicates more rapid access to a frequently used lexical entry. More frequent use of certain words results in stronger internal connections in the corresponding memory circuit, which in turn is determined by the lexical context. We hypothesize that the intensity of activation depends on the strength of lexical representation.

8.
In this paper the analysis of the human acoustic communication channel is presented as a potentially bio-anthropological subject of interest. After a brief survey of some important aspects of speech and expressive behaviour, a hypothesis is outlined: the verbal and the nonverbal content of the speech signal are not just transferred side by side; rather, there exist close interconnections between the linguistic and the expressive structures, as shown for speech melody. The 'cultural' language code makes use of a predominantly 'non-cultural' code of vocalisations for the purpose of linguistic disambiguation and of speeding up the communication process. The complementarity of these two codes, their principal independence of each other, and their different cerebral representation contribute to the high efficiency of speech as a communication tool.

9.
This paper focuses on what electrical and magnetic recordings of human brain activity reveal about spoken language understanding. Based on the high temporal resolution of these recordings, a fine-grained temporal profile of different aspects of spoken language comprehension can be obtained. Crucial aspects of speech comprehension are lexical access, selection and semantic integration. Results show that for words spoken in context, there is no 'magic moment' when lexical selection ends and semantic integration begins. Irrespective of whether words have early or late recognition points, semantic integration processing is initiated before words can be identified on the basis of the acoustic information alone. Moreover, for one particular event-related brain potential (ERP) component (the N400), equivalent impact of sentence- and discourse-semantic contexts is observed. This indicates that in comprehension, a spoken word is immediately evaluated relative to the widest interpretive domain available. In addition, this happens very quickly. Findings are discussed that show that often an unfolding word can be mapped onto discourse-level representations well before the end of the word. Overall, the time course of the ERP effects is compatible with the view that the different information types (lexical, syntactic, phonological, pragmatic) are processed in parallel and influence the interpretation process incrementally, that is, as soon as the relevant pieces of information are available. This is referred to as the immediacy principle.

10.
Reading familiar words differs from reading unfamiliar non-words in two ways. First, word reading is faster and more accurate than reading of unfamiliar non-words. Second, effects of letter length are reduced for words, particularly when they are presented in the right visual field in familiar formats. Two experiments are reported in which right-handed participants read aloud non-words presented briefly in their left and right visual fields before and after training on those items. The non-words were interleaved with familiar words in the naming tests. Before training, naming was slow and error prone, with marked effects of length in both visual fields. After training, fewer errors were made, naming was faster, and the effect of length was much reduced in the right visual field compared with the left. We propose that word learning creates orthographic word forms in the mid-fusiform gyrus of the left cerebral hemisphere. Those word forms allow words to access their phonological and semantic representations on a lexical basis. But orthographic word forms also interact with more posterior letter recognition systems in the middle/inferior occipital gyri, inducing more parallel processing of right visual field words than is possible for any left visual field stimulus, or for unfamiliar non-words presented in the right visual field.

11.
To what extent do phonological codes constrain orthographic output in handwritten production? We investigated how phonological codes constrain the selection of orthographic codes via sublexical and lexical routes in Chinese written production. Participants wrote down picture names in a picture-naming task in Experiment 1 or response words in a symbol-word associative writing task in Experiment 2. A sublexical phonological property of picture names (phonetic regularity: regular vs. irregular) in Experiment 1 and a lexical phonological property of response words (homophone density: dense vs. sparse) in Experiment 2, as well as the word frequency of the targets in both experiments, were manipulated. A facilitatory effect of word frequency was found in both experiments, in which words with high frequency were produced faster than those with low frequency. More importantly, we observed an inhibitory phonetic regularity effect, in which low-frequency picture names with regular first characters were slower to write than those with irregular ones, and an inhibitory homophone density effect, in which characters with dense homophone density were produced more slowly than those with sparse homophone density. Results suggested that phonological codes constrained handwritten production via lexical and sublexical routes.

12.
In this paper we present a novel theory of the cognitive and neural processes by which adults learn new spoken words. This proposal builds on neurocomputational accounts of lexical processing and spoken word recognition and complementary learning systems (CLS) models of memory. We review evidence from behavioural studies of word learning that, consistent with the CLS account, show two stages of lexical acquisition: rapid initial familiarization followed by slow lexical consolidation. These stages map broadly onto two systems involved in different aspects of word learning: (i) rapid, initial acquisition supported by medial temporal and hippocampal learning, (ii) slower neocortical learning achieved by offline consolidation of previously acquired information. We review behavioural and neuroscientific evidence consistent with this account, including a meta-analysis of PET and functional Magnetic Resonance Imaging (fMRI) studies that contrast responses to spoken words and pseudowords. From this meta-analysis we derive predictions for the location and direction of cortical response changes following familiarization with pseudowords. This allows us to assess evidence for learning-induced changes that convert pseudoword responses into real word responses. Results provide unique support for the CLS account since hippocampal responses change during initial learning, whereas cortical responses to pseudowords only become word-like if overnight consolidation follows initial learning.

13.
Humans can recognize spoken words with unmatched speed and accuracy. Hearing the initial portion of a word such as "formu…" is sufficient for the brain to identify "formula" from the thousands of other words that partially match. Two alternative computational accounts propose that partially matching words (1) inhibit each other until a single word is selected ("formula" inhibits "formal" by lexical competition) or (2) are used to predict upcoming speech sounds more accurately (segment prediction error is minimal after sequences like "formu…"). To distinguish these theories we taught participants novel words (e.g., "formubo") that sound like existing words ("formula") on two successive days. Computational simulations show that knowing "formubo" increases lexical competition when hearing "formu…", but reduces segment prediction error. Conversely, when the sounds in "formula" and "formubo" diverge, the reverse is observed. The time course of magnetoencephalographic brain responses in the superior temporal gyrus (STG) is uniquely consistent with a segment prediction account. We propose a predictive coding model of spoken word recognition in which STG neurons represent the difference between predicted and heard speech sounds. This prediction error signal explains the efficiency of human word recognition and simulates neural responses in auditory regions.
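The segment prediction idea can be sketched with a toy lexicon: the probability of the next segment given a prefix is estimated from the frequency-weighted set of words matching that prefix, and prediction error is its surprisal. The lexicon, frequencies, and use of letters as stand-ins for speech segments are all illustrative assumptions, not the paper's actual simulations.

```python
import math

# Toy lexicon with hypothetical frequency counts (letters stand in for
# speech segments). Invented for illustration only.
LEXICON = {"formula": 50, "formal": 30, "format": 20, "fog": 10}

def next_segment_prob(prefix, segment):
    """P(next segment | prefix), frequency-weighted over matching words."""
    matches = {w: f for w, f in LEXICON.items()
               if w.startswith(prefix) and len(w) > len(prefix)}
    total = sum(matches.values())
    hits = sum(f for w, f in matches.items() if w[len(prefix)] == segment)
    return hits / total if total else 0.0

def surprisal(prefix, segment):
    """Segment prediction error in bits: -log2 P(segment | prefix)."""
    p = next_segment_prob(prefix, segment)
    return math.log2(1 / p) if p > 0 else float("inf")

# After "formu" only "formula" matches, so the next segment is fully
# predictable; after "form" the continuation is still uncertain.
print(surprisal("formu", "l"))
print(surprisal("form", "u"))
```

In this sketch, adding a competitor like "formubo" to the lexicon would raise the surprisal of "l" after "formu…", mirroring the manipulation used to tease the two accounts apart.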

14.
This study tested the effect of acute exposure to a commercial air freshener, derived from fragrant botanical extracts, at an average concentration of 3.16 mg/m3 total volatile organic compounds, on the lexical decision performance of 28 naive participants. Participants attended two 18-min sessions on separate days and were continuously exposed to the fragrance in either the first (F/NF) or second (NF/F) session. Participants were not instructed about the fragrance. Exposure to the fragrance did not affect high-frequency word recognition. However, there was an order-of-administration effect for low-frequency word recognition accuracy. When the fragrance was administered first, before the no-odor control condition, it did not affect accuracy, but when it was administered second, after the control condition, it significantly decreased low-frequency word recognition accuracy. Reaction times to low-frequency words were significantly slower than those for high-frequency words, but no effect of either fragrance or order of administration on reaction times was found. The presence of fragrance in the second session apparently served as a distraction that impaired lexical task performance accuracy. The introduction of fragrances into buildings may not necessarily facilitate all aspects of work performance as anticipated.

15.
A fundamental issue in cognitive neuroscience is the existence of two major reading processes, sublexical and lexical, and their possible segregation in the left posterior perisylvian cortex. Using cortical electrostimulation mapping, we identified the cortical areas involved in reading either orthographically irregular words (lexical, “direct” process) or pronounceable pseudowords (sublexical, “indirect” process) in 14 right-handed neurosurgical patients while video-recording behavioral effects. An intraoperative neuronavigation system and Montreal Neurological Institute (MNI) stereotactic coordinates were used to identify the localization of stimulation sites. Fifty-one reading interference areas were found that affected either words (14 areas), pseudowords (11 areas), or both (26 areas). Forty-one (80%) corresponded to impairment of the phonological level of reading processes. Reading processes involved discrete, highly localized perisylvian cortical areas with individual variability. MNI coordinates throughout the group exhibited a clear segregation according to the tested reading route: specific pseudoword reading interferences were concentrated in a restricted inferior and anterior subpart of the left supramarginal gyrus (barycentre x = −68.1; y = −25.9; z = 30.2; Brodmann’s area 40), while specific word reading areas were located almost exclusively alongside the left superior temporal gyrus. Although half of the reading interferences found were nonspecific, the finding of specific lexical or sublexical interferences is new evidence that lexical and sublexical processes of reading could be partially supported by distinct cortical sub-regions despite their anatomical proximity. These data are in line with many brain activation studies showing that left superior temporal and inferior parietal regions play a crucial role in word and pseudoword reading, respectively, and are core regions for dyslexia.

16.
Wang XD, Gu F, He K, Chen LH, Chen L. PLoS ONE. 2012;7(1):e30027

Background

Extraction of linguistically relevant auditory features is critical for speech comprehension in complex auditory environments, in which the relationships between acoustic stimuli are often abstract and constant while the stimuli per se are varying. These relationships are referred to as the abstract auditory rule in speech and have been investigated for their underlying neural mechanisms at an attentive stage. However, the issue of whether or not there is a sensory intelligence that enables one to automatically encode abstract auditory rules in speech at a preattentive stage has not yet been thoroughly addressed.

Methodology/Principal Findings

We chose Chinese lexical tones for the current study because they help to define word meaning and hence facilitate the fabrication of an abstract auditory rule in a speech sound stream. We continuously presented native Chinese speakers with Chinese vowels differing in formant, intensity, and level of pitch to construct a complex and varying auditory stream. In this stream, most of the sounds shared flat lexical tones to form an embedded abstract auditory rule. Occasionally the rule was randomly violated by those with a rising or falling lexical tone. The results showed that the violation of the abstract auditory rule of lexical tones evoked a robust preattentive auditory response, as revealed by whole-head electrical recordings of the mismatch negativity (MMN), though none of the subjects acquired explicit knowledge of the rule or became aware of the violation.

Conclusions/Significance

Our results demonstrate that there is an auditory sensory intelligence in the perception of Chinese lexical tones. The existence of this intelligence suggests that humans can automatically extract abstract auditory rules in speech at a preattentive stage to ensure speech communication in complex and noisy auditory environments without drawing on conscious resources.

17.

Background

Alexithymia, a condition characterized by deficits in interpreting and regulating feelings, is a risk factor for a variety of psychiatric conditions. Little is known about how alexithymia influences the processing of emotions in music and speech. Appreciation of such emotional qualities in auditory material is fundamental to human experience and has profound consequences for functioning in daily life. We investigated the neural signature of such emotional processing in alexithymia by means of event-related potentials.

Methodology

Affective music and speech prosody were presented as targets following affectively congruent or incongruent visual word primes in two conditions. In two further conditions, affective music and speech prosody served as primes and visually presented words with affective connotations were presented as targets. Thirty-two participants (16 male) judged the affective valence of the targets. We tested the influence of alexithymia on cross-modal affective priming and on N400 amplitudes, indicative of individual sensitivity to an affective mismatch between words, prosody, and music. Our results indicate that the affective priming effect for prosody targets tended to be reduced with increasing scores on alexithymia, while no behavioral differences were observed for music and word targets. At the electrophysiological level, alexithymia was associated with significantly smaller N400 amplitudes in response to affectively incongruent music and speech targets, but not to incongruent word targets.

Conclusions

Our results suggest a reduced sensitivity to the emotional qualities of speech and music in alexithymia during affective categorization. This deficit becomes evident primarily in situations in which a verbalization of emotional information is required.

18.
Memory traces for words are frequently conceptualized neurobiologically as networks of neurons interconnected via reciprocal links developed through associative learning in the process of language acquisition. Neurophysiological reflection of activation of such memory traces has been reported using the mismatch negativity brain potential (MMN), which demonstrates an enhanced response to meaningful words over meaningless items. This enhancement is believed to be generated by the activation of strongly intraconnected long-term memory circuits for words that can be automatically triggered by spoken linguistic input and that are absent for unfamiliar phonological stimuli. This conceptual framework critically predicts different amounts of activation depending on the strength of the word's lexical representation in the brain. The frequent use of words should lead to more strongly connected representations, whereas less frequent items would be associated with more weakly linked circuits. A word with higher frequency of occurrence in the subject's language should therefore lead to a more pronounced lexical MMN response than its low-frequency counterpart. We tested this prediction by comparing the event-related potentials elicited by low- and high-frequency words in a passive oddball paradigm; physical stimulus contrasts were kept identical. We found that, consistent with our prediction, presenting the high-frequency stimulus led to a significantly more pronounced MMN response relative to the low-frequency one, a finding that is highly similar to previously reported MMN enhancement to words over meaningless pseudowords. Furthermore, activation elicited by the higher-frequency word peaked earlier relative to the low-frequency one, suggesting more rapid access to frequently used lexical entries. These results lend further support to the above view of word memory traces as strongly connected assemblies of neurons. The speed and magnitude of their activation appears to be linked to the strength of internal connections in a memory circuit, which is in turn determined by the everyday use of language elements.

19.
20.
When and how do infants develop a semantic system of words that are related to each other? We investigated word–word associations in early lexical development using an adaptation of the inter-modal preferential looking task in which word pairs (as opposed to single target words) were used to direct infants’ attention towards a target picture. Two words (prime and target) were presented in quick succession, after which infants were presented with a picture pair (target and distracter). Prime–target word pairs were either semantically and associatively related or unrelated; the targets were either named or unnamed. Experiment 1 demonstrated a lexical–semantic priming effect for 21-month-olds but not for 18-month-olds: unrelated prime words interfered with linguistic target identification for 21-month-olds. Follow-up experiments confirmed the interfering effects of unrelated prime words and identified the existence of repetition priming effects as young as 18 months of age. The results of these experiments indicate that infants have begun to develop semantic–associative links between lexical items as early as 21 months of age.
