Similar Literature
A total of 20 similar records were retrieved.
1.
Evidence indicates that adequate phonological abilities are necessary to develop proficient reading skills and that later in life phonology also has a role in the covert visual word recognition of expert readers. Impairments of acoustic perception, such as deafness, can lead to atypical phonological representations of written words and letters, which in turn can affect reading proficiency. Here, we report an experiment in which young adults with different levels of acoustic perception (i.e., hearing and deaf individuals) and different modes of communication (i.e., hearing individuals using spoken language, deaf individuals with a preference for sign language, and deaf individuals using the oral modality with little or no competence in sign language) performed a visual lexical decision task, which consisted of categorizing real words and consonant strings. The lexicality effect was restricted to deaf signers, who responded faster to real words than to consonant strings, showing over-reliance on whole-word lexical processing of stimuli. No effect of stimulus type was found in deaf individuals using the oral modality or in hearing individuals. Thus, mode of communication modulates the lexicality effect. This suggests that learning a sign language during development shapes visuo-motor representations of words, which are tuned to the actions used to express them (phono-articulatory movements vs. hand movements) and to associated perceptions. As these visuo-motor representations are elicited during on-line linguistic processing and can overlap with the perceptual-motor processes required to execute the task, they can potentially produce interference or facilitation effects.
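The lexicality effect described above is simply the latency difference between real words and consonant strings within each group. As an illustration only, here is a minimal Python sketch of how such a per-group effect could be computed and tested; the file name and column names (participant, group, stimulus_type, rt_ms) are assumptions, not the authors' analysis code.

```python
# Minimal sketch (hypothetical data layout): lexicality effect per group
# from long-format lexical decision trials.
import pandas as pd
from scipy import stats

trials = pd.read_csv("lexical_decision_trials.csv")  # hypothetical file

# Mean RT per participant and stimulus type, so the test runs over participants.
per_subject = (trials
               .groupby(["group", "participant", "stimulus_type"])["rt_ms"]
               .mean()
               .unstack("stimulus_type"))

for group, data in per_subject.groupby(level="group"):
    effect = data["consonant_string"] - data["word"]   # positive = words faster
    t, p = stats.ttest_rel(data["consonant_string"], data["word"])
    print(f"{group}: lexicality effect = {effect.mean():.0f} ms, t = {t:.2f}, p = {p:.3f}")
```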

2.
3.
Psycholinguistic studies of sign language processing provide valuable opportunities to assess whether language phenomena, which are primarily studied in spoken language, are fundamentally shaped by peripheral biology. For example, we know that when given a choice between two syntactically permissible ways to express the same proposition, speakers tend to choose structures that were recently used, a phenomenon known as syntactic priming. Here, we report two experiments testing syntactic priming of a noun phrase construction in American Sign Language (ASL). Experiment 1 shows that second language (L2) signers with normal hearing exhibit syntactic priming in ASL and that priming is stronger when the head noun is repeated between prime and target (the lexical boost effect). Experiment 2 shows that syntactic priming is equally strong among deaf native L1 signers, deaf late L1 learners, and hearing L2 signers. Experiment 2 also tested for, but did not find evidence of, phonological or semantic boosts to syntactic priming in ASL. These results show that despite the profound differences between spoken and signed languages in terms of how they are produced and perceived, the psychological representation of sentence structure (as assessed by syntactic priming) operates similarly in sign and speech.
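Syntactic priming of this kind is typically quantified as the effect of the prime's structure on the probability of producing that same structure in the target, with the lexical boost appearing as an interaction with head-noun repetition. A minimal sketch of such an analysis follows; the variable names (produced_A, prime_A, noun_repeated) and file are hypothetical, not the authors' materials.

```python
# Minimal sketch (assumed variable names): logistic regression for syntactic
# priming and the lexical boost. One row per target trial.
import pandas as pd
import statsmodels.formula.api as smf

trials = pd.read_csv("asl_priming_trials.csv")  # hypothetical file

# Priming = main effect of prime_A; lexical boost = prime_A x noun_repeated interaction.
model = smf.logit("produced_A ~ prime_A * noun_repeated", data=trials).fit()
print(model.summary())
# A full analysis would add by-participant and by-item random effects
# (i.e., a mixed-effects logistic regression).
```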

4.
To what extent do phonological codes constrain orthographic output in handwritten production? We investigated how phonological codes constrain the selection of orthographic codes via sublexical and lexical routes in Chinese written production. Participants wrote down picture names in a picture-naming task in Experiment 1 or response words in a symbol-word associative writing task in Experiment 2. A sublexical phonological property of picture names (phonetic regularity: regular vs. irregular) in Experiment 1 and a lexical phonological property of response words (homophone density: dense vs. sparse) in Experiment 2, as well as word frequency of the targets in both experiments, were manipulated. A facilitatory effect of word frequency was found in both experiments, in which words with high frequency were produced faster than those with low frequency. More importantly, we observed an inhibitory phonetic regularity effect, in which low-frequency picture names with regular first characters were slower to write than those with irregular ones, and an inhibitory homophone density effect, in which characters with dense homophone density were produced more slowly than those with sparse homophone density. These results suggest that phonological codes constrain handwritten production via both lexical and sublexical routes.
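The key evidence in Experiment 1 is a frequency-by-regularity interaction in writing latencies. As a rough illustration (not the authors' analysis), a 2 x 2 analysis over participant cell means could look like the sketch below; the file and column names are assumed, and a repeated-measures or mixed model would be used in practice rather than plain OLS.

```python
# Minimal sketch (hypothetical data layout): frequency x regularity analysis
# of written picture-naming latencies.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

cells = (pd.read_csv("written_naming.csv")          # hypothetical file
           .groupby(["participant", "frequency", "regularity"], as_index=False)["latency_ms"]
           .mean())

model = smf.ols("latency_ms ~ C(frequency) * C(regularity)", data=cells).fit()
print(sm.stats.anova_lm(model, typ=2))  # the interaction row tests frequency x regularity
```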

5.

Objective

Word finding depends on the processing of semantic and lexical information, and it involves an intermediate level for mapping semantic-to-lexical information which also subserves lexical-to-semantic mapping during word comprehension. However, the brain regions implementing these components are still controversial and have not been clarified via a comprehensive lesion model encompassing the whole range of language-related cortices. Primary progressive aphasia (PPA), for which anomia is thought to be the most common sign, provides such a model, but the exploration of cortical areas impacting naming in its three main variants and the underlying processing mechanisms is still lacking.

Methods

We addressed this double issue, related to language structure and PPA, with thirty patients (11 semantic, 12 logopenic, 7 agrammatic variant) using a picture-naming task and voxel-based morphometry for anatomo-functional correlation. First, we analyzed correlations for each of the three variants to identify the regions impacting naming in PPA and to disentangle the core regions of word finding. We then combined the three variants and correlation analyses for naming (semantic-to-lexical mapping) and single-word comprehension (lexical-to-semantic mapping), predicting an overlap zone corresponding to a bidirectional lexical-semantic hub.

Results and Conclusions

Our results showed that superior portions of the left temporal pole and left posterior temporal cortices impact semantic and lexical naming mechanisms in semantic and logopenic PPA, respectively. In agrammatic PPA, naming deficits were rare and did not correlate with any cortical region. Combined analyses revealed a cortical overlap zone in superior/middle mid-temporal cortices, distinct from the two former regions, impacting bidirectional binding of lexical and semantic information. Altogether, our findings indicate that lexical/semantic word processing depends on an anterior-posterior axis within lateral-temporal cortices, including an anatomically intermediate hub dedicated to lexical-semantic integration. Within this axis, our data reveal the underpinnings of anomia in the PPA variants, which is of relevance for both diagnosis and future therapy strategies.
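The anatomo-functional correlation at the heart of this approach relates each patient's voxelwise gray-matter values to a behavioral score. The toy NumPy sketch below shows the bare logic on simulated data; it is not the authors' VBM pipeline, which would additionally include covariates (e.g., age, total intracranial volume), spatial preprocessing, and multiple-comparison correction.

```python
# Minimal sketch (simulated inputs): voxelwise correlation between gray-matter
# density and naming scores across patients.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_patients, n_voxels = 30, 5000                    # toy dimensions
gm = rng.normal(size=(n_patients, n_voxels))       # gray-matter values per voxel
naming = rng.normal(size=n_patients)               # naming score per patient

r = np.array([stats.pearsonr(gm[:, v], naming)[0] for v in range(n_voxels)])
t = r * np.sqrt((n_patients - 2) / (1 - r**2))     # convert r to t
p = 2 * stats.t.sf(np.abs(t), df=n_patients - 2)
print(f"{(p < 0.001).sum()} voxels below p < .001 (uncorrected)")
```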

6.
This paper presents an experiment that explored the role of domain-general inhibitory control in language switching. Reaction times (RTs) and event-related brain potentials (ERPs) were recorded while low-proficiency bilinguals with high and low inhibitory control (IC) switched between overt picture naming in their L1 and their L2. Results showed that the language switch costs of bilinguals with high IC were symmetrical, while those of bilinguals with low IC were not. The N2 component failed to show a significant interaction between group, language and task, indicating that inhibition may not come into play during the language task schema competition phase. The late positive component (LPC), however, showed larger amplitudes for L2 repeat and switch trials than for L1 trials in the high-IC group, indicating that inhibition may play a key role during the lexical response selection phase. These findings suggest that domain-general inhibitory control plays an important role in modulating language switch costs and that its influence arises specifically in the lexical selection phase.
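Switch costs here are the RT difference between switch and repeat trials within each language, and (a)symmetry is the difference between the L1 and L2 costs. A minimal sketch of that computation is shown below; the file and column names (participant, ic_group, language, trial_type, rt_ms) are assumptions for illustration.

```python
# Minimal sketch (hypothetical data layout): switch costs and their asymmetry.
import pandas as pd

trials = pd.read_csv("switching_trials.csv")  # hypothetical file

means = (trials
         .groupby(["ic_group", "participant", "language", "trial_type"])["rt_ms"]
         .mean()
         .unstack("trial_type"))
means["switch_cost"] = means["switch"] - means["repeat"]

# Asymmetry = L1 switch cost minus L2 switch cost; near zero = symmetrical costs.
cost = means["switch_cost"].unstack("language")
cost["asymmetry"] = cost["L1"] - cost["L2"]
print(cost.groupby(level="ic_group")[["L1", "L2", "asymmetry"]].mean())
```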

7.
Higher N170 amplitudes to words and to faces were recently reported for faster readers of German. Since the shallow German orthography allows phonological recoding of single letters, the reported speed advantages might have their origin in especially well-developed visual processing skills of faster readers. In contrast to German, adult readers of Hebrew are forced to process letter chunks up to whole words. This dependence on more complex visual processing might have created ceiling effects for this skill. Therefore, the current study examined whether visual processing skills, as reflected by N170 amplitudes, also explain reading speed differences in the deep Hebrew orthography. Forty university students, native speakers of Hebrew without reading impairments, completed a lexical decision task (i.e., deciding whether a visually presented stimulus represents a real or a pseudo word) and a face decision task (i.e., deciding whether a face was presented complete or with missing facial features) while their electroencephalogram was recorded from 64 scalp positions. In both tasks, stronger event-related potentials (ERPs) were observed for faster readers in time windows at about 200 ms. Unlike in previous studies, ERP waveforms in the relevant time windows did not correspond to N170 scalp topographies. The results support the notion of visual processing ability as an orthography-independent marker of reading proficiency, which advances our understanding of regular and impaired reading development.
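The core measure relating ERPs to reading speed is the mean amplitude in a time window around 200 ms over a set of electrodes, correlated across participants with a reading-speed score. The sketch below shows that logic on simulated arrays; the channel indices, window bounds, and data are illustrative assumptions, not the authors' pipeline.

```python
# Minimal sketch (simulated arrays): correlating mean amplitude in a ~200 ms
# window with reading speed across participants.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_subjects, n_channels, n_times = 40, 64, 300
times = np.linspace(-0.1, 0.5, n_times)
erp = rng.normal(size=(n_subjects, n_channels, n_times))   # average waveforms (toy data)
reading_speed = rng.normal(size=n_subjects)                # e.g., words per minute

window = (times >= 0.17) & (times <= 0.23)                 # window around 200 ms
posterior_channels = list(range(56, 64))                   # assumed electrode indices

amplitude = erp[:, posterior_channels, :][:, :, window].mean(axis=(1, 2))
r, p = stats.pearsonr(amplitude, reading_speed)
print(f"r = {r:.2f}, p = {p:.3f}")
```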

8.
Reading familiar words differs from reading unfamiliar non-words in two ways. First, word reading is faster and more accurate than reading of unfamiliar non-words. Second, effects of letter length are reduced for words, particularly when they are presented in the right visual field in familiar formats. Two experiments are reported in which right-handed participants read aloud non-words presented briefly in their left and right visual fields before and after training on those items. The non-words were interleaved with familiar words in the naming tests. Before training, naming was slow and error-prone, with marked effects of length in both visual fields. After training, fewer errors were made, naming was faster, and the effect of length was much reduced in the right visual field compared with the left. We propose that word learning creates orthographic word forms in the mid-fusiform gyrus of the left cerebral hemisphere. Those word forms allow words to access their phonological and semantic representations on a lexical basis. But orthographic word forms also interact with more posterior letter recognition systems in the middle/inferior occipital gyri, inducing more parallel processing of right visual field words than is possible for any left visual field stimulus, or for unfamiliar non-words presented in the right visual field.
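The length effect in such designs is often summarized as the slope of naming RT over the number of letters, computed separately per visual field and session. A minimal sketch of that summary follows; the file and column names are assumptions used only to illustrate the computation.

```python
# Minimal sketch (hypothetical data layout): letter-length slope (ms per letter)
# by session and visual field.
import pandas as pd
from scipy import stats

trials = pd.read_csv("nonword_training.csv")  # hypothetical file

def length_slope(df):
    # extra naming time per additional letter
    return stats.linregress(df["length"], df["rt_ms"]).slope

slopes = (trials
          .groupby(["session", "visual_field", "participant"])[["length", "rt_ms"]]
          .apply(length_slope)
          .groupby(level=["session", "visual_field"])
          .mean())
print(slopes)  # expect the post-training right-visual-field slope to be the smallest
```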

9.
Primary progressive aphasia (PPA) is a neurodegenerative syndrome characterized by an insidious onset and gradual progression of deficits that can involve any aspect of language, including word finding, object naming, fluency, syntax, phonology and word comprehension. The initial symptoms occur in the absence of major deficits in other cognitive domains, including episodic memory, visuospatial abilities and visuoconstruction. According to recent diagnostic guidelines, PPA is typically divided into three variants: nonfluent variant PPA (also termed progressive nonfluent aphasia), semantic variant PPA (also termed semantic dementia) and logopenic/phonological variant PPA (also termed logopenic progressive aphasia). The paper describes a 79-year-old man who presented with normal motor speech and production rate, impaired single-word retrieval, and phonemic errors in spontaneous speech and confrontation naming. Confrontation naming was strongly affected by lexical frequency. He was impaired on repetition of sentences and phrases. Reading was intact for regularly spelled words but not for irregular words (surface dyslexia). Comprehension was spared at the single-word level, but impaired for complex sentences. He performed within the normal range on the Dutch equivalent of the Pyramids and Palm Trees (PPT) Pictures Test, indicating that semantic processing was preserved. There was, however, a slight deficiency on the PPT Words Test, which taps semantic knowledge of verbal associations. His core deficit was interpreted as an inability to retrieve stored lexical-phonological information for spoken word production in spontaneous speech, confrontation naming, repetition and reading aloud.

10.
An essential step in creating phonology, according to the language production model of Levelt, Roelofs and Meyer, is to assemble phonemes into a metrical frame. However, it has recently been proposed that different languages may rely on phonological units of different grain sizes to construct phonology. For instance, it has been proposed that, instead of phonemes, Mandarin Chinese uses syllables and Japanese uses moras to fill the metrical frame. In this study, we used a masked priming-naming task to investigate how bilinguals assemble their phonology for each language when the two languages differ in grain size. Highly proficient Mandarin Chinese-English bilinguals showed a significant masked onset priming effect in English (L2), and a significant masked syllabic priming effect in Mandarin Chinese (L1). These results suggest that their proximate unit is phonemic in L2 (English), and that bilinguals may use different phonological units depending on the language being processed. Additionally, under some conditions, a significant sub-syllabic priming effect was observed even in Mandarin Chinese, which indicates that L2 phonology exerts an influence on L1 target processing as a consequence of having a good command of English.

11.
Traditionally, language processing has been attributed to a separate system in the brain, which supposedly works in an abstract propositional manner. However, there is increasing evidence suggesting that language processing is strongly interrelated with sensorimotor processing. Evidence for such an interrelation is typically drawn from interactions between language and perception or action. In the current study, the effect of words that refer to entities in the world with a typical location (e.g., sun, worm) on the planning of saccadic eye movements was investigated. Participants had to perform a lexical decision task on visually presented words and non-words. They responded by moving their eyes to a target in an upper (lower) screen position for a word (non-word) or vice versa. Eye movements were faster to locations compatible with the word's referent in the real world. These results provide evidence for the importance of linguistic stimuli in directing eye movements, even when the words do not directly convey directional information.

12.
Findings on song perception and song production have increasingly suggested that common but partially distinct neural networks exist for processing lyrics and melody. However, the neural substrates of song recognition remain to be investigated. The purpose of this study was to examine, using positron emission tomography (PET), the neural substrates involved in accessing the “song lexicon”, conceived as a representational system that might provide links between the musical and phonological lexicons. We exposed participants to auditory stimuli consisting of familiar and unfamiliar songs presented in three ways: sung lyrics (song), sung lyrics on a single pitch (lyrics), and the sung syllable ‘la’ on the original pitches (melody). The auditory stimuli were designed to have equivalent familiarity to participants, and they were recorded at exactly the same tempo. Eleven right-handed nonmusicians participated in four conditions: three familiarity decision tasks using song, lyrics, and melody, and a sound type decision task (control) that was designed to engage perceptual and prelexical processing but not lexical processing. The contrasts (familiarity decision tasks versus control) showed no common areas of activation between lyrics and melody. This result indicates that essentially separate neural networks exist in semantic memory for the verbal and melodic processing of familiar songs. Verbal lexical processing recruited the left fusiform gyrus and the left inferior occipital gyrus, whereas melodic lexical processing engaged the right middle temporal sulcus and the bilateral temporo-occipital cortices. Moreover, we found that song specifically activated the left posterior inferior temporal cortex, which may serve as an interface between verbal and musical representations in order to facilitate song recognition.

13.
Visual crowding (the inability to see an object when it is surrounded by flankers in the periphery) does not block semantic activation: words rendered unrecognizable by visual crowding still generated robust semantic priming in subsequent lexical decision tasks. Based on this previous finding, the current study further explored whether unrecognizable crowded words can be temporally integrated into a phrase. By showing one word at a time, we presented Chinese four-word idioms with either a congruent or incongruent ending word in order to examine whether the three preceding crowded words can be temporally integrated to form a semantic context that affects the processing of the ending word. Results from both behavioral (Experiment 1) and event-related potential (Experiments 2 and 3) measures showed a congruency effect only in the non-crowded condition, which does not support the existence of unconscious multi-word integration. Aside from four-word idioms, we also found that two-word (modifier + adjective combination) integration, the simplest kind of temporal semantic integration, did not occur under visual crowding (Experiment 4). Our findings suggest that integration of temporally separated words might require conscious awareness, at least under the timing conditions tested in the current study.

14.
Cognitive science has a rich history of interest in the ways that languages represent abstract and concrete concepts (e.g., idea vs. dog). Until recently, this focus has centered largely on aspects of word meaning and semantic representation. However, recent corpora analyses have demonstrated that abstract and concrete words are also marked by phonological, orthographic, and morphological differences. These regularities in sound-meaning correspondence potentially allow listeners to infer certain aspects of semantics directly from word form. We investigated this relationship between form and meaning in a series of four experiments. In Experiments 1-2, we examined the role of metalinguistic knowledge in semantic decision by asking participants to make semantic judgments for aurally presented nonwords selectively varied by specific acoustic and phonetic parameters. Participants consistently associated increased word length and diminished wordlikeness with abstract concepts. In Experiment 3, participants completed a semantic decision task (i.e., abstract or concrete) for real words varied by length and concreteness. Participants were more likely to misclassify longer, inflected words (e.g., "apartment") as abstract and shorter, uninflected abstract words (e.g., "fate") as concrete. In Experiment 4, we used multiple regression to predict trial-level naming data from a large corpus of nouns, which revealed significant interaction effects between concreteness and word form. Together these results provide converging evidence for the hypothesis that listeners map sound to meaning through a non-arbitrary process using prior knowledge about statistical regularities in the surface forms of words.
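The Experiment 4 analysis is a regression in which form variables interact with concreteness in predicting naming latencies. A minimal sketch of that kind of model is given below; the file name and predictor names (concreteness, length_letters, wordlikeness) are assumptions, not the authors' corpus fields or exact model specification.

```python
# Minimal sketch (hypothetical corpus fields): predicting naming RTs from
# concreteness, word-form variables, and their interactions.
import pandas as pd
import statsmodels.formula.api as smf

items = pd.read_csv("noun_naming_corpus.csv")  # hypothetical file

model = smf.ols(
    "rt_ms ~ concreteness * length_letters + concreteness * wordlikeness",
    data=items,
).fit()
print(model.summary())  # the interaction terms index form-meaning correspondence
```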

15.
An important issue in the visual word comprehension literature is whether or not semantic access is mediated by phonological processing. In this paper, we present a Chinese individual, YGA, who provides converging evidence to directly address this issue. YGA has sustained damage to the left posterior superior and middle temporal lobe, and shows difficulty in orally naming pictures and in reading printed words aloud. He makes phonological errors on these tasks and also semantic errors in picture naming, indicating a deficit in accessing phonological representations for output. However, his understanding of the meaning of visually presented words is intact. Such a profile challenges the hypothesis that semantic access in reading is phonologically mediated and provides further evidence for the universal principle of direct semantic access in reading. Supported by Grants PCSIRT (IRT0710), National Natural Science Foundation of China (Grant Nos. 30770715, 30700224), and Beijing Natural Science Foundation (Grant No. 7082051).

16.
Differences in the neural processing of six categories of pictorial stimuli (maps, body parts, objects, animals, famous faces and colours) were investigated using positron emission tomography. Stimuli were presented either with or without the written name of the picture, thereby creating a naming condition and a reading condition. As predicted, naming increased the demands on lexical processes. This was demonstrated by activation of the left temporal lobe in a posterior region associated with name retrieval in several previous studies. This lexical effect was common to all meaningful stimuli and no category-specific effects were observed for naming relative to reading. Nevertheless, category differences were found when naming and reading were considered together. Stimuli with greater visual complexity (animals, faces and maps) enhanced activation in the left extrastriate cortex. Furthermore, map recognition, which requires greater spatio-topographical processing, also activated the right occipito-parietal and parahippocampal cortices. These effects in the visuo-spatial regions emphasize inevitable differences in the perceptual properties of pictorial stimuli. In the semantic temporal regions, famous faces and objects enhanced activation in the left antero-lateral and postero-lateral cortices, respectively. In addition, we showed that the same posterior left temporal region is also activated by body parts. We conclude that category-specific brain activations depend more on differential processing at the perceptual and semantic levels than at the lexical retrieval level.

17.
Early setting of grammatical processing in the bilingual brain
The existence of a "critical period" for language acquisition is controversial. Bilingual subjects with variable age of acquisition (AOA) and proficiency level (PL) constitute a suitable model to study this issue. We used functional magnetic resonance imaging to investigate the effects of AOA and PL on neural correlates of grammatical and semantic judgments in Italian-German bilinguals who learned the second language at different ages and had different proficiency levels. While the pattern of brain activity for semantic judgment was largely dependent on PL, AOA mainly affected the cortical representation of grammatical processes. These findings support the view that both AOA and PL affect the neural substrates of second language processing, with a differential effect on grammar and semantics.

18.
19.
20.