Similar Literature
20 similar documents found
1.
Evidence indicates that adequate phonological abilities are necessary to develop proficient reading skills and that later in life phonology also has a role in the covert visual word recognition of expert readers. Impairments of acoustic perception, such as deafness, can lead to atypical phonological representations of written words and letters, which in turn can affect reading proficiency. Here, we report an experiment in which young adults with different levels of acoustic perception (i.e., hearing and deaf individuals) and different modes of communication (i.e., hearing individuals using spoken language, deaf individuals with a preference for sign language, and deaf individuals using the oral modality with little or no competence in sign language) performed a visual lexical decision task, which consisted of categorizing real words and consonant strings. The lexicality effect was restricted to deaf signers, who responded faster to real words than to consonant strings, showing over-reliance on whole-word lexical processing of stimuli. No effect of stimulus type was found in deaf individuals using the oral modality or in hearing individuals. Thus, mode of communication modulates the lexicality effect. This suggests that learning a sign language during development shapes visuo-motor representations of words, which are tuned to the actions used to express them (phono-articulatory movements vs. hand movements) and to the associated perceptions. As these visuo-motor representations are elicited during on-line linguistic processing and can overlap with the perceptual-motor processes required to execute the task, they can potentially produce interference or facilitation effects.
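A minimal sketch of how the behavioral lexicality effect described above could be quantified from trial-level reaction times; the CSV file, column names, and group labels are hypothetical illustrations, not material from the study.

```python
import pandas as pd

# Hypothetical trial-level data: one row per lexical-decision trial with columns
# "group" (deaf_signer / deaf_oral / hearing), "stimulus_type" (word / consonant_string),
# and "rt_ms" (reaction time in milliseconds).
trials = pd.read_csv("lexical_decision_trials.csv")

# Mean RT per group and stimulus type.
mean_rt = trials.groupby(["group", "stimulus_type"])["rt_ms"].mean().unstack()

# Lexicality effect: RT advantage for real words over consonant strings.
mean_rt["lexicality_effect_ms"] = mean_rt["consonant_string"] - mean_rt["word"]
print(mean_rt)
```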

2.
Research on the neural basis of speech-reading implicates a network of auditory language regions involving inferior frontal cortex, premotor cortex and sites along superior temporal cortex. In audiovisual speech studies, neural activity is consistently reported in the posterior superior temporal sulcus (pSTS), and this site has been implicated in multimodal integration. Traditionally, multisensory interactions are considered high-level processing that engages heteromodal association cortices (such as STS). Recent work, however, challenges this notion and suggests that multisensory interactions may occur in low-level unimodal sensory cortices. While previous audiovisual speech studies demonstrate that high-level multisensory interactions occur in pSTS, what remains unclear is how early in the processing hierarchy these multisensory interactions may occur. The goal of the present fMRI experiment is to investigate how visual speech can influence activity in auditory cortex above and beyond its response to auditory speech. In an audiovisual speech experiment, subjects were presented with auditory speech with and without congruent visual input. Holding the auditory stimulus constant across the experiment, we investigated how the addition of visual speech influences activity in auditory cortex. We demonstrate that congruent visual speech increases the activity in auditory cortex.

3.
The present study was carried out to investigate whether sign language structure plays a role in the processing of complex words (i.e., derivational and compound words), and in particular in the delayed reading of complex words in deaf adolescents. Chinese deaf adolescents were found to respond faster to derivational words than to compound words for one-sign-structure words, but showed comparable performance for two-sign-structure words. For both derivational and compound words, response latencies to one-sign-structure words were shorter than to two-sign-structure words. These results provide strong evidence that the structure of sign language affects written word processing in Chinese. Additionally, differences between derivational and compound words in the one-sign-structure condition indicate that Chinese deaf adolescents acquire print morphological awareness. The results also showed that delayed word reading was found in derivational words with two signs (DW-2), compound words with one sign (CW-1), and compound words with two signs (CW-2), but not in derivational words with one sign (DW-1), with the delay being largest in DW-2, intermediate in CW-2, and smallest in CW-1, suggesting that the structure of sign language has an impact on the delayed processing of Chinese written words in deaf adolescents. These results provide insight into the mechanisms by which sign language structure affects written word processing and delays it relative to hearing peers of the same age.

4.
5.
In a natural setting, speech is often accompanied by gestures. Like language, speech-accompanying iconic gestures convey semantic information to some extent. However, whether comprehension of the information contained in the auditory and the visual modality depends on the same or on different brain networks is largely unknown. In this fMRI study, we aimed at identifying the cortical areas engaged in supramodal processing of semantic information. BOLD changes were recorded in 18 healthy right-handed male subjects watching video clips showing an actor who either performed speech (S, acoustic) or gestures (G, visual) in more (+) or less (−) meaningful varieties. In the experimental conditions familiar speech or isolated iconic gestures were presented; during the visual control condition the volunteers watched meaningless gestures (G−), while during the acoustic control condition a foreign language was presented (S−). The conjunction of the visual and acoustic semantic processing revealed activations extending from the left inferior frontal gyrus to the precentral gyrus, and included bilateral posterior temporal regions. We conclude that proclaiming this frontotemporal network the brain's core language system takes too narrow a view. Our results rather indicate that these regions constitute a supramodal semantic processing network.

6.
Spatiotemporal dynamics of modality-specific and supramodal word processing
The ability of written and spoken words to access the same semantic meaning provides a test case for the multimodal convergence of information from sensory to associative areas. Using anatomically constrained magnetoencephalography (aMEG), the present study investigated the stages of word comprehension in real time in the auditory and visual modalities, as subjects participated in a semantic judgment task. Activity spread from the primary sensory areas along the respective ventral processing streams and converged in anterior temporal and inferior prefrontal regions, primarily on the left, at around 400 ms. Comparison of response patterns during repetition priming between the two modalities suggests that they are initiated by modality-specific memory systems, but that they are eventually elaborated mainly in supramodal areas.

7.
The initial process of identifying words from spoken language and the detection of more subtle regularities underlying their structure are mandatory processes for language acquisition. Little is known about the cognitive mechanisms that allow us to extract these two types of information and their specific time-course of acquisition following initial contact with a new language. We report time-related electrophysiological changes that occurred while participants learned an artificial language. These changes strongly correlated with the discovery of the structural rules embedded in the words. They were clearly different from those related to word learning and occurred during the first minutes of exposure. There is a functional distinction in the nature of the electrophysiological signals during acquisition: an increase in negativity (N400) at central electrodes is related to word learning, whereas the development of a frontal positivity (P2) is related to rule learning. In addition, the results of an online implicit test and a post-learning test indicate that, once the rules of the language have been acquired, new words following the rule are processed as words of the language. By contrast, new words violating the rule induce syntax-related electrophysiological responses when inserted online in the stream (an early frontal negativity followed by a late posterior positivity) and clear lexical effects when presented in isolation (N400 modulation). The present study provides direct evidence that the mechanisms for extracting words and structural dependencies from continuous speech are functionally segregated. When these mechanisms are engaged, the electrophysiological marker associated with rule learning appears very quickly, during the earliest phases of exposure to a new language.

8.
The hedonic meaning of words affects word recognition, as shown by behavioral, functional imaging, and event-related potential (ERP) studies. However, the underlying spatiotemporal dynamics and cognitive functions remain elusive, partly due to methodological limitations of previous studies. Here, we address these difficulties by applying combined electro- and magnetoencephalographic (EEG/MEG) source localization techniques. Participants covertly read emotionally high-arousing positive and negative nouns while EEG and MEG were recorded simultaneously. Combined EEG/MEG current-density reconstructions for the P1 (80–120 ms), P2 (150–190 ms) and EPN (200–300 ms) components were computed using realistic individual head models with a cortical constraint. Relative to negative words, the P1 to positive words predominantly involved language-related structures (left middle temporal and inferior frontal regions) and posterior structures related to directed attention (occipital and parietal regions). Effects shifted to the right hemisphere in the P2 component. By contrast, negative words received more activation in the P1 time range only, recruiting prefrontal regions, including the anterior cingulate cortex (ACC). Effects in the EPN were not statistically significant. These findings show that different neuronal networks are active when positive versus negative words are processed. We account for these effects in terms of an “emotional tagging” of word forms during language acquisition. These tags then give rise to different processing strategies, including enhanced lexical processing of positive words and a very fast language-independent alert response to negative words. The valence-specific recruitment of different networks might underlie fast adaptive responses to both approach- and withdrawal-related stimuli, be they acquired or biological.
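A minimal sketch of the kind of time-window averaging that underlies component comparisons such as the P1, P2, and EPN contrasts described above; the epochs array, sampling rate, file names, and condition labels are hypothetical, and the sketch does not reproduce the study's source localization pipeline.

```python
import numpy as np

# Hypothetical epoched EEG data: shape (n_trials, n_channels, n_times),
# sampled at 500 Hz, epoch starting at stimulus onset (t = 0 s).
sfreq = 500.0
epochs = np.load("epochs.npy")            # hypothetical file
is_positive = np.load("is_positive.npy")  # boolean label per trial

windows = {"P1": (0.080, 0.120), "P2": (0.150, 0.190), "EPN": (0.200, 0.300)}

def window_mean(data, tmin, tmax):
    """Mean amplitude per channel within a latency window, averaged over trials and time."""
    i0, i1 = int(tmin * sfreq), int(tmax * sfreq)
    return data[:, :, i0:i1].mean(axis=(0, 2))

for name, (tmin, tmax) in windows.items():
    pos = window_mean(epochs[is_positive], tmin, tmax)
    neg = window_mean(epochs[~is_positive], tmin, tmax)
    print(name, "mean amplitude difference (positive - negative):", (pos - neg).mean())
```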

9.
The dual-route model of speech processing includes a dorsal stream that maps auditory to motor features at the sublexical level rather than at the lexico-semantic level. However, the literature on gesture is an invitation to revise this model, because it suggests that the premotor cortex of the dorsal route is a major site of lexico-semantic interaction. Here we investigated lexico-semantic mapping using word-gesture pairs that were either congruent or incongruent. Using fMRI-adaptation in 28 subjects, we found that temporo-parietal and premotor activity during auditory processing of single action words was modulated by the prior audiovisual context in which the words had been repeated. The BOLD signal was suppressed following repetition of the auditory word alone, and further suppressed following repetition of the word accompanied by a congruent gesture (e.g. [“grasp” + grasping gesture]). Conversely, repetition suppression was not observed when the same action word was accompanied by an incongruent gesture (e.g. [“grasp” + sprinkling gesture]). We propose a simple model to explain these results: auditory and visual information converge onto premotor cortex, where they are represented in a comparable format that allows the (in)congruence between speech and gesture to be determined. This ability of the dorsal route to detect audiovisual semantic (in)congruence suggests that its function is not restricted to the sublexical level.

10.
This fMRI study aimed to identify the neural mechanisms underlying the recognition of Chinese multi-character words by partialling out the confounding effect of reaction time (RT). For this purpose, a special type of nonword, the transposable nonword, was created by reversing the character order of real words. These nonwords were included in a lexical decision task along with regular (non-transposable) nonwords and real words. Through conjunction analysis of the contrasts of transposable nonwords versus regular nonwords and of words versus regular nonwords, the confounding effect of RT was eliminated, and the regions involved in word recognition were reliably identified. The word-frequency effect was also examined in the regions that emerged, to further assess their functional roles in word processing. Results showed a significant conjunction effect and a positive word-frequency effect in the bilateral inferior parietal lobules and the posterior cingulate cortex, whereas only a conjunction effect was found in the anterior cingulate cortex. The roles of these brain regions in the recognition of Chinese multi-character words are discussed.
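A minimal sketch of a minimum-statistic conjunction over two contrast maps, in the spirit of the analysis described above; the file names and the threshold are hypothetical, and a real analysis would use a dedicated fMRI package with proper multiple-comparison correction.

```python
import numpy as np

# Hypothetical voxel-wise t-maps for the two contrasts (same shape).
t_transposable_vs_regular = np.load("tmap_transposable_vs_regular.npy")
t_words_vs_regular = np.load("tmap_words_vs_regular.npy")

# Minimum-statistic conjunction: a voxel survives only if it exceeds the
# threshold in BOTH contrasts, removing effects attributable to RT alone.
t_threshold = 3.1  # hypothetical voxel-level threshold
conjunction = np.minimum(t_transposable_vs_regular, t_words_vs_regular)
conjunction_mask = conjunction > t_threshold

print("Voxels surviving the conjunction:", int(conjunction_mask.sum()))
```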

11.
Buchsbaum BR, Olsen RK, Koch P, Berman KF. Neuron. 2005, 48(4): 687-697
To hear a sequence of words and repeat them requires sensory-motor processing and something more: temporary storage. We investigated the neural mechanisms of verbal memory by using fMRI and a task designed to tease apart perceptually based ("echoic") memory from phonological-articulatory memory. Sets of two- or three-word pairs were presented bimodally, followed by a cue indicating from which modality (auditory or visual) items were to be retrieved and rehearsed over a delay. Whereas the planum temporale (PT) was insensitive to the source modality and showed sustained delay-period activity, the superior temporal gyrus (STG) activated more vigorously when the retrieved items had been presented in the auditory modality and showed transient delay-period activity. Functional connectivity analysis revealed two topographically distinct fronto-temporal circuits, with STG co-activating more strongly with ventrolateral prefrontal cortex and PT co-activating more strongly with dorsolateral prefrontal cortex. These findings argue for separate contributions of the ventral and dorsal auditory streams to verbal working memory.
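A minimal sketch of the seed-based correlation logic behind a functional connectivity analysis of the kind described above; the ROI names, time-series files, and pairing are hypothetical illustrations, not the study's actual pipeline.

```python
import numpy as np

# Hypothetical delay-period BOLD time series, one 1-D array per region of interest.
roi = {name: np.load(f"{name}_timeseries.npy")
       for name in ["STG", "PT", "VLPFC", "DLPFC"]}

def connectivity(a, b):
    """Pearson correlation between two ROI time series."""
    return np.corrcoef(roi[a], roi[b])[0, 1]

# Compare the two candidate fronto-temporal circuits against the crossed pairings.
for pair in [("STG", "VLPFC"), ("PT", "DLPFC"), ("STG", "DLPFC"), ("PT", "VLPFC")]:
    print(f"{pair[0]}-{pair[1]}: r = {connectivity(*pair):.2f}")
```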

12.
Current theoretical positions assume that action-related word meanings are established by functional connections between perisylvian language areas and the motor cortex (MC) according to Hebb's associative learning principle. To test this assumption, we probed the functional relevance of the left MC for learning of a novel action word vocabulary by disturbing neural plasticity in the MC with transcranial direct current stimulation (tDCS). In combination with tDCS, subjects learned a novel vocabulary of 76 concrete, body-related actions by means of an associative learning paradigm. Compared with a control condition with "sham" stimulation, cathodal tDCS reduced success rates in vocabulary acquisition, as shown by tests of novel action word translation into the native language. The analysis of learning behavior revealed a specific effect of cathodal tDCS on the ability to associatively couple actions with novel words. In contrast, we did not find these effects in control experiments, when tDCS was applied to the prefrontal cortex or when subjects learned object-related words. The present study lends direct evidence to the proposition that the left MC is causally involved in the acquisition of novel action-related words.

13.
Using a variant of the visual world eye-tracking paradigm, we examined whether language non-selective activation of translation equivalents leads to attention capture and distraction in a visual task in bilinguals. High- and low-proficiency Hindi-English bilinguals were instructed to programme a saccade towards the line drawing that changed colour among other distractor objects. A spoken word, irrelevant to the main task, was presented before the colour change. On critical trials, one of the line drawings depicted a word phonologically related to the translation equivalent of the spoken word. Results showed that saccade latency towards the target was significantly higher in the presence of this cross-linguistic translation competitor than when the display contained completely unrelated objects. Participants were also slower when the display contained the referent of the spoken word among the distractors. However, the bilingual groups did not differ with regard to the interference effect observed. These findings suggest that spoken words activate their translation equivalents, which bias attention and lead to interference with goal-directed action in the visual domain.

14.
The quality of handwriting is evaluated from the visual inspection of its legibility and not from the movement that generates the trace. Although handwriting is produced in silence, adding sounds to the handwriting movement might aid its perception, provided that these sounds are meaningful. This study evaluated the ability to judge handwriting quality from the auditory perception of the underlying sonified movement, without seeing the written trace. In a first experiment, samples of a word written by children with dysgraphia, proficient child writers, and proficient adult writers were collected with a graphic tablet. Then, the pen velocity, the fluency, and the axial pen pressure were sonified in order to create forty-five audio files. In a second experiment, these files were presented to 48 adult listeners who had to mark the underlying unseen handwriting. In order to evaluate the relevance of the sonification strategy, two experimental conditions were compared. In a first ‘implicit’ condition, the listeners made their judgment without any knowledge of the mapping between the sounds and the handwriting variables. In a second ‘explicit’ condition, they knew what the sonified variables corresponded to and the evaluation criteria. Results showed that, under the implicit condition, two thirds of the listeners marked the three groups of writers differently. In the explicit condition, all listeners marked the dysgraphic handwriting lower than that of the two other groups. In a third experiment, the scores given in the auditory evaluation were compared to the scores given by 16 other adults in a visual evaluation of the trace. Results revealed that auditory evaluation was more relevant than visual evaluation for assessing dysgraphic handwriting. Handwriting sonification might therefore be a relevant tool allowing a therapist to complement the visual assessment of the written trace with auditory monitoring of handwriting movement quality.
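A minimal sketch of one possible sonification mapping in the spirit of the approach described above: pen velocity drives pitch and axial pressure drives loudness. The input file, sampling rates, and mapping ranges are hypothetical assumptions, not the authors' actual strategy.

```python
import numpy as np
from scipy.io import wavfile

# Hypothetical tablet recording: two columns, pen velocity (mm/s) and axial
# pressure (0-1), sampled at 200 Hz.
data = np.loadtxt("handwriting_sample.csv", delimiter=",")
velocity, pressure = data[:, 0], data[:, 1]

audio_rate, tablet_rate = 44100, 200
n_audio = int(len(velocity) * audio_rate / tablet_rate)

# Upsample the movement signals to the audio rate, then map velocity to pitch
# (200-800 Hz) and pressure to amplitude.
t_tab = np.arange(len(velocity)) / tablet_rate
t_aud = np.arange(n_audio) / audio_rate
v = np.interp(t_aud, t_tab, velocity)
p = np.interp(t_aud, t_tab, pressure)

freq = 200.0 + 600.0 * (v - v.min()) / (v.max() - v.min() + 1e-9)
phase = 2 * np.pi * np.cumsum(freq) / audio_rate  # phase accumulation for a varying pitch
signal = p * np.sin(phase)

wavfile.write("sonified_handwriting.wav", audio_rate, signal.astype(np.float32))
```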

15.
Previous research suggests that deficits in attention-emotion interaction are implicated in schizophrenia symptoms. Although disruption of auditory processing is crucial in the pathophysiology of schizophrenia, deficits in the interaction between emotional processing of auditorily presented language stimuli and auditory attention have not yet been clarified. To address this issue, the current study used a dichotic listening task to examine 22 patients with schizophrenia and 24 age-, sex-, parental socioeconomic background-, handedness-, dominant ear-, and intelligence quotient-matched healthy controls. The participants completed a word recognition task on the attended side in which a word with emotionally valenced content (negative/positive/neutral) was presented to one ear and a different neutral word was presented to the other ear. Participants selectively attended to either ear. In the control subjects, presentation of negative but not positive word stimuli provoked significantly prolonged reaction times compared with presentation of neutral word stimuli. This interference effect for negative words existed whether or not subjects directed attention to the negative words. The interference effect was significantly smaller in the patients with schizophrenia than in the healthy controls. Furthermore, a smaller interference effect was significantly correlated with more severe positive symptoms and delusional behavior in the patients with schizophrenia. The present findings suggest that aberrant interaction between semantic processing of negative emotional content and auditory attention plays a role in the production of positive symptoms in schizophrenia.
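A minimal sketch of how the interference effect and its correlation with symptom severity could be quantified; the data file, column names, and symptom measure are hypothetical, not taken from the study.

```python
import pandas as pd
from scipy.stats import pearsonr

# Hypothetical per-subject mean RTs (ms) by word valence, plus group membership
# and a positive-symptom score for the patients.
df = pd.read_csv("dichotic_listening_rts.csv")

# Interference effect: RT slowing for negative relative to neutral words.
df["interference_ms"] = df["rt_negative"] - df["rt_neutral"]

# Group comparison and correlation with positive-symptom severity in patients.
print(df.groupby("group")["interference_ms"].mean())
patients = df[df["group"] == "schizophrenia"]
r, p = pearsonr(patients["interference_ms"], patients["positive_symptom_score"])
print(f"Correlation with positive symptoms: r = {r:.2f}, p = {p:.3f}")
```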

16.
Higher N170 amplitudes to words and to faces were recently reported for faster readers of German. Since the shallow German orthography allows phonological recoding of single letters, the reported speed advantages might have their origin in especially well-developed visual processing skills of faster readers. In contrast to German, adult readers of Hebrew are forced to process letter chunks up to whole words. This dependence on more complex visual processing might have created ceiling effects for this skill. Therefore, the current study examined whether visual processing skills, as reflected by N170 amplitudes, also explain reading-speed differences in the deep Hebrew orthography. Forty university students, native speakers of Hebrew without reading impairments, completed a lexical decision task (i.e., deciding whether a visually presented stimulus represents a real or a pseudo word) and a face decision task (i.e., deciding whether a face was presented complete or with missing facial features) while their electroencephalogram was recorded from 64 scalp positions. In both tasks, stronger event-related potentials (ERPs) were observed for faster readers in time windows at about 200 ms. Unlike in previous studies, ERP waveforms in the relevant time windows did not correspond to N170 scalp topographies. The results support the notion of visual processing ability as an orthography-independent marker of reading proficiency, which advances our understanding of regular and impaired reading development.

17.
Despite the clear importance of language in our life, our vital ability to quickly and effectively learn new words and meanings is neurobiologically poorly understood. Conventional knowledge maintains that language learning—especially in adulthood—is slow and laborious. Furthermore, its structural basis remains unclear. Even though behavioural manifestations of learning are evident near instantly, previous neuroimaging work across a range of semantic categories has largely studied neural changes associated with months or years of practice. Here, we address rapid neuroanatomical plasticity accompanying new lexicon acquisition, specifically focussing on the learning of action-related language, which has been linked to the brain’s motor systems. Our results show that it is possible to measure and to externally modulate (using transcranial magnetic stimulation (TMS) of motor cortex) cortical microanatomic reorganisation after mere minutes of new word learning. Learning-induced microstructural changes, as measured by diffusion kurtosis imaging (DKI) and machine learning-based analysis, were evident in prefrontal, temporal, and parietal neocortical sites, likely reflecting integrative lexico-semantic processing and formation of new memory circuits immediately during the learning tasks. These results suggest a structural basis for the rapid neocortical word encoding mechanism and reveal the causally interactive relationship of modal and associative brain regions in supporting learning and word acquisition.

This combined neuroimaging and brain stimulation study reveals rapid and distributed microstructural plasticity after a single immersive language learning session, demonstrating the causal relevance of the motor cortex in encoding the meaning of novel action words.

18.
Using functional magnetic resonance imaging during a primed visual lexical decision task, we investigated the neural and functional mechanisms underlying modulations of semantic word processing through hypnotic suggestions aimed at altering lexical processing of primes. The priming task was to discriminate between target words and pseudowords presented 200 ms after the prime word, which was semantically related or unrelated to the target. In a counterbalanced study design, each participant performed the task once at normal wakefulness and once after the administration of hypnotic suggestions to perceive the prime as a meaningless symbol of a foreign language. Neural correlates of priming were defined as significantly lower activations upon semantically related compared to unrelated trials. We found significant suggestion-induced reductions in neural priming, albeit irrespective of the degree of suggestibility. Neural priming was attenuated under the suggestive treatment compared with normal wakefulness in brain regions supporting automatic (fusiform gyrus) and controlled semantic processing (superior and middle temporal gyri, pre- and postcentral gyri, and supplementary motor area). Hence, suggestions reduced semantic word processing by conjointly dampening both automatic and strategic semantic processes.

19.
Reading familiar words differs from reading unfamiliar non-words in two ways. First, word reading is faster and more accurate than reading of unfamiliar non-words. Second, effects of letter length are reduced for words, particularly when they are presented in the right visual field in familiar formats. Two experiments are reported in which right-handed participants read aloud non-words presented briefly in their left and right visual fields before and after training on those items. The non-words were interleaved with familiar words in the naming tests. Before training, naming was slow and error prone, with marked effects of length in both visual fields. After training, fewer errors were made, naming was faster, and the effect of length was much reduced in the right visual field compared with the left. We propose that word learning creates orthographic word forms in the mid-fusiform gyrus of the left cerebral hemisphere. Those word forms allow words to access their phonological and semantic representations on a lexical basis. But orthographic word forms also interact with more posterior letter recognition systems in the middle/inferior occipital gyri, inducing more parallel processing of right visual field words than is possible for any left visual field stimulus, or for unfamiliar non-words presented in the right visual field.

20.

Background

Word frequency is the most important variable in language research. However, despite the growing interest in the Chinese language, there are only a few sources of word frequency measures available to researchers, and the quality is less than what researchers in other languages are used to.

Methodology

Following recent work by New, Brysbaert, and colleagues in English, French and Dutch, we assembled a database of word and character frequencies based on a corpus of film and television subtitles (46.8 million characters, 33.5 million words). In line with what has been found in the other languages, the new word and character frequencies explain significantly more of the variance in Chinese word naming and lexical decision performance than measures based on written texts.

Conclusions

Our results confirm that word frequencies based on subtitles are a good estimate of daily language exposure and capture much of the variance in word processing efficiency. In addition, our database is the first to include information about the contextual diversity of the words and to provide good frequency estimates for multi-character words and the different syntactic roles in which the words are used. The word frequencies are freely available for research purposes.
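A minimal sketch of how word frequency and contextual diversity can be derived from a subtitle corpus, in the spirit of the database described above; the directory layout and whitespace tokenization are hypothetical (real Chinese text would need a proper word segmenter), and this is not the authors' actual pipeline.

```python
from collections import Counter
from pathlib import Path

# Hypothetical corpus layout: one pre-segmented subtitle file per film,
# with words separated by whitespace.
corpus_dir = Path("subtitles")

word_freq = Counter()       # total occurrences across the corpus
contextual_div = Counter()  # number of films in which each word appears

for subtitle_file in corpus_dir.glob("*.txt"):
    words = subtitle_file.read_text(encoding="utf-8").split()
    word_freq.update(words)
    contextual_div.update(set(words))  # count each film at most once per word

total_tokens = sum(word_freq.values())
for word, count in word_freq.most_common(10):
    per_million = count / total_tokens * 1e6
    print(word, f"{per_million:.1f} per million,", f"in {contextual_div[word]} films")
```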
