Similar documents
20 similar documents were retrieved.
1.
During sentence production, linguistic information (semantics, syntax, phonology) about words is retrieved and assembled into a meaningful utterance. There is still debate on how we assemble single words into more complex syntactic structures such as noun phrases or sentences. In the present study, event-related potentials (ERPs) were used to investigate the time course of syntactic planning. Thirty-three volunteers described visually animated scenes using naming formats varying in syntactic complexity: from simple words (‘W’, e.g., “triangle”, “red”, “square”, “green”, “to fly towards”), to noun phrases (‘NP’, e.g., “the red triangle”, “the green square”, “to fly towards”), to a sentence (‘S’, e.g., “The red triangle flies towards the green square.”). Behaviourally, we observed an increase in errors and corrections with increasing syntactic complexity, indicating a successful experimental manipulation. In the ERPs following scene onset, syntactic complexity effects were found in a P300-like component (‘S’/‘NP’ > ‘W’) and in a fronto-central negativity (linear increase with syntactic complexity). In addition, the scene could display one of two actions, which was unpredictable for the participant because the disambiguation occurred only later in the animation. Time-locked to the moment of visual disambiguation of the action, and thus of the verb, we observed another P300 component (‘S’ > ‘NP’/‘W’). The data provide the first evidence of sensitivity to syntactic planning within the P300 time window, time-locked to visual events critical for syntactic planning. We discuss the findings in the light of current views of syntactic planning.
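For readers who want to see the shape of such an analysis, here is a minimal, illustrative Python sketch (not taken from the study): it computes the mean amplitude in an assumed P300-like window for each naming format and tests a simple trend across complexity. The sampling rate, electrode, window, trial counts and simulated data are all assumptions.

```python
import numpy as np
from scipy import stats

# Assumed setup: baseline-corrected single-trial epochs from a centro-parietal
# electrode (e.g. Pz), one array per naming format, shape (n_trials, n_times).
fs = 500                                   # sampling rate in Hz (assumption)
times = np.arange(-0.2, 1.0, 1 / fs)       # epoch from -200 ms to +1000 ms
rng = np.random.default_rng(0)
epochs = {fmt: rng.normal(0, 5, (40, times.size)) for fmt in ("W", "NP", "S")}

# Mean amplitude per trial in an assumed P300-like window (300-500 ms)
win = (times >= 0.30) & (times <= 0.50)
mean_amp = {fmt: ep[:, win].mean(axis=1) for fmt, ep in epochs.items()}

# Simple linear-trend estimate across syntactic complexity (W < NP < S)
trend = np.polyfit([0, 1, 2], [mean_amp["W"].mean(),
                               mean_amp["NP"].mean(),
                               mean_amp["S"].mean()], 1)[0]

# And a two-sample comparison of the extreme conditions
t, p = stats.ttest_ind(mean_amp["S"], mean_amp["W"])
print(f"complexity trend slope: {trend:.2f} µV/step; S vs W: t = {t:.2f}, p = {p:.3f}")
```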

2.
Patients with classic galactosemia, an inborn error of metabolism, have speech and language production impairments. Past research primarily focused on speech (motor) problems, but these cannot solely explain the language impairments. Which specific deficits contribute to the impairments in language production is not yet known. Deficits in semantic and syntactic planning are plausible and require further investigation. In the present study, we examined syntactic encoding while patients and matched controls overtly described scenes of moving objects using either separate words (minimal syntactic planning) or sentences (sentence-level syntactic planning). The design of the paradigm also allowed tapping into local noun phrase- and more global sentence-level syntactic planning. Simultaneously, we recorded event-related potentials (ERPs). The patients needed more time to prepare and finish the utterances and made more errors. The patient ERPs had a very similar morphology to that of healthy controls, indicating overall comparable neural processing. Most importantly, the ERPs diverged from those of controls in several functionally informative time windows, ranging from very early (90–150 ms post scene onset) to relatively late (1820–2020 ms post scene onset). These time windows can be associated with different linguistic encoding stages. The ERP results form the first neuroscientific evidence for language production impairments in patients with galactosemia in lexical and syntactic planning stages, i.e., prior to the linguistic output phase. These findings hence shed new light on the language impairments in this disease.

3.
As we talk, we unconsciously adjust our speech to ensure it sounds the way we intend it to sound. However, because speech production involves complex motor planning and execution, no two utterances of the same sound will be exactly the same. Here, we show that auditory cortex is sensitive to natural variations in self-produced speech from utterance to utterance. We recorded event-related potentials (ERPs) from ninety-nine subjects while they uttered “ah” and while they listened to those speech sounds played back. Subjects’ utterances were sorted based on their formant deviations from the previous utterance. Typically, the N1 ERP component is suppressed during talking compared to listening. By comparing ERPs to the least and most variable utterances, we found that N1 was less suppressed to utterances that differed greatly from their preceding neighbors. In contrast, an utterance’s difference from the median formant values did not affect N1. Trial-to-trial pitch (f0) deviation and pitch difference from the median similarly did not affect N1. We discuss mechanisms that may underlie the change in N1 suppression resulting from trial-to-trial formant change. Deviant utterances require additional auditory cortical processing, suggesting that speaking-induced suppression mechanisms are optimally tuned for a specific production.
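As an illustration of the trial-sorting logic described in this abstract, the following Python sketch (with invented data) computes each utterance's F1/F2 deviation from the immediately preceding utterance, splits trials at the median deviation, and compares N1 amplitude between the least and most variable utterances. All variable names, distributions and the median-split criterion are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n_trials = 200

# Assumed per-trial formant estimates (Hz) for the produced "ah"
f1 = rng.normal(750, 40, n_trials)
f2 = rng.normal(1200, 60, n_trials)

# Deviation of each utterance from the immediately preceding one (Euclidean in F1/F2 space)
dev_prev = np.sqrt(np.diff(f1) ** 2 + np.diff(f2) ** 2)

# Assumed single-trial N1 amplitudes (µV) during talking, aligned with trials 2..n
n1_talk = rng.normal(-4, 2, n_trials - 1)

# Median split: least- vs. most-variable utterances relative to their predecessor
most_var = dev_prev > np.median(dev_prev)
print("mean N1, least variable utterances:", n1_talk[~most_var].mean())
print("mean N1, most variable utterances:", n1_talk[most_var].mean())
```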

4.
Sentence comprehension involves computing, in a timely manner, different types of relations between verbs and their noun arguments, such as morphosyntactic, semantic, and thematic relations. Here, we used EEG to investigate potential differences between thematic role computation and lexical-semantic relatedness processing during on-line sentence comprehension, and the interaction between these two types of processes. Mandarin Chinese sentences were used as materials. The basic structure of the sentences was “Noun+Verb+‘le’+a two-character word”, with the Noun being the initial argument. The verb disambiguated the initial argument as an agent or a patient. In addition, the initial argument and the verb were either highly or weakly semantically related. The ERPs at the verbs revealed that: relative to the agent condition, the patient condition evoked a larger N400 only when the argument and verb were weakly semantically related; relative to the high-relatedness condition, however, the low-relatedness condition elicited a larger N400 regardless of the thematic relation; and although both thematic role variation and semantic relatedness variation elicited N400 effects, the N400 effect elicited by the former was broadly distributed and reached its maximum over frontal electrodes, whereas the N400 effect elicited by the latter had a posterior distribution. In addition, the brain oscillation results showed that, whereas thematic role variation (patient vs. agent) induced power decreases in the beta frequency band (15–30 Hz), semantic relatedness variation (low vs. high relatedness) induced power increases in the theta frequency band (4–7 Hz). These results suggest that, in sentence context, thematic role computation is modulated by the semantic relatedness between the verb and its argument, whereas semantic relatedness processing is to some degree independent of thematic relations. Moreover, the results indicate that, during on-line sentence comprehension, thematic role computation and semantic relatedness processing are mediated by distinct neural systems.
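A rough Python sketch of how band-limited power of the kind reported here (beta decreases, theta increases) could be extracted from single trials via band-pass filtering and the Hilbert envelope. The filter settings, sampling rate and simulated signals are assumptions, not the authors' pipeline.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def band_power(signal, fs, low, high, order=4):
    """Instantaneous power in a frequency band via band-pass filtering + Hilbert envelope."""
    sos = butter(order, [low, high], btype="band", fs=fs, output="sos")
    return np.abs(hilbert(sosfiltfilt(sos, signal))) ** 2

fs = 500
t = np.arange(0, 1.0, 1 / fs)
rng = np.random.default_rng(2)
# Invented single-trial EEG at the verb for two thematic-role conditions
trial_patient = 0.5 * np.sin(2 * np.pi * 20 * t) + rng.normal(0, 1, t.size)
trial_agent = 1.0 * np.sin(2 * np.pi * 20 * t) + rng.normal(0, 1, t.size)

beta_p = band_power(trial_patient, fs, 15, 30).mean()
beta_a = band_power(trial_agent, fs, 15, 30).mean()
theta_diff = band_power(trial_patient, fs, 4, 7).mean() - band_power(trial_agent, fs, 4, 7).mean()
print(f"beta power, patient vs. agent: {beta_p:.2f} vs. {beta_a:.2f}; theta difference: {theta_diff:+.2f}")
```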

5.

Background

One of the most debated issues in the cognitive neuroscience of language is whether distinct semantic domains are differentially represented in the brain. Clinical studies described several anomic dissociations with no clear neuroanatomical correlate. Neuroimaging studies have shown that memory retrieval is more demanding for proper than common nouns in that the former are purely arbitrary referential expressions. In this study a semantic relatedness paradigm was devised to investigate neural processing of proper and common nouns.

Methodology/Principal Findings

780 words (arranged in pairs of Italian nouns/adjectives and the first/last names of well-known persons) were presented. Half of the pairs were semantically related (“Woody Allen” or “social security”), while the others were not (“Sigmund Parodi” or “judicial cream”). All items were balanced for length, frequency, familiarity and semantic relatedness. Participants had to decide whether the two items in a pair were semantically related. RTs and N400 data suggest that the task was more demanding for common nouns. The LORETA neural generators for the related-unrelated contrast (for proper names) included the left fusiform gyrus, right medial temporal gyrus, limbic and parahippocampal regions, and inferior parietal and inferior frontal areas, which are thought to be involved in the conjoined processing of a familiar face with the relevant episodic information. Semantic access to person names was more emotional and sensorially vivid than semantic access to common nouns.

Conclusions/Significance

When memory retrieval is not required, proper name access (knowledge of conspecifics) is not more demanding. The neural generators of the N400 to unrelated items (unknown persons and things) did not differ as a function of lexical class, suggesting that proper and common nouns are not treated differently as belonging to different grammatical classes.

6.
Research on language comprehension using event-related potentials (ERPs) has reported distinct ERP components reliably related to the processing of semantic (N400) and syntactic information (P600). Recent ERP studies have challenged this well-defined distinction by showing P600 effects for semantic and pragmatic anomalies. So far, it is still unresolved whether the P600 reflects specific or rather common processes. The present study addresses this question by investigating ERPs in response to a syntactic and a pragmatic (irony) manipulation, as well as a combined syntactic and pragmatic manipulation. In the syntactic condition, a morphosyntactic violation was applied, whereas in the pragmatic condition a sentence such as “That is rich” received either an ironic or a literal interpretation, depending on the prior context. The ERPs at the critical word showed a LAN-P600 pattern for syntactically incorrect sentences relative to correct ones. For ironic compared to literal sentences, the ERPs showed a P200 effect followed by a P600 component. A comparison of the syntax-related P600 with the irony-related P600 revealed distributional differences. Moreover, in the P600 time window (i.e., 500–900 ms), different changes in theta power were found for the syntax and pragmatics effects, suggesting that different patterns of neural activity contributed to each effect. Thus, both late positivities seem to be differentially sensitive to these two types of linguistic information, and might reflect distinct neurocognitive processes, such as reanalysis of the sentence structure versus pragmatic reanalysis.

7.
Much of what is known about word recognition in toddlers comes from eyetracking studies. Here we show that the speed and facility with which children recognize words, as revealed in such studies, cannot be attributed to a task-specific, closed-set strategy; rather, children’s gaze to referents of spoken nouns reflects successful search of the lexicon. Toddlers’ spoken word comprehension was examined in the context of pictures that had two possible names (such as a cup of juice which could be called “cup” or “juice”) and pictures that had only one likely name for toddlers (such as “apple”), using a visual world eye-tracking task and a picture-labeling task (n = 77, mean age, 21 months). Toddlers were just as fast and accurate in fixating named pictures with two likely names as pictures with one. If toddlers do name pictures to themselves, the name provides no apparent benefit in word recognition, because there is no cost to understanding an alternative lexical construal of the picture. In toddlers, as in adults, spoken words rapidly evoke their referents.  相似文献   
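To make the eye-tracking measure concrete, here is a small illustrative Python sketch (invented data) that computes target-fixation proportion curves over time for two-name versus one-name pictures and compares them in an analysis window. The sampling rate, window and matrix layout are assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
n_trials, n_samples = 60, 180          # e.g. 60 trials, 3 s at a 60 Hz eye-tracker (assumption)

# Assumed boolean fixation matrix: True where gaze is on the named (target) picture
fix_target = rng.random((n_trials, n_samples)) < 0.6
two_name = np.repeat([True, False], n_trials // 2)   # picture has two likely names vs. one

time = np.arange(n_samples) / 60.0
prop_two = fix_target[two_name].mean(axis=0)    # fixation-proportion curve, two-name pictures
prop_one = fix_target[~two_name].mean(axis=0)   # fixation-proportion curve, one-name pictures

# Compare mean target fixation in an assumed post-noun window (0.3-1.8 s after noun onset)
win = (time >= 0.3) & (time <= 1.8)
print("two-name:", prop_two[win].mean(), "one-name:", prop_one[win].mean())
```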

8.
To determine when and how L2 learners start to process L2 words affectively and semantically, we conducted a longitudinal study of their interaction in adult L2 learners. In four test sessions, spanning half a year of L2 learning, we monitored behavioral and ERP learning-related changes for one and the same set of words by means of a primed lexical-decision paradigm with L1 primes and L2 targets. Sensitivity rates, accuracy rates, RTs, and N400 amplitude to L2 words and pseudowords improved significantly across sessions. A semantic priming effect (e.g., the prime “driver” facilitating the response to the target “street”) was found in accuracy rates and RTs when collapsing Sessions 1 to 4, while this effect modulated ERP amplitudes within the first 300 ms of L2 target processing. An overall affective priming effect (e.g., “sweet” facilitating “taste”) was also found in RTs and ERPs (posterior P1). Importantly, the ERPs showed an L2 valence effect across sessions (e.g., positive words were easier to process than neutral words), indicating that L2 learners were sensitive to L2 affective meaning. Semantic and affective priming interacted in the N400 time window only in Session 4, implying that together they affected meaning integration during L2 immersion. The results suggest that L1 and L2 are initially processed semantically and affectively via relatively separate channels that become increasingly linked with L2 exposure.
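For illustration, a minimal Python sketch of two measures mentioned above: a sensitivity (d′) estimate for the lexical decision and RT priming effects computed as unrelated-minus-related differences. The counts and RTs are invented, and the log-linear correction is just one common choice, not necessarily the authors'.

```python
import numpy as np
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Signal-detection sensitivity with a simple log-linear correction against 0/1 rates."""
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Assumed counts from one session of the primed lexical-decision task
print("d' =", round(d_prime(hits=70, misses=10, false_alarms=12, correct_rejections=68), 2))

# Assumed mean correct RTs (ms) for L2 targets by prime type
rt = {"semantically_related": 612.0, "semantically_unrelated": 641.0,
      "affectively_congruent": 620.0, "affectively_neutral": 633.0}
print("semantic priming effect:", rt["semantically_unrelated"] - rt["semantically_related"], "ms")
print("affective priming effect:", rt["affectively_neutral"] - rt["affectively_congruent"], "ms")
```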

9.
This paper shows that in Porphyridium cruentum and in Chlorella pyrenoidosa (but apparently not in Anacystis nidulans) “extreme red” light (> 720 mμ) can inhibit photosynthesis produced by “far red” light (up to 720 mμ). From the action spectrum of this phenomenon, it appears that an unknown pigment with an absorption band around 745 mμ must be responsible for it.

10.
It has previously been shown that language production, performed simultaneously with a nonlinguistic task, involves sustained attention. Sustained attention concerns the ability to maintain alertness over time. Here, we aimed to replicate the previous finding by showing that individuals call upon sustained attention when they plan single noun phrases (e.g., "the carrot") and perform a manual arrow categorization task. In addition, we investigated whether speakers also recruit sustained attention when they produce conjoined noun phrases (e.g., "the carrot and the bucket") describing two pictures, that is, when both the first and the second task are linguistic. We found that sustained attention correlated with the proportion of abnormally slow phrase-production responses. Individuals with poor sustained attention displayed a greater number of very slow responses than individuals with better sustained attention. Importantly, this relationship was obtained both for the production of single phrases while performing a nonlinguistic manual task and for the production of noun phrase conjunctions referring to two spatially separated objects. Inhibition and updating abilities were also measured. These scores did not correlate with our measure of sustained attention, suggesting that sustained attention and executive control are distinct. Overall, the results suggest that planning conjoined noun phrases involves sustained attention, and that language production happens less automatically than has often been assumed.
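A minimal sketch (invented data) of the kind of analysis relating sustained attention to the tail of the RT distribution: the proportion of abnormally slow phrase-onset responses per individual, correlated with a sustained-attention score. The cut-off of mean + 2.5 SD is an assumption, not necessarily the authors' criterion.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(4)
n_subjects, n_trials = 30, 120

# Assumed data: per-subject phrase-onset RTs (ms) and a sustained-attention score
rts = rng.lognormal(mean=6.9, sigma=0.25, size=(n_subjects, n_trials))
sustained_attention = rng.normal(0, 1, n_subjects)

# Operationalize "abnormally slow" as RTs above the subject's mean + 2.5 SD (assumption)
cut = rts.mean(axis=1, keepdims=True) + 2.5 * rts.std(axis=1, keepdims=True)
prop_slow = (rts > cut).mean(axis=1)

rho, p = spearmanr(sustained_attention, prop_slow)
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")
```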

11.
12.
Planning to speak is a challenge for the brain, and the challenge varies between and within languages. Yet, little is known about how neural processes react to these variable challenges beyond the planning of individual words. Here, we examine how fundamental differences in syntax shape the time course of sentence planning. Most languages treat alike (i.e., align with each other) the 2 uses of a word like “gardener” in “the gardener crouched” and in “the gardener planted trees.” A minority keeps these formally distinct by adding special marking in 1 case, and some languages display both aligned and nonaligned expressions. Exploiting such a contrast in Hindi, we used electroencephalography (EEG) and eye tracking to suggest that this difference is associated with distinct patterns of neural processing and gaze behavior during early planning stages, preceding phonological word form preparation. Planning sentences with aligned expressions induces larger synchronization in the theta frequency band, suggesting higher working memory engagement, and more visual attention to agents than planning nonaligned sentences, suggesting delayed commitment to the relational details of the event. Furthermore, plain, unmarked expressions are associated with larger desynchronization in the alpha band than expressions with special markers, suggesting more engagement in information processing to keep overlapping structures distinct during planning. Our findings contrast with the observation that the form of aligned expressions is simpler, and they suggest that the global preference for alignment is driven not by its neurophysiological effect on sentence planning but by other sources, possibly by aspects of production flexibility and fluency or by sentence comprehension. This challenges current theories on how production and comprehension may affect the evolution and distribution of syntactic variants in the world’s languages.

Little is known about the neural processes involved in planning to speak. This study uses eye-tracking and EEG to show that speakers prepare sentence structures in different ways and rely on alpha and theta oscillations differently when planning sentences with and without agent case marking, challenging theories on how production and comprehension affect language evolution.
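As an illustration of event-related synchronization and desynchronization measures like those reported here, the following Python sketch computes percent power change in the theta and alpha bands relative to a pre-stimulus baseline, separately for simulated aligned and nonaligned planning trials. Epoch timing, bands, baseline and data are assumptions.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def percent_power_change(trials, fs, band, baseline, window, epoch_start=-0.5):
    """Percent power change in a band vs. a pre-stimulus baseline (ERD < 0, ERS > 0)."""
    sos = butter(4, band, btype="band", fs=fs, output="sos")
    power = np.abs(hilbert(sosfiltfilt(sos, trials, axis=1))) ** 2
    t = np.arange(trials.shape[1]) / fs + epoch_start
    base = power[:, (t >= baseline[0]) & (t < baseline[1])].mean()
    act = power[:, (t >= window[0]) & (t < window[1])].mean()
    return 100 * (act - base) / base

fs = 500
rng = np.random.default_rng(5)
conditions = {"aligned": rng.normal(0, 1, (50, int(2.5 * fs))),      # invented single-trial EEG
              "nonaligned": rng.normal(0, 1, (50, int(2.5 * fs)))}   # epochs start 0.5 s pre-onset

for name, data in conditions.items():
    theta = percent_power_change(data, fs, (4, 7), baseline=(-0.4, 0.0), window=(0.0, 1.0))
    alpha = percent_power_change(data, fs, (8, 12), baseline=(-0.4, 0.0), window=(0.0, 1.0))
    print(f"{name}: theta {theta:+.1f}%, alpha {alpha:+.1f}%")
```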

13.
An increasing number of neuroscience papers capitalize on the assumption, published in this journal, that visual speech is typically 150 ms ahead of auditory speech. However, the estimation of audiovisual asynchrony in the reference paper is valid only in very specific cases: for isolated consonant-vowel syllables or at the beginning of a speech utterance, in what we call “preparatory gestures”. When syllables are chained in sequences, as they typically are in most parts of a natural speech utterance, asynchrony should be defined in a different way. This is what we call “comodulatory gestures”, which provide auditory and visual events more or less in synchrony. We provide audiovisual data on sequences of plosive-vowel syllables (pa, ta, ka, ba, da, ga, ma, na) showing that audiovisual synchrony is actually rather precise, varying between 20 ms of audio lead and 70 ms of audio lag. We show how more complex speech material should result in a range typically varying between 40 ms of audio lead and 200 ms of audio lag, and we discuss how this natural coordination is reflected in the so-called temporal integration window for audiovisual speech perception. Finally, we present a toy model of auditory and audiovisual predictive coding, showing that visual lead is actually not necessary for visual prediction.
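To make the asynchrony measure concrete, here is a toy Python sketch (not the authors' method or data) that estimates, for one synthetic syllable, the lag between the acoustic onset and the onset of a visible lip-opening trace using a simple envelope threshold. The signals, sampling rates and threshold are all assumptions.

```python
import numpy as np

def onset_time(signal, fs, threshold_ratio=0.1):
    """Time (s) of the first sample whose absolute value exceeds a fraction of the maximum."""
    env = np.abs(signal)
    return np.argmax(env > threshold_ratio * env.max()) / fs

fs_audio, fs_video = 44100, 120
rng = np.random.default_rng(6)

# Synthetic acoustic waveform: near-silence, then a voiced portion starting at 150 ms
t_a = np.arange(0, 0.4, 1 / fs_audio)
audio = np.where(t_a > 0.15, np.sin(2 * np.pi * 120 * t_a), 0.0) + rng.normal(0, 0.01, t_a.size)

# Synthetic lip-aperture trace: lips start opening slightly earlier, at 130 ms
t_v = np.arange(0, 0.4, 1 / fs_video)
lip_aperture = np.clip((t_v - 0.13) * 20, 0.0, 1.0)

lag_ms = 1000 * (onset_time(audio, fs_audio) - onset_time(lip_aperture, fs_video))
print(f"audio lag relative to the visible lip gesture: {lag_ms:+.0f} ms")  # positive = audio lags
```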

14.
Psycholinguistic studies of sign language processing provide valuable opportunities to assess whether language phenomena, which are primarily studied in spoken language, are fundamentally shaped by peripheral biology. For example, we know that when given a choice between two syntactically permissible ways to express the same proposition, speakers tend to choose structures that were recently used, a phenomenon known as syntactic priming. Here, we report two experiments testing syntactic priming of a noun phrase construction in American Sign Language (ASL). Experiment 1 shows that second language (L2) signers with normal hearing exhibit syntactic priming in ASL and that priming is stronger when the head noun is repeated between prime and target (the lexical boost effect). Experiment 2 shows that syntactic priming is equally strong among deaf native L1 signers, deaf late L1 learners, and hearing L2 signers. Experiment 2 also tested for, but did not find evidence of, phonological or semantic boosts to syntactic priming in ASL. These results show that despite the profound differences between spoken and signed languages in terms of how they are produced and perceived, the psychological representation of sentence structure (as assessed by syntactic priming) operates similarly in sign and speech.

15.
Learning disabilities (LDs) are the most common psychiatric disorders in children. LDs are classified either as “Specific” or as “Learning Disorder Not Otherwise Specified”. An important hypothesis suggests a failure in a domain-general process (i.e., attention) that explains global academic deficiencies. The aim of this study was to evaluate event-related potential (ERP) patterns of LD Not Otherwise Specified children with respect to a control group. Forty-one children (8–10.6 years old) participated and performed a semantic judgment priming task while ERPs were recorded. Twenty-one LD children had significantly lower scores in all academic skills (reading, writing and arithmetic) than twenty controls. Different ERP patterns were observed for each group. The control group showed smaller amplitudes of an anterior P200 for unrelated than for related word pairs. This P200 effect was followed by a significant early N400a effect (greater amplitudes for unrelated than related word pairs; 350–550 ms) with a right topographical distribution. By contrast, the LD Not Otherwise Specified group showed neither a P200 effect nor a significant N400a effect. This evidence suggests that LD Not Otherwise Specified children might be deficient in the reading, writing and arithmetic domains because of sluggish shifting of attention to process incoming information.

16.
Approaching possession as a phenomenon of the morphology–semantics interface, this paper combines two major perspectives, namely the typological and the semantic perspective. It offers a comprehensive approach that bears on the contrast of semantic and pragmatic possession. After portraying the most essential morphosyntactic strategies of split possession, the lexical distinction of nominal concept types and the resulting representation of non-relational and relational nouns is presented, following the theory of Löbner (2011). This makes it possible to explain the use of a particular construction for a given noun by the mapping of lexical semantics onto morphosyntactic realisation. Semantic possession is understood as involving a relation that is inherent to the meaning of the possessed noun, in the sense that the referent of the noun can only be established if the possessor is specified. In contrast, pragmatic possession implies that the relation POSS is established by the context rather than by the lexical semantics. This opposition enables a fresh look at the morphology of nominal possession under which the notion of (in)alienability is reinterpreted. The morphology of alienable constructions is analysed as establishing pragmatic possession by denoting an operation that shifts the head noun from a sortal to a relational concept. Thus, the innovation of the approach lies in its typologically justified, radically compositional nature. It is shown in detail how this general methodological strategy accounts for all essential morphosyntactic oppositions known from the typology of nominal possession.

17.
A number of studies have explored the time course of Chinese semantic and syntactic processing. However, whether syntactic processing occurs earlier than semantic processing during Chinese sentence reading is still under debate. To further explore this issue, an event-related potential (ERP) experiment was conducted on 21 native Chinese speakers who read individually presented simple Chinese sentences (NP1+VP+NP2) word by word for comprehension and made semantic plausibility judgments. The transitivity of the verbs was manipulated to form three types of stimuli: congruent sentences (CON), sentences with a semantically violated NP2 following a transitive verb (semantic violation, SEM), and sentences with a semantically violated NP2 following an intransitive verb (combined semantic and syntactic violation, SEM+SYN). The ERPs evoked at the target NP2 were analyzed using the Residue Iteration Decomposition (RIDE) method, which reconstructs ERP waveforms blurred by trial-to-trial variability, as well as the conventional ERP method based on stimulus-locked averaging. The conventional ERP analysis showed that, compared with the critical words in CON, those in SEM and SEM+SYN elicited an N400–P600 biphasic pattern. The N400 effects in both violation conditions were of similar size and distribution, but the P600 in SEM+SYN was larger than that in SEM. Compared with the conventional ERP analysis, the RIDE analysis revealed a larger N400 effect and an earlier P600 effect (in the time window of 500–800 ms instead of 570–810 ms). Overall, the combination of conventional ERP analysis and the RIDE method, which compensates for trial-to-trial variability, confirmed the non-significant difference between SEM and SEM+SYN in the earlier N400 time window. Converging with previous findings on other Chinese structures, the current study provides further precise evidence that syntactic processing in Chinese does not occur earlier than semantic processing.
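The RIDE decomposition itself requires its dedicated toolbox, but the conventional analysis the abstract contrasts it with can be sketched in a few lines of Python: stimulus-locked averaging per condition and mean-amplitude difference waves in the N400 and P600 windows. The simulated data, electrode choice and exact windows are assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)
fs = 500
times = np.arange(-0.2, 1.2, 1 / fs)

# Assumed single-trial epochs at the target NP2, one array per condition (trials x time)
epochs = {cond: rng.normal(0, 8, (60, times.size)) for cond in ("CON", "SEM", "SEM+SYN")}

# Conventional analysis: stimulus-locked averaging per condition
erps = {cond: ep.mean(axis=0) for cond, ep in epochs.items()}

def window_mean(erp, lo, hi):
    """Mean amplitude of an averaged waveform between lo and hi seconds."""
    return erp[(times >= lo) & (times <= hi)].mean()

for cond in ("SEM", "SEM+SYN"):
    n400 = window_mean(erps[cond], 0.30, 0.50) - window_mean(erps["CON"], 0.30, 0.50)
    p600 = window_mean(erps[cond], 0.57, 0.81) - window_mean(erps["CON"], 0.57, 0.81)
    print(f"{cond}: N400 effect {n400:+.2f} µV, P600 effect {p600:+.2f} µV")
```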

18.
Findings on song perception and song production have increasingly suggested that common but partially distinct neural networks exist for processing lyrics and melody. However, the neural substrates of song recognition remain to be investigated. The purpose of this study was to use positron emission tomography (PET) to examine the neural substrates involved in accessing the “song lexicon”, a representational system that might provide links between the musical and phonological lexicons. We exposed participants to auditory stimuli consisting of familiar and unfamiliar songs presented in three ways: sung lyrics (song), sung lyrics on a single pitch (lyrics), and the sung syllable ‘la’ on original pitches (melody). The auditory stimuli were designed to have equivalent familiarity to participants, and they were recorded at exactly the same tempo. Eleven right-handed nonmusicians participated in four conditions: three familiarity decision tasks using song, lyrics, and melody, and a sound type decision task (control) that was designed to engage perceptual and prelexical processing but not lexical processing. The contrasts (familiarity decision tasks versus control) showed no common areas of activation between lyrics and melody. This result indicates that essentially separate neural networks exist in semantic memory for the verbal and melodic processing of familiar songs. Verbal lexical processing recruited the left fusiform gyrus and the left inferior occipital gyrus, whereas melodic lexical processing engaged the right middle temporal sulcus and the bilateral temporo-occipital cortices. Moreover, we found that song specifically activated the left posterior inferior temporal cortex, which may serve as an interface between verbal and musical representations in order to facilitate song recognition.

19.
The notion that linguistic forms and meanings are related only by convention and not by any direct relationship between sounds and semantic concepts is a foundational principle of modern linguistics. Though the principle generally holds across the lexicon, systematic exceptions have been identified. These “sound symbolic” forms occur in lexical items and linguistic processes in many individual languages. This paper examines sound symbolism in the languages of Australia. We conduct a statistical investigation of the evidence for several common patterns of sound symbolism, using data from a sample of 120 languages. The patterns examined here include the association of meanings denoting “smallness” or “nearness” with front vowels or palatal consonants, and the association of meanings denoting “largeness” or “distance” with back vowels or velar consonants. Our results provide evidence for the expected associations of vowels and consonants with meanings of “smallness” and “proximity” in Australian languages. However, the patterns uncovered in this region are more complicated than predicted. Several sound-meaning relationships are significant only for segments in prominent positions in the word, and the prevailing mapping between vowel quality and magnitude meaning cannot be characterized by a simple link between gradients of magnitude and vowel F2, contrary to the claims of previous studies.
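One simple way to test associations of this kind, sketched in Python with invented counts: a 2x2 contingency table of meaning class (small/near vs. large/distant) by whether a word's prominent vowel is front, evaluated with a chi-square statistic. A full analysis would of course also model language as a grouping factor; this is only an illustration of the test, not the authors' procedure.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Invented 2x2 counts: rows = meaning class, columns = vowel frontness of the
# word's prominent (e.g. initial) vowel. Real analyses would tabulate per language.
#                 front vowel   non-front vowel
table = np.array([[62,          38],    # 'small' / 'near' meanings
                  [41,          59]])   # 'large' / 'distant' meanings

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2({dof}) = {chi2:.2f}, p = {p:.4f}")
print("odds ratio:", (table[0, 0] * table[1, 1]) / (table[0, 1] * table[1, 0]))
```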

20.
The present study uses the N400 component of event-related potentials (ERPs) as a processing marker of single spoken words presented during sleep. Thirteen healthy volunteers participated in the study. The auditory ERPs were registered in response to a semantic priming paradigm made up of pairs of words (50% related, 50% unrelated) presented in the waking state and during sleep stages II, III–IV and REM. The amplitude, latency and scalp distribution parameters of the negativity observed during stage II and the REM stage were contrasted with the results obtained in the waking state. The “N400-like” effect elicited in these sleep stages showed a significantly greater mean amplitude for pairs of unrelated words than for related pairs, together with an increase in latency. These results suggest that during these sleep stages a semantic priming effect is actively maintained, although lexical processing time increases.
