Similar Articles
20 similar articles found
1.

Background

The existence and function of unilateral hemispheric projections within foveal vision may substantially affect foveal word recognition. The purpose of this research was to reveal these projections and determine their functionality.

Methodology

Single words (and pseudowords) were presented to the left or right of fixation, entirely within either foveal or extrafoveal vision. To maximize the likelihood of unilateral projections for foveal displays, stimuli in foveal vision were presented away from the midline. The processing of stimuli in each location was assessed by combining behavioural measures (reaction times, accuracy) with on-line monitoring of hemispheric activity using event-related potentials recorded over each hemisphere, and carefully controlled presentation procedures using an eye-tracker linked to a fixation-contingent display.

Principal Findings

Event-related potentials 100–150 ms and 150–200 ms after stimulus onset indicated that stimuli in extrafoveal and foveal locations were projected unilaterally to the hemisphere contralateral to the presentation hemifield with no concurrent projection to the ipsilateral hemisphere. These effects were similar for words and pseudowords, suggesting this early division occurred before word recognition. Indeed, event-related potentials revealed differences between words and pseudowords 300–350 ms after stimulus onset, for foveal and extrafoveal locations, indicating that word recognition had now occurred. However, these later event-related potentials also revealed that the hemispheric division observed previously was no longer present for foveal locations but remained for extrafoveal locations. These findings closely matched the behavioural finding that foveal locations produced similar performance on each side of fixation but extrafoveal locations produced left-right asymmetries.

Conclusions

These findings indicate that an initial division in unilateral hemispheric projections occurs in foveal vision away from the midline but is not apparent, or functional, when foveal word recognition actually occurs. In contrast, the division in unilateral hemispheric projections that occurs in extrafoveal locations is still apparent, and is functional, when extrafoveal word recognition takes place.

2.

Objectives

Intonation may serve as a cue for facilitated recognition and processing of spoken words and it has been suggested that the pitch contour of spoken words is implicitly remembered. Thus, using the repetition suppression (RS) effect of BOLD-fMRI signals, we tested whether the same spoken words are differentially processed in language and auditory brain areas depending on whether or not they retain an arbitrary intonation pattern.

Experimental design

Words were presented repeatedly in three blocks for passive and active listening tasks. There were three prosodic conditions, each using a different set of words and a specific task-irrelevant intonation manipulation: (i) all words were presented with the same flat, monotonous pitch contour; (ii) each word had an arbitrary pitch contour that was kept constant across the three repetitions; and (iii) each word had a different arbitrary pitch contour on each repetition.

Principal findings

The repeated presentation of words with a set pitch contour resulted in robust behavioral priming effects as well as in significant RS of the BOLD signals in primary auditory cortex (BA 41), in temporal areas (BA 21/22) bilaterally, and in Broca's area. However, changing the intonation of the same words on each successive repetition resulted in reduced behavioral priming and the abolition of RS effects.

Conclusions

Intonation patterns are retained in memory even when the intonation is task-irrelevant. Implicit memory traces for the pitch contour of spoken words were reflected in facilitated neuronal processing in auditory and language-associated areas. Thus, the results lend support to the notion that prosody, and specifically pitch contour, is strongly associated with the memory representation of spoken words.

3.

Background

The question of how the brain encodes letter position in written words has attracted increasing attention in recent years. A number of models have recently been proposed to accommodate the fact that transposed-letter stimuli like jugde or caniso are perceptually very close to their base words.

Methodology

Here we examined how letter position coding is attained in the tactile modality via Braille reading. The idea is that Braille word recognition may involve more serial processing than visual word recognition, and this may produce differences in the input coding schemes employed to encode letters in written words. To that end, we conducted a lexical decision experiment with adult Braille readers in which the pseudowords were created by transposing/replacing two letters.

Principal Findings

We found a word-frequency effect for words. In addition, unlike parallel experiments in the visual modality, we failed to find any clear signs of transposed-letter confusability effects. This dissociation highlights the differences between modalities.

Conclusions

The present data argue against models of letter position coding that assume that transposed-letter effects (in the visual modality) occur at a relatively late, abstract locus.

4.

Background

Word frequency is the most important variable in language research. However, despite the growing interest in the Chinese language, there are only a few sources of word frequency measures available to researchers, and the quality is less than what researchers in other languages are used to.

Methodology

Following recent work by New, Brysbaert, and colleagues in English, French and Dutch, we assembled a database of word and character frequencies based on a corpus of film and television subtitles (46.8 million characters, 33.5 million words). In line with what has been found in the other languages, the new word and character frequencies explain significantly more of the variance in Chinese word naming and lexical decision performance than measures based on written texts.

Conclusions

Our results confirm that word frequencies based on subtitles are a good estimate of daily language exposure and capture much of the variance in word processing efficiency. In addition, our database is the first to include information about the contextual diversity of the words and to provide good frequency estimates for multi-character words and the different syntactic roles in which the words are used. The word frequencies are freely available for research purposes.
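The two measures highlighted in this entry, frequency per million tokens and contextual diversity, are straightforward to derive once a subtitle corpus has been segmented into words. The sketch below is an illustration only; the toy corpus, function name, and field names are invented for the example and do not come from the database described above.

```python
from collections import Counter

def subtitle_frequencies(films):
    """Word frequency per million tokens and contextual diversity
    (the share of films in which a word appears), computed from a
    corpus given as one token list per film."""
    counts = Counter()
    films_containing = Counter()
    total_tokens = 0
    for tokens in films:
        counts.update(tokens)
        films_containing.update(set(tokens))  # count each film at most once
        total_tokens += len(tokens)
    return {
        word: {
            "per_million": counts[word] * 1_000_000 / total_tokens,
            "contextual_diversity": films_containing[word] / len(films),
        }
        for word in counts
    }

# Toy corpus: three "films", each already segmented into words
corpus = [["我", "看", "电影"], ["我", "喜欢", "你"], ["你", "看", "书"]]
stats = subtitle_frequencies(corpus)
assert stats["看"]["contextual_diversity"] == 2 / 3  # appears in 2 of 3 films
```

Contextual diversity is assumed here to be the proportion of films containing the word; the database's exact operationalization may differ.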

5.

Background

Several recent studies have revealed that words presented with a small increase in interletter spacing are identified faster than words presented with the default interletter spacing (i.e., w a t e r faster than water). Modeling work has shown that this advantage occurs at an early encoding level. Given the implications of this finding for the ease of reading in the new digital era, here we examined whether the beneficial effect of small increases in interletter spacing can be generalized to a normal reading situation.

Methodology

We conducted an experiment in which participants’ eyes were monitored while they read sentences varying in interletter spacing: i) sentences presented with the default (0.0) interletter spacing; ii) sentences presented with +1.0 interletter spacing; and iii) sentences presented with +1.5 interletter spacing.

Principal Findings

Results showed that fixation durations decreased as interletter spacing increased (i.e., fixation durations were shortest with the +1.5 spacing and longest with the default spacing).

Conclusions

Subtle increases in interletter spacing facilitate the encoding of the fixated word during normal reading. Thus, interletter spacing is a parameter that may affect the ease of reading, and it could be adjustable in future implementations of e-book readers.

6.

Background

T. J. Crow suggested that the genetic variance associated with the evolution in Homo sapiens of hemispheric dominance for language carries with it the hazard of the symptoms of schizophrenia. Individuals lacking the typical left hemisphere advantage for language, in particular for phonological components, would be at increased risk of the typical symptoms such as auditory hallucinations and delusions.

Methodology/Principal Findings

Twelve schizophrenic patients treated with low levels of neuroleptics and twelve matched healthy controls participated in an event-related potential experiment. Subjects matched word-pairs in three tasks: rhyming/phonological, semantic judgment and word recognition. Slow evoked potentials were recorded from 26 scalp electrodes, and a laterality index was computed for anterior and posterior regions during the interstimulus interval. During phonological processing, individuals with schizophrenia failed to achieve the left hemispheric dominance consistently observed in healthy controls. The effect involved anterior (fronto-temporal) brain regions and was specific for the Phonological task; group differences were small or absent when subjects processed the same stimulus material in a Semantic task or during Word Recognition, i.e. during tasks that typically activate more widespread areas in both hemispheres.
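The abstract does not state the exact formula for its laterality index. A common convention in the ERP literature, assumed here purely for illustration, is the normalized difference between left- and right-hemisphere amplitudes:

```python
def laterality_index(left_amp, right_amp):
    """Normalized left-right difference: positive values indicate
    left-hemisphere dominance, negative values right-hemisphere
    dominance. (Assumed formulation; the study's exact index is
    not given in the abstract.)"""
    denom = abs(left_amp) + abs(right_amp)
    if denom == 0:
        return 0.0
    return (left_amp - right_amp) / denom

# Hypothetical mean slow-potential amplitudes (in microvolts)
print(laterality_index(4.0, 1.0))  # 0.6 -> left-hemisphere dominance
print(laterality_index(1.5, 1.5))  # 0.0 -> no asymmetry
```

A reduced left-hemisphere advantage during the phonological task would show up as an index closer to zero in patients than in controls.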

Conclusions/Significance

We show for the first time how the deficit of lateralization in the schizophrenic brain is specific for the phonological component of language. This loss of hemispheric dominance would explain typical symptoms, e.g. when an individual's own thoughts are perceived as an external intruding voice. The change can be interpreted as a consequence of “hemispheric indecision”, a failure to segregate phonological engrams in one hemisphere.

7.

Background

It is well established that the left inferior frontal gyrus plays a key role in the cerebral cortical network that supports reading and visual word recognition. Less clear is when in time this contribution begins. We used magnetoencephalography (MEG), which has both good spatial and excellent temporal resolution, to address this question.

Methodology/Principal Findings

MEG data were recorded during a passive viewing paradigm, chosen to emphasize the stimulus-driven component of the cortical response, in which right-handed participants were presented words, consonant strings, and unfamiliar faces to central vision. Time-frequency analyses showed a left-lateralized inferior frontal gyrus (pars opercularis) response to words between 100–250 ms in the beta frequency band that was significantly stronger than the response to consonant strings or faces. The left inferior frontal gyrus response to words peaked at ∼130 ms. This response was significantly later in time than the left middle occipital gyrus, which peaked at ∼115 ms, but not significantly different from the peak response in the left mid fusiform gyrus, which peaked at ∼140 ms, at a location coincident with the fMRI-defined visual word form area (VWFA). Significant responses were also detected to words in other parts of the reading network, including the anterior middle temporal gyrus, the left posterior middle temporal gyrus, the angular and supramarginal gyri, and the left superior temporal gyrus.

Conclusions/Significance

These findings suggest very early interactions between the vision and language domains during visual word recognition, with speech motor areas being activated at the same time as the orthographic word-form is being resolved within the fusiform gyrus. This challenges the conventional view of a temporally serial processing sequence for visual word recognition in which letter forms are initially decoded, interact with their phonological and semantic representations, and only then gain access to a speech code.

8.

Background

When two targets are presented in close temporal proximity amongst a rapid serial visual stream of distractors, a period of disrupted attention and attenuated awareness lasting 200–500 ms follows identification of the first target (T1). This phenomenon is known as the “attentional blink” (AB) and is generally attributed to a failure to consolidate information in visual short-term memory due to depleted or disrupted attentional resources. Previous research has shown that items presented during the AB that fail to reach conscious awareness are still processed to relatively high levels, including the level of meaning. For example, missed word stimuli have been shown to prime later targets that are closely associated words. Although these findings have been interpreted as evidence for semantic processing during the AB, closely associated words (e.g., day-night) may also rely on specific, well-worn, lexical associative links which enhance attention to the relevant target.

Methodology/Principal Findings

We used a measure of semantic distance to create prime-target pairs that are conceptually close, but have low word associations (e.g., wagon and van) and investigated priming from a distractor stimulus presented during the AB to a subsequent target (T2). The stimuli were words (concrete nouns) in Experiment 1 and the corresponding pictures of objects in Experiment 2. In both experiments, report of T2 was facilitated when this item was preceded by a semantically-related distractor.
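The particular semantic-distance measure used to select the prime-target pairs is not specified in this summary. As a generic, hypothetical illustration, conceptual closeness is often quantified as cosine distance between feature or embedding vectors; the three-dimensional vectors below are invented for the example.

```python
import math

def cosine_distance(u, v):
    """1 minus cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return 1.0 - dot / (norm_u * norm_v)

# Invented 3-d feature vectors: "wagon" and "van" are conceptually
# close but weakly associated; "night" is conceptually distant.
wagon = [0.9, 0.8, 0.1]
van = [0.8, 0.9, 0.2]
night = [0.1, 0.0, 0.9]
assert cosine_distance(wagon, van) < cosine_distance(wagon, night)
```

Pairs with a small distance but low free-association strength are exactly the kind of stimuli the experiment required.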

Conclusions/Significance

This study is the first to show conclusively that conceptual information is extracted from distractor stimuli presented during a period of attenuated awareness and that this information spreads to neighbouring concepts within a semantic network.

9.

Background

Studies demonstrating the involvement of motor brain structures in language processing typically focus on time windows beyond the latencies of lexical-semantic access. Consequently, such studies remain inconclusive regarding whether motor brain structures are recruited directly in language processing or through post-linguistic conceptual imagery. In the present study, we introduce a grip-force sensor that allows online measurements of language-induced motor activity during sentence listening. We use this tool to investigate whether language-induced motor activity remains constant or is modulated in negative, as opposed to affirmative, linguistic contexts.

Methodology/Principal Findings

Participants listened to spoken action target words in either affirmative or negative sentences while holding a sensor in a precision grip. The participants were asked to count the sentences containing the name of a country to ensure attention. The grip force signal was recorded continuously. The action words elicited an automatic and significant enhancement of the grip force starting at approximately 300 ms after target word onset in affirmative sentences; however, no comparable grip force modulation was observed when these action words occurred in negative contexts.

Conclusions/Significance

Our findings demonstrate that this simple experimental paradigm can be used to study the online crosstalk between language and the motor systems in an ecological and economical manner. Our data further confirm that the motor brain structures that can be called upon during action word processing are not mandatorily involved; the crosstalk is asymmetrically governed by the linguistic context and not vice versa.

10.
Shapiro AG, Knight EJ, Lu ZL. PLoS ONE 2011;6(4):e18719

Background

Anatomical and physiological differences between the central and peripheral visual systems are well documented. Recent findings have suggested that vision in the periphery is not just a scaled version of foveal vision, but rather is relatively poor at representing spatial and temporal phase and other visual features. Shapiro, Lu, Huang, Knight, and Ennis (2010) have recently examined a motion stimulus (the “curveball illusion”) in which the shift from foveal to peripheral viewing results in a dramatic spatial/temporal discontinuity. Here, we apply a similar analysis to a range of other spatial/temporal configurations that create perceptual conflict between foveal and peripheral vision.

Methodology/Principal Findings

To elucidate how the differences between foveal and peripheral vision affect super-threshold vision, we created a series of complex visual displays that contain opposing sources of motion information. The displays (referred to as the peripheral escalator illusion, peripheral acceleration and deceleration illusions, rotating reversals illusion, and disappearing squares illusion) create dramatically different perceptions when viewed foveally versus peripherally. We compute the first-order and second-order directional motion energy available in the displays using a three-dimensional Fourier analysis in the (x, y, t) space. The peripheral escalator, acceleration and deceleration illusions and rotating reversals illusion all show a similar trend: in the fovea, the first-order motion energy and second-order motion energy can be perceptually separated from each other; in the periphery, the perception seems to correspond to a combination of the multiple sources of motion information. The disappearing squares illusion shows that the ability to assemble the features of Kanizsa squares becomes slower in the periphery.

Conclusions/Significance

The results lead us to hypothesize “feature blur” in the periphery (i.e., the peripheral visual system combines features that the foveal visual system can separate). Feature blur is of general importance because humans frequently bring information from the periphery to the fovea and vice versa.
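The directional motion-energy computation referred to above can be sketched in reduced form. The example below is a simplification, not the authors' analysis: it collapses the full (x, y, t) volume to a single (x, t) slice and simply compares Fourier power in the two motion quadrants for a drifting grating.

```python
import numpy as np

def directional_energy(stim):
    """Split Fourier power of a (t, x) stimulus into the two motion
    quadrants. With this sign convention, rightward motion concentrates
    power where spatial and temporal frequencies have opposite signs."""
    F = np.fft.fftshift(np.fft.fft2(stim))
    ft = np.fft.fftshift(np.fft.fftfreq(stim.shape[0]))[:, None]
    fx = np.fft.fftshift(np.fft.fftfreq(stim.shape[1]))[None, :]
    power = np.abs(F) ** 2
    rightward = power[(fx * ft) < 0].sum()
    leftward = power[(fx * ft) > 0].sum()
    return rightward, leftward

T, X = 64, 64
t, x = np.meshgrid(np.arange(T), np.arange(X), indexing="ij")
# First-order grating drifting rightward at 6/4 = 1.5 pixels per frame
stim = np.cos(2 * np.pi * (4 * x / X - 6 * t / T))
rightward, leftward = directional_energy(stim)
assert rightward > 1000 * leftward
```

Reversing the sign of the temporal term produces a leftward-drifting grating whose power lands in the opposite quadrant, so the same function classifies both directions.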

11.

Background

The capacity to memorize speech sounds is crucial for language acquisition. Newborn human infants can discriminate phonetic contrasts and extract rhythm, prosodic information, and simple regularities from speech. Yet, there is scarce evidence that infants can recognize common words from the surrounding language before four months of age.

Methodology/Principal Findings

We studied one hundred and twelve 1- to 5-day-old infants, using functional near-infrared spectroscopy (fNIRS). We found that newborns tested with a novel bisyllabic word show a greater hemodynamic brain response than newborns tested with a familiar bisyllabic word. We showed that newborns recognize the familiar word after two minutes of silence or after hearing music, but not after hearing a different word.

Conclusions/Significance

The data show that retroactive interference is an important cause of forgetting in the early stages of language acquisition. Moreover, because neonates forget words in the presence of some, but not all, sounds, the results indicate that the interference phenomenon that causes forgetting is selective.

12.

Background

Alexithymia, a condition characterized by deficits in interpreting and regulating feelings, is a risk factor for a variety of psychiatric conditions. Little is known about how alexithymia influences the processing of emotions in music and speech. Appreciation of such emotional qualities in auditory material is fundamental to human experience and has profound consequences for functioning in daily life. We investigated the neural signature of such emotional processing in alexithymia by means of event-related potentials.

Methodology

Affective music and speech prosody were presented as targets following affectively congruent or incongruent visual word primes in two conditions. In two further conditions, affective music and speech prosody served as primes and visually presented words with affective connotations were presented as targets. Thirty-two participants (16 male) judged the affective valence of the targets. We tested the influence of alexithymia on cross-modal affective priming and on N400 amplitudes, indicative of individual sensitivity to an affective mismatch between words, prosody, and music. Our results indicate that the affective priming effect for prosody targets tended to be reduced with increasing scores on alexithymia, while no behavioral differences were observed for music and word targets. At the electrophysiological level, alexithymia was associated with significantly smaller N400 amplitudes in response to affectively incongruent music and speech targets, but not to incongruent word targets.

Conclusions

Our results suggest a reduced sensitivity for the emotional qualities of speech and music in alexithymia during affective categorization. This deficit becomes evident primarily in situations in which a verbalization of emotional information is required.

13.

Background

Previous research has shown that object recognition may develop well into late childhood and adolescence. The present study extends that research and reveals novel differences in holistic and analytic recognition performance in 7–12-year-olds compared to that seen in adults. We interpret our data within a hybrid model of object recognition that proposes two parallel routes for recognition (analytic vs. holistic) modulated by attention.

Methodology/Principal Findings

Using a repetition-priming paradigm, we found in Experiment 1 that children showed no holistic priming, but only analytic priming. Given that holistic priming might be thought to be more ‘primitive’, we confirmed in Experiment 2 that our surprising finding was not because children’s analytic recognition was merely a result of name repetition.

Conclusions/Significance

Our results suggest a developmental primacy of analytic object recognition. By contrast, holistic object recognition skills appear to emerge with a much more protracted trajectory extending into late adolescence.

14.

Background

Alexithymia, or “no words for feelings”, is a personality trait which is associated with difficulties in emotion recognition and regulation. It is unknown whether this deficit is due primarily to regulation, perception, or mentalizing of emotions. In order to shed light on the core deficit, we tested our subjects on a wide range of emotional tasks. We expected the high alexithymics to underperform on all tasks.

Method

Two groups of healthy individuals, high and low scoring on the cognitive component of the Bermond-Vorst Alexithymia Questionnaire, completed questionnaires of emotion regulation and performed several emotion processing tasks including a micro-expression recognition task, recognition of emotional prosody and semantics in spoken sentences, an emotional and identity learning task and a conflicting beliefs and emotions task (emotional mentalizing).

Results

The two groups differed on the Emotion Regulation Questionnaire, Berkeley Expressivity Questionnaire and Empathy Quotient. Specifically, the Emotion Regulation Questionnaire showed that alexithymic individuals relied more on suppression and less on reappraisal strategies. On the behavioral tasks, as expected, alexithymics performed worse on recognition of micro-expressions and on emotional mentalizing. Surprisingly, the groups did not differ on tasks of emotional semantics and prosody and associative emotional learning.

Conclusion

Individuals scoring high on the cognitive component of alexithymia are more prone to suppressive emotion regulation strategies than to reappraisal strategies. Regarding emotional information processing, alexithymia is associated with reduced performance on measures of early processing as well as higher-order mentalizing. However, difficulties in the processing of emotional language were not a core deficit in our alexithymic group.

15.
Shapiro A, Lu ZL, Huang CB, Knight E, Ennis R. PLoS ONE 2010;5(10):e13296

Background

The human visual system does not treat all parts of an image equally: the central segments of an image, which fall on the fovea, are processed with a higher resolution than the segments that fall in the visual periphery. Even though the differences between foveal and peripheral resolution are large, these differences do not usually disrupt our perception of seamless visual space. Here we examine a motion stimulus in which the shift from foveal to peripheral viewing creates a dramatic spatial/temporal discontinuity.

Methodology/Principal Findings

The stimulus consists of a descending disk (global motion) with an internal moving grating (local motion). When observers view the disk centrally, they perceive both global and local motion (i.e., observers see the disk's vertical descent and the internal spinning). When observers view the disk peripherally, the internal portion appears stationary, and the disk appears to descend at an angle. The angle of perceived descent increases as the observer views the stimulus from further in the periphery. We examine the first- and second-order information content in the display with the use of a three-dimensional Fourier analysis and show how our results can be used to describe perceived spatial/temporal discontinuities in real-world situations.

Conclusions/Significance

The perceived shift of the disk's direction in the periphery is consistent with a model in which foveal processing separates first- and second-order motion information while peripheral processing integrates first- and second-order motion information. We argue that the perceived distortion may influence real-world visual observations. To this end, we present a hypothesis and analysis of the perception of the curveball and rising fastball in the sport of baseball. The curveball is a physically measurable phenomenon: the imbalance of forces created by the ball's spin causes the ball to deviate from a straight line and to follow a smooth parabolic path. However, the curveball is also a perceptual puzzle because batters often report that the flight of the ball undergoes a dramatic and nearly discontinuous shift in position as the ball nears home plate. We suggest that the perception of a discontinuous shift in position results from differences between foveal and peripheral processing.

16.

Background

During sentence processing we decode the sequential combination of words, phrases or sentences according to previously learned rules. The computational mechanisms and neural correlates of these rules are still much debated. Another key issue is whether sentence processing relies solely on language-specific mechanisms or is also governed by domain-general principles.

Methodology/Principal Findings

In the present study, we investigated the relationship between sentence processing and implicit sequence learning in a dual-task paradigm in which the primary task was a non-linguistic task (the Alternating Serial Reaction Time Task, measuring probabilistic implicit sequence learning), while the secondary task was a sentence comprehension task relying on syntactic processing. We used two control conditions: a non-linguistic one (a math condition) and a linguistic task (a word processing task). Here we show that sentence processing interfered with the probabilistic implicit sequence learning task, while the other two tasks did not produce a similar effect.

Conclusions/Significance

Our findings suggest that operations during sentence processing draw on resources underlying non-domain-specific probabilistic procedural learning. Furthermore, this provides a bridge between two competing frameworks of language processing. It appears that procedural and statistical models of language are not mutually exclusive, particularly for sentence processing. These results show that the implicit procedural system is engaged in sentence processing, but at the mechanistic level language might still be based on statistical computations.

17.

Background

Zipf's discovery that word frequency distributions obey a power law established parallels between language and biological and physical processes, laying the groundwork for a complex systems perspective on human communication. More recent research has also identified scaling regularities in the dynamics underlying the successive occurrences of events, suggesting the possibility of similar findings for language as well.

Methodology/Principal Findings

By considering frequent words in USENET discussion groups and in disparate databases where the language has different levels of formality, here we show that the distributions of distances between successive occurrences of the same word display bursty deviations from a Poisson process and are well characterized by a stretched exponential (Weibull) scaling. The extent of this deviation depends strongly on semantic type – a measure of the logicality of each word – and less strongly on frequency. We develop a generative model of this behavior that fully determines the dynamics of word usage.

Conclusions/Significance

Recurrence patterns of words are well described by a stretched exponential distribution of recurrence times, an empirical scaling that cannot be anticipated from Zipf's law. Because the use of words provides a uniquely precise and powerful lens on human thought and activity, our findings also have implications for other overt manifestations of collective human dynamics.
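The recurrence analysis can be reproduced in outline. The sketch below computes the distances between successive occurrences of a word and recovers the stretched-exponential parameters from the empirical survival function S(d) = exp(−(d/τ)^β) via its double-log linearization; the synthetic sample stands in for the USENET data, and the fitting method is a simple assumption, not necessarily the authors'.

```python
import numpy as np

def recurrence_distances(tokens, word):
    """Distances (in tokens) between successive occurrences of `word`."""
    positions = [i for i, w in enumerate(tokens) if w == word]
    return np.diff(positions)

def fit_stretched_exponential(distances):
    """Recover (beta, tau) of S(d) = exp(-(d/tau)**beta) by linear
    regression on the linearized survival function:
    ln(-ln S) = beta * ln d - beta * ln tau."""
    d = np.sort(np.asarray(distances, dtype=float))
    # Empirical survival probability at each sorted distance
    S = 1.0 - np.arange(1, len(d) + 1) / (len(d) + 1)
    beta, intercept = np.polyfit(np.log(d), np.log(-np.log(S)), 1)
    tau = np.exp(-intercept / beta)
    return beta, tau

print(recurrence_distances("the cat sat on the mat and the dog".split(), "the"))  # [4 3]

# Synthetic recurrence times from a bursty Weibull (beta=0.5, tau=100)
rng = np.random.default_rng(0)
sample = 100.0 * rng.weibull(0.5, size=5000)
beta, tau = fit_stretched_exponential(sample)
assert 0.4 < beta < 0.6
```

A fitted β < 1 indicates burstiness (clustered recurrences); a memoryless Poisson process would give β = 1.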

18.

Background

Adults with bipolar disorder (BD) have cognitive impairments that affect face processing and social cognition. However, it remains unknown whether euthymic BD patients also show impaired brain markers of emotional processing.

Methodology/Principal Findings

We recruited 26 participants: 13 control subjects and an equal number of euthymic BD patients. We used an event-related potential (ERP) assessment of a dual valence task (DVT), in which faces (angry and happy), words (pleasant and unpleasant), and face-word simultaneous combinations are presented to test the effects of stimulus type (face vs. word) and valence (positive vs. negative). All participants received clinical, neuropsychological and social cognition evaluations. ERP analysis revealed that both groups showed N170 modulation by stimulus type (face > word). BD patients exhibited reduced and enhanced N170 to facial and semantic valence, respectively. The neural source estimation of N170 was a posterior section of the fusiform gyrus (FG), including the face fusiform area (FFA). Neural generators of N170 for faces (FG and FFA) were reduced in BD. In these patients, N170 modulation was associated with social cognition (theory of mind).

Conclusions/Significance

This is the first report of euthymic BD patients exhibiting abnormal N170 emotional discrimination associated with theory of mind impairments.

19.

Background

Auditory laterality is suggested to be characterized by a left hemisphere dominance for the processing of conspecific communication. Nevertheless, there are indications that auditory laterality can also be affected by communicative significance, emotional valence and social recognition.

Methodology/Principal Findings

In order to gain insight into the effects of caller characteristics on auditory laterality in the early primate brain, 17 gray mouse lemurs were tested in a head turn paradigm. The head turn paradigm was established to examine potential functional hemispheric asymmetries on the behavioral level. Subjects were presented with playbacks of two conspecific call types (tsak calls and trill calls) from senders differing in familiarity (unfamiliar vs. familiar) and sex (same sex vs. other sex). Based on the head turn direction towards these calls, evidence was found for a right ear/left hemisphere dominance for the processing of calls of the other sex (Binomial test: p = 0.021, N = 10). Familiarity had no effect on the orientation biases.

Conclusions/Significance

The findings in this study support the growing consensus that auditory laterality is not only determined by the acoustic processing of conspecific communication, but also by other factors, such as the sex of the sender.

20.

Background

A key aspect of representations for object recognition and scene analysis in the ventral visual stream is the spatial frame of reference, be it a viewer-centered, object-centered, or scene-based coordinate system. Coordinate transforms from retinocentric space to other reference frames involve combining neural visual responses with extraretinal postural information.

Methodology/Principal Findings

We examined whether such spatial information is available to anterior inferotemporal (AIT) neurons in the macaque monkey by measuring the effect of eye position on responses to a set of simple 2D shapes. We report, for the first time, a significant eye position effect in over 40% of recorded neurons with small gaze angle shifts from central fixation. Although eye position modulates responses, it does not change shape selectivity.

Conclusions/Significance

These data demonstrate that spatial information is available in AIT for the representation of objects and scenes within a non-retinocentric frame of reference. More generally, the availability of spatial information in AIT calls into question the classic dichotomy in visual processing that associates object shape processing with ventral structures such as AIT but places spatial processing in a separate anatomical stream projecting to dorsal structures.
