Similar documents (20 results)
1.
To understand visual cognition, it is imperative to determine when, how and with what information the human brain categorizes the visual input. Visual categorization consistently involves at least an early and a late stage: the occipito-temporal N170 event-related potential related to stimulus encoding and the parietal P300 involved in perceptual decisions. Here we sought to understand how the brain globally transforms its representations of face categories from their early encoding to the later decision stage over the 400 ms time window encompassing the N170 and P300 brain events. We applied classification image techniques to the behavioral and electroencephalographic data of three observers who categorized seven facial expressions of emotion and report two main findings: (1) over the 400 ms time course, processing of facial features initially spreads bilaterally across the left and right occipito-temporal regions to dynamically converge onto the centro-parietal region; (2) concurrently, information processing gradually shifts from encoding common face features across all spatial scales (e.g., the eyes) to representing only the finer scales of the diagnostic features that are richer in useful information for behavior (e.g., the wide-open eyes in ‘fear’; the detailed mouth in ‘happy’). Our findings suggest that the brain refines its diagnostic representations of visual categories over the first 400 ms of processing by trimming a thorough encoding of features over the N170, to leave only the detailed information important for perceptual decisions over the P300.
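The classification-image technique mentioned above can be illustrated with a minimal reverse-correlation sketch on hypothetical toy data (not the authors' pipeline): a simulated observer's yes/no responses to noise fields are averaged by response, which recovers the template (the "diagnostic feature") driving the decisions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hidden template the simulated observer is sensitive to
size = 32
template = np.zeros((size, size))
template[10:22, 10:22] = 1.0  # hypothetical "diagnostic feature"

n_trials = 2000
noises = rng.normal(0, 1, (n_trials, size, size))
# Simulated observer: responds "present" when the noise field
# correlates positively with the hidden template
responses = np.array([(n * template).sum() > 0 for n in noises])

# Classification image: mean noise on "present" trials minus
# mean noise on "absent" trials
ci = noises[responses].mean(axis=0) - noises[~responses].mean(axis=0)

# Pixels inside the template region should carry more weight
inside = ci[template == 1].mean()
outside = ci[template == 0].mean()
print(inside > outside)  # True
```

With real data the noise fields are the stimulus perturbations shown to observers, and the recovered image localizes which features drove their categorizations.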

2.
Higher N170 amplitudes to words and to faces were recently reported for faster readers of German. Since the shallow German orthography allows phonological recoding of single letters, the reported speed advantages might have their origin in especially well-developed visual processing skills of faster readers. In contrast to German, adult readers of Hebrew are forced to process letter chunks up to whole words. This dependence on more complex visual processing might have created ceiling effects for this skill. Therefore, the current study examined whether visual processing skills, as reflected by N170 amplitudes, also explain reading-speed differences in the deep Hebrew orthography. Forty university students, native speakers of Hebrew without reading impairments, accomplished a lexical decision task (i.e., deciding whether a visually presented stimulus represents a real word or a pseudoword) and a face decision task (i.e., deciding whether a face was presented complete or with missing facial features) while their electroencephalogram was recorded from 64 scalp positions. In both tasks, stronger event-related potentials (ERPs) were observed for faster readers in time windows at about 200 ms. Unlike in previous studies, ERP waveforms in the relevant time windows did not correspond to N170 scalp topographies. The results support the notion of visual processing ability as an orthography-independent marker of reading proficiency, which advances our understanding of regular and impaired reading development.

3.
The current study examined the time course of implicit processing of distinct facial features and the associated event-related potential (ERP) components. To this end, we used a masked priming paradigm to investigate implicit processing of the eyes and mouth in upright and inverted faces, using a prime duration of 33 ms. Two types of prime-target pairs were used: (1) congruent (e.g., open eyes only in both prime and target, or open mouth only in both prime and target); (2) incongruent (e.g., open mouth only in the prime and open eyes only in the target, or vice versa). The identity of the faces changed between prime and target. Participants pressed one button when the target face had the eyes open and another button when the target face had the mouth open. The behavioral results showed faster RTs for the eyes in upright faces than for the eyes in inverted faces or for the mouth in either upright or inverted faces. They also revealed a congruent priming effect for the mouth in upright faces. The ERP findings showed a face orientation effect across all ERP components studied (P1, N1, N170, P2, N2, P3) starting at about 80 ms, and a congruency/priming effect on late components (P2, N2, P3) starting at about 150 ms. Crucially, the results showed that the orientation effect was driven by the eye region (N170, P2) and that the congruency effect started earlier for the eyes (P2) than for the mouth (N2). These findings mark the time course of the processing of internal facial features and provide further evidence that the eyes are automatically processed and that they are very salient facial features that strongly affect the amplitude, latency, and distribution of neural responses to faces.

4.
Adding noise to a visual image makes object recognition more effortful and has a widespread effect on human electrophysiological responses. However, visual cortical processes directly involved in handling the stimulus noise have yet to be identified and dissociated from the modulation of the neural responses due to the deteriorated structural information and increased stimulus uncertainty in the case of noisy images. Here we show that the impairment of face gender categorization performance in the case of noisy images in amblyopic patients correlates with amblyopic deficits measured in the noise-induced modulation of the P1/P2 components of single-trial event-related potentials (ERP). On the other hand, the N170 ERP component is similarly affected by the presence of noise in the two eyes and its modulation does not predict the behavioral deficit. These results have revealed that the efficient processing of noisy images depends on the engagement of additional processing resources both at the early, feature-specific as well as later, object-level stages of visual cortical processing reflected in the P1 and P2 ERP components, respectively. Our findings also suggest that noise-induced modulation of the N170 component might reflect diminished face-selective neuronal responses to face images with deteriorated structural information.

5.
Many studies have linked the processing of different object categories to specific event-related potentials (ERPs) such as the face-specific N170. Despite reports showing that object-related ERPs are influenced by visual stimulus features, there is consensus that these components primarily reflect categorical aspects of the stimuli. Here, we re-investigated this idea by systematically measuring the effects of visual feature manipulations on ERP responses elicited by both structure-from-motion (SFM)-defined and luminance-defined object stimuli. SFM objects elicited a novel component at 200–250 ms (N250) over parietal and posterior temporal sites. We found, however, that the N250 amplitude was unaffected by restructuring SFM stimuli into meaningless objects based on identical visual cues. This suggests that this N250 peak was not uniquely linked to categorical aspects of the objects, but is strongly determined by visual stimulus features. We provide strong support for this hypothesis by parametrically manipulating the depth range of both SFM- and luminance-defined object stimuli and showing that the N250 evoked by SFM stimuli as well as the well-known N170 to static faces were sensitive to this manipulation. Importantly, this effect could not be attributed to compromised object categorization in low depth stimuli, confirming a strong impact of visual stimulus features on object-related ERP signals. As ERP components linked with visual categorical object perception are likely determined by multiple stimulus features, this creates an interesting inverse problem when deriving specific perceptual processes from variations in ERP components.

6.
Rapid detection of evolutionarily relevant threats (e.g., fearful faces) is important for human survival. The ability to rapidly detect fearful faces exhibits high variability across individuals. The present study aimed to investigate the relationship between behavioral detection ability and brain activity, using both event-related potential (ERP) and event-related oscillation (ERO) measurements. Faces with fearful or neutral facial expressions were presented for 17 ms or 200 ms in a backward masking paradigm. Forty-two participants were required to discriminate the facial expressions of the masked faces. The behavioral sensitivity index d′ showed that the ability to detect rapidly presented, masked fearful faces varied across participants. ANOVA showed that facial expression, hemisphere, and presentation duration affected the grand-mean ERP (N1, P1, and N170) and ERO (below 20 Hz, lasting from 100 ms to 250 ms post-stimulus, mainly in the theta band) brain activity. More importantly, the overall detection ability of the 42 subjects was significantly correlated with the emotion effect (i.e., fearful vs. neutral) in the ERP (r = 0.403) and ERO (r = 0.552) measurements. A higher d′ value corresponded to a larger emotional effect (i.e., fearful − neutral) on N170 amplitude and a larger emotional effect on the specific ERO spectral power at the right hemisphere. These results suggest a close link between behavioral detection ability and both the N170 amplitude and the ERO spectral power below 20 Hz. The emotional effect size between fearful and neutral faces in brain activity may reflect the level of conscious awareness of fearful faces.
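The sensitivity index d′ used here comes from signal detection theory: d′ = Z(hit rate) − Z(false-alarm rate), where Z is the inverse of the standard normal CDF. A minimal sketch (not the authors' code), using only the Python standard library:

```python
from statistics import NormalDist

def d_prime(hit_rate: float, fa_rate: float) -> float:
    """Sensitivity index d' = Z(hit rate) - Z(false-alarm rate)."""
    z = NormalDist().inv_cdf  # inverse standard normal CDF
    return z(hit_rate) - z(fa_rate)

# Example: 84% hits and 16% false alarms give d' of about 1.99
print(round(d_prime(0.84, 0.16), 2))
# Chance performance (hits = false alarms) gives d' = 0
print(d_prime(0.5, 0.5))
```

In practice, hit or false-alarm rates of exactly 0 or 1 are first adjusted (e.g., with a log-linear correction), since Z is undefined at those extremes.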

7.
A combination of signals across modalities can facilitate sensory perception. The audiovisual facilitative effect strongly depends on the features of the stimulus. Here, we investigated how sound frequency, one of the basic features of an auditory signal, modulates audiovisual integration. The task of the participants was to respond to a visual target stimulus by pressing a key while ignoring auditory stimuli comprising tones of different frequencies (0.5, 1, 2.5 and 5 kHz). A significant facilitation of reaction times was obtained following audiovisual stimulation, irrespective of whether the task-irrelevant sounds were low or high frequency. Using event-related potentials (ERPs), audiovisual integration was found over the occipital area for 0.5 kHz auditory stimuli from 190–210 ms, for 1 kHz stimuli from 170–200 ms, for 2.5 kHz stimuli from 140–200 ms, and for 5 kHz stimuli from 100–200 ms. These findings suggest that a higher-frequency sound paired with a visual stimulus may be processed or integrated earlier, even though the auditory stimuli are task-irrelevant. Furthermore, audiovisual integration at late latencies (300–340 ms), with a fronto-central topography, was found for auditory stimuli of lower frequencies (0.5, 1 and 2.5 kHz). Our results confirm that audiovisual integration is affected by the frequency of an auditory stimulus. Taken together, the neurophysiological results provide unique insight into how the brain processes a visual signal paired with auditory stimuli of different frequencies.

8.
Object categorization using single-trial electroencephalography (EEG) data measured while participants view images has been studied intensively. In previous studies, multiple event-related potential (ERP) components (e.g., P1, N1, P2, and P3) were used to improve the performance of object categorization of visual stimuli. In this study, we introduce a novel method that uses multiple-kernel support vector machine to fuse multiple ERP component features. We investigate whether fusing the potential complementary information of different ERP components (e.g., P1, N1, P2a, and P2b) can improve the performance of four-category visual object classification in single-trial EEGs. We also compare the classification accuracy of different ERP component fusion methods. Our experimental results indicate that the classification accuracy increases through multiple ERP fusion. Additional comparative analyses indicate that the multiple-kernel fusion method can achieve a mean classification accuracy higher than 72 %, which is substantially better than that achieved with any single ERP component feature (55.07 % for the best single ERP component, N1). We compare the classification results with those of other fusion methods and determine that the accuracy of the multiple-kernel fusion method is 5.47, 4.06, and 16.90 % higher than those of feature concatenation, feature extraction, and decision fusion, respectively. Our study shows that our multiple-kernel fusion method outperforms other fusion methods and thus provides a means to improve the classification performance of single-trial ERPs in brain–computer interface research.
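The multiple-kernel fusion idea can be sketched on synthetic toy data (a hypothetical stand-in for the paper's EEG features, not its actual pipeline): compute one kernel per ERP-component feature block, average the kernels, and classify with the fused kernel. Here a simple kernel nearest-class-mean rule stands in for the SVM:

```python
import numpy as np

rng = np.random.default_rng(1)

def rbf(A, B, gamma):
    """RBF (Gaussian) kernel matrix between row-vector sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

# Synthetic "single-trial" data: two feature blocks per trial,
# hypothetical stand-ins for two ERP components (e.g., P1 and N1)
n_per_class, n_classes = 40, 4
X1 = np.concatenate([rng.normal(c, 1.0, (n_per_class, 5)) for c in range(n_classes)])
X2 = np.concatenate([rng.normal(-c, 1.0, (n_per_class, 5)) for c in range(n_classes)])
y = np.repeat(np.arange(n_classes), n_per_class)

# Train/test split
idx = rng.permutation(len(y))
tr, te = idx[:120], idx[120:]

# Multiple-kernel fusion: uniform average of per-component RBF kernels
K = 0.5 * rbf(X1[te], X1[tr], 0.1) + 0.5 * rbf(X2[te], X2[tr], 0.1)

# Simple kernel classifier: nearest class mean in the fused kernel space
scores = np.stack([K[:, y[tr] == c].mean(1) for c in range(n_classes)], axis=1)
acc = (scores.argmax(1) == y[te]).mean()
print(acc > 0.5)  # comfortably above the 25% chance level on this toy data
```

In the paper's actual method the kernel weights are learned jointly with the SVM; the uniform average above is the simplest special case of that idea.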

9.
Using a rapid serial visual presentation paradigm, we previously showed that the average amplitudes of six event-related potential (ERP) components were affected by different categories of emotional faces. In the current study, we investigated the six discriminating components at the single-trial level to clarify whether the amplitude difference between experimental conditions results from genuine variability of single-trial amplitudes or from latency jitter across trials. We found consistent amplitude differences in the single-trial P1, N170, VPP, N3, and P3 components, demonstrating that a substantial proportion of the average amplitude differences can be explained by pure amplitude variability on a single-trial basis between experimental conditions. These single-trial results verify the three-stage scheme of facial expression processing beyond multi-trial ERP averaging, and reveal the three processing stages of "fear popup", "emotional/unemotional discrimination", and "complete separation" based on single-trial ERP dynamics.
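The distinction drawn here between genuine single-trial amplitude differences and latency jitter can be demonstrated with a small simulation (illustrative only): trial-to-trial jitter alone shrinks the averaged peak even when every single trial has an identical amplitude.

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.arange(-100, 400)  # time axis in ms

def trial(latency, amp=5.0, width=15.0):
    """A single-trial 'component': Gaussian peak at the given latency."""
    return amp * np.exp(-0.5 * ((t - latency) / width) ** 2)

n_trials = 200
# Perfectly time-locked trials vs. trials with 30 ms latency jitter
aligned = np.mean([trial(170) for _ in range(n_trials)], axis=0)
jittered = np.mean([trial(170 + rng.normal(0, 30)) for _ in range(n_trials)], axis=0)

# Jitter smears the average: the averaged peak drops although every
# single trial has the same peak amplitude (5.0)
print(aligned.max())                    # 5.0
print(jittered.max() < aligned.max())   # True
```

This is why the study's single-trial analysis matters: a smaller *average* amplitude is ambiguous between the two mechanisms, whereas single-trial amplitudes are not.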

10.
Converging evidence indicates that face categorization and recognition are localized to specific brain regions. Behavioral experiments have also shown that high and low spatial-frequency content contributes differently to different dimensions of face categorization: identity is conveyed mainly by low-frequency signals, gender by both high and low frequencies, and expression mainly by high frequencies. However, the neural representation and mechanisms of these spatial-frequency contributions to face categorization have rarely been studied. Taking advantage of the monitoring period of epilepsy patients implanted with intracranial electrodes, we presented different types of face images while recording intracranial EEG, and used event-related potentials to examine changes in the waveform peaking at about 170 ms (N170) that is held to be face-specific; electrode-wise significance analyses assessed the contribution of spatial frequency to each categorization dimension. The results showed that high spatial-frequency (HSF) images significantly delayed the N170 latency. With low spatial-frequency (LSF) images alone, the N170 latency was delayed for unfamiliar faces but not for familiar, famous faces. For female faces, the N170 latency was markedly later in the HSF than in the LSF condition, whereas no such difference existed for male faces. Expression produced no differences in the N170. Electrode-based significance analyses, however, showed that more frontal electrodes participated in expression processing; identity processing showed differences at more electrodes for low spatial frequencies, whereas gender processing was balanced across high and low frequencies. Unlike previous behavioral results, expression processing also drew more on low frequencies, and expression differences emerged as early as 114 ms. This fits a cognitive model in which expression information undergoes rapid, coarse processing in occipito-temporal regions before being relayed to other brain areas. Thus, the contributions of high and low spatial frequencies to identity and gender processing may occur in classical face-processing regions and be expressed in the N170, whereas expression information is not expressed by the N170 but is rapidly processed over a wider occipito-temporal range and relayed to other regions such as frontal cortex. This is the first intracranial-EEG study of the neural mechanisms underlying spatial-frequency contributions to face categorization, providing a new entry point for understanding the dynamics of face-feature processing in the brain.
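The HSF/LSF stimulus manipulation described above is conventionally produced by spatial-frequency filtering in the Fourier domain. A minimal sketch (illustrative; the study's actual cutoffs and filters are not specified here):

```python
import numpy as np

rng = np.random.default_rng(3)

def spatial_filter(img, cutoff, keep="low"):
    """Split an image into low/high spatial frequencies with a sharp FFT mask."""
    f = np.fft.fftshift(np.fft.fft2(img))  # DC component moved to the center
    h, w = img.shape
    yy, xx = np.mgrid[-h // 2:h - h // 2, -w // 2:w - w // 2]
    r = np.sqrt(xx ** 2 + yy ** 2)         # radial frequency (cycles/image)
    mask = r <= cutoff if keep == "low" else r > cutoff
    return np.real(np.fft.ifft2(np.fft.ifftshift(f * mask)))

img = rng.normal(size=(64, 64))            # stand-in for a face image
lsf = spatial_filter(img, 8, "low")        # low spatial frequencies only
hsf = spatial_filter(img, 8, "high")       # high spatial frequencies only

# The two bands are complementary: they sum back to the original image
print(np.allclose(lsf + hsf, img))  # True
```

Real stimuli typically use smooth (e.g., Gaussian or Butterworth) masks rather than the sharp cutoff above, to avoid ringing artifacts.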

11.
Adult subjects were asked to recognize a hierarchical visual stimulus (a letter) while their attention was drawn to either the global or local level of the stimulus. Event-related potentials (ERP) and psychophysical indices (reaction time and percentage of correct responses) were measured. An analysis of the psychophysical indices showed the global-level precedence effect, i.e., an increase in the recognition time for a small letter when it is part of an incongruent stimulus. An analysis of ERP components showed level-related (global vs. local) differences in the timing and topography of the brain organization of perceptual processing and regulatory mechanisms of attention. Visual recognition at the local level was accompanied by (1) stronger activation of the visual associative areas (Pz and T6) at the stage of sensory feature analysis (P1 ERP component), (2) involvement mainly of the inferior temporal cortex of the right hemisphere (T6) at the stage of sensory categorization (P2 ERP component), and (3) involvement of the prefrontal cortex of the right hemisphere at the stage of selection of the relevant features of the target (N2 ERP component). Visual recognition at the global level was accompanied by (1) pronounced involvement of mechanisms of early sensory selection (N1 ERP component), and (2) prevailing activation of the parietal cortex of the right hemisphere (P4) at the stage of sensory categorization (P2 ERP component) as well as at the stage of target stimulus identification (P3 ERP component). It is suggested that perception at the global level of a hierarchical stimulus is related primarily to the analysis of the spatial features of the stimulus in the dorsal visual system, whereas perception at the local level primarily involves an analysis of object-related features in the ventral visual system.

12.
Repeated visual processing of an unfamiliar face suppresses neural activity in face-specific areas of the occipito-temporal cortex. This "repetition suppression" (RS) is a primitive mechanism involved in learning of unfamiliar faces, which can be detected through amplitude reduction of the N170 event-related potential (ERP). The dorsolateral prefrontal cortex (DLPFC) exerts top-down influence on early visual processing. However, its contribution to N170 RS and learning of unfamiliar faces remains unclear. Transcranial direct current stimulation (tDCS) transiently increases or decreases cortical excitability, as a function of polarity. We hypothesized that modulation of DLPFC excitability by tDCS would cause polarity-dependent modulations of N170 RS during encoding of unfamiliar faces. tDCS-induced N170 RS enhancement would improve long-term recognition reaction time (RT) and/or accuracy rates, whereas N170 RS impairment would compromise recognition ability. Participants underwent three tDCS conditions in random order at ∼72-hour intervals: right anodal/left cathodal, right cathodal/left anodal, and sham. Immediately following the tDCS conditions, an EEG was recorded during encoding of unfamiliar faces for assessment of the P100 and N170 visual ERPs. The P3a component was analyzed to detect modulation of prefrontal function. Recognition tasks were administered ∼72 hours after encoding. Results indicate that the right anodal/left cathodal condition facilitated N170 RS and induced larger P3a amplitudes, leading to faster recognition RTs. Conversely, the right cathodal/left anodal condition caused N170 amplitudes and RTs to increase, and delayed P3a latency. These data demonstrate that modulation of DLPFC excitability can influence early visual encoding of unfamiliar faces, highlighting the importance of the DLPFC in basic learning mechanisms.

13.
Event-related potentials were used to study the relationship between exogenous facilitation and inhibition of return (IOR) during visual search. When exogenous attention was held at a location that had already been serially searched, responses were delayed (i.e., IOR); the accompanying ERP components were a positive difference over posterior parietal sites with a latency of about 200 ms, a negative difference over left medial prefrontal sites at about 240 ms, and a negative difference over bilateral temporo-parietal junction regions at about 280 ms. In contrast, when exogenous attention was held at a location searched in parallel, a clear facilitation effect emerged, accompanied only by a negative difference over occipito-parietal regions at about 280 ms. These results indicate that exogenous facilitation and IOR involve different brain regions and neural processes, supporting the view that the two are mechanistically dissociable.

14.
Hemodynamic imaging results have associated both gender and body weight with variation in brain responses to food-related information. However, the spatio-temporal brain dynamics of gender-related and weight-wise modulations in food discrimination still remain to be elucidated. We analyzed visual evoked potentials (VEPs) while normal-weight men (n = 12) and women (n = 12) categorized photographs of energy-dense foods and non-food kitchen utensils. VEP analyses showed that food categorization is influenced by gender as early as 170 ms after image onset. Moreover, the female VEP pattern to food categorization co-varied with participants' body weight. Estimations of neural generator activity over the time interval of the VEP modulations (by means of a distributed linear inverse solution [LAURA]) revealed alterations in prefrontal and temporo-parietal source activity as a function of image category and participants' gender. However, only the neural source activity of female responses during food viewing was negatively correlated with body-mass index (BMI) over the respective time interval. Women showed decreased neural source activity, particularly in ventral prefrontal brain regions, when viewing food, but not non-food objects, while no such associations were apparent in male responses to food and non-food viewing. Our study thus indicates that gender influences are already apparent during initial stages of food-related object categorization, with small variations in body weight modulating electrophysiological responses especially in women and in brain areas implicated in food reward valuation and intake control. These findings extend recent reports on prefrontal reward and control circuit responsiveness to food cues and the potential role of this reactivity pattern in the susceptibility to weight gain.

15.
Physical exercise and the training effects of repeated practice of skills over an extended period of time may have additive effects on brain networks and functions. Various motor skills and attentional styles can be developed by athletes engaged in different sports. In this study, the effects of fast ball sports and dance training on attention were investigated with event-related potentials (ERPs). ERPs were recorded in auditory and visual tasks in professional dancer, professional fast ball sports athlete (FBSA), and healthy control volunteer groups consisting of twelve subjects each. In the auditory task, both the dancer and FBSA groups had faster N200 (N2) and P300 (P3) latencies than the controls. In the visual task, the FBSA had faster P3 latencies than the dancers and controls. They also had higher P100 (P1) amplitudes to non-target stimuli than the dancers and controls. On the other hand, dancers had faster P1 latencies and higher N100 (N1) amplitudes to non-target stimuli, and they also had higher P3 amplitudes than the FBSA and controls. Overall, exercise had positive effects on cognitive processing speed, as reflected in the faster auditory N2 and P3 latencies. However, the FBSA and dancers differed in attentional style in the visual task. Dancers displayed predominantly endogenous/top-down features, reflected by increased N1 and P3 amplitudes, decreased P1 amplitude, and shorter P1 latency. On the other hand, the FBSA showed predominantly exogenous/bottom-up processes, revealed by increased P1 amplitude. The controls fell between the two groups.

16.
There appears to be a significant disconnect between symptomatic and functional recovery in bipolar disorder (BD). Some evidence points to interepisode cognitive dysfunction. We tested the hypothesis that some of this dysfunction is related to emotional reactivity, which in euthymic bipolar subjects may affect cognitive processing. A modified emotional gender-categorization oddball task was used. The target was the gender (probability 25%) of faces with negative, positive, or neutral emotional expressions. The experiment had 720 trials (3 blocks × 240 trials each). Each stimulus was presented for 150 ms, and the EEG/ERP responses were recorded for 1,000 ms. The inter-trial interval varied in the 1,100–1,500 ms range to avoid expectancy effects. The task took about 35 min to complete. There were 9 BD and 9 control subjects matched for age and gender. Reaction time (RT) was globally slower in BD subjects. The centro-parietal amplitudes at N170 and N200, and P200 and P300, were generally smaller in the BD group than in controls. Latency was shorter to neutral and negative targets in BD. Frontal P200 amplitude was higher to emotionally negative facial non-targets in BD subjects. The frontal N200 in response to positive facial emotion was less negative in BD subjects. The frontal P300 of BD subjects was lower to emotionally neutral targets. ERP responses to facial emotion in BD subjects differed significantly from those of normal controls. These variations are consistent with the common depressive symptomatology seen in long-term studies of bipolar subjects.

17.
The photoreceptor cells of the nocturnal spider Cupiennius salei were investigated by intracellular electrophysiology. (1) The responses of photoreceptor cells of posterior median (PM) and anterior median (AM) eyes to short (2 ms) light pulses showed long integration times in the dark-adapted and shorter integration times in the light-adapted state. (2) At very low light intensities, the photoreceptors responded to single photons with discrete potentials, called bumps, of high amplitude (2–20 mV). When measured in profoundly dark-adapted photoreceptor cells of the PM eyes these bumps showed an integration time of 128 ± 35 ms (n = 7) whereas in dark-adapted photoreceptor cells of AM eyes the integration time was 84 ± 13 ms (n = 8), indicating that the AM eyes are intrinsically faster than the PM eyes. (3) Long integration times, which improve visual reliability in dim light, and large responses to single photons in the dark-adapted state, contribute to a high visual sensitivity in Cupiennius at night. This conclusion is underlined by a calculation of sensitivity that accounts for both anatomical and physiological characteristics of the eye.

18.
By combining functional magnetic resonance imaging (fMRI), with its high spatial resolution, and 128-channel event-related potentials (ERP), with their high temporal resolution, we measured the spatial localization and time course of ventral visual cortex responses in a shape-recognition task. The fMRI results showed that perception of figure shape activated the ventral GTi/GF cortical regions. Further, ERP source localization, using both seed-dipole models based on the fMRI activation foci and freely moving dipole fits, showed that GTi/GF activity occurred 132–176 ms after stimulus onset, peaking at about 150 ms, corresponding to the N1 component of the ERP. These results simultaneously identify, in the human cerebral cortex, the regions of the visual pathway activated during shape recognition and the time course of that activation.

19.

Background

It is well known that facial expressions represent important social cues. In humans expressing facial emotion, fear may be configured to maximize sensory exposure (e.g., it increases visual input) whereas disgust can reduce sensory exposure (e.g., it decreases visual input). To investigate whether such effects also extend to the attentional system, we used the "attentional blink" (AB) paradigm. Many studies have documented that the second target (T2) of a pair is typically missed when presented within a time window of about 200–500 ms from the first to-be-detected target (T1; i.e., the AB effect). It has recently been proposed that the AB effect depends on the efficiency of a gating system which facilitates the entrance of relevant input into working memory while inhibiting irrelevant input. Following the inhibitory response to post-T1 distractors, prolonged inhibition of the subsequent T2 is observed. In the present study, we hypothesized that processing facial expressions of emotion would influence this attentional gating: fearful faces would increase, but disgust faces would decrease, inhibition of the second target.

Methodology/Principal Findings

We showed that processing fearful versus disgust faces has different effects on these attentional processes: processing fear faces impaired the detection of T2 to a greater extent than did processing disgust faces. This finding implies emotion-specific modulation of attention.

Conclusions/Significance

Based on the recent literature on attention, our finding suggests that processing fear-related stimuli exerts greater inhibitory responses on distractors than processing disgust-related stimuli. This finding is of particular interest for researchers examining the influence of emotional processing on attention and memory in both clinical and normal populations. For example, future research could build on the current study to examine whether the inhibitory processes invoked by fear-related stimuli are the mechanism underlying the enhanced learning of fear-related stimuli.

20.
Audiovisual integration of letters in the human brain
Raij T, Uutela K, Hari R. Neuron, 2000, 28(2): 617–625
Letters of the alphabet have auditory (phonemic) and visual (graphemic) qualities. To investigate the neural representations of such audiovisual objects, we recorded neuromagnetic cortical responses to auditorily, visually, and audiovisually presented single letters. The auditory and visual brain activations first converged around 225 ms after stimulus onset and then interacted predominantly in the right temporo-occipito-parietal junction (280–345 ms) and the left (380–540 ms) and right (450–535 ms) superior temporal sulci. These multisensory brain areas, playing a role in the audiovisual integration of phonemes and graphemes, participate in the neural network supporting the supramodal concept of a "letter." The dynamics of these functions bring new insight into the interplay between sensory and association cortices during object recognition.
