Similar Articles
20 similar articles found.
1.
2.
Many studies have linked the processing of different object categories to specific event-related potentials (ERPs) such as the face-specific N170. Despite reports showing that object-related ERPs are influenced by visual stimulus features, there is consensus that these components primarily reflect categorical aspects of the stimuli. Here, we re-investigated this idea by systematically measuring the effects of visual feature manipulations on ERP responses elicited by both structure-from-motion (SFM)-defined and luminance-defined object stimuli. SFM objects elicited a novel component at 200-250 ms (N250) over parietal and posterior temporal sites. We found, however, that the N250 amplitude was unaffected by restructuring SFM stimuli into meaningless objects based on identical visual cues. This suggests that the N250 peak was not uniquely linked to categorical aspects of the objects but was strongly determined by visual stimulus features. We provide strong support for this hypothesis by parametrically manipulating the depth range of both SFM- and luminance-defined object stimuli and showing that the N250 evoked by SFM stimuli, as well as the well-known N170 to static faces, was sensitive to this manipulation. Importantly, this effect could not be attributed to compromised object categorization in low-depth stimuli, confirming a strong impact of visual stimulus features on object-related ERP signals. As ERP components linked with visual categorical object perception are likely determined by multiple stimulus features, this creates an interesting inverse problem when deriving specific perceptual processes from variations in ERP components.

3.
This study investigates the spatiotemporal dynamics associated with conscious and non-conscious processing of naked and dressed human bodies. To this end, stimuli of naked men and women with visible primary sexual characteristics, as well as dressed bodies, were presented to 20 heterosexual male and female participants while high-resolution EEG data were acquired. The stimuli were either consciously detectable (supraliminal presentations) or were rendered non-conscious through backward masking (subliminal presentations). The N1 event-related potential component was significantly enhanced when participants viewed naked compared to dressed bodies under supraliminal viewing conditions. More importantly, naked bodies of the opposite sex produced a significantly greater N1 component compared to dressed bodies during subliminal presentations, when participants were not aware of the stimulus presented. A source localization algorithm computed on the N1 showed that the response to naked bodies in the supraliminal viewing condition was stronger in body-processing areas, primary visual areas and additional structures related to emotion processing. By contrast, in the subliminal viewing condition, only visual and body-processing areas were activated. These results suggest that naked bodies and primary sexual characteristics are processed early in time (i.e., <200 ms) and activate key brain structures even when they are not consciously detected. It appears that, similarly to what has been reported for emotional faces, sexual features benefit from automatic and rapid processing, most likely due to their high relevance for the individual and their importance for the species in terms of reproductive success.

4.
Antisocial individuals characteristically display self-determined and inconsiderate behavior during social interaction. Furthermore, recognition deficits for fearful facial expressions have been observed in antisocial populations. These observations raise the question of whether antisocial behavioral tendencies are associated with deficits in the basic processing of social cues. The present study investigated early visual processing of social stimuli in a group of healthy female individuals with antisocial behavioral tendencies compared to individuals without these tendencies, using event-related potentials (P1, N170). To this end, happy and angry faces served as feedback stimuli embedded in a gambling task. Results showed processing differences as early as 88–120 ms after feedback onset. Participants low on antisocial traits displayed larger P1 amplitudes than participants high on antisocial traits. No group differences emerged for N170 amplitudes. Attention allocation processes, individual arousal levels and face processing are discussed as possible causes of the observed group differences in P1 amplitudes. In summary, the current data suggest that sensory processing of facial stimuli is functionally intact but less ready to respond in healthy individuals with antisocial tendencies.

5.
Both facial expression and tone of voice represent key signals of emotional communication, but their brain processing correlates remain unclear. Accordingly, we constructed a novel implicit emotion recognition task consisting of simultaneously presented human faces and voices with neutral, happy, and angry valence, embedded within a monkey face and voice recognition task. To investigate the temporal unfolding of the processing of affective information from human face-voice pairings, we recorded event-related potentials (ERPs) to these audiovisual test stimuli in 18 healthy subjects; N100, P200, N250 and P300 components were observed at electrodes in the frontal-central region, while P100, N170 and P270 were observed at electrodes in the parietal-occipital region. Results indicated a significant audiovisual stimulus effect on the amplitudes and latencies of components in the frontal-central region (P200, P300, and N250) but not the parietal-occipital region (P100, N170 and P270). Specifically, P200 and P300 amplitudes were more positive for emotional relative to neutral audiovisual stimuli, irrespective of valence, whereas N250 amplitude was more negative for neutral relative to emotional stimuli. No differentiation was observed between angry and happy conditions. The results suggest that the general effect of emotion on audiovisual processing can emerge as early as 200 ms (P200 peak latency) post stimulus onset, in spite of implicit affective processing task demands, and that this effect is mainly distributed over the frontal-central region.
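The component amplitudes and latencies reported in studies like this one are conventionally obtained by averaging the single trials into an ERP and locating the extremum inside a predefined time window. The following is a minimal illustrative sketch using simulated data and assumed window limits; it is not the analysis pipeline of the study above.

```python
import numpy as np

def peak_in_window(erp, times, t_min, t_max, polarity=+1):
    """Return peak amplitude (uV) and latency (s) of an ERP component.

    erp         : 1-D array, trial-averaged voltage for one channel (uV)
    times       : 1-D array of time points (s), same length as erp
    t_min/t_max : component search window (s), an assumed choice here
    polarity    : +1 for positive components (e.g. P200), -1 for negative (e.g. N250)
    """
    mask = (times >= t_min) & (times <= t_max)
    segment = erp[mask] * polarity          # flip sign so the peak is always a maximum
    idx = np.argmax(segment)
    return polarity * segment[idx], times[mask][idx]

# --- toy example with simulated single-trial data (60 trials, 500 Hz) ---
rng = np.random.default_rng(0)
sfreq, n_trials = 500, 60
times = np.arange(-0.1, 0.6, 1.0 / sfreq)
# simulated P200-like deflection peaking near 200 ms, plus trial-by-trial noise
signal = 5.0 * np.exp(-((times - 0.20) ** 2) / (2 * 0.03 ** 2))
trials = signal + rng.normal(0, 2.0, size=(n_trials, times.size))

erp = trials.mean(axis=0)                   # average over trials -> ERP
amp, lat = peak_in_window(erp, times, 0.15, 0.25, polarity=+1)
print(f"P200-like peak: {amp:.2f} uV at {lat * 1000:.0f} ms")
```

Measuring the mean amplitude over the window, rather than the peak, is an equally common and often more noise-robust alternative.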

6.
Faces are visual objects that hold special significance as the icons of other minds. Previous researchers using event-related potentials (ERPs) have found that faces are uniquely associated with an increased N170/vertex positive potential (VPP) and a more sustained frontal positivity. Here, we examined the processing of faces as objects vs. faces as cues to minds by contrasting images of faces possessing minds (human faces), faces lacking minds (doll faces), and non-face objects (clocks). Although both doll and human faces were associated with an increased N170/VPP from 175-200 ms following stimulus onset, only human faces were associated with a sustained positivity beyond 400 ms. Our data suggest that the N170/VPP reflects the object-based processing of faces, whether of dolls or humans; on the other hand, the later positivity appears to uniquely index the processing of human faces, which are more salient and convey information about identity and the presence of other minds.

7.
The ability to process and identify human faces matures early in life, is universal, and is mediated by a distributed neural system. The temporal dynamics of this cognitive-emotional task can be studied with cerebral visual event-related potentials (ERPs) that are stable from midchildhood onwards. We hypothesized that part of the individual variability in the parameters of the N170, a waveform that specifically marks the early, precategorical phases of human face processing, could be associated with genetic variation at the functional catechol-O-methyltransferase Val158Met polymorphism, which influences information processing, cognitive control tasks and patterns of brain activation during passive processing of human facial stimuli. Forty-nine third and fourth graders underwent a task of implicit processing of other children's facial expressions of emotions while ERPs were recorded. The N170 parameters (latency and amplitude) were insensitive to the type of expression, stimulus repetition, gender or school grade. Although limited by the absence of Met homozygotes among boys, the data showed shorter N170 latency associated with the presence of one or two Met158 alleles, and family-based association tests (as implemented in the PBAT version 2.6 software package) confirmed the association. These data were independent of the serotonin transporter promoter polymorphism and of the N400 waveform investigated in the same group of children in a previous study. Some electrophysiological features of face processing may be stable from midchildhood onwards. Different waveforms generated by face processing may have at least partially independent genetic architectures and yield different implications for the understanding of individual differences in cognition and emotions.

8.
Repeated visual processing of an unfamiliar face suppresses neural activity in face-specific areas of the occipito-temporal cortex. This "repetition suppression" (RS) is a primitive mechanism involved in learning of unfamiliar faces, which can be detected through amplitude reduction of the N170 event-related potential (ERP). The dorsolateral prefrontal cortex (DLPFC) exerts top-down influence on early visual processing. However, its contribution to N170 RS and learning of unfamiliar faces remains unclear. Transcranial direct current stimulation (tDCS) transiently increases or decreases cortical excitability, as a function of polarity. We hypothesized that DLPFC excitability modulation by tDCS would cause polarity-dependent modulations of N170 RS during encoding of unfamiliar faces. tDCS-induced N170 RS enhancement would improve long-term recognition reaction time (RT) and/or accuracy rates, whereas N170 RS impairment would compromise recognition ability. Participants underwent three tDCS conditions in random order at ∼72 hour intervals: right anodal/left cathodal, right cathodal/left anodal and sham. Immediately following tDCS conditions, an EEG was recorded during encoding of unfamiliar faces for assessment of P100 and N170 visual ERPs. The P3a component was analyzed to detect prefrontal function modulation. Recognition tasks were administered ∼72 hours following encoding. Results indicate the right anodal/left cathodal condition facilitated N170 RS and induced larger P3a amplitudes, leading to faster recognition RT. Conversely, the right cathodal/left anodal condition caused N170 amplitude and RTs to increase, and a delay in P3a latency. These data demonstrate that DLPFC excitability modulation can influence early visual encoding of unfamiliar faces, highlighting the importance of DLPFC in basic learning mechanisms.
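N170 repetition suppression, as described above, is typically quantified as a reduction in mean amplitude within the N170 time window from first to repeated presentations. Below is a minimal sketch with simulated single-trial data and an assumed 140-200 ms window; it is not the authors' actual analysis.

```python
import numpy as np

def n170_amplitude(epochs, times, window=(0.14, 0.20)):
    """Mean amplitude in an assumed N170 window for each trial (epochs: trials x time)."""
    mask = (times >= window[0]) & (times <= window[1])
    return epochs[:, mask].mean(axis=1)

def repetition_suppression(first_epochs, repeat_epochs, times):
    """RS index: mean-amplitude change (repeat minus first) in the N170 window.

    Because the N170 is negative-going, a less negative (i.e. larger) value on
    repetition indicates suppression of the response.
    """
    first = n170_amplitude(first_epochs, times).mean()
    repeat = n170_amplitude(repeat_epochs, times).mean()
    return repeat - first

# --- toy example: attenuated N170 on repeated presentations ---
rng = np.random.default_rng(1)
times = np.arange(-0.1, 0.4, 0.002)                        # 500 Hz sampling
n170 = -6.0 * np.exp(-((times - 0.17) ** 2) / (2 * 0.02 ** 2))
first = n170 + rng.normal(0, 1.5, (40, times.size))        # first presentations
repeat = 0.7 * n170 + rng.normal(0, 1.5, (40, times.size)) # repeated presentations
print(f"RS index: {repetition_suppression(first, repeat, times):.2f} uV")
```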

9.
Men and women exhibit different neural, genital, and subjective arousal responses to visual sexual stimuli. The source of these sex differences is unknown. We hypothesized that men and women look differently at sexual stimuli, resulting in different responses. We used eye tracking to measure looking by 15 male and 30 female (15 normally cycling (NC) and 15 oral contraceptive-using (OC)) heterosexual adults viewing sexually explicit photos. NC women were tested during their menstrual, periovulatory, and luteal phases, while men and OC women were tested at equivalent intervals, producing three test sessions per individual. Men, NC women, and OC women differed in the relative amounts of first looks towards, percent time looking at, and probability of looking at defined regions of the pictures. Men spent more time, and had a higher probability of, looking at female faces. NC women had more first looks towards, spent more time, and had a higher probability of, looking at genitals. OC women spent more time, and had a higher probability of, looking at contextual regions of the pictures, those featuring clothing or background. The groups did not differ in looking at the female body. Menstrual cycle phase did not affect women's looking patterns. However, differences between the OC and NC groups suggest hormonal influences on attention to sexual stimuli that were unexplained by differences in subject characteristics. Our finding that men and women attend to different aspects of the same visual sexual stimuli could reflect pre-existing cognitive biases that may contribute to sex differences in neural, subjective, and physiological arousal.
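The gaze measures used above (first looks toward, percent viewing time on, and probability of looking at defined picture regions) are simple per-region aggregates of fixation data. A minimal illustrative sketch with hypothetical region labels and fixation durations follows; it is not the authors' scoring code.

```python
import numpy as np

def looking_measures(fixations, regions):
    """Summarize gaze behaviour over predefined picture regions (AOIs) for one trial.

    fixations : list of (region_label, duration_ms) tuples in temporal order
    regions   : labels of interest, e.g. ("face", "genitals", "body", "context")
    Returns percent viewing time per region and whether the first look hit it.
    """
    total = sum(d for _, d in fixations) or 1.0
    pct_time = {r: 100.0 * sum(d for lab, d in fixations if lab == r) / total
                for r in regions}
    first_look = {r: (fixations[0][0] == r) if fixations else False for r in regions}
    return pct_time, first_look

# --- toy trial with made-up fixation durations ---
trial = [("face", 420), ("body", 610), ("genitals", 330), ("context", 140)]
pct, first = looking_measures(trial, ("face", "genitals", "body", "context"))
print(pct)
print(first)
```

Probability of looking at a region would then be the proportion of trials in which that region received at least one fixation.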

10.

Background

The neural system of our closest living relative, the chimpanzee, is a topic of increasing research interest. However, electrophysiological examinations of neural activity during visual processing in awake chimpanzees are currently lacking.

Methodology/Principal Findings

In the present report, skin-surface event-related brain potentials (ERPs) were measured while a fully awake chimpanzee observed photographs of faces and objects in two experiments. In Experiment 1, human faces and stimuli composed of scrambled face images were displayed. In Experiment 2, three types of pictures (faces, flowers, and cars) were presented. The waveforms evoked by face stimuli were distinguished from other stimulus types, as reflected by an enhanced early positivity appearing before 200 ms post stimulus, and an enhanced late negativity after 200 ms, around posterior and occipito-temporal sites. Face-sensitive activity was clearly observed in both experiments. However, in contrast to the robustly observed face-evoked N170 component in humans, we found that faces did not elicit a peak in the latency range of 150–200 ms in either experiment.

Conclusions/Significance

Although this pilot study examined a single subject and requires further examination, the observed scalp voltage patterns suggest that selective processing of faces in the chimpanzee brain can be detected by recording surface ERPs. In addition, this non-invasive method for examining an awake chimpanzee can be used to extend our knowledge of the characteristics of visual cognition in other primate species.

11.

Background

Selective visual attention is the process by which the visual system enhances behaviorally relevant stimuli and filters out others. Visual attention is thought to operate through a cortical mechanism known as biased competition. Representations of stimuli within cortical visual areas compete such that they mutually suppress each other's neural responses. Competition increases with stimulus proximity and can be biased in favor of one stimulus (over another) as a function of stimulus significance, salience, or expectancy. Though there is considerable evidence of biased competition within the human visual system, the dynamics of the process remain unknown.

Methodology/Principal Findings

Here, we used scalp-recorded electroencephalography (EEG) to examine neural correlates of biased competition in the human visual system. In two experiments, subjects performed a task requiring them to either simultaneously identify two targets (Experiment 1) or discriminate one target while ignoring a decoy (Experiment 2). Competition was manipulated by altering the spatial separation between target(s) and/or decoy. Both experimental tasks should induce competition between stimuli. However, only the task of Experiment 2 should invoke a strong bias in favor of the target (over the decoy). The amplitude of two lateralized components of the event-related potential, the N2pc and Ptc, mirrored these predictions. N2pc amplitude increased with increasing stimulus separation in Experiments 1 and 2. However, Ptc amplitude varied only in Experiment 2, becoming more positive with decreased spatial separation.

Conclusions/Significance

These results suggest that the N2pc and Ptc components may index distinct processes of biased competition: the N2pc reflecting visual competitive interactions and the Ptc reflecting a bias in processing necessary to individuate task-relevant stimuli.
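Lateralized components such as the N2pc (and, analogously, the Ptc) are conventionally computed as a contralateral-minus-ipsilateral difference at posterior electrode pairs relative to the target's hemifield. Below is a minimal sketch assuming a PO7/PO8 electrode pair and a hypothetical 200-300 ms measurement window; the study's actual electrodes and windows are not specified here.

```python
import numpy as np

def lateralized_erp(left_targets, right_targets, times, window=(0.20, 0.30)):
    """Contralateral-minus-ipsilateral difference wave for N2pc-type components.

    left_targets / right_targets : dicts with 'PO7' and 'PO8' ERPs (1-D arrays)
    for trials in which the target appeared in the left or right hemifield.
    Returns the full difference wave and its mean amplitude in the window.
    """
    contra = 0.5 * (right_targets["PO7"] + left_targets["PO8"])  # electrode opposite the target
    ipsi = 0.5 * (left_targets["PO7"] + right_targets["PO8"])    # electrode on the target's side
    diff = contra - ipsi
    mask = (times >= window[0]) & (times <= window[1])
    return diff, diff[mask].mean()

# --- toy example with flat placeholder waveforms ---
times = np.arange(-0.1, 0.5, 0.002)
flat = {"PO7": np.zeros_like(times), "PO8": np.zeros_like(times)}
wave, amp = lateralized_erp(flat, flat, times)
print(f"N2pc mean amplitude in window: {amp:.2f} uV")
```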

12.
Visual categorization may already start within the first 100 ms after stimulus onset, in contrast with the long-held view that during this early stage all complex stimuli are processed equally and that category-specific cortical activation occurs only at later stages. The neural basis of this proposed early stage of high-level analysis is, however, poorly understood. To address this question we used magnetoencephalography and anatomically constrained distributed source modeling to monitor brain activity with millisecond resolution while subjects performed an orientation task on upright and upside-down presented images of three stimulus categories: faces, houses and bodies. Significant inversion effects were found for all three stimulus categories between 70-100 ms after picture onset, with a highly category-specific cortical distribution. Differential responses between upright and inverted faces were found in well-established face-selective areas of the inferior occipital cortex and right fusiform gyrus. In addition, early category-specific inversion effects were found well beyond visual areas. Our results provide the first direct evidence that category-specific processing in high-level category-sensitive cortical areas already takes place within the first 100 ms of visual processing, significantly earlier than previously thought, and suggest the existence of fast category-specific neocortical routes in the human brain.

13.
Functional magnetic resonance imaging indicates that observation of the human body induces a selective activation of a lateral occipitotemporal cortical area called extrastriate body area (EBA). This area is responsive to static and moving images of the human body and parts of it, but it is insensitive to faces and stimulus categories unrelated to the human body. With event-related repetitive transcranial magnetic stimulation, we tested the possible causal relation between neural activity in EBA and visual processing of body-related, nonfacial stimuli. Facial and noncorporeal stimuli were used as a control. Interference with neural activity in EBA induced a clear impairment, consisting of a significant increase in discriminative reaction time, in the visual processing of body parts. The effect was selective for stimulus type, because it affected responses to nonfacial body stimuli but not to noncorporeal and facial stimuli, and for locus of stimulation, because the effect from the interfering stimulation of EBA was absent during a corresponding stimulation of primary visual cortex. The results provide strong evidence that neural activity in EBA is not only correlated with but also causally involved in the visual processing of the human body and its parts, except the face.

14.

Background

Some studies have reported gender differences in the N170, a face-selective event-related potential (ERP) component. This study investigated gender differences in the N170 elicited under an oddball paradigm in order to clarify the effect of task demand on gender differences in early facial processing.

Findings

Twelve males and 10 females discriminated targets (emotional faces) from non-targets (emotionally neutral faces) under an oddball paradigm, pressing a button as quickly as possible in response to the target. A clear N170 was elicited in response to target and non-target stimuli in both males and females. However, females showed a more negative N170 amplitude in response to targets compared with non-targets, while males showed no difference in N170 between targets and non-targets.

Conclusions

The present results suggest that females allocate attention at an early stage when responding to faces actively (targets) compared with viewing faces passively (non-targets). This supports previous findings suggesting that task demand is an important factor in gender differences in the N170.

15.
Congenital prosopagnosia is a lifelong face-recognition impairment in the absence of evidence for structural brain damage. To study its neural correlates, we measured the face-sensitive N170 component of the event-related potential in three members of the same family (father, 56 y; son, 25 y; daughter, 22 y) and in age-matched neurotypical participants (young controls: n = 14, 24.5 ± 2.1 y; old controls: n = 6, 57.3 ± 5.4 y). To compare the face sensitivity of the N170 in congenital prosopagnosic and neurotypical participants, we measured event-related potentials to faces and to phase-scrambled random noise stimuli. In neurotypicals we found significantly larger N170 amplitudes for faces compared to noise stimuli, reflecting normal early face processing. The congenital prosopagnosic participants, by contrast, showed reduced face sensitivity of the N170, and this was due to a larger-than-normal noise-elicited N170 rather than to a smaller face-elicited N170. Interestingly, single-trial analysis revealed that the lack of face sensitivity in congenital prosopagnosia is related to larger oscillatory power and phase-locking in the theta frequency band (4–7 Hz, 130–190 ms), as well as to lower intertrial jitter of the response latency for the noise stimuli. Altogether, these results suggest that congenital prosopagnosia reflects a deficit of the early, structural encoding stage of face perception in filtering between face and non-face stimuli.
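The single-trial measures mentioned above, theta-band oscillatory power and phase-locking, can be illustrated with an inter-trial phase coherence (ITC) computation. The sketch below assumes a 4-7 Hz band, a Hilbert-transform phase estimate, and simulated trials; it is not the authors' actual analysis code.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def theta_itc(trials, sfreq, band=(4.0, 7.0)):
    """Inter-trial phase coherence (phase-locking) in the theta band.

    trials : array (n_trials, n_times), single-trial voltage for one channel
    sfreq  : sampling rate in Hz
    Returns an ITC time course in [0, 1]; 1 = perfect phase-locking across trials.
    """
    nyq = sfreq / 2.0
    b, a = butter(4, [band[0] / nyq, band[1] / nyq], btype="band")
    filtered = filtfilt(b, a, trials, axis=1)           # zero-phase theta band-pass
    phase = np.angle(hilbert(filtered, axis=1))         # instantaneous phase per trial
    return np.abs(np.exp(1j * phase).mean(axis=0))      # resultant vector length

# --- toy example: a phase-locked theta burst embedded in noise ---
rng = np.random.default_rng(2)
sfreq = 500
times = np.arange(0, 0.5, 1 / sfreq)
burst = np.sin(2 * np.pi * 5 * times) * np.exp(-((times - 0.16) ** 2) / (2 * 0.04 ** 2))
trials = burst + rng.normal(0, 1.0, (80, times.size))
print(f"Peak theta ITC: {theta_itc(trials, sfreq).max():.2f}")
```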

16.
Nineteen-channel EEG was recorded in 30 healthy subjects and 32 patients after a first episode of schizophrenia during visual presentation of a random sequence of words and pseudo-words. In the first series of the experiment, subjects had to read the presented verbal stimuli; in the second series they had to press a button when seeing a word; and in the third series they were instructed to press the button when seeing a pseudo-word. We studied the N170, P300 and N400 components. In the group of healthy subjects, the amplitude of the N170 increased to words when they were relevant, which corresponds to the "recognition potential", whereas in the group of patients the amplitude of the N170 increased to pseudo-words when they were relevant, a paradoxical response. The amplitudes of the later ERP waves (P300 and N400) were smaller in the group of schizophrenic patients, and the relevance effect was impaired when the target stimuli were pseudo-words. However, the incongruity effect, consisting of an increase in N400 amplitude to a non-target stimulus, remained intact in patients.

17.
It is generally agreed that some features of a face, namely the eyes, are more salient than others, as indexed by behavioral diagnosticity, gaze-fixation patterns and evoked neural responses. However, because previous studies used unnatural stimuli, there is so far no evidence that the early encoding of a whole face in the human brain is based on the eyes or other facial features. To address this issue, scalp electroencephalogram (EEG) and eye gaze fixations were recorded simultaneously in a gaze-contingent paradigm while observers viewed faces. We found that the N170, indexing the earliest face-sensitive response in the human brain, was largest when the fixation position was located around the nasion. Interestingly, for inverted faces this optimal fixation position was more variable, but mainly clustered in the upper part of the visual field (around the mouth). These observations extend the findings of recent behavioral studies, suggesting that the early encoding of a face, as indexed by the N170, is not driven by the eyes per se, but rather arises from a general perceptual setting (upper visual field advantage) coupled with the alignment of a face stimulus to a stored face template.

18.
The N170 component is considered a neural marker of face-sensitive processing. In the present study, the face-sensitive N170 component of event-related potentials (ERPs) was investigated with a modified oddball paradigm using a natural face (the standard stimulus), human- and animal-like makeup stimuli, scrambled control images that mixed human- and animal-like makeup pieces, and a grey control image. Nineteen participants were instructed to respond within 1000 ms by pressing the ‘F’ or ‘J’ key in response to the standard or deviant stimuli, respectively. We simultaneously recorded ERPs, response accuracy, and reaction times. The behavioral results showed that the main effect of stimulus type was significant for reaction time, whereas there were no significant differences in response accuracy among stimulus types. In relation to the ERPs, N170 amplitudes decreased progressively across human-like makeup stimuli, animal-like makeup stimuli, scrambled control images, and the grey control image. A right-hemisphere advantage was observed in the N170 amplitudes for human-like makeup stimuli, animal-like makeup stimuli, and scrambled control images, but not for the grey control image. These results indicate that the N170 component is sensitive to face-like stimuli and reflects configural processing in face recognition.

19.
A key to understanding visual cognition is to determine when, how, and with what information the human brain distinguishes between visual categories. So far, the dynamics of information processing for categorization of visual stimuli has not been elucidated. By using an ecologically important categorization task (seven expressions of emotion), we demonstrate, in three human observers, that an early brain event (the N170 event-related potential, occurring 170 ms after stimulus onset) integrates visual information specific to each expression, according to a pattern. Specifically, starting 50 ms prior to the ERP peak, facial information tends to be integrated from the eyes downward in the face. This integration stops, and the ERP peaks, when the information diagnostic for judging a particular expression has been integrated (e.g., the eyes in fear, the corners of the nose in disgust, or the mouth in happiness). Consequently, the duration of information integration from the eyes down determines the latency of the N170 for each expression (e.g., with "fear" being faster than "disgust," itself faster than "happy"). For the first time in visual categorization, we relate the dynamics of an important brain event to the dynamics of a precise information-processing function.

20.
Neuroscientific investigations regarding aspects of emotional experiences usually focus on one stimulus modality (e.g., pictorial or verbal). Similarities and differences in the processing between the different modalities have rarely been studied directly. The comparison of verbal and pictorial emotional stimuli often reveals a processing advantage of emotional pictures in terms of larger or more pronounced emotion effects evoked by pictorial stimuli. In this study, we examined whether this picture advantage refers to general processing differences or whether it might partly be attributed to differences in visual complexity between pictures and words. We first developed a new stimulus database comprising valence and arousal ratings for more than 200 concrete objects representable in different modalities including different levels of complexity: words, phrases, pictograms, and photographs. Using fMRI we then studied the neural correlates of the processing of these emotional stimuli in a valence judgment task, in which the stimulus material was controlled for differences in emotional arousal. No superiority for the pictorial stimuli was found in terms of emotional information processing with differences between modalities being revealed mainly in perceptual processing regions. While visual complexity might partly account for previously found differences in emotional stimulus processing, the main existing processing differences are probably due to enhanced processing in modality specific perceptual regions. We would suggest that both pictures and words elicit emotional responses with no general superiority for either stimulus modality, while emotional responses to pictures are modulated by perceptual stimulus features, such as picture complexity.

