Similar Articles
20 similar articles found (search time: 31 ms)
1.
The present study tested whether neural sensitivity to salient emotional facial expressions was influenced by emotional expectations induced by a cue that validly predicted the expression of a subsequently presented target face. Event-related potentials (ERPs) elicited by fearful and neutral faces were recorded while participants performed a gender discrimination task under cued (‘expected’) and uncued (‘unexpected’) conditions. The behavioral results revealed that accuracy was lower for fearful compared with neutral faces in the unexpected condition, while accuracy was similar for fearful and neutral faces in the expected condition. ERP data revealed increased amplitudes in the P2 component and 200–250 ms interval for unexpected fearful versus neutral faces. By contrast, ERP responses were similar for fearful and neutral faces in the expected condition. These findings indicate that human neural sensitivity to fearful faces is modulated by emotional expectations. Although the neural system is sensitive to unpredictable emotionally salient stimuli, sensitivity to salient stimuli is reduced when these stimuli are predictable.
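Analyses like the P2 comparison above typically reduce each condition's trial-averaged waveform to its mean amplitude in a fixed post-stimulus window (here 200–250 ms), then compare conditions. A minimal sketch of that computation on simulated data; the sampling rate, epoch length, and effect sizes are illustrative assumptions, not the study's actual parameters:

```python
import numpy as np

def mean_window_amplitude(epochs, sfreq, tmin, tmax, baseline_s=0.1):
    """Mean voltage of the trial-averaged ERP in [tmin, tmax] seconds
    after stimulus onset. `epochs` is (n_trials, n_samples), with each
    epoch starting `baseline_s` seconds before stimulus onset."""
    start = int((baseline_s + tmin) * sfreq)
    stop = int((baseline_s + tmax) * sfreq)
    erp = epochs.mean(axis=0)          # average across trials -> ERP
    return erp[start:stop].mean()      # mean amplitude in the window

rng = np.random.default_rng(0)
sfreq = 500.0                          # Hz (assumed)
t = np.arange(-0.1, 0.5, 1 / sfreq)   # epoch from -100 ms to 500 ms

def simulate(p2_gain, n_trials=40):
    # Gaussian "P2-like" deflection peaking near 225 ms, plus trial noise
    p2 = p2_gain * np.exp(-((t - 0.225) ** 2) / (2 * 0.02 ** 2))
    return p2 + rng.normal(0, 0.5, (n_trials, t.size))

unexpected_fearful = simulate(p2_gain=4.0)
unexpected_neutral = simulate(p2_gain=2.0)

amp_fear = mean_window_amplitude(unexpected_fearful, sfreq, 0.200, 0.250)
amp_neut = mean_window_amplitude(unexpected_neutral, sfreq, 0.200, 0.250)
print(amp_fear > amp_neut)  # larger windowed P2 for the fearful condition
```

In a real analysis this per-condition value would be computed per participant and channel and fed into a statistical test, rather than compared directly.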

2.
Two experiments were conducted to investigate the automatic processing of emotional facial expressions while performing low or high demand cognitive tasks under unattended conditions. In Experiment 1, 35 subjects performed low (judging the structure of Chinese words) and high (judging the tone of Chinese words) cognitive load tasks while exposed to unattended pictures of fearful, neutral, or happy faces. The results revealed that reaction time was slower and accuracy was higher on the low cognitive load task than on the high cognitive load task. Exposure to fearful faces resulted in significantly longer reaction times and lower accuracy than exposure to neutral faces on the low cognitive load task. In Experiment 2, 26 subjects performed the same word judgment tasks while their brain event-related potentials (ERPs) were measured for 800 ms after the onset of the task stimulus. The amplitudes of the early ERP component around 176 ms (P2) elicited by unattended fearful faces over frontal-central-parietal recording sites were significantly larger than those elicited by unattended neutral faces during the word structure judgment task. Together, the findings of the two experiments indicated that unattended fearful faces captured significantly more attentional resources than unattended neutral faces on the low cognitive load task, but not on the high cognitive load task. It was concluded that fearful faces can automatically capture attention if residual attentional resources are available under unattended conditions.

3.

Background

The present study sought to clarify the relationship between empathy trait and attention responses to happy, angry, surprised, afraid, and sad facial expressions. As indices of attention, we recorded event-related potentials (ERP) and focused on N170 and late positive potential (LPP) components.

Methods

Twenty-two participants (12 males, 10 females) discriminated facial expressions (happy, angry, surprised, afraid, and sad) from emotionally neutral faces under an oddball paradigm. The empathy trait of participants was measured using the Interpersonal Reactivity Index (IRI, J Pers Soc Psychol 44:113–126, 1983).

Results

Participants with higher IRI scores showed: 1) more negative amplitude of N170 (140 to 200 ms) in the right posterior temporal area elicited by happy, angry, surprised, and afraid faces; 2) more positive amplitude of early LPP (300 to 600 ms) in the parietal area elicited in response to angry and afraid faces; and 3) more positive amplitude of late LPP (600 to 800 ms) in the frontal area elicited in response to happy, angry, surprised, afraid, and sad faces, compared to participants with lower IRI scores.

Conclusions

These results suggest that individuals with high empathy pay more attention to various facial expressions than those with low empathy, from the very early stage (reflected in N170) to the late stage (reflected in LPP) of face processing.

4.
There appears to be a significant disconnect between symptomatic and functional recovery in bipolar disorder (BD). Some evidence points to interepisode cognitive dysfunction. We tested the hypothesis that some of this dysfunction is related to emotional reactivity, which in euthymic bipolar subjects may affect cognitive processing. A modified emotional gender-categorization oddball task was used. The target was the gender (probability 25%) of faces with negative, positive, and neutral emotional expressions. The experiment had 720 trials (3 blocks × 240 trials each). Each stimulus was presented for 150 ms, and the EEG/ERP responses were recorded for 1,000 ms. The inter-trial interval varied in the 1,100–1,500 ms range to avoid expectancy effects. The task took about 35 min to complete. There were 9 BD and 9 control subjects matched for age and gender. Reaction time (RT) was globally slower in BD subjects. The centro-parietal amplitudes of the N170, N200, P200, and P300 were generally smaller in the BD group compared to controls. Latency was shorter to neutral and negative targets in BD. Frontal P200 amplitude was higher to emotionally negative facial non-targets in BD subjects. The frontal N200 in response to positive facial emotion was less negative in BD subjects. The frontal P300 of BD subjects was lower to emotionally neutral targets. ERP responses to facial emotion in BD subjects differed significantly from those of normal controls. These variations are consistent with the common depressive symptomatology seen in long-term studies of bipolar subjects.

5.
Antisocial individuals are characterized by self-determined and inconsiderate behavior during social interaction. Furthermore, recognition deficits for fearful facial expressions have been observed in antisocial populations. These observations raise the question of whether antisocial behavioral tendencies are associated with deficits in the basic processing of social cues. The present study investigated early visual processing of social stimuli in a group of healthy female individuals with antisocial behavioral tendencies compared to individuals without these tendencies, while measuring event-related potentials (P1, N170). To this end, happy and angry faces served as feedback stimuli embedded in a gambling task. Results showed processing differences as early as 88–120 ms after feedback onset. Participants low on antisocial traits displayed larger P1 amplitudes than participants high on antisocial traits. No group differences emerged for N170 amplitudes. Attention allocation processes, individual arousal levels, and face processing are discussed as possible causes of the observed group differences in P1 amplitudes. In summary, the current data suggest that sensory processing of facial stimuli is functionally intact but less ready to respond in healthy individuals with antisocial tendencies.

6.
Congenital prosopagnosia is a lifelong face-recognition impairment in the absence of evidence for structural brain damage. To study its neural correlates, we measured the face-sensitive N170 component of the event-related potential in three members of the same family (father, 56 y; son, 25 y; daughter, 22 y) and in age-matched neurotypical participants (young controls: n = 14, 24.5 ± 2.1 y; old controls: n = 6, 57.3 ± 5.4 y). To compare the face sensitivity of the N170 in congenital prosopagnosic and neurotypical participants, we measured event-related potentials for faces and phase-scrambled random noise stimuli. In neurotypicals we found significantly larger N170 amplitudes for faces compared to noise stimuli, reflecting normal early face processing. The congenital prosopagnosic participants, by contrast, showed reduced face sensitivity of the N170, and this was due to a larger-than-normal noise-elicited N170 rather than a smaller face-elicited N170. Interestingly, single-trial analysis revealed that the lack of face sensitivity in congenital prosopagnosia is related to larger oscillatory power and phase-locking in the theta frequency band (4–7 Hz, 130–190 ms), as well as to lower intertrial jitter of the response latency, for the noise stimuli. Altogether, these results suggest that congenital prosopagnosia reflects a deficit in the early structural-encoding stage of face perception, in which face and non-face stimuli are filtered apart.
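The theta-band phase-locking measure reported above quantifies how consistent oscillatory phase is across trials at each time point: a value of 1 means identical phase on every trial, while values near zero indicate random phase. A minimal sketch under assumed filter settings and simulated data (not the study's recordings or parameters):

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def theta_plv(epochs, sfreq, band=(4.0, 7.0)):
    """Phase-locking value across trials at each time point.
    `epochs` is (n_trials, n_samples); returns (n_samples,) in [0, 1]."""
    b, a = butter(4, band, btype="bandpass", fs=sfreq)
    filtered = filtfilt(b, a, epochs, axis=1)         # theta-band signal
    phase = np.angle(hilbert(filtered, axis=1))       # instantaneous phase
    return np.abs(np.exp(1j * phase).mean(axis=0))    # vector mean length

rng = np.random.default_rng(1)
sfreq, n_trials = 250.0, 60
t = np.arange(0, 1.0, 1 / sfreq)

# Phase-locked trials: same 5 Hz oscillation plus noise on every trial
locked = np.sin(2 * np.pi * 5 * t) + rng.normal(0, 1, (n_trials, t.size))
# Non-locked trials: a random phase offset on every trial
jittered = np.stack([np.sin(2 * np.pi * 5 * t + rng.uniform(0, 2 * np.pi))
                     for _ in range(n_trials)]) \
           + rng.normal(0, 1, (n_trials, t.size))

mid = slice(50, 200)  # avoid filter edge effects at epoch boundaries
print(theta_plv(locked, sfreq)[mid].mean())    # high, close to 1
print(theta_plv(jittered, sfreq)[mid].mean())  # low, close to 0
```

In practice this would be computed per channel on baseline-corrected epochs, and time-frequency decompositions (e.g., Morlet wavelets) are often used instead of a single bandpass filter.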

7.
The current study examined the time course of implicit processing of distinct facial features and the associated event-related potential (ERP) components. To this end, we used a masked priming paradigm to investigate implicit processing of the eyes and mouth in upright and inverted faces, using a prime duration of 33 ms. Two types of prime-target pairs were used: 1. congruent (e.g., open eyes only in both prime and target, or open mouth only in both prime and target); 2. incongruent (e.g., open mouth only in the prime and open eyes only in the target, or vice versa). The identity of the faces changed between prime and target. Participants pressed one button when the target face had the eyes open and another button when the target face had the mouth open. The behavioral results showed faster RTs for the eyes in upright faces than for the eyes in inverted faces or for the mouth in either upright or inverted faces. They also revealed a congruent priming effect for the mouth in upright faces. The ERP findings showed a face orientation effect across all ERP components studied (P1, N1, N170, P2, N2, P3) starting at about 80 ms, and a congruency/priming effect on late components (P2, N2, P3) starting at about 150 ms. Crucially, the results showed that the orientation effect was driven by the eye region (N170, P2) and that the congruency effect started earlier for the eyes (P2) than for the mouth (N2). These findings mark the time course of the processing of internal facial features and provide further evidence that the eyes are automatically processed and are very salient facial features that strongly affect the amplitude, latency, and distribution of neural responses to faces.

8.
Effective processing of threat-related stimuli is of significant evolutionary advantage. Given the intricate relationship between attention and the neural processing of threat-related emotions, this study manipulated attention allocation and the emotional category of threat-related stimuli as independent factors and investigated the time course of spatial-attention-modulated processing of disgusting and fearful stimuli. Participants were instructed to direct their attention either to the two vertical or to the two horizontal locations at which two faces and two houses would be presented; the task was to judge the physical identity of the two stimuli at the cued locations. Event-related potential (ERP) evidence was found to support a two-stage model of attention-modulated processing of threat-related emotions. In the early processing stage, disgusted faces evoked a larger P1 component at the right occipital region regardless of attention allocation, while a larger N170 component was elicited by fearful faces at the right occipito-temporal region only when participants attended to houses. In the late processing stage, the amplitude of the parietal P3 component was enhanced for both disgusted and fearful facial expressions only when attention was focused on faces. Based on these results, we propose that the temporal dynamics of the emotion-by-attention interaction consist of two stages. The early stage is characterized by quick and specialized neural encoding of disgusting and fearful stimuli irrespective of voluntary attention allocation, indicating automatic detection and perception of threat-related emotions. The late stage is characterized by attention-gated separation between threat-related and neutral stimuli; the similar ERP patterns evoked by disgusted and fearful faces suggest more generalized processing of threat-related emotions via top-down attentional modulation, on the basis of which defensive behavior in response to threat events is largely facilitated.

9.

Background

Adults with bipolar disorder (BD) have cognitive impairments that affect face processing and social cognition. However, it remains unknown whether these deficits in euthymic BD are accompanied by impaired brain markers of emotional processing.

Methodology/Principal Findings

We recruited twenty-six participants: 13 control subjects and an equal number of euthymic BD participants. We used an event-related potential (ERP) assessment of a dual valence task (DVT), in which faces (angry and happy), words (pleasant and unpleasant), and simultaneous face-word combinations are presented to test the effects of stimulus type (face vs. word) and valence (positive vs. negative). All participants received clinical, neuropsychological, and social cognition evaluations. ERP analysis revealed that both groups showed N170 modulation by stimulus type (face > word). BD patients exhibited reduced N170 responses to facial valence and enhanced responses to semantic valence. The neural source of the N170 was estimated to be a posterior section of the fusiform gyrus (FG), including the fusiform face area (FFA). Neural generators of the N170 for faces (FG and FFA) were reduced in BD. In these patients, N170 modulation was associated with social cognition (theory of mind).

Conclusions/Significance

This is the first report of euthymic BD exhibiting abnormal N170 emotional discrimination associated with theory of mind impairments.

10.

Background

Some studies have reported gender differences in the N170, a face-selective event-related potential (ERP) component. This study investigated gender differences in the N170 elicited under an oddball paradigm in order to clarify the effect of task demand on gender differences in early face processing.

Findings

Twelve males and 10 females discriminated targets (emotional faces) from non-targets (emotionally neutral faces) under an oddball paradigm, pressing a button as quickly as possible in response to the target. Clear N170 was elicited in response to target and non-target stimuli in both males and females. However, females showed more negative amplitude of N170 in response to target compared with non-target, while males did not show different N170 responses between target and non-target.

Conclusions

The present results suggest that females characteristically allocate attention at an early stage when responding to faces actively (targets) as compared with viewing faces passively (non-targets). This supports previous findings suggesting that task demand is an important factor in gender differences in the N170.

11.
Previous studies have shown that early posterior components of event-related potentials (ERPs) are modulated by facial expressions. The goal of the current study was to investigate individual differences in the recognition of facial expressions by examining the relationship between ERP components and the discrimination of facial expressions. Pictures of 3 facial expressions (angry, happy, and neutral) were presented to 36 young adults during ERP recording. Participants were asked to respond with a button press as soon as they recognized the expression depicted. A multiple regression analysis used ERP components as predictor variables and hits and reaction times in response to the facial expressions as dependent variables. The N170 amplitudes significantly predicted accuracy for angry and happy expressions, and the N170 latencies predicted accuracy for neutral expressions. The P2 amplitudes significantly predicted reaction time. The P2 latencies significantly predicted reaction times only for neutral faces. These results suggest that individual differences in the recognition of facial expressions emerge from early components in visual processing.
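The regression described above treats per-participant ERP measures as predictors of behavioral performance. A minimal sketch with simulated data; the variable names, effect sizes, and noise levels are assumptions for illustration only, not the study's values:

```python
import numpy as np

def fit_ols(X, y):
    """Ordinary least squares with an intercept; returns coefficients."""
    X1 = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return beta  # [intercept, b_amplitude, b_latency]

rng = np.random.default_rng(2)
n = 36                                   # participants, as in the study
n170_amp = rng.normal(-5.0, 1.5, n)      # microvolts (negative component)
n170_lat = rng.normal(165.0, 8.0, n)     # milliseconds

# Simulated ground truth: more negative N170 amplitude -> higher accuracy;
# latency carries no effect in this toy data
accuracy = 0.85 - 0.02 * n170_amp + rng.normal(0, 0.01, n)

beta = fit_ols(np.column_stack([n170_amp, n170_lat]), accuracy)
print(beta[1])  # recovered amplitude coefficient, near -0.02
```

A full analysis would also report significance tests for each coefficient (e.g., via statsmodels OLS), which the raw least-squares fit above omits.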

12.
In a dual-task paradigm, participants performed a spatial location working memory task and a forced two-choice perceptual decision task (neutral vs. fearful) with gradually morphed emotional faces (neutral ∼ fearful). Task-irrelevant word distractors (negative, neutral, and control) were experimentally manipulated during spatial working memory encoding. We hypothesized that, if affective perception is influenced by concurrent cognitive load using a working memory task, task-irrelevant emotional distractors would bias subsequent perceptual decision-making on ambiguous facial expression. We found that when either neutral or negative emotional words were presented as task-irrelevant working-memory distractors, participants more frequently reported fearful face perception - but only at the higher emotional intensity levels of morphed faces. Also, the affective perception bias due to negative emotional distractors correlated with a decrease in working memory performance. Taken together, our findings suggest that concurrent working memory load by task-irrelevant distractors has an impact on affective perception of facial expressions.

13.
Although most people can identify facial expressions of emotions well, they still differ in this ability. According to embodied simulation theories, understanding the emotions of others is fostered by involuntarily mimicking the perceived expressions, causing a "reactivation" of the corresponding mental state. Some studies suggest automatic facial mimicry during expression viewing; however, findings on the relationship between mimicry and emotion perception abilities are equivocal. The present study investigated individual differences in emotion perception and their relationship to facial muscle responses - recorded with electromyography (EMG) - in response to emotional facial expressions. N = 269 participants completed multiple tasks measuring face and emotion perception. EMG recordings were taken from a subsample (N = 110) in an independent emotion classification task of short videos displaying six emotions. Confirmatory factor analyses of m. corrugator supercilii activity in response to angry, happy, sad, and neutral expressions showed that individual differences in corrugator activity can be separated into a general response to all faces and an emotion-related response. Structural equation modeling revealed a substantial relationship between the emotion-related response and emotion perception ability, providing evidence for the role of facial muscle activation in emotion perception from an individual differences perspective.

14.
Change blindness refers to the inability to detect visual changes if introduced together with an eye-movement, blink, flash of light, or with distracting stimuli. Evidence of implicit detection of changed visual features during change blindness has been reported in a number of studies using both behavioral and neurophysiological measurements. However, it is not known whether implicit detection occurs only at the level of single features or whether complex organizations of features can be implicitly detected as well. We tested this in adult humans using intact and scrambled versions of schematic faces as stimuli in a change blindness paradigm while recording event-related potentials (ERPs). An enlargement of the face-sensitive N170 ERP component was observed at the right temporal electrode site to changes from scrambled to intact faces, even if the participants were not consciously able to report such changes (change blindness). Similarly, the disintegration of an intact face to scrambled features resulted in attenuated N170 responses during change blindness. Other ERP deflections were modulated by changes, but unlike the N170 component, they were indifferent to the direction of the change. The bidirectional modulation of the N170 component during change blindness suggests that implicit change detection can also occur at the level of complex features in the case of facial stimuli.

15.
The emotions people feel can be simulated internally based on emotional situational contexts. In the present study, we assessed the behavioral and neuroelectric effects of seeing an unexpected emotional facial expression. We investigated the correct answer rate, response times, and event-related potential (ERP) effects during an incongruence paradigm between emotional faces and sentential contexts allowing emotional inferences. Most of the 36 healthy participants were recruited from a larger population (1,463 subjects) based on their scores on the Empathy Questionnaire (EQ). Regression analyses were conducted on these ratings using EQ factors as predictors (cognitive empathy, emotional reactivity, and social skills). Recognition of pragmatic emotional incongruence was less accurate (P < .05) and slower (P < .05) than recognition of congruence. The incongruence effect on response times was inversely predicted by social skills. A significant N400 incongruence effect was found at the centro-parietal (P < .001) and centro-posterior midline (P < .01) electrodes. Cognitive empathy predicted the incongruence effect in the left occipital region in the N400 time window. Finally, incongruence effects were also found on the LPP wave in frontal midline and dorso-frontal regions (P < .05), with no modulation by empathy. Processing pragmatic emotional incongruence is thus more cognitively demanding than processing congruence (as reflected by both behavioral and ERP data). This processing is modulated by personality factors at the behavioral level (through self-reported social skills) and at the neuroelectric level (through self-reported cognitive empathy).

16.
A plethora of research demonstrates that the processing of emotional faces is prioritised over non-emotive stimuli when cognitive resources are limited (known as 'emotional superiority'). However, there is debate as to whether competition for processing resources results in emotional superiority per se or, more specifically, threat superiority. Therefore, to investigate prioritisation of emotional stimuli for storage in visual short-term memory (VSTM), we devised an original VSTM report procedure using schematic (angry, happy, neutral) faces in which processing competition was manipulated. In Experiment 1, display exposure time was manipulated to create competition between stimuli. Participants (n = 20) had to recall a probed stimulus from a set size of four under high (150 ms array exposure duration) and low (400 ms array exposure duration) perceptual processing competition. For the high competition condition (i.e. 150 ms exposure), results revealed an emotional superiority effect per se. In Experiment 2 (n = 20), we increased competition by manipulating set size (three versus five stimuli) whilst maintaining a constrained array exposure duration of 150 ms. Here, for the five-stimulus set size (i.e. maximal competition), only threat superiority emerged. These findings demonstrate attentional prioritisation of emotional faces for storage in VSTM. We argue that task demands modulated the availability of processing resources and consequently the relative magnitude of the emotional/threat superiority effect, with only threatening stimuli prioritised for storage in VSTM under more demanding processing conditions. Our results are discussed in light of models and theories of visual selection; they not only combine two strands of research (visual selection and emotion) but also highlight that a critical factor in the processing of emotional stimuli is the availability of processing resources, which is further constrained by task demands.

17.

Background

Research suggests that individuals with different attachment patterns process social information differently, especially in terms of facial emotion recognition. However, few studies have explored social information processing in adolescents. This study examined the behavioral and ERP correlates of emotional processing in adolescents with different attachment orientations (insecure attachment group and secure attachment group; IAG and SAG, respectively). It also explored the association of these correlates with individual neuropsychological profiles.

Methodology/Principal Findings

We used a modified version of the dual valence task (DVT), in which participants classify stimuli (faces and words) according to emotional valence (positive or negative). Results showed that the IAG performed significantly worse than the SAG on tests of executive function (EF: attention, processing speed, visuospatial abilities, and cognitive flexibility). In the behavioral DVT, the IAG showed lower performance and accuracy. The IAG also exhibited slower RTs for stimuli with negative valence. Compared to the SAG, the IAG showed a negative bias for faces; a larger P1 and an attenuated N170 component over the right hemisphere were observed. A negative bias was also observed in the IAG for word stimuli, demonstrated by comparing N170 amplitudes across valences between the IAG and the SAG. Finally, the amplitude of the N170 elicited by facial stimuli correlated with EF in both groups (and, in the IAG, negative valence correlated with EF).

Conclusions/Significance

Our results suggest that individuals with different attachment patterns process key emotional information, and recruit the corresponding EFs, differently. This is evidenced by early modulation of ERP component amplitudes, which correlated with behavioral and neuropsychological effects. In brief, attachment patterns appear to impact multiple domains, such as emotional processing and EFs.

18.
Using a rapid serial visual presentation paradigm, we previously showed that the average amplitudes of six event-related potential (ERP) components were affected by different categories of emotional faces. In the current study, we investigated the six discriminating components at the single-trial level to clarify whether the amplitude differences between experimental conditions result from real variability in single-trial amplitudes or from latency jitter across trials. We found consistent amplitude differences in the single-trial P1, N170, VPP, N3, and P3 components, demonstrating that a substantial proportion of the average amplitude differences can be explained by pure variability in single-trial amplitudes between experimental conditions. These single-trial results verified the three-stage scheme of facial expression processing beyond multi-trial ERP averaging, and revealed the three processing stages of "fear popup", "emotional/unemotional discrimination", and "complete separation" in the single-trial ERP dynamics.
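The motivation for the single-trial analysis above is that trial-averaged amplitude differences can arise in two ways: from genuine amplitude differences, or from differing latency jitter, since jitter smears the average and lowers its peak even when every single trial has the same amplitude. A simulated illustration of that confound (all parameters are assumed for illustration):

```python
import numpy as np

sfreq = 500.0
t = np.arange(0, 0.6, 1 / sfreq)

def make_trials(jitter_ms, n_trials=100, peak_s=0.3, rng=None):
    """Trials containing one Gaussian 'component' whose latency is
    jittered across trials by a zero-mean Gaussian shift."""
    rng = rng or np.random.default_rng(3)
    shifts = rng.normal(0, jitter_ms / 1000.0, n_trials)
    return np.stack([np.exp(-((t - peak_s - s) ** 2) / (2 * 0.015 ** 2))
                     for s in shifts])

no_jitter = make_trials(jitter_ms=0)
jittered = make_trials(jitter_ms=40)

avg_peak_no_jitter = no_jitter.mean(axis=0).max()  # full amplitude survives
avg_peak_jittered = jittered.mean(axis=0).max()    # smeared, much smaller
single_trial_peak = jittered.max(axis=1).mean()    # per-trial amplitude intact

print(avg_peak_no_jitter)
print(avg_peak_jittered)
print(single_trial_peak)
```

Measuring amplitudes trial by trial, as the study does, distinguishes the two cases: here the jittered condition shows a reduced average peak while its single-trial peaks remain at full amplitude.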

19.
A recent functional magnetic resonance imaging (fMRI) study by our group demonstrated that dynamic emotional faces are recognized more accurately and evoke more widespread patterns of hemodynamic brain responses than static emotional faces. Based on this experimental design, the present study investigated the spatio-temporal processing of static and dynamic emotional facial expressions in 19 healthy women by means of multi-channel electroencephalography (EEG), event-related potentials (ERPs), and fMRI-constrained regional source analyses. ERP analysis showed an increased amplitude of the LPP (late posterior positivity) over centro-parietal regions for static facial expressions of disgust compared to neutral faces. In addition, the LPP was more widespread and temporally prolonged for dynamic compared to static faces of disgust and happiness. fMRI-constrained source analysis of static emotional face stimuli indicated spatio-temporal modulation of predominantly posterior regional brain activation related to the visual processing stream, for both emotional valences compared to the neutral condition, in the fusiform gyrus. The spatio-temporal processing of dynamic stimuli yielded enhanced source activity for emotional compared to neutral conditions in temporal (e.g., fusiform gyrus) and frontal regions (e.g., ventromedial prefrontal cortex, medial and inferior frontal cortex) in early and again in later time windows. The present data support the view that dynamic facial displays convey more information and engage more complex neural networks, in particular because their changing features potentially trigger sustained activation related to a continuing evaluation of those faces. A combined fMRI and EEG approach thus provides advanced insight into the spatio-temporal characteristics of emotional face processing, by also revealing additional neural generators not identifiable with fMRI alone.

20.
Repeated visual processing of an unfamiliar face suppresses neural activity in face-specific areas of the occipito-temporal cortex. This "repetition suppression" (RS) is a primitive mechanism involved in the learning of unfamiliar faces, which can be detected through amplitude reduction of the N170 event-related potential (ERP). The dorsolateral prefrontal cortex (DLPFC) exerts top-down influence on early visual processing; however, its contribution to N170 RS and the learning of unfamiliar faces remains unclear. Transcranial direct current stimulation (tDCS) transiently increases or decreases cortical excitability as a function of polarity. We hypothesized that modulating DLPFC excitability with tDCS would cause polarity-dependent modulations of N170 RS during encoding of unfamiliar faces: tDCS-induced enhancement of N170 RS would improve long-term recognition reaction times (RTs) and/or accuracy rates, whereas impairment of N170 RS would compromise recognition ability. Participants underwent three tDCS conditions in random order at ∼72-hour intervals: right anodal/left cathodal, right cathodal/left anodal, and sham. Immediately following each tDCS condition, an EEG was recorded during encoding of unfamiliar faces for assessment of the P100 and N170 visual ERPs. The P3a component was analyzed to detect modulation of prefrontal function. Recognition tasks were administered ∼72 hours after encoding. Results indicate that the right anodal/left cathodal condition facilitated N170 RS and induced larger P3a amplitudes, leading to faster recognition RTs. Conversely, the right cathodal/left anodal condition caused N170 amplitudes and RTs to increase, and delayed P3a latency. These data demonstrate that modulation of DLPFC excitability can influence early visual encoding of unfamiliar faces, highlighting the importance of the DLPFC in basic learning mechanisms.
