Similar Articles
20 similar articles found
1.

Background

Previous studies have shown that females and males differ in the processing of emotional facial expressions including the recognition of emotion, and that emotional facial expressions are detected more rapidly than are neutral expressions. However, whether the sexes differ in the rapid detection of emotional facial expressions remains unclear.

Methodology/Principal Findings

We measured reaction times (RTs) during a visual search task in which 44 females and 46 males detected normal facial expressions of anger and happiness or their anti-expressions within crowds of neutral expressions. Anti-expressions expressed neutral emotions with visual changes quantitatively comparable to normal expressions. We also obtained subjective emotional ratings in response to the facial expression stimuli. RT results showed that both females and males detected normal expressions more rapidly than anti-expressions and normal-angry expressions more rapidly than normal-happy expressions. However, females and males showed different patterns in their subjective ratings in response to the facial expressions. Furthermore, sex differences were found in the relationships between subjective ratings and RTs. High arousal was more strongly associated with rapid detection of facial expressions in females, whereas negatively valenced feelings were more clearly associated with the rapid detection of facial expressions in males.
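The rating-RT relationship described above can be illustrated with a small sketch. This is not the authors' analysis pipeline; it simply correlates hypothetical per-stimulus mean arousal and valence ratings with mean detection RTs for one group, under assumed variable names and simulated values.

```python
import numpy as np
from scipy import stats

def rating_rt_correlations(arousal, valence, rt):
    """Correlate subjective ratings with detection speed for one group."""
    r_arousal, p_arousal = stats.pearsonr(arousal, rt)
    r_valence, p_valence = stats.pearsonr(valence, rt)
    return {"arousal_vs_rt": (r_arousal, p_arousal),
            "valence_vs_rt": (r_valence, p_valence)}

# Hypothetical per-stimulus summaries for one sex group: mean ratings and
# mean detection RTs (ms). Values are illustrative only, not the study's data.
rng = np.random.default_rng(0)
n_stimuli = 32
arousal = rng.uniform(1, 9, n_stimuli)
valence = rng.uniform(1, 9, n_stimuli)
rt = 900 - 25 * arousal + rng.normal(0, 40, n_stimuli)  # faster with higher arousal

print(rating_rt_correlations(arousal, valence, rt))
```

Running the same correlations separately for females and males, and comparing the resulting coefficients, mirrors the kind of sex-specific rating-RT association reported above.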

Conclusion

Our data suggest that females and males differ in their subjective emotional reactions to facial expressions and in the emotional processes that modulate the detection of facial expressions.

2.

Background

With advances in research on fetal behavioural development, the question of whether we can identify fetal facial expressions and determine their developmental progression takes on greater importance. In this study we investigate longitudinally the increasing complexity of combinations of facial movements from 24 to 36 weeks gestation in a sample of healthy fetuses using frame-by-frame coding of 4-D ultrasound scans. The primary aim was to examine whether these complex facial movements coalesce into a recognisable facial expression of pain/distress.

Methodology/Findings

Fifteen fetuses (8 girls, 7 boys) were observed four times in the second and third trimesters of pregnancy. Fetuses showed significant progress towards more complex facial expressions as gestational age increased. Statistical analysis of the facial movements making up a specific facial configuration, namely “pain/distress”, also demonstrated that this facial expression becomes significantly more complete as the fetus matures.

Conclusions/Significance

The study shows that one can determine the normal progression of fetal facial movements. Furthermore, our results suggest that healthy fetuses progress towards an increasingly complete pain/distress expression as they mature. We argue that this is an adaptive process which is beneficial to the fetus postnatally and has the potential to identify normal versus abnormal developmental pathways.

3.

Background

Although ample evidence suggests that emotion and response inhibition are interrelated at the behavioral and neural levels, the neural substrates of response inhibition to negative facial information remain unclear. We therefore used event-related potential (ERP) methods to explore the effects of explicit and implicit facial expression processing on response inhibition.

Methods

We used an implicit (gender categorization) and an explicit (emotion categorization) emotional Go/Nogo task in which neutral and sad faces were presented. Electrophysiological markers at the scalp and the voxel level were analyzed during the two tasks.

Results

We detected a task × emotion × trial-type interaction effect in the Nogo-P3 stage. Larger Nogo-P3 amplitudes for sad than for neutral conditions were detected in the explicit task. However, the amplitude difference between the two conditions was not significant in the implicit task. Source analyses of the P3 component revealed that the right inferior frontal junction (rIFJ) was involved at this stage. The current source density (CSD) in the rIFJ was higher for sad than for neutral conditions in the explicit task, but not in the implicit task.

Conclusions

The findings indicated that response inhibition was modulated by sad facial information at the action inhibition stage when facial expressions were processed explicitly rather than implicitly. The rIFJ may be a key brain region in emotion regulation.

4.

Background

The present study sought to clarify the relationship between trait empathy and attentional responses to happy, angry, surprised, afraid, and sad facial expressions. As indices of attention, we recorded event-related potentials (ERPs) and focused on the N170 and late positive potential (LPP) components.

Methods

Twenty-two participants (12 males, 10 females) discriminated facial expressions (happy, angry, surprised, afraid, and sad) from emotionally neutral faces under an oddball paradigm. Trait empathy was measured using the Interpersonal Reactivity Index (IRI, J Pers Soc Psychol 44:113–126, 1983).

Results

Participants with higher IRI scores showed: 1) more negative amplitude of N170 (140 to 200 ms) in the right posterior temporal area elicited by happy, angry, surprised, and afraid faces; 2) more positive amplitude of early LPP (300 to 600 ms) in the parietal area elicited in response to angry and afraid faces; and 3) more positive amplitude of late LPP (600 to 800 ms) in the frontal area elicited in response to happy, angry, surprised, afraid, and sad faces, compared to participants with lower IRI scores.
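As a rough illustration of how such window-based ERP amplitudes can be extracted, the sketch below averages epoched data within the N170 and LPP time windows named above. The sampling rate, channel indices, and simulated epochs are placeholder assumptions, not the study's recording parameters.

```python
import numpy as np

# Assumed recording parameters and simulated epoched data (trials x channels x samples).
sfreq = 500.0                               # Hz (assumed)
times = np.arange(-0.2, 1.0, 1.0 / sfreq)   # seconds, relative to face onset
n_trials, n_channels = 60, 32
epochs = np.random.randn(n_trials, n_channels, times.size)  # microvolts (simulated)

def mean_amplitude(epochs, times, tmin, tmax, channels):
    """Average amplitude across trials, selected channels, and a time window."""
    tmask = (times >= tmin) & (times < tmax)
    return epochs[:, channels, :][:, :, tmask].mean()

windows = {"N170": (0.140, 0.200),
           "early LPP": (0.300, 0.600),
           "late LPP": (0.600, 0.800)}
posterior_temporal = [28, 29, 30]           # illustrative channel indices
for name, (tmin, tmax) in windows.items():
    print(name, mean_amplitude(epochs, times, tmin, tmax, posterior_temporal))
```

Comparing these window means between high- and low-IRI groups is the kind of contrast summarized in the results above.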

Conclusions

These results suggest that individuals with high empathy pay more attention to various facial expressions than do those with low empathy, from very early-stage (reflected in the N170) to late-stage (reflected in the LPP) processing of faces.

5.

Background

In everyday life, signals of danger, such as aversive facial expressions, usually appear in the peripheral visual field. Although facial expression processing in central vision has been extensively studied, its processing in peripheral vision has received far less attention.

Methodology/Principal Findings

Using behavioral measures, we explored the human ability to detect fear and disgust vs. neutral expressions and compared it to the ability to discriminate between genders at eccentricities up to 40°. Responses were faster for the detection of emotion compared to gender. Emotion was detected from fearful faces up to 40° of eccentricity.

Conclusions

Our results demonstrate the human ability to detect facial expressions presented in the far periphery, up to 40° of eccentricity. The increasing advantage of emotion over gender processing with increasing eccentricity might reflect a major involvement of the magnocellular visual pathway in facial expression processing. This advantage may suggest that emotion detection, relative to gender identification, is less affected by visual acuity and within-face crowding in the periphery. These results are consistent with specific and automatic processing of danger-related information, which may drive attention to those messages and allow for a fast behavioral reaction.

6.

Background

Alexithymia, a condition characterized by deficits in interpreting and regulating feelings, is a risk factor for a variety of psychiatric conditions. Little is known about how alexithymia influences the processing of emotions in music and speech. Appreciation of such emotional qualities in auditory material is fundamental to human experience and has profound consequences for functioning in daily life. We investigated the neural signature of such emotional processing in alexithymia by means of event-related potentials.

Methodology

Affective music and speech prosody were presented as targets following affectively congruent or incongruent visual word primes in two conditions. In two further conditions, affective music and speech prosody served as primes and visually presented words with affective connotations were presented as targets. Thirty-two participants (16 male) judged the affective valence of the targets. We tested the influence of alexithymia on cross-modal affective priming and on N400 amplitudes, indicative of individual sensitivity to an affective mismatch between words, prosody, and music. Our results indicate that the affective priming effect for prosody targets tended to be reduced with increasing scores on alexithymia, while no behavioral differences were observed for music and word targets. At the electrophysiological level, alexithymia was associated with significantly smaller N400 amplitudes in response to affectively incongruent music and speech targets, but not to incongruent word targets.

Conclusions

Our results suggest a reduced sensitivity to the emotional qualities of speech and music in alexithymia during affective categorization. This deficit becomes evident primarily in situations in which a verbalization of emotional information is required.

7.

Background

The relationships between facial mimicry and subsequent psychological processes remain unclear. We hypothesized that congruent facial muscle activity would elicit emotional experiences and that the experienced emotion would in turn induce emotion recognition.

Methodology/Principal Findings

To test this hypothesis, we re-analyzed data collected in two previous studies. We recorded facial electromyography (EMG) from the corrugator supercilii and zygomaticus major and obtained ratings on scales of valence and arousal for experienced emotions (Study 1) and for experienced and recognized emotions (Study 2) while participants viewed dynamic and static facial expressions of negative and positive emotions. Path analyses showed that facial EMG activity consistently predicted the valence ratings for the emotions experienced in response to dynamic facial expressions. The experienced valence ratings in turn predicted the recognized valence ratings in Study 2.
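A minimal sketch of the two-step path idea (EMG activity predicting experienced valence, which in turn predicts recognized valence) is given below, fitted as two simple regressions on simulated data. Variable names and values are hypothetical, and the original analysis used a full path model rather than this simplification.

```python
import numpy as np
import statsmodels.api as sm

# Simulated data standing in for per-trial summaries; not the study's data.
rng = np.random.default_rng(1)
n = 120
emg_diff = rng.normal(size=n)                                # zygomaticus minus corrugator activity
experienced = 0.6 * emg_diff + rng.normal(scale=0.8, size=n) # experienced valence ratings
recognized = 0.7 * experienced + rng.normal(scale=0.8, size=n)

# Path a: facial EMG predicts experienced valence ratings.
path_a = sm.OLS(experienced, sm.add_constant(emg_diff)).fit()
# Path b: experienced valence predicts recognized valence ratings.
path_b = sm.OLS(recognized, sm.add_constant(experienced)).fit()

print("EMG -> experienced valence:", path_a.params[1])
print("experienced -> recognized valence:", path_b.params[1])
```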

Conclusion

These results suggest that facial mimicry influences the sharing and recognition of emotional valence in response to others' dynamic facial expressions.

8.

Background

Impairments in facial mimicry are considered a proxy for deficits in affective empathy and have been demonstrated in 10-year-old children and in adolescents with disruptive behavior disorder (DBD). However, it is not known whether these impairments are already present at an earlier age. Emotional deficits have also been shown in children with attention-deficit/hyperactivity disorder (ADHD).

Aims

To examine facial mimicry in younger, 6- to 7-year-old children with DBD and with ADHD.

Methods

Electromyographic (EMG) activity in response to emotional facial expressions was recorded in 47 children with DBD, 18 children with ADHD and 35 healthy developing children.

Results

All groups displayed significant facial mimicry to the emotional expressions of other children. No group differences between children with DBD, children with ADHD and healthy developing children were found. In addition, no differences in facial mimicry were found between the clinical group (i.e., all children with a diagnosis) and the typically developing group in an analysis with ADHD symptoms as a covariate, and no differences were found between the clinical children and the typically developing children with DBD symptoms as a covariate.

Conclusion

Facial mimicry in children with DBD and ADHD throughout the first primary school years was unimpaired, in line with studies on empathy using other paradigms.

9.

Background

Evidence based largely on self-report data suggests that factors associated with medical education erode the critical human quality of empathy. These reports have caused serious concern among medical educators and clinicians and have led to changes in medical curricula around the world. This study aims to provide a more objective index of possible changes in empathy across the spectrum of clinical exposure, by using a behavioural test of empathic accuracy in addition to self-report questionnaires. Moreover, non-medical groups were used to control for maturation effects.

Methods

Three medical groups (N = 3×20) representing a spectrum of clinical exposure, and two non-medical groups (N = 2×20) matched for age, sex and educational achievements completed self-report measures of empathy, and tests of empathic accuracy and interoceptive sensitivity.

Results

Between-group differences in reported empathy related to maturation rather than clinical training/exposure. Conversely, analyses of the “eyes” test results specifically identified clinical practice, but not medical education, as the key influence on performance. The data from the interoception task did not support a link between visceral feedback and empathic processes.

Conclusions

Clinical practice, but not medical education, impacts on empathy development and seems instrumental in maintaining empathic skills against the general trend of declining empathic accuracy with age.

10.

Background

Major depressive disorder (MDD) is associated with a mood-congruent processing bias in the amygdala toward face stimuli portraying sad expressions that is evident even when such stimuli are presented below the level of conscious awareness. The extended functional anatomical network that maintains this response bias has not been established, however.

Aims

To identify neural network differences in the hemodynamic response to implicitly presented facial expressions between depressed and healthy control participants.

Method

Unmedicated participants with MDD (n = 22) and healthy controls (HC; n = 25) underwent functional MRI as they viewed sad, happy or neutral facial expressions presented using a backward masking design. The blood-oxygen-level-dependent (BOLD) signal was measured to identify regions where the hemodynamic response to the emotionally valenced stimuli differed between groups.

Results

The MDD subjects showed greater BOLD responses than the controls to masked-sad versus masked-happy faces in the hippocampus, amygdala and anterior inferotemporal cortex. While viewing both masked-sad and masked-happy faces relative to masked-neutral faces, the depressed subjects showed greater hemodynamic responses than the controls in a network that included the medial and orbital prefrontal cortices and anterior temporal cortex.

Conclusions

Depressed and healthy participants showed distinct hemodynamic responses to masked-sad and masked-happy faces in neural circuits known to support the processing of emotionally valenced stimuli and to integrate the sensory and visceromotor aspects of emotional behavior. Altered function within these networks in MDD may establish and maintain illness-associated differences in the salience of sensory/social stimuli, such that attention is biased toward negative and away from positive stimuli.

11.

Background

Alcoholism is associated with abnormal anger processing. The purpose of this study was to investigate brain regions involved in the evaluation of angry facial expressions in patients with alcohol dependency.

Methods

Brain blood-oxygenation-level-dependent (BOLD) responses to angry faces were measured and compared between patients with alcohol dependency and controls.

Results

During intensity ratings of angry faces, significant differences in BOLD were observed between patients with alcohol dependency and controls. That is, patients who were alcohol-dependent showed significantly greater activation in several brain regions, including the dorsal anterior cingulate cortex (dACC) and medial prefrontal cortex (MPFC).

Conclusions

Following exposure to angry faces, abnormalities in dACC and MPFC activation in patients with alcohol dependency indicated possible inefficiencies or hypersensitivities in social cognitive processing.

12.

Background

Previous research has demonstrated that pain-related fear can be acquired through observation of another’s pain behaviour during an encounter with a painful stimulus. We present the results of two experimental studies, each with a different pain stimulus, that investigated the effect of observational learning on pain expectancies, avoidance behaviour, and physiological responding. Additionally, we investigated whether certain individuals are at heightened risk of developing pain-related fear through observation. Finally, changes in pain-related fear and pain intensity after exposure to the feared stimulus were examined.

Methods

During observational acquisition, healthy female participants watched a video showing coloured cold metal bars being placed against the necks of several models. In a differential fear conditioning paradigm, one colour was paired with painful facial expressions and another colour with neutral facial expressions of the video models. During exposure, both metal bars, at equal temperatures (−25 °C or +8 °C), were placed repeatedly against the participants’ own necks.

Results

Results showed that pain-related beliefs can be acquired by observing pain in others, but that these beliefs do not necessarily cause behavioural changes. Additionally, dispositional empathy might play a role in the acquisition of these beliefs. Furthermore, skin conductance responses were higher during exposure to the pain-associated bar, but only in one of the two experiments. Differential pain-related beliefs rapidly disappeared after first-hand exposure to the stimuli.

Conclusions

This study enhances our understanding of pain-related fear acquisition and subsequent exposure to the feared stimulus, providing leads for pain prevention and management strategies.

13.
Lee TH, Choi JS, Cho YS. PLoS ONE 2012, 7(3): e32987

Background

Certain facial configurations are believed to be associated with distinct affective meanings (i.e. basic facial expressions), and such associations are common across cultures (i.e. the universality of facial expressions). However, many recent studies suggest that various types of contextual information, rather than the facial configuration itself, are an important factor in facial emotion perception.

Methodology/Principal Findings

To examine systematically how contextual information influences individuals' facial emotion perception, the present study directly estimated observers' perceptual thresholds for detecting negative facial expressions via a forced-choice psychophysical procedure, using faces embedded in various emotional contexts. We additionally measured individual differences in affective information-processing tendency (BIS/BAS) as a possible factor determining the extent to which contextual information is used in facial emotion perception. It was found that contextual information influenced observers' perceptual thresholds for facial emotion. Importantly, individuals' affective information-processing tendencies modulated the extent to which they incorporated context information into their facial emotion perceptions.
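The threshold-estimation step can be illustrated with a short sketch that fits a logistic psychometric function to forced-choice responses and reads off a 50% threshold. The morph levels, response proportions, and threshold criterion below are assumptions for illustration, not the study's actual procedure or data.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, threshold, slope):
    """Psychometric function: probability of a 'negative' response at morph level x."""
    return 1.0 / (1.0 + np.exp(-slope * (x - threshold)))

# Hypothetical proportion of 'negative' responses at each expression morph level
# (0 = neutral, 1 = fully negative), aggregated across trials.
morph_level = np.linspace(0.0, 1.0, 9)
p_negative = np.array([0.02, 0.05, 0.10, 0.30, 0.55, 0.75, 0.90, 0.96, 0.99])

params, _ = curve_fit(logistic, morph_level, p_negative, p0=[0.5, 10.0])
threshold, slope = params
print(f"estimated detection threshold: {threshold:.2f} (slope {slope:.1f})")
```

Comparing thresholds fitted separately for each context condition mirrors the contextual-influence comparison described above.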

Conclusions/Significance

The findings of this study suggest that facial emotion perception depends not only on facial configuration but also on the context in which the face appears. This contextual influence varied with individuals' information-processing characteristics. In summary, we conclude that individual character traits, as well as facial configuration and the context in which a face appears, need to be taken into account in facial emotion perception.

14.
Chiew KS, Braver TS. PLoS ONE 2011, 6(3): e17635

Background

Neural systems underlying conflict processing have been well studied in the cognitive realm, but the extent to which these overlap with those underlying emotional conflict processing remains unclear. We examined a novel adaptation of the AX Continuous Performance Task (AX-CPT), a stimulus-response incompatibility paradigm, that permits close comparison of emotional and cognitive conflict conditions through the use of affectively valenced facial expressions as the response modality.

Methodology/Principal Findings

Brain activity was monitored with functional magnetic resonance imaging (fMRI) during performance of the emotional AX-CPT. Emotional conflict was manipulated on a trial-by-trial basis by requiring contextually pre-cued facial expressions in response to emotional probe stimuli (IAPS images) that were either affectively compatible (low conflict) or incompatible (high conflict). The emotion condition was contrasted against a matched cognitive condition that was identical in all respects, except that the probe stimuli were emotionally neutral. Components of the brain's cognitive control network, including dorsal anterior cingulate cortex (ACC) and lateral prefrontal cortex (PFC), showed conflict-related activation increases in both conditions, but with higher activity during emotion conditions. In contrast, emotional conflict effects were not found in regions associated with affective processing, such as rostral ACC.

Conclusions/Significance

These activation patterns provide evidence for a domain-general neural system that is active for both emotional and cognitive conflict processing. In line with previous behavioural evidence, the greatest activity in these brain regions occurred when emotional and cognitive influences combined additively to produce increased interference.

15.

Background

Computer-generated virtual faces are becoming increasingly realistic, including in their simulation of emotional expressions. These faces can be used as well-controlled, realistic and dynamic stimuli in emotion research. However, the validity of virtual facial expressions in comparison to natural emotion displays still needs to be shown for different emotions and different age groups.

Methodology/Principal Findings

Thirty-two healthy volunteers between the ages of 20 and 60 rated pictures of natural human faces and faces of virtual characters (avatars) with respect to the expressed emotions: happiness, sadness, anger, fear, disgust, and neutral. Results indicate that virtual emotions were recognized at rates comparable to natural ones. Recognition differences between virtual and natural faces depended on the specific emotion: whereas disgust was difficult to convey with the current avatar technology, virtual sadness and fear achieved better recognition results than natural faces. Furthermore, emotion recognition rates decreased for virtual but not natural faces in participants over the age of 40. This specific age effect suggests that media exposure has an influence on emotion recognition.

Conclusions/Significance

Virtual and natural facial displays of emotion may be equally effective. Improved technology (e.g. better modelling of the naso-labial area) may lead to even better results than those obtained with trained actors. Due to the ease with which virtual human faces can be animated and manipulated, validated artificial emotional expressions will be of major relevance in future research and therapeutic applications.

16.

Background

The humanoid robot WE4-RII was designed to express human emotions in order to improve human-robot interaction. We can read the emotions depicted in its gestures, yet we might utilize different neural processes than those used for reading emotions in human agents.

Methodology

Here, fMRI was used to assess how brain areas activated by the perception of human basic emotions (facial expressions of anger, joy, and disgust) and silent speech respond to a humanoid robot impersonating the same emotions, while participants were instructed to attend either to the emotion or to the motion depicted.

Principal Findings

Increased responses to robot compared to human stimuli in the occipital and posterior temporal cortices suggest additional visual processing when perceiving a mechanical anthropomorphic agent. In contrast, activity in cortical areas endowed with mirror properties, such as left Broca's area for the perception of speech, and in areas involved in processing emotions, such as the left anterior insula for the perception of disgust and the orbitofrontal cortex for the perception of anger, was reduced for robot stimuli, suggesting lesser resonance with the mechanical agent. Finally, instructions to explicitly attend to the emotion significantly increased responses to robot, but not human, facial expressions in the anterior part of the left inferior frontal gyrus, a neural marker of motor resonance.

Conclusions

Motor resonance towards a humanoid robot's, but not a human's, display of facial emotion is increased when attention is directed towards judging emotions.

Significance

Artificial agents can be used to assess how factors like anthropomorphism affect neural response to the perception of human actions.

17.

Background

Current behaviour-based pain assessments for laboratory rodents have significant limitations. Assessment of facial expression changes, as a novel means of pain scoring, may overcome some of these limitations. The Mouse Grimace Scale appears to offer a means of assessing post-operative pain in mice that is as effective as manual behaviour-based scoring, without the limitations of such schemes. Effective assessment of post-operative pain is not only critical for animal welfare, but also for the validity of science using animal models.

Methodology/Principal Findings

This study compared changes in behaviour, assessed using both an automated system (“HomeCageScan”) and manual analysis, with changes in facial expressions assessed using the Mouse Grimace Scale (MGS). Mice (n = 6/group) were assessed before and after surgery (scrotal approach vasectomy) and received either saline, meloxicam or bupivacaine. Both the MGS and manual scoring of pain behaviours identified clear differences between the pre- and post-surgery periods and between animals receiving analgesia (20 mg/kg meloxicam or 5 mg/kg bupivacaine) or saline post-operatively. The two assessments were highly correlated, with animals showing high MGS scores also exhibiting high frequencies of pain behaviours. Automated behavioural analysis, in contrast, was only able to detect differences between the pre- and post-surgery periods.
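A toy sketch of the two comparisons described above (a pre- versus post-surgery contrast in MGS scores, and the correlation between MGS scores and manually scored pain-behaviour frequencies) is shown below. All values are invented for illustration and do not reproduce the study's data or statistics.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n_mice = 6
mgs_pre = rng.normal(0.3, 0.1, n_mice)                    # baseline MGS scores
mgs_post = mgs_pre + rng.normal(0.8, 0.2, n_mice)         # higher scores after surgery
pain_behaviours_post = 10 * mgs_post + rng.normal(0, 1, n_mice)  # manual behaviour counts

# Paired pre/post comparison and MGS-behaviour correlation.
w_stat, p_prepost = stats.wilcoxon(mgs_pre, mgs_post)
rho, p_corr = stats.spearmanr(mgs_post, pain_behaviours_post)
print(f"pre vs post MGS: W={w_stat:.1f}, p={p_prepost:.3f}")
print(f"MGS vs behaviour frequency: rho={rho:.2f}, p={p_corr:.3f}")
```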

Conclusions

In conclusion, both the Mouse Grimace Scale and manual scoring of pain behaviours assess the presence of post-surgical pain, whereas automated behavioural analysis may be detecting surgical stress and/or post-surgical pain. This study suggests that the Mouse Grimace Scale could prove to be a quick and easy means of assessing post-surgical pain and the efficacy of analgesic treatment in mice, one that overcomes some of the limitations of behaviour-based assessment schemes.

18.

Background

Little is known about the neural basis of elite performers and their optimal performance in extreme environments. The purpose of this study was to examine brain processing differences between elite warfighters and comparison subjects in brain structures that are important for emotion processing and interoception.

Methodology/Principal Findings

Off-duty Navy Sea, Air, and Land Forces (SEALs) personnel (n = 11) were compared with 23 healthy male volunteers while performing a simple emotion face-processing task during functional magnetic resonance imaging. Irrespective of the target emotion, elite warfighters relative to comparison subjects showed relatively greater right-sided insula, but attenuated left-sided insula, activation. Navy SEALs showed selectively greater activation to angry target faces relative to fearful or happy target faces bilaterally in the insula. This was not accounted for by contrasting positive versus negative emotions. Finally, these individuals also showed slower response latencies to fearful and happy target faces than did comparison subjects.

Conclusions/Significance

These findings support the hypothesis that elite warfighters deploy greater processing resources toward potential threat-related facial expressions and reduced processing resources to non-threat-related facial expressions. Moreover, rather than expending more effort in general, elite warfighters show more focused neural and performance tuning. In other words, greater neural processing resources are directed toward threat stimuli and processing resources are conserved when facing a nonthreat stimulus situation.

19.

Background

Fetal facial development is essential not only for postnatal bonding between parents and child, but also theoretically for the study of the origins of affect. However, how such movements become coordinated is poorly understood. 4-D ultrasound visualisation allows an objective coding of fetal facial movements.

Methodology/Findings

Based on research using facial muscle movements to code recognisable facial expressions in adults, and adapted for infants, we defined two distinct fetal facial gestalts, namely the “cry-face gestalt” and the “laughter gestalt,” each made up of up to 7 distinct facial movements. In this conceptual study, two healthy fetuses were then scanned at different gestational ages in the second and third trimesters. We observed that the number and complexity of simultaneous movements increased with gestational age. Thus, between 24 and 35 weeks the mean proportion of co-occurrences of 3 or more facial movements increased from 7% to 69%. Recognisable facial expressions were also observed to develop. Between 24 and 35 weeks the proportion of co-occurrences of 3 or more movements making up a “cry-face gestalt” increased from 0% to 42%. Similarly, the proportion of co-occurrences of 3 or more facial movements combining into a “laughter gestalt” increased from 0% to 35%. These changes over age were all highly significant.
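A small sketch of the frame-by-frame co-occurrence counting described above is given below: given a Boolean frame-by-movement matrix, it computes the proportion of frames in which three or more of a gestalt's component movements co-occur. The movement names and coded frames are invented for illustration and are not the study's coding scheme or data.

```python
import numpy as np

# Hypothetical set of coded facial movements and the subset defining one gestalt.
movements = ["brow lower", "lip stretch", "nose wrinkle",
             "mouth open", "chin raise", "eye squeeze", "lip corner pull"]
cry_face_gestalt = {"brow lower", "lip stretch", "nose wrinkle",
                    "mouth open", "eye squeeze"}

# Simulated frame-by-movement coding matrix (True = movement coded in that frame).
rng = np.random.default_rng(2)
coded_frames = rng.random((200, len(movements))) < 0.25

gestalt_cols = [i for i, m in enumerate(movements) if m in cry_face_gestalt]
simultaneous = coded_frames[:, gestalt_cols].sum(axis=1)
pct_complex = 100.0 * np.mean(simultaneous >= 3)
print(f"frames with >= 3 co-occurring gestalt movements: {pct_complex:.1f}%")
```

Computing this percentage at each gestational age gives the kind of developmental trend (e.g. 0% to 42%) reported above.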

Significance

This research provides the first evidence of a developmental progression from individual, unrelated facial movements toward fetal facial gestalts. We propose that this method has considerable potential for assessing fetal development: subsequent discrimination of normal and abnormal fetal facial development might identify health problems in utero.

20.
Yang Z, Zhao J, Jiang Y, Li C, Wang J, Weng X, Northoff G. PLoS ONE 2011, 6(7): e21881

Objective

Major depressive disorder (MDD) has been characterized by abnormalities in emotional processing. However, it remains unclear whether MDD also involves deficits in the unconscious processing of either positive or negative emotions. We conducted a psychological study in healthy and MDD subjects to investigate unconscious emotion processing and its valence-specific alterations in MDD patients.

Methods

We combined a well-established paradigm for unconscious visual processing, continuous flash suppression, with positive and negative emotional valences to detect the attentional preference evoked by invisible emotional facial expressions.

Results

Healthy subjects showed an attentional bias for negative emotions in the unconscious condition, whereas this valence bias was absent in MDD patients. In contrast, the attentional bias diminished in the conscious condition for both healthy subjects and MDD patients.

Conclusion

Our findings demonstrate, for the first time, valence-specific deficits in the unconscious processing of emotions in MDD; this may have major implications for subsequent neurobiological investigations as well as for clinical diagnosis and therapy.
