Similar Documents
20 similar documents found.
1.
Human observers are remarkably proficient at recognizing expressions of emotions and at readily grouping them into distinct categories. When morphing one facial expression into another, the linear changes in low-level features are insufficient to describe the changes in perception, which instead follow an s-shaped function. Important questions are whether there are single diagnostic regions in the face that drive categorical perception for certain pairings of emotion expressions, and how information in those regions interacts when presented together. We report results from two experiments with morphed fear-anger expressions, where (a) half of the face was masked or (b) composite faces made up of different expressions were presented. When isolated upper and lower halves of faces were shown, the eyes were found to be almost as diagnostic as the whole face, with the response function showing a steep category boundary. In contrast, the mouth yielded substantially lower accuracy, and responses followed a much flatter psychometric function. When a composite face consisting of mismatched upper and lower halves was used and observers were instructed to judge exclusively the expression of either the mouth or the eyes, the to-be-ignored part always influenced perception of the target region. In line with Experiment 1, the eye region exerted a much stronger influence on mouth judgements than vice versa. Again, categorical perception was significantly more pronounced for upper halves of faces. The present study shows that identification of fear and anger in morphed faces relies heavily on information from the upper half of the face, most likely the eye region. Categorical perception is possible when only the upper face half is present, but compromised when only the lower part is shown. Moreover, observers tend to integrate all available features of a face, even when trying to focus on only one part.
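The steep versus flat category boundaries described here are conventionally quantified by fitting a logistic psychometric function to the response proportions along the morph continuum. Below is a minimal sketch of such a fit in Python; the morph levels and response proportions are hypothetical placeholders, not data from the study.

```python
# Minimal sketch: fitting a logistic psychometric function to
# proportion-"anger" responses along a fear-anger morph continuum.
# The data below are illustrative, not from the study.
import numpy as np
from scipy.optimize import curve_fit

def psychometric(x, x0, k):
    """Logistic function: x0 = category boundary, k = slope."""
    return 1.0 / (1.0 + np.exp(-k * (x - x0)))

morph_level = np.linspace(0.0, 1.0, 9)           # 0 = fear, 1 = anger
p_anger = np.array([0.02, 0.05, 0.08, 0.20,      # hypothetical responses
                    0.55, 0.85, 0.93, 0.97, 0.99])

(x0, k), _ = curve_fit(psychometric, morph_level, p_anger, p0=[0.5, 10.0])
print(f"category boundary at morph level {x0:.2f}, slope {k:.1f}")
# A steep slope (large k), as found for upper-face halves, indicates
# categorical perception; a flat slope resembles the mouth-only condition.
```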

2.
Psychophysiological experiments were performed on 34 healthy subjects. We analyzed the accuracy and latency of motor responses in recognizing two types of complex visual stimuli, animals and objects, which were presented immediately after a brief presentation of face images with different emotional expressions: anger, fear, happiness, and a neutral expression. We found that response latency depended on the emotional expression of the masked face: latencies were shorter when the test stimuli were preceded by angry or fearful faces than by happy or neutral faces. These effects depended on the type of stimulus and were more pronounced when recognizing objects than animals. The effects of emotional faces were related to personality traits of the subjects, as assessed by the emotional and communicative blocks of Cattell's test, and were more pronounced in more sensitive, anxious, and pessimistic introverts. The mechanisms by which unconsciously perceived emotional information affects human visual behavior are discussed.

3.
Electroencephalography (EEG) has been extensively used in studies of the frontal asymmetry of emotion and motivation. This study investigated the midfrontal EEG activation, heart rate and skin conductance during an emotional face analog of the Stroop task, in anxious and non-anxious participants. In this task, the participants were asked to identify the expression of calm, fearful and happy faces that had either a congruent or incongruent emotion name written across them. Anxious participants displayed a cognitive bias characterized by facilitated attentional engagement with fearful faces. Fearful face trials induced greater relative right frontal activation, whereas happy face trials induced greater relative left frontal activation. Moreover, anxiety specifically modulated the magnitude of the right frontal activation to fearful faces, which also correlated with the cognitive bias. Therefore, these results show that frontal EEG activation asymmetry reflects the bias toward facilitated processing of fearful faces in anxiety.
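Midfrontal asymmetry scores of the kind used in such studies are commonly computed as the difference of log-transformed alpha power at homologous right and left frontal electrodes (e.g., F4/F3). The sketch below illustrates this standard index with hypothetical power values; it is not the study's own computation.

```python
# Minimal sketch of a standard midfrontal EEG asymmetry index:
# ln(right alpha power) - ln(left alpha power) at F4/F3. Because alpha
# power is inversely related to cortical activation, positive scores
# indicate greater relative LEFT frontal activation. Values are placeholders.
import numpy as np

alpha_f3 = 4.2   # hypothetical alpha power (uV^2) at left site F3
alpha_f4 = 3.1   # hypothetical alpha power (uV^2) at right site F4

asymmetry = np.log(alpha_f4) - np.log(alpha_f3)
print(f"asymmetry index: {asymmetry:.2f}")
# Negative values (as here) indicate greater relative right frontal
# activation, the pattern reported for fearful-face trials above.
```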

4.
Visual remapping of touch (VRT) is a phenomenon in which seeing a human face being touched enhances detection of tactile stimuli on the observer's own face, especially when the observed face expresses fear. This study tested whether VRT would occur when seeing touch on monkey faces and whether it would be similarly modulated by facial expressions. Human participants detected near-threshold tactile stimulation on their own cheeks while watching fearful, happy, and neutral human or monkey faces being concurrently touched or merely approached by fingers. We predicted minimal VRT for neutral and happy monkey faces but greater VRT for fearful monkey faces. The results with human faces replicated previous findings, demonstrating stronger VRT for fearful expressions than for happy or neutral expressions. However, there was no VRT (i.e. no difference between accuracy in touch and no-touch trials) for any of the monkey faces, regardless of facial expression, suggesting that touch on a non-human face is not remapped onto the somatosensory system of the human observer.

5.
Facial expressions are important social communicators. In addition to communicating social information, the specific muscular movements of expressions may serve additional functional roles. For example, recalibration theory hypothesizes that the anger expression exaggerates facial cues of strength, an indicator of human fighting ability, in order to increase bargaining power in conflicts. Supporting this theory is evidence that faces displaying one element of an angry expression (e.g. lowered eyebrows) are perceived to be stronger than faces with the opposite expression features (e.g. raised eyebrows for fear). The present study sought stronger evidence that more natural manipulations of facial anger also enhance perceived strength. We used expression aftereffects to bias perception of a neutral face towards anger and observed the effects on perceptions of strength. In addition, we tested the specificity of the strength-cue-enhancing effect by examining whether two other expressions, fear and happiness, also affected perceptions of strength. We found that, as predicted, a face biased to be perceived as angrier was rated as stronger compared to a baseline rating, whereas a face biased to be more fearful was rated as weaker, consistent with the purported function of fear as an act of submission. Interestingly, faces biased towards a happy expression were also perceived as stronger, though the effect was smaller than that for anger. Overall, the results support the recalibration theory hypothesis that the anger expression enhances cues of strength to increase bargaining power in conflicts, but with some limitations regarding the specificity of the function to anger.

6.
Facial expressions of emotion play a key role in guiding social judgements, including deciding whether or not to approach another person. However, no research has examined how situational context modulates the approachability judgements assigned to emotional faces, or the relationship between perceived threat and approachability judgements. Fifty-two participants provided approachability judgements for angry, disgusted, fearful, happy, neutral, and sad faces across three situational contexts: no context, when giving help, and when receiving help. Participants also rated the emotional faces for level of perceived threat and labelled the facial expressions. Results indicated that context modulated approachability judgements for faces depicting negative emotions. Specifically, faces depicting distress-related emotions (i.e., sadness and fear) were considered more approachable in the giving help context than in both the receiving help and no-context conditions. Furthermore, higher ratings of threat were associated with the assessment of angry, happy and neutral faces as less approachable. These findings are the first to demonstrate the significant role that context plays in the evaluation of an individual’s approachability and illustrate the important relationship between perceived threat and the evaluation of approachability.

7.
Skin conductance responses (SCR) measure objective arousal in response to emotionally-relevant stimuli. Central nervous system influence on SCR is exerted differentially by the two hemispheres. Differences between SCR recordings from the left and right hands may therefore be expected. This study focused on emotionally expressive faces, known to be processed differently by the two hemispheres. Faces depicting neutral, happy, sad, angry, fearful or disgusted expressions were presented in two tasks, one with an explicit emotion judgment and the other with an age judgment. We found stronger responses to sad and happy faces compared with neutral from the left hand during the implicit task, and stronger responses to negative emotions compared with neutral from the right hand during the explicit task. Our results suggest that basic social stimuli generate distinct responses on the two hands, no doubt related to the lateralization of social function in the brain.

8.
Chemosensory communication of anxiety is a common phenomenon in vertebrates and improves perceptual and responsive behaviour in the perceiver in order to optimize ontogenetic survival. A few rating studies have reported a similar phenomenon in humans. Here, we investigated whether subliminal face perception changes in the context of chemosensory anxiety signals. Axillary sweat samples were taken from 12 males while they were waiting for an academic examination and again while performing ergometric exercise a few days later. Sixteen subjects (eight females) participated in an emotional priming study, using happy, fearful and sad facial expressions as primes (11.7 ms) and neutral faces as targets (47 ms). The pooled chemosensory samples were presented before and during picture presentation (920 ms). In the context of chemosensory stimuli derived from the sweat samples taken during the sport condition, subjects judged the targets significantly more positively when they were primed by a happy face than when they were primed by the negative facial expressions (P = 0.02). In the context of the chemosensory anxiety signals, the priming effect of the happy faces was diminished in females (P = 0.02), but not in males. It is discussed whether, in socially relevant ambiguous perceptual conditions, chemosensory signals have a processing advantage and dominate visual signals, or whether fear signals in general have a stronger behavioural impact than positive signals.

9.
We used event-related fMRI to assess whether brain responses to fearful versus neutral faces are modulated by spatial attention. Subjects performed a demanding matching task for pairs of stimuli at prespecified locations, in the presence of task-irrelevant stimuli at other locations. Faces or houses unpredictably appeared at the relevant or irrelevant locations, while the faces had either fearful or neutral expressions. Activation of fusiform gyri by faces was strongly affected by attentional condition, but the left amygdala response to fearful faces was not. Right fusiform activity was greater for fearful than neutral faces, independently of the attention effect on this region. These results reveal differential influences on face processing from attention and emotion, with the amygdala response to threat-related expressions unaffected by a manipulation of attention that strongly modulates the fusiform response to faces.

10.
Time perception is a fundamental human ability. Everyday experience suggests that time perception is easily influenced by emotion, but in previous studies these influences were typically accompanied by active attention and overt motor responses. Here we asked whether implicit time perception, in the absence of active attention and overt motor responses, is influenced by emotional faces. While actively performing a visual discrimination task composed of emotional faces, participants passively listened to a series of auditory stimuli. Of the stimulus onset asynchronies (SOAs) between the sounds, 80% were a standard SOA (800 ms) and 20% were deviant SOAs (400 or 600 ms). Event-related potentials (ERPs) evoked by the frequently occurring standard SOA and the occasionally occurring deviant SOAs were recorded. The two short deviant SOAs (400 and 600 ms) elicited two change-related ERP components: the mismatch negativity (MMN) and the P3a. The amplitude of the MMN, which reflects early detection of irregular change, was modulated by the emotional faces: compared with happy and neutral faces, fearful faces reduced MMN amplitude. For the 400 ms deviant SOA, happy faces increased P3a amplitude compared with fearful and neutral faces. This ERP study suggests that implicit time perception in the auditory modality is influenced by emotional faces, and that fearful faces reduce the accuracy of implicit time perception.
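The MMN reported here is, as in standard ERP practice, a deviant-minus-standard difference wave. The sketch below illustrates that computation with simulated placeholder epochs; the sampling rate, epoch counts, and analysis window are assumptions, not the study's recording parameters.

```python
# Minimal sketch of the deviant-minus-standard logic behind the MMN:
# average the ERPs to standard (800 ms SOA) and deviant (400/600 ms SOA)
# sounds, subtract them, and take the mean amplitude in an MMN window.
# All arrays and parameters are hypothetical placeholders.
import numpy as np

fs = 500                                   # sampling rate (Hz), assumed
t = np.arange(-0.1, 0.4, 1 / fs)           # epoch time axis in seconds

rng = np.random.default_rng(0)
erp_standard = rng.normal(0, 1, (200, t.size)).mean(axis=0)   # placeholder epochs
erp_deviant = rng.normal(-0.5, 1, (40, t.size)).mean(axis=0)

difference_wave = erp_deviant - erp_standard
mmn_window = (t >= 0.1) & (t <= 0.2)       # typical MMN latency range
mmn_amplitude = difference_wave[mmn_window].mean()
print(f"MMN amplitude: {mmn_amplitude:.2f} (arbitrary units)")
# In the study, this amplitude was smaller when the concurrent visual
# task showed fearful rather than happy or neutral faces.
```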

11.
Processing of unattended threat-related stimuli, such as fearful faces, has previously been examined using group functional magnetic resonance imaging (fMRI) approaches. However, the identification of features of brain activity containing sufficient information to decode, or "brain-read", unattended (implicit) fear perception remains an active research goal. Here we test the hypothesis that patterns of large-scale functional connectivity (FC) decode the emotional expression of implicitly perceived faces within single individuals, using training data from separate subjects. fMRI and a blocked design were used to acquire BOLD signals during implicit (task-unrelated) presentation of fearful and neutral faces. A pattern classifier (linear-kernel Support Vector Machine, or SVM) with linear filter feature selection used pair-wise FC as features to predict the emotional expression of implicitly presented faces. We plotted classification accuracy against the number of top-N selected features and observed that accuracies significantly above chance (90-100%) were achieved with 15-40 features. During fearful face presentation, the most informative and positively modulated FC was between the angular gyrus and hippocampus, while the greatest overall contributing region was the thalamus, with positively modulated connections to bilateral middle temporal gyrus and insula. Other FCs that predicted fear included superior-occipital and parietal regions, cerebellum and prefrontal cortex. By comparison, patterns of spatial activity (as opposed to interactivity) were relatively uninformative in decoding implicit fear. These findings indicate that whole-brain patterns of interactivity are a sensitive and informative signature of unattended fearful emotion processing. At the same time, we demonstrate and propose a sensitive and exploratory approach for the identification of large-scale, condition-dependent FC. In contrast to model-based, group approaches, the current approach does not discount the multivariate, joint responses of multiple functional connections and is not hampered by signal loss or the need for multiple-comparisons correction.
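The decoding pipeline described above (pairwise FC features, a univariate filter selecting the top-N connections, and a linear-kernel SVM) can be sketched with scikit-learn as follows. The data, region counts, and block counts are simulated placeholders, not the study's BOLD-derived features, and the cross-validation scheme is an assumption for illustration.

```python
# Minimal sketch of the decoding pipeline described above: pairwise
# functional connectivity (FC) features, a univariate filter selecting
# the top-N connections, and a linear-kernel SVM. Data are simulated
# placeholders; the study used BOLD-derived FC and separate-subject training.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_blocks, n_rois = 40, 30
n_fc = n_rois * (n_rois - 1) // 2           # number of pairwise connections

X = rng.normal(size=(n_blocks, n_fc))       # FC per block (placeholder)
y = np.repeat([0, 1], n_blocks // 2)        # 0 = neutral, 1 = fearful

for n_features in (15, 25, 40):             # accuracy vs. top-N features
    clf = make_pipeline(SelectKBest(f_classif, k=n_features),
                        SVC(kernel="linear"))
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(f"top {n_features} connections: accuracy = {acc:.2f}")
```

With real condition-dependent FC, plotting these accuracies against the number of selected features reproduces the accuracy-vs.-top-N analysis the abstract describes.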

12.
Cognitive theories of depression posit that perception is negatively biased in depressive disorder. Previous studies have provided empirical evidence for this notion, but left open the question of whether the negative perceptual bias reflects a stable trait or the current depressive state. Here we investigated the stability of negatively biased perception over time. Emotion perception was examined in patients with major depressive disorder (MDD) and healthy control participants in two experiments. In the first experiment, subjective biases in the recognition of facial emotional expressions were assessed. Participants were presented with faces that were morphed between sad, neutral, and happy expressions and had to decide whether the face was sad or happy. The second experiment assessed automatic emotion processing by measuring the potency of emotional faces to gain access to awareness using interocular suppression. A follow-up investigation using the same tests was performed three months later. In the emotion recognition task, patients with major depression showed a shift in the criterion for the differentiation between sad and happy faces: in comparison to healthy controls, patients with MDD required a greater intensity of the happy expression to recognize a face as happy. After three months, this negative perceptual bias was reduced in comparison to the control group. The reduction in negative perceptual bias correlated with the reduction of depressive symptoms. In contrast to previous work, we found no evidence for preferential access to awareness of sad vs. happy faces. Taken together, our results indicate that MDD-related perceptual biases in emotion recognition reflect the current clinical state rather than a stable depressive trait.

13.
Yang J, Xu X, Du X, Shi C, Fang F. PLoS ONE 2011, 6(2): e14641
Emotional stimuli can be processed even when participants perceive them without conscious awareness, but the extent to which unconsciously processed emotional stimuli influence implicit memory after short and long delays is not fully understood. We addressed this issue by measuring a subliminal affective priming effect in Experiment 1 and a long-term priming effect in Experiment 2. In Experiment 1, a flashed fearful or neutral face masked by a scrambled face was presented three times; then a target face (either fearful or neutral) was presented and participants were asked to make a fearful/neutral judgment. We found that, relative to a neutral prime face (neutral-fear pairing), a fearful prime face speeded up participants' reactions to a fearful target (fear-fear pairing) when they were not aware of the masked prime face. This response pattern did not apply to the neutral target. In Experiment 2, participants were first presented with masked faces six times during encoding. Three minutes later, they were asked to make a fearful/neutral judgment for the same face with a congruent expression, the same face with an incongruent expression, or a new face. Participants showed a significant priming effect for the fearful faces but not for the neutral faces, regardless of their awareness of the masked faces during encoding. These results provided evidence that unconsciously processed stimuli can enhance emotional memory after both short and long delays, indicating that emotion can enhance memory processing whether the stimuli are encoded consciously or unconsciously.

14.
Scheller E, Büchel C, Gamer M. PLoS ONE 2012, 7(7): e41792
Diagnostic features of emotional expressions are differentially distributed across the face. The current study examined whether these diagnostic features are preferentially attended to even when they are irrelevant for the task at hand or when faces appear at different locations in the visual field. To this end, fearful, happy and neutral faces were presented to healthy individuals in two experiments while measuring eye movements. In Experiment 1, participants had to accomplish an emotion classification, a gender discrimination or a passive viewing task. To differentiate fast, potentially reflexive, eye movements from a more elaborate scanning of faces, stimuli were presented for either 150 or 2000 ms. In Experiment 2, similar faces were presented at different spatial positions to rule out the possibility that eye movements only reflect a general bias for certain visual field locations. In both experiments, participants fixated the eye region much longer than any other region in the face. Furthermore, the eye region was attended to more strongly when fearful or neutral faces were shown, whereas more attention was directed toward the mouth of happy facial expressions. Since these results were similar across the other experimental manipulations, they indicate that diagnostic features of emotional expressions are preferentially processed irrespective of task demands and spatial locations. Saliency analyses revealed that a computational model of bottom-up visual attention could not explain these results. Furthermore, as these gaze preferences were evident very early after stimulus onset and occurred even when saccades did not allow for extracting further information from these stimuli, they may reflect a preattentive mechanism that automatically detects relevant facial features in the visual field and facilitates the orientation of attention towards them. This mechanism might crucially depend on amygdala functioning and is potentially impaired in a number of clinical conditions such as autism or social anxiety disorders.
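The gaze measure underlying such results is region-of-interest (ROI) dwell time: the summed duration of fixations landing inside a facial region. A minimal sketch follows; the fixation coordinates and ROI boxes are hypothetical placeholders, not the study's stimuli or scoring.

```python
# Minimal sketch of an ROI dwell-time measure: sum the durations of
# fixations falling inside eye- and mouth-region boxes. Fixations and
# ROI coordinates below are hypothetical placeholders.
def dwell_time(fixations, roi):
    """Sum durations (ms) of fixations inside a rectangular ROI."""
    x0, y0, x1, y1 = roi
    return sum(dur for x, y, dur in fixations
               if x0 <= x <= x1 and y0 <= y <= y1)

# (x, y, duration-ms) fixations on a 512x512 face image (placeholders):
fixations = [(250, 180, 310), (270, 195, 240), (255, 390, 120)]
rois = {"eyes": (150, 120, 360, 250), "mouth": (190, 340, 320, 440)}

for name, roi in rois.items():
    print(name, dwell_time(fixations, roi), "ms")
# -> eyes 550 ms, mouth 120 ms: an eye-region advantage like that
#    reported above.
```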

15.
To investigate the role of experience in humans' perception of emotion using canine visual signals, we asked adults with various levels of dog experience to interpret the emotions of dogs displayed in videos. The video stimuli had been pre-categorized by an expert panel of dog behavior professionals as showing examples of happy or fearful dog behavior. In a sample of 2,163 participants, the level of dog experience strongly predicted identification of fearful, but not of happy, emotional examples. The probability of selecting the “fearful” category to describe fearful examples increased with experience and ranged from .30 among those who had never lived with a dog to greater than .70 among dog professionals. In contrast, the probability of selecting the “happy” category to describe happy emotional examples varied little by experience, ranging from .90 to .93. In addition, the number of physical features of the dog that participants reported using for emotional interpretations increased with experience, and in particular, more-experienced respondents were more likely to attend to the ears. Lastly, more-experienced respondents provided lower difficulty and higher accuracy self-ratings than less-experienced respondents when interpreting both happy and fearful emotional examples. The human perception of emotion in other humans has previously been shown to be sensitive to individual differences in social experience, and the results of the current study extend the notion of experience-dependent processes from the intraspecific to the interspecific domain.

16.
Seeing fearful body expressions activates the fusiform cortex and amygdala
Darwin's evolutionary approach to organisms' emotional states attributes a prominent role to expressions of emotion in whole-body actions. Researchers in social psychology [1,2] and human development [3] have long emphasized the fact that emotional states are expressed through body movement, but cognitive neuroscientists have almost exclusively considered isolated facial expressions (for review, see [4]). Here we used high-field fMRI to determine the underlying neural mechanisms of perception of body expression of emotion. Subjects were presented with short blocks of body expressions of fear alternating with short blocks of emotionally neutral meaningful body gestures. All images had internal facial features blurred out to avoid confounds due to a face or facial expression. We show that exposure to body expressions of fear, as opposed to neutral body postures, activates the fusiform gyrus and the amygdala. The fact that these two areas have previously been associated with the processing of faces and facial expressions [5-8] suggests synergies between facial and body-action expressions of emotion. Our findings open a new area of investigation of the role of body expressions of emotion in adaptive behavior as well as the relation between processes of emotion recognition in the face and in the body.

17.
It is now apparent that the visual system reacts to stimuli very rapidly, with many brain areas activated within 100 ms. It is, however, unclear how much detail about stimulus properties is extracted in the early stages of visual processing. Here, using magnetoencephalography (MEG), we show that the visual system separates different facial expressions of emotion well within 100 ms after image onset, and that this separation proceeds differently depending on where in the visual field the stimulus is presented. Seven right-handed males participated in a face affect recognition experiment in which they viewed happy, fearful and neutral faces. Blocks of images were shown either at the center or in one of the four quadrants of the visual field. For centrally presented faces, the emotions were separated rapidly, first in the right superior temporal sulcus (STS; 35–48 ms), followed by the right amygdala (57–64 ms) and medial prefrontal cortex (83–96 ms). For faces presented in the periphery, the emotions were separated first in the ipsilateral amygdala and contralateral STS. We conclude that the amygdala and STS likely play different roles in early visual processing, recruiting distinct neural networks for action: the amygdala alerts sub-cortical centers for an appropriate autonomic-system response in fight-or-flight decisions, while the STS facilitates more cognitive appraisal of situations and links appropriate cortical sites together. Different problems may therefore arise when either network fails to initiate or to function properly.

18.
The Autobiographical Emotional Memory Task (AEMT), which involves recalling and writing about intense emotional experiences, is a widely used method to experimentally induce emotions. The validity of this method depends upon the extent to which it induces the specific desired emotions (intended emotions) without differentially inducing other (incidental) emotions across conditions. A review of recent studies that used this method indicated that most studies exclusively monitor post-writing ratings of the intended emotions, without assessing the possibility that the method may have differentially induced other incidental emotions as well. We investigated the extent of this issue by collecting both pre- and post-writing ratings of incidental emotions in addition to the intended emotions. Using methods largely adapted from previous studies, participants were assigned to write about a profound experience of anger or fear (Experiment 1) or happiness or sadness (Experiment 2). In line with previous research, results indicated that the intended emotions (anger and fear) were successfully induced in the respective conditions in Experiment 1. However, disgust and sadness were also induced while writing about an angry experience compared to a fearful experience. Similarly, although happiness and sadness were induced in the appropriate conditions, Experiment 2 indicated that writing about a sad experience also induced disgust, fear, and anger, compared to writing about a happy experience. Possible resolutions to avoid the limitations of the AEMT in inducing specific discrete emotions are discussed.

19.
Pell MD, Kotz SA. PLoS ONE 2011, 6(11): e27256
How quickly do listeners recognize emotions from a speaker's voice, and does the time course of recognition vary by emotion type? To address these questions, we adapted the auditory gating paradigm to estimate how much vocal information is needed for listeners to categorize five basic emotions (anger, disgust, fear, sadness, happiness) and neutral utterances produced by male and female speakers of English. Semantically anomalous pseudo-utterances (e.g., The rivix jolled the silling) conveying each emotion were divided into seven gate intervals according to the number of syllables that listeners heard from sentence onset. Participants (n = 48) judged the emotional meaning of stimuli presented at each gate duration interval, in a successive, blocked presentation format. Analyses examined how recognition of each emotion evolves as an utterance unfolds and estimated the “identification point” for each emotion. Results showed that anger, sadness, fear, and neutral expressions are recognized more accurately at short gate intervals than happiness, and particularly disgust; however, as speech unfolds, recognition of happiness improves significantly towards the end of the utterance (and fear is recognized more accurately than the other emotions). When the gate associated with the emotion identification point of each stimulus was calculated, the data indicated that fear (M = 517 ms), sadness (M = 576 ms), and neutral (M = 510 ms) expressions were identified from shorter acoustic events than the other emotions. These data reveal differences in the underlying time course of conscious recognition of basic emotions from vocal expressions, which should be accounted for in studies of emotional speech processing.
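One common way to define the identification point in a gating paradigm is the earliest gate from which responses remain correct through the final gate. The sketch below implements that rule on hypothetical single-trial responses; the study's exact scoring procedure may differ.

```python
# Minimal sketch of estimating an "identification point" in a gating
# task: the earliest gate from which the emotion is judged correctly
# at that gate and at all later gates. Response data are hypothetical.
def identification_gate(correct_by_gate):
    """Return the 1-based index of the first gate after which
    responses stay correct, or None if the emotion is never identified."""
    for gate in range(len(correct_by_gate)):
        if all(correct_by_gate[gate:]):
            return gate + 1
    return None

# Seven gates for one pseudo-utterance (True = correct judgment):
responses = [False, False, True, False, True, True, True]
print(identification_gate(responses))   # -> 5
# Converting the gate index to its accumulated acoustic duration yields
# millisecond identification points like those reported above.
```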

20.
Although most people can identify facial expressions of emotions well, they still differ in this ability. According to embodied simulation theories, understanding the emotions of others is fostered by involuntarily mimicking the perceived expressions, causing a “reactivation” of the corresponding mental state. Some studies suggest automatic facial mimicry during expression viewing; however, findings on the relationship between mimicry and emotion perception abilities are equivocal. The present study investigated individual differences in emotion perception and their relationship to facial muscle responses, recorded with electromyography (EMG), in response to emotional facial expressions. N = 269 participants completed multiple tasks measuring face and emotion perception. EMG recordings were taken from a subsample (N = 110) in an independent emotion classification task of short videos displaying six emotions. Confirmatory factor analyses of m. corrugator supercilii activity in response to angry, happy, sad, and neutral expressions showed that individual differences in corrugator activity can be separated into a general response to all faces and an emotion-related response. Structural equation modeling revealed a substantial relationship between the emotion-related response and emotion perception ability, providing evidence for the role of facial muscle activation in emotion perception from an individual-differences perspective.
