Similar literature
20 similar records retrieved (search time: 31 ms)
1.
Enfacement is an illusion wherein synchronous visual and tactile inputs update the mental representation of one’s own face to assimilate another person’s face. Emotional facial expressions, serving as communicative signals, may influence enfacement by increasing the observer’s motivation to understand the mental state of the expresser. Fearful expressions, in particular, might increase enfacement because they are valuable for adaptive behavior and more strongly represented in somatosensory cortex than other emotions. In the present study, a face was seen being touched at the same time as the participant’s own face. This face was either neutral, fearful, or angry. Anger was chosen as an emotional control condition for fear because it is similarly negative but induces less somatosensory resonance, and requires additional knowledge (i.e., contextual information and social contingencies) to effectively guide behavior. We hypothesized that seeing a fearful face (but not an angry one) would increase enfacement because of greater somatosensory resonance. Surprisingly, neither fearful nor angry expressions modulated the degree of enfacement relative to neutral expressions. Synchronous interpersonal visuo-tactile stimulation led to assimilation of the other’s face, but this assimilation was not modulated by facial expression processing. This finding suggests that dynamic, multisensory processes of self-face identification operate independently of facial expression processing.

2.
E. Scheller, C. Büchel, M. Gamer. PLoS ONE, 2012, 7(7): e41792
Diagnostic features of emotional expressions are differentially distributed across the face. The current study examined whether these diagnostic features are preferentially attended to even when they are irrelevant to the task at hand or when faces appear at different locations in the visual field. To this end, fearful, happy and neutral faces were presented to healthy individuals in two experiments while eye movements were measured. In Experiment 1, participants had to accomplish an emotion classification, a gender discrimination or a passive viewing task. To differentiate fast, potentially reflexive, eye movements from a more elaborate scanning of faces, stimuli were presented for either 150 or 2000 ms. In Experiment 2, similar faces were presented at different spatial positions to rule out the possibility that eye movements merely reflect a general bias for certain visual field locations. In both experiments, participants fixated the eye region much longer than any other region of the face. Furthermore, the eye region was attended to more strongly when fearful or neutral faces were shown, whereas more attention was directed toward the mouth of happy facial expressions. Since these results were similar across the other experimental manipulations, they indicate that diagnostic features of emotional expressions are preferentially processed irrespective of task demands and spatial locations. Saliency analyses revealed that a computational model of bottom-up visual attention could not explain these results. Furthermore, as these gaze preferences were evident very early after stimulus onset and occurred even when saccades did not allow for extracting further information from the stimuli, they may reflect a preattentive mechanism that automatically detects relevant facial features in the visual field and facilitates the orientation of attention towards them. This mechanism might crucially depend on amygdala functioning and may be impaired in a number of clinical conditions such as autism or social anxiety disorders.

3.
The present study tested whether neural sensitivity to salient emotional facial expressions was influenced by emotional expectations induced by a cue that validly predicted the expression of a subsequently presented target face. Event-related potentials (ERPs) elicited by fearful and neutral faces were recorded while participants performed a gender discrimination task under cued (‘expected’) and uncued (‘unexpected’) conditions. The behavioral results revealed that accuracy was lower for fearful compared with neutral faces in the unexpected condition, while accuracy was similar for fearful and neutral faces in the expected condition. ERP data revealed increased amplitudes in the P2 component and 200–250 ms interval for unexpected fearful versus neutral faces. By contrast, ERP responses were similar for fearful and neutral faces in the expected condition. These findings indicate that human neural sensitivity to fearful faces is modulated by emotional expectations. Although the neural system is sensitive to unpredictable emotionally salient stimuli, sensitivity to salient stimuli is reduced when these stimuli are predictable.

4.
Chemosensory communication of anxiety is a common phenomenon in vertebrates and improves perceptual and responsive behaviour in the perceiver in order to optimize ontogenetic survival. A few rating studies have reported a similar phenomenon in humans. Here, we investigated whether subliminal face perception changes in the context of chemosensory anxiety signals. Axillary sweat samples were taken from 12 males while they were waiting for an academic examination and, a few days later, during ergometer exercise. Sixteen subjects (eight females) participated in an emotional priming study, using happy, fearful and sad facial expressions as primes (11.7 ms) and neutral faces as targets (47 ms). The pooled chemosensory samples were presented before and during picture presentation (920 ms). In the context of chemosensory stimuli derived from the sweat samples taken during exercise, subjects judged the targets significantly more positively when they were primed by a happy face than when they were primed by the negative facial expressions (P = 0.02). In the context of the chemosensory anxiety signals, the priming effect of the happy faces was diminished in females (P = 0.02), but not in males. We discuss whether, in socially relevant ambiguous perceptual conditions, chemosensory signals have a processing advantage and dominate visual signals, or whether fear signals in general have a stronger behavioural impact than positive signals.

5.
Facial expressions are important social communicators. In addition to communicating social information, the specific muscular movements of expressions may serve additional functional roles. For example, recalibration theory hypothesizes that the anger expression exaggerates facial cues of strength, an indicator of human fighting ability, to increase bargaining power in conflicts. Supporting this theory is evidence that faces displaying one element of an angry expression (e.g. lowered eyebrows) are perceived to be stronger than faces with opposite expression features (e.g. raised eyebrows for fear). The present study sought stronger evidence that more natural manipulations of facial anger also enhance perceived strength. We used expression aftereffects to bias perception of a neutral face towards anger and observed the effects on perceptions of strength. In addition, we tested the specificity of the strength-cue enhancing effect by examining whether two other expressions, fear and happiness, also affected perceptions of strength. We found that, as predicted, a face biased to be perceived as angrier was rated as stronger compared to a baseline rating, whereas a face biased to be more fearful was rated as weaker, consistent with the purported function of fear as an act of submission. Interestingly, faces biased towards a happy expression were also perceived as stronger, though the effect was smaller than that for anger. Overall, the results supported the recalibration theory hypothesis that the anger expression enhances cues of strength to increase bargaining power in conflicts, but with some limitations regarding the specificity of the function to anger.

6.
Facial expressions of emotion play a key role in guiding social judgements, including deciding whether or not to approach another person. However, no research has examined how situational context modulates approachability judgements assigned to emotional faces, or the relationship between perceived threat and approachability judgements. Fifty-two participants provided approachability judgements to angry, disgusted, fearful, happy, neutral, and sad faces across three situational contexts: no context, when giving help, and when receiving help. Participants also rated the emotional faces for level of perceived threat and labelled the facial expressions. Results indicated that context modulated approachability judgements to faces depicting negative emotions. Specifically, faces depicting distress-related emotions (i.e., sadness and fear) were considered more approachable in the giving help context than in both the receiving help and neutral contexts. Furthermore, higher ratings of threat were associated with the assessment of angry, happy and neutral faces as less approachable. These findings are the first to demonstrate the significant role that context plays in the evaluation of an individual’s approachability and illustrate the important relationship between perceived threat and the evaluation of approachability.

7.
Emotional signals are perceived whether or not we are aware of them. Most evidence so far has come from studies of facial expressions. Here, we investigated whether the pattern of non-conscious facial expression perception also holds for whole-body expressions. Continuous flash suppression (CFS) was used to measure the time for neutral, fearful, and angry facial or bodily expressions to break from suppression. We observed different suppression-time patterns for emotions depending on whether the stimuli were faces or bodies. The suppression time for anger was shortest for bodily expressions but longest for facial expressions. This pattern indicates different processing and detection mechanisms for faces and bodies outside awareness, and suggests that awareness mechanisms associated with dorsal structures might play a role in becoming conscious of angry bodily expressions.

8.
A two-process probabilistic theory of emotion perception based on a non-linear combination of facial features is presented. Assuming that the upper and the lower part of the face function as the building blocks at the basis of emotion perception, an empirical test is provided with fear and happiness as target emotions. Subjects were presented with prototypical fearful and happy faces and with computer-generated chimerical expressions that combined happy and fearful features. Subjects were asked to indicate the emotions they perceived using an extensive list of emotions. We show that some emotions require a conjunction of the two halves of a face to be perceived, whereas for other emotions one half is sufficient. We demonstrate that chimerical faces give rise to the perception of genuine emotions. The findings provide evidence that different combinations of the two halves of a fearful and a happy face, whether congruent or not, generate the perception of emotions other than fear and happiness.

9.
In a dual-task paradigm, participants performed a spatial location working memory task and a two-alternative forced-choice perceptual decision task (neutral vs. fearful) with gradually morphed emotional faces (neutral ∼ fearful). Task-irrelevant word distractors (negative, neutral, and control) were experimentally manipulated during spatial working memory encoding. We hypothesized that, if affective perception is influenced by concurrent cognitive load from a working memory task, task-irrelevant emotional distractors would bias subsequent perceptual decision-making on ambiguous facial expressions. We found that when either neutral or negative emotional words were presented as task-irrelevant working-memory distractors, participants more frequently reported perceiving a fearful face, but only at the higher emotional intensity levels of the morphed faces. Moreover, the affective perception bias due to negative emotional distractors correlated with a decrease in working memory performance. Taken together, our findings suggest that concurrent working memory load from task-irrelevant distractors has an impact on affective perception of facial expressions.

10.
Rapid detection of evolutionarily relevant threats (e.g., fearful faces) is important for human survival. The ability to rapidly detect fearful faces varies markedly across individuals. The present study investigated the relationship between behavioral detection ability and brain activity, using both event-related potential (ERP) and event-related oscillation (ERO) measurements. Faces with fearful or neutral expressions were presented for 17 ms or 200 ms in a backward masking paradigm. Forty-two participants were required to discriminate the facial expressions of the masked faces. The behavioral sensitivity index d′ showed that the ability to detect rapidly presented and masked fearful faces varied across participants. ANOVA showed that facial expression, hemisphere, and presentation duration affected the grand-mean ERP (N1, P1, and N170) and ERO (below 20 Hz, lasting from 100 ms to 250 ms post-stimulus, mainly in the theta band) brain activity. More importantly, the overall detection ability of the 42 subjects was significantly correlated with the emotion effect (i.e., fearful vs. neutral) in the ERP (r = 0.403) and ERO (r = 0.552) measurements. A higher d′ value corresponded to a larger emotional effect (i.e., fearful − neutral) on N170 amplitude and a larger emotional effect on the specific ERO spectral power over the right hemisphere. These results suggest a close link between behavioral detection ability and both the N170 amplitude and the ERO spectral power below 20 Hz. The size of the emotional effect between fearful and neutral faces in brain activity may reflect the level of conscious awareness of fearful faces.
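The sensitivity index d′ used above is the standard signal-detection measure z(hit rate) − z(false-alarm rate), and the reported r values are Pearson correlations between d′ and the neural emotion effect. The sketch below illustrates both computations with the standard library only; the trial counts are hypothetical and the log-linear correction is one common convention, not necessarily the one the authors used.

```python
import math
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Signal-detection sensitivity: d' = z(hit rate) - z(false-alarm rate).

    A log-linear correction (add 0.5 to each cell) guards against infinite
    z-scores when an observed rate is exactly 0 or 1.
    """
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(hit_rate) - z(fa_rate)

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical participant: 40 hits, 10 misses, 5 false alarms, 45 correct rejections.
sensitivity = d_prime(40, 10, 5, 45)
```

A positive d′ indicates above-chance discrimination; correlating per-participant d′ values with per-participant N170 emotion effects via `pearson_r` yields statistics of the kind reported (r = 0.403, r = 0.552).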

11.
The present study investigates the relationship between inter-individual differences in fearful face recognition and amygdala volume. Thirty normal adults were recruited and each completed two identical facial expression recognition tests offline and two magnetic resonance imaging (MRI) scans. Linear regression indicated that the left amygdala volume negatively correlated with the accuracy of recognizing fearful facial expressions and positively correlated with the probability of misrecognizing fear as surprise. Further exploratory analyses revealed that this relationship did not exist for any other subcortical or cortical regions. Nor did such a relationship exist between the left amygdala volume and performance recognizing the other five facial expressions. These mind-brain associations highlight the importance of the amygdala in recognizing fearful faces and provide insights regarding inter-individual differences in sensitivity toward fear-relevant stimuli.

12.
It is well known that emotion can modulate attentional processes. Previous studies have shown that even under restricted awareness, emotional facial expressions (especially threat-related) can guide the direction of spatial attention. However, it remains unclear whether emotional facial expressions under restricted awareness can affect temporal attention. To address this issue, we used a modified attentional blink (AB) paradigm in which masked (Experiment 1) or unmasked (Experiment 2) emotional faces (fearful or neutral) were presented before the AB sequence. We found that, in comparison with neutral faces, masked fearful faces significantly decreased the AB magnitude (Experiment 1), whereas unmasked fearful faces significantly increased the AB magnitude (Experiment 2). These results indicate that effects of emotional expression on the AB are modulated by the level of awareness.

13.
We used event-related fMRI to assess whether brain responses to fearful versus neutral faces are modulated by spatial attention. Subjects performed a demanding matching task for pairs of stimuli at prespecified locations, in the presence of task-irrelevant stimuli at other locations. Faces or houses unpredictably appeared at the relevant or irrelevant locations, while the faces had either fearful or neutral expressions. Activation of fusiform gyri by faces was strongly affected by attentional condition, but the left amygdala response to fearful faces was not. Right fusiform activity was greater for fearful than neutral faces, independently of the attention effect on this region. These results reveal differential influences on face processing from attention and emotion, with the amygdala response to threat-related expressions unaffected by a manipulation of attention that strongly modulates the fusiform response to faces.

14.
Two experiments were conducted to investigate the automatic processing of emotional facial expressions while performing low or high demand cognitive tasks under unattended conditions. In Experiment 1, 35 subjects performed low (judging the structure of Chinese words) and high (judging the tone of Chinese words) cognitive load tasks while exposed to unattended pictures of fearful, neutral, or happy faces. The results revealed that reaction times were slower and accuracy was higher on the low cognitive load task than on the high cognitive load task. Exposure to fearful faces resulted in significantly longer reaction times and lower accuracy than exposure to neutral faces on the low cognitive load task. In Experiment 2, 26 subjects performed the same word judgment tasks while their brain event-related potentials (ERPs) were measured for 800 ms after the onset of the task stimulus. The amplitudes of the early ERP component around 176 ms (P2) elicited by unattended fearful faces over frontal-central-parietal recording sites were significantly larger than those elicited by unattended neutral faces during the word structure judgment task. Together, the findings of the two experiments indicated that unattended fearful faces captured significantly more attentional resources than unattended neutral faces on a low cognitive load task, but not on a high cognitive load task. It was concluded that fearful faces can automatically capture attention if residual attentional resources are available under unattended conditions.

15.
How do I know the person I see in the mirror is really me? Is it because I know the person simply looks like me, or is it because the mirror reflection moves when I move, and I see it being touched when I feel touch on my own face? Studies of face recognition suggest that visual recognition of stored visual features informs self-face recognition. In contrast, body-recognition studies conclude that multisensory integration is the main cue to selfhood. The present study investigates for the first time the specific contribution of current multisensory input to self-face recognition. Participants were stroked on their face while they looked at a morphed face being touched in synchrony or asynchrony. Before and after the visuo-tactile stimulation, participants performed a self-recognition task. The results show that multisensory signals have a significant effect on self-face recognition. Synchronous tactile stimulation while watching another person's face being similarly touched produced a bias in recognizing one's own face, in the direction of including the other person in the representation of one's own face. Multisensory integration can update cognitive representations of one's body, such as the sense of ownership. The present study extends this converging evidence by showing that correlated, synchronous multisensory signals also update the representation of one's face. The face is a key feature of our identity, but at the same time it is a source of rich multisensory experiences used to maintain or update self-representations.

16.
Psychophysiological experiments were performed on 34 healthy subjects. We analyzed the accuracy and latency of motor responses in recognizing two types of complex visual stimuli, animals and objects, presented immediately after a brief presentation of face images with different emotional expressions: anger, fear, happiness, and a neutral expression. Response latency depended on the emotional expression of the masked face: latencies were shorter when the test stimuli were preceded by angry or fearful faces than by happy or neutral faces. These effects depended on the type of stimulus and were more pronounced for objects than for animals. We found that the effects of emotional faces were related to personality traits of the subjects, as measured by the emotional and communicative blocks of Cattell's test, and were more pronounced in more sensitive, anxious, and pessimistic introverts. The mechanisms by which unconsciously perceived emotional information affects human visual behavior are discussed.

17.
Affective facial expressions are potent social cues that can induce relevant physiological changes, as well as behavioral dispositions, in the observer. Previous studies have revealed that angry faces induce significant reductions in body sway compared with neutral and happy faces, reflecting a freezing-like avoidance tendency. The expression of pain is usually considered an unpleasant stimulus, but it is also a relevant cue for delivering effective care and social support. Nevertheless, there are few data on the behavioral dispositions elicited by observing pain expressions in others. The aim of the present research was to evaluate approach–avoidance tendencies, using video recordings of postural body sway, while participants stood and observed painful, happy, and neutral facial expressions. We hypothesized that although pain faces would be rated as more unpleasant than the other faces, they would provoke significant changes in postural body sway compared with neutral facial expressions. Forty healthy female volunteers (mean age 25) participated in the study. Amplitudes of forward and backward movements in the anterior-posterior and medial-lateral axes were obtained. Statistical analyses revealed that pain faces were the most unpleasant stimuli, and that both happy and pain faces were more arousing than neutral ones. Happy and pain faces also elicited greater amplitude of body sway in the anterior-posterior axis compared with neutral faces. In addition, significant positive correlations were found between body sway elicited by pain faces and pleasantness and empathy ratings, suggesting that changes in postural body sway elicited by pain faces might be associated with approach and cooperative behavioral responses.

18.
Recognition of facial expressions by a Japanese monkey and two humans was studied. The monkey matched 20 photographs of monkey facial expressions and 20 photographs of human facial expressions; the humans sorted the same pictures. Matching accuracy by the monkey was about 80% correct for both human and monkey facial expressions. The confusion matrices for these facial expressions were analyzed with a multidimensional scaling procedure (MDSCAL). The resulting MDS plots suggested that the important cues for recognizing monkey facial expressions were “thrusting the mouth” and “raising the eyebrows.” Comparison of the monkey's MDS plots with those of the human subjects suggested that the monkey categorized the human “happiness” faces. This may indicate that the monkey can recognize human smiling faces, even if this ability is learned. However, the monkey did not differentiate the human “anger/disgust” faces from the human “sad” faces, whereas the human subjects clearly did. This may be related to the lack of eyebrow movement in monkeys.
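MDSCAL takes a dissimilarity matrix as input, so a confusion matrix like the ones above must first be symmetrized. The sketch below shows this preprocessing step; the symmetrization rule (averaging the two normalized confusion rates) is a common convention, not necessarily the one used in the study, and the example counts are hypothetical.

```python
def confusion_to_dissimilarity(conf):
    """Turn a confusion matrix (rows = presented expression, columns =
    chosen expression, raw counts) into a symmetric dissimilarity matrix
    suitable as MDS input.

    similarity(i, j) = mean of the two normalized confusion rates;
    dissimilarity   = 1 - similarity, with zeros on the diagonal.
    """
    n = len(conf)
    # Normalize each row to response probabilities.
    rates = [[c / sum(row) for c in row] for row in conf]
    dis = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i != j:
                sim = (rates[i][j] + rates[j][i]) / 2.0
                dis[i][j] = 1.0 - sim
    return dis

# Hypothetical 2-expression confusion matrix (counts out of 10 trials each).
dis = confusion_to_dissimilarity([[8, 2], [3, 7]])
```

Expressions that are frequently confused with each other end up close together (small dissimilarity), which is exactly what places the human “anger/disgust” and “sad” faces near each other in the monkey's MDS plot.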

19.
Antisocial individuals are characterized by self-determined and inconsiderate behavior during social interaction. Furthermore, deficits in recognizing fearful facial expressions have been observed in antisocial populations. These observations raise the question of whether antisocial behavioral tendencies are associated with deficits in the basic processing of social cues. The present study investigated early visual processing of social stimuli in a group of healthy female individuals with antisocial behavioral tendencies, compared with individuals without these tendencies, while measuring event-related potentials (P1, N170). To this end, happy and angry faces served as feedback stimuli embedded in a gambling task. Results showed processing differences as early as 88–120 ms after feedback onset: participants low on antisocial traits displayed larger P1 amplitudes than participants high on antisocial traits. No group differences emerged for N170 amplitudes. Attention allocation processes, individual arousal levels, and face processing are discussed as possible causes of the observed group differences in P1 amplitudes. In summary, the current data suggest that sensory processing of facial stimuli is functionally intact, but less responsive, in healthy individuals with antisocial tendencies.

20.
The relationship between human fluid intelligence and social-emotional abilities has been a topic of considerable interest. The current study investigated whether adolescents with different intellectual levels differ in the automatic neural processing of facial expressions. Two groups of adolescent males were enrolled: a high IQ group and an average IQ group. Age and parental socioeconomic status were matched between the two groups. Participants counted the number of changes of a central cross while paired facial expressions were presented bilaterally in an oddball paradigm. There were two experimental conditions: a happy condition, in which neutral expressions were standard stimuli (p = 0.8) and happy expressions were deviant stimuli (p = 0.2), and a fearful condition, in which neutral expressions were standard stimuli (p = 0.8) and fearful expressions were deviant stimuli (p = 0.2). Participants were required to concentrate on the primary task of counting the central cross changes and to ignore the expressions, ensuring that facial expression processing was automatic. Event-related potentials (ERPs) were obtained during the tasks, and the visual mismatch negativity (vMMN) components were analyzed to index the automatic neural processing of facial expressions. For the early vMMN (50–130 ms), the high IQ group showed more negative vMMN amplitudes than the average IQ group in the happy condition. For the late vMMN (320–450 ms), the high IQ group had greater vMMN responses than the average IQ group over frontal and occipito-temporal areas in the fearful condition, and the average IQ group evoked larger vMMN amplitudes than the high IQ group over occipito-temporal areas in the happy condition. The present study elucidates the close relationship between fluid intelligence and pre-attentive change detection of social-emotional information.
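The oddball design above (standards at p = 0.8, deviants at p = 0.2) can be sketched as a trial-list generator. This is an illustrative sketch rather than the authors' stimulus code; the exact-count shuffling rule and the function name are assumptions.

```python
import random

def oddball_sequence(n_trials, p_deviant=0.2, seed=0):
    """Generate a pseudo-random oddball trial list.

    Uses exact counts (round(n_trials * p_deviant) deviants, the rest
    standards) and shuffles them, so a vMMN analysis gets exactly the
    intended deviant proportion. A fixed seed keeps the list reproducible.
    """
    n_dev = round(n_trials * p_deviant)
    seq = ['deviant'] * n_dev + ['standard'] * (n_trials - n_dev)
    random.Random(seed).shuffle(seq)
    return seq

# e.g. the happy condition: 'standard' = neutral face, 'deviant' = happy face.
trials = oddball_sequence(100)
```

Real vMMN experiments often add constraints (for example, never presenting two deviants in a row); those are omitted here for brevity.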


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号