Similar Literature
20 similar documents found (search time: 31 ms)
1.

Background

The relationships between facial mimicry and subsequent psychological processes remain unclear. We hypothesized that congruent facial muscle activity would elicit corresponding emotional experiences, and that the experienced emotion would in turn induce emotion recognition.

Methodology/Principal Findings

To test this hypothesis, we re-analyzed data collected in two previous studies. We recorded facial electromyography (EMG) from the corrugator supercilii and zygomaticus major and obtained ratings on scales of valence and arousal for experienced emotions (Study 1) and for experienced and recognized emotions (Study 2) while participants viewed dynamic and static facial expressions of negative and positive emotions. Path analyses showed that the facial EMG activity consistently predicted the valence ratings for the emotions experienced in response to dynamic facial expressions. The experienced valence ratings in turn predicted the recognized valence ratings in Study 2.

Conclusion

These results suggest that facial mimicry influences the sharing and recognition of emotional valence in response to others' dynamic facial expressions.

2.
Successful socialization requires the ability to understand others' mental states. This ability, called mentalization (Theory of Mind), may become deficient in multiple sclerosis and contribute to everyday life difficulties. We aimed to explore the impact of brain pathology on mentalization performance in multiple sclerosis. Mentalization performance of 49 patients with multiple sclerosis was compared to that of 24 age- and gender-matched healthy controls. T1- and T2-weighted three-dimensional brain MRI images were acquired at 3 Tesla from the patients with multiple sclerosis and from 18 gender- and age-matched healthy controls. We assessed overall brain cortical thickness in the patients with multiple sclerosis and the scanned healthy controls, and measured the total and regional T1 and T2 white matter lesion volumes in the patients with multiple sclerosis. Performance in tests of recognition of mental states and emotions from facial expressions and eye gazes correlated with both the total T1-lesion load and the regional T1-lesion load of association fiber tracts interconnecting cortical regions related to visual and emotion processing (genu and splenium of the corpus callosum, right inferior longitudinal fasciculus, right inferior fronto-occipital fasciculus, uncinate fasciculus). Both tests also showed correlations with specific cortical areas involved in emotion recognition from facial expressions (right and left fusiform face area, frontal eye field), processing of emotions (right entorhinal cortex), and processing of socially relevant information (left temporal pole). Thus, both a disconnection mechanism due to white matter lesions and cortical thinning of specific brain areas may produce a cognitive deficit in multiple sclerosis that affects emotion and mental state processing from facial expressions and contributes to the everyday and social life difficulties of these patients.

3.
Affective computing aims at the detection of users’ mental states, in particular, emotions and dispositions during human-computer interactions. Detection can be achieved by measuring multimodal signals, namely, speech, facial expressions and/or psychobiology. Over the past years, one major approach was to identify the best features for each signal using different classification methods. Although this is of high priority, other subject-specific variables should not be neglected. In our study, we analyzed the effect of gender, age, personality and gender roles on the extracted psychobiological features (derived from skin conductance level, facial electromyography and heart rate variability) as well as the influence on the classification results. In an experimental human-computer interaction, five different affective states with picture material from the International Affective Picture System and ULM pictures were induced. A total of 127 subjects participated in the study. Among all potentially influencing variables (gender has been reported to be influential), age was the only variable that correlated significantly with psychobiological responses. In summary, the conducted classification processes resulted in 20% classification accuracy differences according to age and gender, especially when comparing the neutral condition with four other affective states. We suggest taking age and gender specifically into account for future studies in affective computing, as these may lead to an improvement of emotion recognition accuracy.

4.
Enfacement is an illusion wherein synchronous visual and tactile inputs update the mental representation of one’s own face to assimilate another person’s face. Emotional facial expressions, serving as communicative signals, may influence enfacement by increasing the observer’s motivation to understand the mental state of the expresser. Fearful expressions, in particular, might increase enfacement because they are valuable for adaptive behavior and more strongly represented in somatosensory cortex than other emotions. In the present study, a face was seen being touched at the same time as the participant’s own face. This face was either neutral, fearful, or angry. Anger was chosen as an emotional control condition for fear because it is similarly negative but induces less somatosensory resonance, and requires additional knowledge (i.e., contextual information and social contingencies) to effectively guide behavior. We hypothesized that seeing a fearful face (but not an angry one) would increase enfacement because of greater somatosensory resonance. Surprisingly, neither fearful nor angry expressions modulated the degree of enfacement relative to neutral expressions. Synchronous interpersonal visuo-tactile stimulation led to assimilation of the other’s face, but this assimilation was not modulated by facial expression processing. This finding suggests that dynamic, multisensory processes of self-face identification operate independently of facial expression processing.

5.
In daily life, perceivers often need to predict and interpret the behavior of group agents, such as corporations and governments. Although research has investigated how perceivers reason about individual members of particular groups, less is known about how perceivers reason about group agents themselves. The present studies investigate how perceivers understand group agents by investigating the extent to which understanding the ‘mind’ of the group as a whole shares important properties and processes with understanding the minds of individuals. Experiment 1 demonstrates that perceivers are sometimes willing to attribute a mental state to a group as a whole even when they are not willing to attribute that mental state to any of the individual members of the group, suggesting that perceivers can reason about the beliefs and desires of group agents over and above those of their individual members. Experiment 2 demonstrates that the degree of activation in brain regions associated with attributing mental states to individuals—i.e., brain regions associated with mentalizing or theory-of-mind, including the medial prefrontal cortex (MPFC), temporo-parietal junction (TPJ), and precuneus—does not distinguish individual from group targets, either when reading statements about those targets' mental states (directed) or when attributing mental states implicitly in order to predict their behavior (spontaneous). Together, these results help to illuminate the processes that support understanding group agents themselves.

6.
7.
K. Guo, PLoS ONE, 2012, 7(8): e42585
Using faces representing exaggerated emotional expressions, recent behavioural and eye-tracking studies have suggested a dominant role for individual facial features in transmitting diagnostic cues for decoding facial expressions. Considering that in everyday life we frequently view low-intensity expressive faces in which local facial cues are more ambiguous, we probably need to combine expressive cues from more than one facial feature to reliably decode naturalistic facial affects. In this study we applied a morphing technique to systematically vary the intensities of six basic facial expressions of emotion, and employed a self-paced expression categorization task to measure participants' categorization performance and associated gaze patterns. The analysis of pooled data from all expressions showed that increasing expression intensity improved categorization accuracy, shortened reaction times and reduced the number of fixations directed at faces. The proportion of fixations and viewing time directed at internal facial features (eyes, nose and mouth region), however, was not affected by varying levels of intensity. Further comparison between individual facial expressions revealed that although proportional gaze allocation at individual facial features was quantitatively modulated by the viewed expression, the overall gaze distribution in face viewing was qualitatively similar across different facial expressions and intensities. It seems that we adopt a holistic viewing strategy, extracting expressive cues from all internal facial features, when processing naturalistic facial expressions.
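The morphing manipulation described in this abstract can be thought of as a pixel-wise linear blend between a neutral face image and a full-intensity expressive image. The following is a minimal sketch under that assumption (the toy arrays and the 25% intensity step are illustrative, not the study's actual stimuli):

```python
import numpy as np

def morph(neutral: np.ndarray, expressive: np.ndarray, intensity: float) -> np.ndarray:
    """Blend a neutral face toward a full expression.

    intensity = 0.0 -> purely neutral image, 1.0 -> full-intensity expression.
    """
    return (1.0 - intensity) * neutral + intensity * expressive

# Toy 2x2 grayscale "images" standing in for face photographs.
neutral = np.zeros((2, 2))
expressive = np.full((2, 2), 100.0)
print(morph(neutral, expressive, 0.25))  # every pixel blended to 25.0
```

Varying `intensity` over a set of fixed steps yields the graded expression series used in this kind of categorization task.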

8.
Previous research has posited that facial expressions of emotion may serve as honest signals of cooperation. Although findings from several empirical studies support this position, prior studies have not used comprehensive and dynamic measures of facial expression as potential predictors of behaviorally defined cooperation. The authors investigated (a) specific positive and negative facial actions displayed among strangers immediately following verbal promises of commitment within an unrestricted acquaintance period and (b) anonymous, behaviorally defined decisions of cooperation or defection in a one-shot, two-person Prisoner's Dilemma game occurring directly following the acquaintance period. The Facial Action Coding System [Ekman P. & Friesen W.V. (1978). Facial Action Coding System. Palo Alto, CA: Consulting Psychology Press] was used to measure affect-related facial actions. It was found that facial actions related to enjoyment were predictive of cooperative decisions within dyads; additionally, facial actions related to contempt were predictive of noncooperative decisions within dyads. Furthermore, and consistent with previous works, participants were able to accurately predict their partner's decisions following the acquaintance period. These results suggest that facial actions may function as honest signals of cooperative intent. These findings also provide a possible explanation for the association between subjective affective experience and facial expression that advances understanding of cooperative behavior.

9.
During the luteal phase of the menstrual cycle, women's bodies prepare themselves for possible pregnancy and this preparation includes a dramatic increase in progesterone. This increase in progesterone may underlie a variety of functionally relevant psychological changes designed to help women overcome challenges historically encountered during pregnancy (e.g., warding off social threats and recruiting allies). This paper reports data supporting the hypothesis that increases in progesterone during the luteal phase underlie heightened levels of social monitoring—that is, heightened sensitivity to social cues indicating the presence of social opportunity or threat. Increases in progesterone during the luteal phase were associated with increased accuracy in decoding facial expressions (Study 1) and increased attention to social stimuli (Study 2). Findings suggest that increases in progesterone during the luteal phase may be linked functionally with low-level perceptual attunements that help women effectively navigate their social world.

10.
Neuropsychological studies report more impaired responses to facial expressions of fear than of disgust in people with amygdala lesions, and vice versa in people with Huntington's disease. Experiments using functional magnetic resonance imaging (fMRI) have confirmed the role of the amygdala in the response to fearful faces and have implicated the anterior insula in the response to facial expressions of disgust. We used fMRI to extend these studies to the perception of fear and disgust from both facial and vocal expressions. Consistent with neuropsychological findings, both types of fearful stimuli activated the amygdala. Facial expressions of disgust activated the anterior insula and the caudate-putamen; vocal expressions of disgust did not significantly activate either of these regions. All four types of stimuli activated the superior temporal gyrus. Our findings therefore (i) support the differential localization of the neural substrates of fear and disgust; (ii) confirm the involvement of the amygdala in the emotion of fear, whether evoked by facial or vocal expressions; (iii) confirm the involvement of the anterior insula and the striatum in reactions to facial expressions of disgust; and (iv) suggest a possible general role for the superior temporal gyrus in the perception of emotional expressions.

11.
Previous studies have demonstrated that the serotonin transporter gene-linked polymorphic region (5-HTTLPR) affects the recognition of facial expressions and attention to them. However, the relationship between 5-HTTLPR and the perceptual detection of others' facial expressions, the process which takes place prior to emotional labeling (i.e., recognition), is not clear. To examine whether the perceptual detection of emotional facial expressions is influenced by the allelic variation (short/long) of 5-HTTLPR, happy and sad facial expressions were presented at weak and mid intensities (25% and 50%). Ninety-eight participants, genotyped for 5-HTTLPR, judged whether emotion was present in images of faces. Participants with short alleles showed higher sensitivity (d′) to happy than to sad expressions, while participants with long allele(s) showed no such positivity advantage. This effect of 5-HTTLPR was found at different facial expression intensities among males and females. The results suggest that at the perceptual stage, a short allele enhances the processing of positive facial expressions rather than that of negative facial expressions.
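The sensitivity index d′ reported above comes from signal detection theory: it is the z-transformed hit rate minus the z-transformed false-alarm rate. A minimal sketch of the standard computation (the trial counts below are hypothetical, not the study's data):

```python
from statistics import NormalDist

def d_prime(hit_rate: float, fa_rate: float) -> float:
    """Signal-detection sensitivity: z(hit rate) - z(false-alarm rate)."""
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(hit_rate) - z(fa_rate)

# Hypothetical counts: 40 hits out of 50 emotion-present trials,
# 10 false alarms out of 50 emotion-absent trials.
d = d_prime(40 / 50, 10 / 50)
print(round(d, 3))  # -> 1.683; higher d' means better detection
```

A d′ of 0 indicates chance-level detection; comparing d′ for happy versus sad faces within each genotype group is what reveals the "positivity advantage" described above.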

12.
This article introduces recent advances in the machine analysis of facial expressions. It describes the problem space, surveys the problem domain and examines the state of the art. Two recent research topics are discussed with particular attention: analysis of facial dynamics and analysis of naturalistic (spontaneously displayed) facial behaviour. Scientific and engineering challenges in the field in general, and in these specific subproblem areas in particular, are discussed and recommendations for accomplishing a better facial expression measurement technology are outlined.

13.
14.
Many everyday skills are learned by binding otherwise independent actions into a unified sequence of responses across days or weeks of practice. Here we looked at how the dynamics of action planning and response binding change across such long timescales. Subjects (N = 23) were trained on a bimanual version of the serial reaction time task (32-item sequence) for two weeks (10 days total). Response times and accuracy both showed improvement with time, but appeared to be learned at different rates. Changes in response speed across training were associated with dynamic changes in response time variability, with faster learners expanding their variability during the early training days and then contracting response variability late in training. Using a novel measure of response chunking, we found that individual responses became temporally correlated across trials and asymptoted to set sizes of approximately 7 bound responses at the end of the first week of training. Finally, we used a state-space model of the response planning process to look at how predictive (i.e., response anticipation) and error-corrective (i.e., post-error slowing) processes correlated with learning rates for speed, accuracy and chunking. This analysis yielded non-monotonic association patterns between the state-space model parameters and learning rates, suggesting that different parts of the response planning process are relevant at different stages of long-term learning. These findings highlight the dynamic modulation of response speed, variability, accuracy and chunking as multiple movements become bound together into a larger set of responses during sequence learning.

15.
Although cooperation can lead to mutually beneficial outcomes, cooperative actions only pay off for the individual if others can be trusted to cooperate as well. Identifying trustworthy interaction partners is therefore a central challenge in human social life. How do people navigate this challenge? Prior work suggests that people rely on facial appearance to judge the trustworthiness of strangers. However, the question of whether these judgments are actually accurate remains debated. The present research examines accuracy in trustworthiness detection from faces and three moderators proposed by previous research. We investigate whether people show above-chance accuracy (a) when they make trust decisions and when they provide explicit trustworthiness ratings, (b) when judging male and female counterparts, and (c) when rating cropped images (with non-facial features removed) and uncropped images. Two studies showed that incentivized trust decisions (Study 1, n = 131 university students) and incentivized trustworthiness predictions (Study 2, n = 266 university students) were unrelated to the actual trustworthiness of counterparts. Accuracy was not moderated by stimulus type (cropped vs. uncropped faces) or counterparts' gender. Overall, these findings suggest that people are unable to detect the trustworthiness of strangers based on their facial appearance, when this is the only information available to them.

16.
The primate play-face is homologous to the human facial display accompanying laughter. Through facial mimicry, the play-face evokes in the perceiver a similar positive emotional state. This sensorimotor and emotional sharing can be adaptive, as it allows individuals to fine-tune their own motor sequences accordingly, thus increasing cooperation in play. It has recently been demonstrated that not only humans and apes, but also geladas, are able to mimic others' facial expressions. Here, we describe two forms of facial mimicry in Theropithecus gelada: rapid (RFM, within 1.0 s) and delayed (DFM, within 5.0 s). Play interactions characterized by the presence of RFM were longer than those with DFM, suggesting that RFM is a good indicator of the quality of communicative exchanges and behavioral coordination. These findings agree with the proposal of a mirror mechanism operating during perception and imitation of facial expressions. In an evolutionary perspective, our findings suggest not only that RFM was already present in the common ancestor of cercopithecoids and hominoids, but also that RFM is related to the length and quality of playful interactions.

17.
Social communication relies on intentional control of emotional expression. Its variability across cultures suggests important roles for imitation in developing control over enactment of subtly different facial expressions and therefore skills in emotional communication. Both empathy and the imitation of an emotionally communicative expression may rely on a capacity to share both the experience of an emotion and the intention or motor plan associated with its expression. Therefore, we predicted that facial imitation ability would correlate with empathic traits. We built arrays of visual stimuli by systematically blending three basic emotional expressions in controlled proportions. Raters then assessed accuracy of imitation by reconstructing the same arrays using photographs of participants’ attempts at imitations of the stimuli. Accuracy was measured as the mean proximity of the participant photographs to the target stimuli in the array. Levels of performance were high, and rating was highly reliable. More empathic participants, as measured by the empathy quotient (EQ), were better facial imitators and, in particular, performed better on the more complex, blended stimuli. This preliminary study offers a simple method for the measurement of facial imitation accuracy and supports the hypothesis that empathic functioning may utilise motor control mechanisms which are also used for emotional expression.

18.

Background

Mental disorders may be reducible to sets of symptoms, connected through systems of causal relations. A clinical staging model predicts that in earlier stages of illness, symptom expression is both non-specific and diffuse. With illness progression, more specific syndromes emerge. This paper addressed the hypothesis that connection strength and connection variability between mental states differ in the hypothesized direction across different stages of psychopathology.

Methods

In a general population sample of female siblings (mostly twins), the Experience Sampling Method was used to collect repeated measures of three momentary mental states (positive affect, negative affect and paranoia). Staging was operationalized across four levels of increasing severity of psychopathology, based on the total score of the Symptom Check List. Multilevel random regression was used to calculate inter- and intra-mental state connection strength and connection variability over time by modelling each momentary mental state at t as a function of the three momentary states at t-1, and by examining moderation by SCL-severity.
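The lagged model in this Methods section can be illustrated with a deliberately simplified, single-level version: each momentary state at time t is regressed on all three states at t−1, and the fitted lag-1 coefficients play the role of "connection strengths". This sketch ignores the multilevel random-effects structure and SCL-severity moderation the study actually used, and the simulated data are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated momentary states: positive affect, negative affect, paranoia.
T = 200
states = rng.normal(size=(T, 3))

# Regress each state at t on all three states at t-1 (plus an intercept).
X = np.hstack([np.ones((T - 1, 1)), states[:-1]])  # lagged predictors
connections = np.empty((3, 3))
for j in range(3):
    beta, *_ = np.linalg.lstsq(X, states[1:, j], rcond=None)
    connections[j] = beta[1:]  # drop the intercept; keep 3 lag-1 weights

print(connections.shape)  # one row of lag-1 coefficients per mental state
```

In the study's framing, larger absolute coefficients correspond to greater inter- and intra-mental-state connection strength, and their variation across severity groups is what the moderation analysis tests.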

Results

Mental states impacted dynamically on each other over time, in interaction with SCL-severity groups. Thus, SCL-90 severity groups were characterized by progressively greater inter- and intra-mental state connection strength, and greater inter- and intra-mental state connection variability.

Conclusion

Diagnosis in psychiatry can be described as stages of growing dynamic causal impact of mental states over time. This system achieves a mode of psychiatric diagnosis that combines nomothetic (group-based classification across stages) and idiographic (individual-specific psychopathological profiles) components of psychopathology at the level of momentary mental states impacting on each other over time.

19.
Although most people can identify facial expressions of emotions well, they still differ in this ability. According to embodied simulation theories, understanding the emotions of others is fostered by involuntarily mimicking the perceived expressions, causing a “reactivation” of the corresponding mental state. Some studies suggest automatic facial mimicry during expression viewing; however, findings on the relationship between mimicry and emotion perception abilities are equivocal. The present study investigated individual differences in emotion perception and its relationship to facial muscle responses - recorded with electromyography (EMG) - in response to emotional facial expressions. N = 269 participants completed multiple tasks measuring face and emotion perception. EMG recordings were taken from a subsample (N = 110) in an independent emotion classification task of short videos displaying six emotions. Confirmatory factor analyses of the m. corrugator supercilii in response to angry, happy, sad, and neutral expressions showed that individual differences in corrugator activity can be separated into a general response to all faces and an emotion-related response. Structural equation modeling revealed a substantial relationship between the emotion-related response and emotion perception ability, providing evidence for the role of facial muscle activation in emotion perception from an individual differences perspective.

20.
Knowing no fear
People with brain injuries involving the amygdala are often poor at recognizing facial expressions of fear, but the extent to which this impairment compromises other signals of the emotion of fear has not been clearly established. We investigated N.M., a person with bilateral amygdala damage and a left thalamic lesion, who was impaired at recognizing fear from facial expressions. N.M. showed an equivalent deficit affecting fear recognition from body postures and emotional sounds. His deficit of fear recognition was not linked to evidence of any problem in recognizing anger (a common feature in other reports), but for his everyday experience of emotion N.M. reported reduced anger and fear compared with neurologically normal controls. These findings show a specific deficit compromising the recognition of the emotion of fear from a wide range of social signals, and suggest a possible relationship of this type of impairment with alterations of emotional experience.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (京ICP备09084417号)