Similar literature
20 similar records retrieved
1.
Although a great deal of research has been conducted on the recognition of basic facial emotions (e.g., anger, happiness, sadness), much less research has been carried out on the more subtle facial expressions of an individual's mental state (e.g., anxiety, disinterest, relief). Of particular concern is that these mental state expressions provide a crucial source of communication in everyday life, yet little is known about the accuracy with which natural dynamic facial expressions of mental states are identified and, in particular, about the variability in mental state perception that is produced. Here we report the findings of two studies that investigated the accuracy and variability with which dynamic facial expressions of mental states were identified by participants. Both studies used stimuli carefully constructed using procedures adopted in previous research, and free-report (Study 1) and forced-choice (Study 2) measures of response accuracy and variability. The findings of both studies showed levels of response accuracy that were accompanied by substantial variation in the labels assigned by observers to each mental state. Thus, when mental states are identified from facial expressions in experiments, the identities attached to these expressions appear to vary considerably across individuals. This variability raises important issues for understanding the identification of mental states in everyday situations and for the use of responses in facial expression research.

2.

Background

Previous studies have shown that females and males differ in the processing of emotional facial expressions including the recognition of emotion, and that emotional facial expressions are detected more rapidly than are neutral expressions. However, whether the sexes differ in the rapid detection of emotional facial expressions remains unclear.

Methodology/Principal Findings

We measured reaction times (RTs) during a visual search task in which 44 females and 46 males detected normal facial expressions of anger and happiness or their anti-expressions within crowds of neutral expressions. Anti-expressions expressed neutral emotions with visual changes quantitatively comparable to normal expressions. We also obtained subjective emotional ratings in response to the facial expression stimuli. RT results showed that both females and males detected normal expressions more rapidly than anti-expressions and normal-angry expressions more rapidly than normal-happy expressions. However, females and males showed different patterns in their subjective ratings in response to the facial expressions. Furthermore, sex differences were found in the relationships between subjective ratings and RTs. High arousal was more strongly associated with rapid detection of facial expressions in females, whereas negatively valenced feelings were more clearly associated with the rapid detection of facial expressions in males.

Conclusion

Our data suggest that females and males differ in their subjective emotional reactions to facial expressions and in the emotional processes that modulate the detection of facial expressions.
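As a rough illustration of how the detection and rating measures described above might be analyzed, the sketch below compares median search RTs across expression conditions and correlates participants' subjective ratings with their detection speed, separately by sex. The long-format trial file and all column names are assumptions for illustration; the study's actual analysis pipeline is not specified in the abstract.

    import pandas as pd
    from scipy import stats

    # Hypothetical long-format data: one row per correct search trial, with columns
    # (assumed) participant, sex, expression ('normal-angry', 'normal-happy',
    # 'anti-angry', 'anti-happy'), rt (ms), arousal, valence.
    trials = pd.read_csv("visual_search_trials.csv")

    # Median RT per participant and expression type (medians resist RT outliers).
    rt = (trials.groupby(["participant", "sex", "expression"])["rt"]
                .median().unstack("expression"))

    # Normal vs. anti-expressions, and angry vs. happy normal expressions.
    normal = rt[["normal-angry", "normal-happy"]].mean(axis=1)
    anti = rt[["anti-angry", "anti-happy"]].mean(axis=1)
    print(stats.ttest_rel(normal, anti))                            # normal detected faster?
    print(stats.ttest_rel(rt["normal-angry"], rt["normal-happy"]))  # anger detected faster?

    # Sex differences in the rating-RT relationship: correlate each participant's mean
    # arousal and valence ratings with their overall detection speed, separately by sex.
    per_p = trials.groupby(["participant", "sex"]).agg(
        rt=("rt", "median"), arousal=("arousal", "mean"), valence=("valence", "mean")
    ).reset_index()
    for sex, grp in per_p.groupby("sex"):
        print(sex, "arousal vs. RT:", stats.pearsonr(grp["arousal"], grp["rt"]))
        print(sex, "valence vs. RT:", stats.pearsonr(grp["valence"], grp["rt"]))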

3.
Psychophysiological experiments were performed on 34 healthy subjects. We analyzed the accuracy and latency of motor responses in recognizing two types of complex visual stimuli, animals and objects, which were presented immediately after a brief presentation of face images with different emotional expressions: anger, fear, happiness, and a neutral expression. Response latency depended on the emotional expression of the masked face: latencies were shorter when the test stimuli were preceded by angry or fearful faces than by happy or neutral faces. These effects depended on the type of stimulus and were more pronounced when recognizing objects than animals. The effects of emotional faces were related to personality traits of the subjects, as assessed by the emotional and communicative blocks of Cattell's test, and were more pronounced in more sensitive, anxious, and pessimistic introverts. The mechanisms by which unconsciously perceived emotional information affects human visual behavior are discussed.

4.
It has been established that the recognition of facial expressions integrates contextual information. In this study, we aimed to clarify the influence of contextual odors. The participants were asked to match a target face varying in expression intensity with non-ambiguous expressive faces. Intensity variations in the target faces were created by morphing expressive faces with neutral faces. In addition, the influence of verbal information was assessed by providing half the participants with the emotion names. Odor cues were manipulated by placing participants in a pleasant (strawberry), aversive (butyric acid), or no-odor control context. The results showed two main effects of the odor context. First, the minimum amount of visual information required to perceive an expression was lowered when the odor context was emotionally congruent: happiness was correctly perceived at lower intensities in the faces displayed in the pleasant odor context, and the same phenomenon occurred for disgust and anger in the aversive odor context. Second, the odor context influenced the false perception of expressions that were not used in the target faces, with distinct patterns according to the presence of emotion names. When emotion names were provided, the aversive odor context decreased intrusions for ambiguous disgust faces but increased them for anger. When the emotion names were not provided, this effect did not occur and the pleasant odor context elicited an overall increase in intrusions for negative expressions. We conclude that olfaction plays a role in the way facial expressions are perceived, in interaction with other contextual influences such as verbal information.
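Intensity-graded target faces of the kind described here are commonly produced by interpolating between a neutral and a fully expressive photograph of the same person. The sketch below shows the simplest version, pixel-wise alpha blending of pre-aligned images; published morphing procedures typically also warp facial landmarks, and the file names used are placeholders.

    import numpy as np
    from PIL import Image

    def morph(neutral_path: str, expressive_path: str, intensity: float) -> Image.Image:
        """Blend a neutral and an expressive face; intensity 0.0 = neutral, 1.0 = full expression.

        Assumes the two photographs are already spatially aligned and equally sized;
        landmark-based warping (used by standard morphing software) is omitted here.
        """
        neutral = np.asarray(Image.open(neutral_path).convert("RGB"), dtype=np.float32)
        expressive = np.asarray(Image.open(expressive_path).convert("RGB"), dtype=np.float32)
        blended = (1.0 - intensity) * neutral + intensity * expressive
        return Image.fromarray(blended.astype(np.uint8))

    # Hypothetical usage: a 10% to 100% intensity continuum for a happy expression.
    for pct in range(10, 101, 10):
        morph("actor01_neutral.png", "actor01_happy.png", pct / 100).save(
            f"actor01_happy_{pct:03d}.png")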

5.
The rapid detection of emotional signals from facial expressions is fundamental for human social interaction. The personality factor of neuroticism modulates the processing of various types of emotional facial expressions; however, its effect on the detection of emotional facial expressions remains unclear. In this study, participants with high and low neuroticism scores performed a visual search task to detect normal expressions of anger and happiness, and their anti-expressions, within a crowd of neutral expressions. Anti-expressions contained an amount of visual change equivalent to that found in normal expressions relative to neutral expressions, but they were usually recognized as neutral expressions. Subjective emotional ratings in response to each facial expression stimulus were also obtained. Participants with high neuroticism showed an overall delay in the detection of target facial expressions compared to participants with low neuroticism. Additionally, the high-neuroticism group showed higher levels of arousal to facial expressions compared to the low-neuroticism group. These data suggest that neuroticism modulates the detection of emotional facial expressions in healthy participants; high levels of neuroticism delay the overall detection of facial expressions and enhance emotional arousal in response to facial expressions.

6.
A number of recent studies have demonstrated superior visual processing when information is distributed across the left and right visual fields compared with when it is presented within a single hemifield (the bilateral field advantage). This effect is thought to reflect independent attentional resources in the two hemifields and the capacity of the neural responses to the left and right hemifields to process visual information in parallel. Here, we examined whether a bilateral field advantage can also be observed in a high-level visual task that requires the information from both hemifields to be combined. To this end, we used a visual enumeration task, which requires the assimilation of separate visual items into a single quantity, in which the to-be-enumerated items were either presented in one hemifield or distributed between the two visual fields. We found that the enumeration of large numbers (>4 items), but not of small numbers (<4 items), exhibited the bilateral field advantage: enumeration was more accurate when the visual items were split between the left and right hemifields than when they were all presented within the same hemifield. Control experiments further showed that this effect could not be attributed to a horizontal alignment advantage of the items in the visual field, or to a retinal stimulation difference between the unilateral and bilateral displays. These results suggest that a bilateral field advantage can arise when the visual task involves inter-hemispheric integration. This is in line with previous research and theory indicating that, when the visual task is attentionally demanding, parallel processing by the neural responses to the left and right hemifields can expand the capacity of visual information processing.
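A minimal sketch of the central accuracy comparison, unilateral versus bilateral displays crossed with small versus large numerosities, is given below. The trial-file layout and column names are assumed for illustration and do not reproduce the authors' own analysis code.

    import pandas as pd
    from scipy import stats

    # Hypothetical trial data: one row per enumeration trial, with the display type
    # ('unilateral' or 'bilateral'), the set size, and whether the response was correct.
    trials = pd.read_csv("enumeration_trials.csv")
    trials["range"] = trials["set_size"].apply(lambda n: "large (>4)" if n > 4 else "small (<=4)")

    # Mean accuracy per participant, numerosity range, and display type.
    acc = (trials.groupby(["participant", "range", "display"])["correct"]
                 .mean()
                 .unstack("display"))

    # Bilateral field advantage = bilateral minus unilateral accuracy, tested per range.
    for rng, grp in acc.groupby(level="range"):
        t, p = stats.ttest_rel(grp["bilateral"], grp["unilateral"])
        print(f"{rng}: advantage = {(grp['bilateral'] - grp['unilateral']).mean():.3f}, "
              f"t = {t:.2f}, p = {p:.4f}")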

7.
Pell MD, Kotz SA. PLoS ONE. 2011;6(11):e27256
How quickly do listeners recognize emotions from a speaker's voice, and does the time course for recognition vary by emotion type? To address these questions, we adapted the auditory gating paradigm to estimate how much vocal information is needed for listeners to categorize five basic emotions (anger, disgust, fear, sadness, happiness) and neutral utterances produced by male and female speakers of English. Semantically anomalous pseudo-utterances (e.g., The rivix jolled the silling) conveying each emotion were divided into seven gate intervals according to the number of syllables that listeners heard from sentence onset. Participants (n = 48) judged the emotional meaning of stimuli presented at each gate duration interval, in a successive, blocked presentation format. Analyses examined how recognition of each emotion evolves as an utterance unfolds and estimated the “identification point” for each emotion. Results showed that anger, sadness, fear, and neutral expressions are recognized more accurately at short gate intervals than happiness, and particularly disgust; however, as speech unfolds, recognition of happiness improves significantly towards the end of the utterance (and fear is recognized more accurately than other emotions). When the gate associated with the emotion identification point of each stimulus was calculated, the data indicated that fear (M = 517 ms), sadness (M = 576 ms), and neutral (M = 510 ms) expressions were identified from shorter acoustic events than the other emotions. These data reveal differences in the underlying time course for conscious recognition of basic emotions from vocal expressions, which should be accounted for in studies of emotional speech processing.
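The notion of an "identification point" can be made concrete with a small sketch: for each stimulus, find the earliest gate from which the listener's categorization is correct and remains correct through all remaining gates. The data layout below is invented for illustration; the abstract does not specify the authors' exact scoring procedure.

    import pandas as pd

    # Hypothetical responses: one row per listener x stimulus x gate (1-7),
    # with the emotion the listener chose at that gate and the intended emotion.
    resp = pd.DataFrame({
        "listener": [1] * 7,
        "stimulus": ["anger_01"] * 7,
        "gate":     [1, 2, 3, 4, 5, 6, 7],
        "choice":   ["fear", "anger", "fear", "anger", "anger", "anger", "anger"],
        "intended": ["anger"] * 7,
    })

    def identification_point(df: pd.DataFrame) -> float:
        """Earliest gate after which every response matches the intended emotion."""
        df = df.sort_values("gate")
        correct = (df["choice"] == df["intended"]).to_numpy()
        # Walk backwards: the identification point is the first gate of the final
        # unbroken run of correct responses (NaN if the last gate is still wrong).
        if not correct[-1]:
            return float("nan")
        idx = len(correct) - 1
        while idx > 0 and correct[idx - 1]:
            idx -= 1
        return float(df["gate"].iloc[idx])

    points = resp.groupby(["listener", "stimulus"]).apply(identification_point)
    print(points)  # anger_01 -> 4.0 in this toy example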

8.
A recent functional magnetic resonance imaging (fMRI) study by our group demonstrated that dynamic emotional faces are more accurately recognized and evoke more widespread patterns of hemodynamic brain responses than static emotional faces. Based on this experimental design, the present study aimed at investigating the spatio-temporal processing of static and dynamic emotional facial expressions in 19 healthy women by means of multi-channel electroencephalography (EEG), event-related potentials (ERP), and fMRI-constrained regional source analyses. ERP analysis showed an increased amplitude of the LPP (late posterior positivity) over centro-parietal regions for static facial expressions of disgust compared to neutral faces. In addition, the LPP was more widespread and temporally prolonged for dynamic compared to static faces of disgust and happiness. fMRI-constrained source analysis of static emotional face stimuli indicated a spatio-temporal modulation of predominantly posterior regional brain activation related to the visual processing stream, for both emotional valences compared to the neutral condition, in the fusiform gyrus. The spatio-temporal processing of dynamic stimuli yielded enhanced source activity for emotional compared to neutral conditions in temporal (e.g., fusiform gyrus) and frontal regions (e.g., ventromedial prefrontal cortex, medial and inferior frontal cortex) in early and again in later time windows. The present data support the view that dynamic facial displays convey more information, reflected in complex neural networks, in particular because their changing features potentially trigger sustained activation related to a continuing evaluation of those faces. A combined fMRI and EEG approach thus provides advanced insight into the spatio-temporal characteristics of emotional face processing, by revealing additional neural generators that are not identifiable with fMRI alone.

9.

Background

Computer-generated virtual faces are becoming increasingly realistic, including in their simulation of emotional expressions. These faces can be used as well-controlled, realistic, and dynamic stimuli in emotion research. However, the validity of virtual facial expressions in comparison to natural emotion displays still needs to be shown for different emotions and different age groups.

Methodology/Principal Findings

Thirty-two healthy volunteers between the ages of 20 and 60 rated pictures of natural human faces and faces of virtual characters (avatars) with respect to the expressed emotions: happiness, sadness, anger, fear, disgust, and neutral. Results indicate that virtual emotions were recognized comparably to natural ones. Recognition differences between virtual and natural faces depended on the specific emotion: whereas disgust was difficult to convey with the current avatar technology, virtual sadness and fear achieved better recognition results than natural faces. Furthermore, emotion recognition rates decreased for virtual but not natural faces in participants over the age of 40. This specific age effect suggests that media exposure has an influence on emotion recognition.

Conclusions/Significance

Virtual and natural facial displays of emotion may be equally effective. Improved technology (e.g., better modelling of the naso-labial area) may lead to even better results than those obtained with trained actors. Given the ease with which virtual human faces can be animated and manipulated, validated artificial emotional expressions will be of major relevance in future research and therapeutic applications.

10.
Our knowledge about affective processes, especially concerning their effects on cognitive demands such as word processing, is increasing steadily. Several studies consistently document valence and arousal effects, and although there is some debate on possible interactions and on different notions of valence, broad agreement on a two-dimensional model of affective space has been achieved. Alternative models such as discrete emotion theory have received little interest in word recognition research so far. Using backward elimination and multiple regression analyses, we show that five discrete emotions (i.e., happiness, disgust, fear, anger, and sadness) explain as much variance as two published dimensional models assuming continuous or categorical valence, with the variables happiness, disgust, and fear contributing significantly to this account. Moreover, these effects persist even in an experiment with discrete emotion conditions in which the stimuli are controlled for emotional valence and arousal levels. We interpret this result as evidence for discrete emotion effects in visual word recognition that cannot be explained by the two-dimensional affective space account.
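The model comparison described here, whether five discrete-emotion ratings explain word-recognition performance as well as dimensional valence and arousal ratings, can be sketched with two ordinary least-squares fits. The column names and the use of response time as the outcome are assumptions for illustration, not the authors' actual analysis script.

    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical item-level data: one row per word with mean lexical-decision RT
    # and per-word ratings for valence, arousal, and five discrete emotions.
    words = pd.read_csv("word_ratings_rt.csv")

    dimensional = smf.ols("rt ~ valence + arousal", data=words).fit()
    discrete = smf.ols("rt ~ happiness + disgust + fear + anger + sadness",
                       data=words).fit()

    # Compare explained variance (adjusted R^2 penalizes the extra predictors).
    print("dimensional  adj. R^2:", round(dimensional.rsquared_adj, 3))
    print("discrete     adj. R^2:", round(discrete.rsquared_adj, 3))
    print(discrete.summary())  # which discrete emotions contribute significantly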

11.
It has been shown that dominant individuals sustain eye contact when non-consciously confronted with angry faces, suggesting reflexive mechanisms underlying dominance behaviors. However, dominance and submission can be conveyed and provoked by means of not only facial but also bodily features. So far, few studies have investigated the interplay of body postures with personality traits and behavior, despite the biological relevance and ecological validity of these postures. Here we investigate whether non-conscious exposure to bodily expressions of anger evokes reflex-like dominance behavior. In an interactive eye-tracking experiment, thirty-two participants completed three social dominance tasks with angry, happy, and neutral facial, bodily, and face-and-body compound expressions that were masked from consciousness. We confirmed our predictions of slower gaze aversion from non-conscious bodily and compound expressions of anger, compared to happiness, in highly dominant individuals. Results from a follow-up experiment suggest that the dominance behavior triggered by exposure to bodily anger occurs with basic detection of the category, but not recognition of the emotional content. Together these results suggest that dominant staring behavior is reflexively driven by non-conscious perception of emotional content and is triggered not only by facial but also by bodily expressions of anger.

12.
A correlation between characteristics of visual evoked potentials and individual personality traits (assessed with Cattell's scale) was found in 40 healthy subjects as they recognized facial expressions of anger and fear. Compared to emotionally stable subjects, emotionally unstable subjects had shorter latencies of evoked potentials and suppressed late negativity in the occipital and temporal areas, whereas the amplitude of these waves in the frontal areas was increased. In the emotionally stable group, differences in the evoked potentials related to emotional expressions were evident throughout signal processing, beginning at the early sensory stage (P1 wave). In the emotionally unstable group, differences in the evoked potentials related to the recognized emotional expressions developed later. Sensitivity of the evoked potentials to the emotional salience of faces was also more pronounced in the emotionally stable group. The involvement of the frontal cortex, amygdala, and anterior cingulate cortex in the development of individual features of the recognition of facial expressions of anger and fear is discussed.

13.
Rigoulot S, Pell MD. PLoS ONE. 2012;7(1):e30740
Interpersonal communication involves the processing of multimodal emotional cues, particularly facial expressions (visual modality) and emotional speech prosody (auditory modality) which can interact during information processing. Here, we investigated whether the implicit processing of emotional prosody systematically influences gaze behavior to facial expressions of emotion. We analyzed the eye movements of 31 participants as they scanned a visual array of four emotional faces portraying fear, anger, happiness, and neutrality, while listening to an emotionally-inflected pseudo-utterance (Someone migged the pazing) uttered in a congruent or incongruent tone. Participants heard the emotional utterance during the first 1250 milliseconds of a five-second visual array and then performed an immediate recall decision about the face they had just seen. The frequency and duration of first saccades and of total looks in three temporal windows ([0-1250 ms], [1250-2500 ms], [2500-5000 ms]) were analyzed according to the emotional content of faces and voices. Results showed that participants looked longer and more frequently at faces that matched the prosody in all three time windows (emotion congruency effect), although this effect was often emotion-specific (with greatest effects for fear). Effects of prosody on visual attention to faces persisted over time and could be detected long after the auditory information was no longer present. These data imply that emotional prosody is processed automatically during communication and that these cues play a critical role in how humans respond to related visual cues in the environment, such as facial expressions.

14.
Two experiments were run to examine the effects of dynamic displays of facial expressions of emotions on time judgments. The participants were given a temporal bisection task with emotional facial expressions presented in a dynamic or a static display. Two emotional facial expressions and a neutral expression were tested and compared. Each of the emotional expressions had the same affective valence (unpleasant), but one was high-arousing (expressing anger) and the other low-arousing (expressing sadness). Our results showed that time judgments are highly sensitive to movements in facial expressions and the emotions expressed. Indeed, longer perceived durations were found in response to the dynamic faces and the high-arousing emotional expressions compared to the static faces and low-arousing expressions. In addition, the facial movements amplified the effect of emotions on time perception. Dynamic facial expressions are thus interesting tools for examining variations in temporal judgments in different social contexts.
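In temporal bisection, the standard dependent measure is the bisection point: the duration judged "long" on 50% of trials, obtained by fitting a psychometric function to the proportion of "long" responses. A minimal logistic fit is sketched below; the durations and response proportions are invented, and the study's own curve-fitting details are not given in the abstract.

    import numpy as np
    from scipy.optimize import curve_fit

    # Hypothetical data for one participant in one condition (e.g., dynamic angry faces):
    # comparison durations (ms) and the proportion of trials judged "long".
    durations = np.array([400, 600, 800, 1000, 1200, 1400, 1600], dtype=float)
    p_long = np.array([0.05, 0.15, 0.35, 0.55, 0.80, 0.90, 0.97])

    def logistic(t, bisection_point, slope):
        """Proportion of 'long' responses as a function of duration t."""
        return 1.0 / (1.0 + np.exp(-slope * (t - bisection_point)))

    (bp, slope), _ = curve_fit(logistic, durations, p_long, p0=[1000.0, 0.01])
    print(f"bisection point = {bp:.0f} ms, slope = {slope:.4f}")
    # A lower bisection point for dynamic or high-arousal faces would indicate that
    # those durations are perceived as longer (they are judged "long" sooner).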

15.
We propose to use a comprehensive path model of vocal emotion communication, encompassing encoding, transmission, and decoding processes, to empirically model data sets on emotion expression and recognition. The utility of the approach is demonstrated for two data sets from two different cultures and languages, based on corpora of vocal emotion enactment by professional actors and emotion inference by naïve listeners. Lens model equations, hierarchical regression, and multivariate path analysis are used to compare the relative contributions of objectively measured acoustic cues in the enacted expressions and subjective voice cues as perceived by listeners to the variance in emotion inference from vocal expressions for four emotion families (fear, anger, happiness, and sadness). While the results confirm the central role of arousal in vocal emotion communication, the utility of applying an extended path modeling framework is demonstrated by the identification of unique combinations of distal cues and proximal percepts carrying information about specific emotion families, independent of arousal. The statistical models generated show that more sophisticated acoustic parameters need to be developed to explain the distal underpinnings of subjective voice quality percepts that account for much of the variance in emotion inference, in particular voice instability and roughness. The general approach advocated here, as well as the specific results, open up new research strategies for work in psychology (specifically emotion and social perception research) and engineering and computer science (specifically research and development in the domain of affective computing, particularly on automatic emotion detection and synthetic emotion expression in avatars).
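At its core, the lens-model analysis regresses listeners' emotion inferences both on objectively measured distal acoustic cues and on subjectively rated proximal voice percepts, and compares how much variance each side of the lens captures. A schematic version using ordinary regression is sketched below; all variable names are placeholders and the paper's full multivariate path models are not reproduced.

    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical stimulus-level data: each row is one vocal portrayal, with
    # measured acoustic cues (distal), mean listener ratings of voice quality
    # (proximal percepts), and the mean inferred intensity of one emotion family.
    stimuli = pd.read_csv("vocal_portrayals.csv")

    # Ecological side of the lens: distal acoustic cues -> inferred emotion.
    distal = smf.ols("inferred_fear ~ mean_f0 + f0_sd + intensity_db + speech_rate",
                     data=stimuli).fit()

    # Perceptual side of the lens: proximal voice percepts -> inferred emotion.
    proximal = smf.ols("inferred_fear ~ perceived_pitch + perceived_loudness "
                       "+ perceived_instability + perceived_roughness",
                       data=stimuli).fit()

    print("distal cues   R^2:", round(distal.rsquared, 3))
    print("proximal cues R^2:", round(proximal.rsquared, 3))
    # In the lens-model framework, a large proximal-minus-distal gap suggests the
    # acoustic parameter set does not yet capture what listeners actually hear.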

16.
We examined the influence of affective priming on the appreciation of abstract artworks using an evaluative priming task. Facial primes (showing happiness, disgust, or no emotion) were presented under brief (Stimulus Onset Asynchrony, SOA = 20 ms) and extended (SOA = 300 ms) conditions. Differences in aesthetic liking for abstract paintings depending on the emotion expressed in the preceding primes provided a measure of the priming effect. The results showed that, for the extended SOA, artworks were liked more when preceded by happiness primes and less when preceded by disgust primes. Facial expressions of happiness, though not of disgust, exerted similar effects in the brief SOA condition. Subjective measures and a forced-choice task revealed no evidence of prime awareness in the suboptimal condition. Our results are congruent with findings showing that the affective transfer elicited by priming biases evaluative judgments, extending previous research to the domain of aesthetic appreciation.

17.
Many authors have proposed that facial expressions, by conveying the emotional states of the person we are interacting with, influence interaction behavior. We aimed to verify how specifically an individual's emotional facial expressions (both their valence and their relevance/specificity for the purpose of the action) affect how an action directed at that individual is executed. In addition, we investigated whether and how the effects of emotions on action execution are modulated by participants' empathic attitudes. We used a kinematic approach to analyze the simulation of feeding others, which consisted of recording the "feeding trajectory" with a computer mouse. Actors could express different highly arousing emotions, namely happiness, disgust, anger, or a neutral expression. Response time was sensitive to the interaction between the valence and the relevance/specificity of the emotion: disgust caused faster responses. In addition, happiness induced slower feeding times and longer times to peak velocity, but only in blocks where it alternated with expressions of disgust. The kinematic profiles described how the effect of the specificity of the emotional context for feeding, namely a modulation of accuracy requirements, occurs. An early acceleration in kinematic relative-to-neutral feeding profiles occurred when actors expressed positive emotions (happiness) in blocks with specific-to-feeding negative emotions (disgust). On the other hand, the end of the action was slower when feeding happy faces compared with neutral faces, confirming the increase in accuracy requirements and motor control. These kinematic effects were modulated by participants' empathic attitudes. In conclusion, the social dimension of emotions, that is, their ability to modulate others' action planning and execution, strictly depends on their relevance and specificity to the purpose of the action. This finding argues against a strict distinction between social and nonsocial emotions.

18.
Life satisfaction refers to a somewhat stable cognitive assessment of one's own life. Life satisfaction is an important component of subjective well-being, the scientific term for happiness; the other component is affect, the balance between positive and negative emotions in daily life. While affect has been studied using social media datasets (particularly from Twitter), life satisfaction has received little to no attention. Here, we examine trends in posts about life satisfaction from a two-year sample of Twitter data. We apply a surveillance methodology to extract expressions of both satisfaction and dissatisfaction with life. A noteworthy result is that, consistent with their definitions, trends in life satisfaction posts are immune to external events (political, seasonal, etc.), unlike the affect trends reported by previous researchers. Comparing users, we find differences between satisfied and dissatisfied users in several linguistic, psychosocial, and other features; for example, the latter post more tweets expressing anger, anxiety, depression, and sadness, and more tweets about death. We also study users who change their status over time from satisfied with life to dissatisfied, or vice versa. Notably, the psychosocial tweet features of users who change from satisfied to dissatisfied are quite different from those of users who stay satisfied over time. Overall, our observations are consistent with intuition and with observations in social science research. This research contributes to the study of the subjective well-being of individuals through social media.
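The surveillance methodology, extracting explicit first-person expressions of satisfaction or dissatisfaction with life from tweets, can be sketched with simple pattern matching. The patterns below are illustrative guesses rather than the authors' actual lexicon, and a real pipeline would also need to handle negation, sarcasm, and retweets.

    import re

    # Illustrative first-person patterns; the published study's lexicon is not shown here.
    SATISFIED = re.compile(
        r"\bi('m| am)\s+(so\s+|very\s+|really\s+)?(satisfied|happy|content)\s+with\s+my\s+life\b",
        re.IGNORECASE)
    DISSATISFIED = re.compile(
        r"\bi('m| am)\s+(so\s+|very\s+|really\s+)?(not\s+(satisfied|happy)|unhappy|dissatisfied)"
        r"\s+with\s+my\s+life\b",
        re.IGNORECASE)

    def label_tweet(text: str) -> str | None:
        """Return 'satisfied', 'dissatisfied', or None for tweets that match neither."""
        if DISSATISFIED.search(text):      # check the negated form first
            return "dissatisfied"
        if SATISFIED.search(text):
            return "satisfied"
        return None

    tweets = [
        "Honestly, I am so satisfied with my life right now.",
        "I'm not happy with my life at all these days.",
        "Watching the game tonight!",
    ]
    print([label_tweet(t) for t in tweets])  # ['satisfied', 'dissatisfied', None]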

19.
Emotional facial expressions provide important nonverbal cues in human interactions. The perception of emotions is not only influenced by a person's ethnic background but also depends on whether the person is engaged with the emotion encoder. Although these factors are known to affect emotion perception, their impact has only been studied in isolation. The aim of the present study was to investigate their combined influence. Thus, in order to study the influence of engagement on emotion perception between persons of different ethnicities, we compared participants from China and Germany. Asian-looking and European-looking virtual agents expressed anger and happiness while gazing at the participant or at another person. Participants had to assess the perceived valence of the emotional expressions. Results indicate that these two factors, each known to have a considerable influence on emotion perception, interacted in their combined effect: the perceived intensity of an emotion expressed by ethnic in-group members was in most cases independent of gaze direction, whereas gaze direction did influence the emotion perception of ethnic out-group members. Additionally, participants from the ethnic out-group tended to perceive emotions as more pronounced than participants from the ethnic in-group when they were directly gazed at. These findings suggest that gaze direction has a differential influence on ethnic in-group and ethnic out-group dynamics during emotion perception.

20.
A key to understanding visual cognition is to determine when, how, and with what information the human brain distinguishes between visual categories. So far, the dynamics of information processing for categorization of visual stimuli has not been elucidated. By using an ecologically important categorization task (seven expressions of emotion), we demonstrate, in three human observers, that an early brain event (the N170 Event Related Potential, occurring 170 ms after stimulus onset) integrates visual information specific to each expression, according to a pattern. Specifically, starting 50 ms prior to the ERP peak, facial information tends to be integrated from the eyes downward in the face. This integration stops, and the ERP peaks, when the information diagnostic for judging a particular expression has been integrated (e.g., the eyes in fear, the corners of the nose in disgust, or the mouth in happiness). Consequently, the duration of information integration from the eyes down determines the latency of the N170 for each expression (e.g., with "fear" being faster than "disgust," itself faster than "happy"). For the first time in visual categorization, we relate the dynamics of an important brain event to the dynamics of a precise information-processing function.
