Similar Literature (20 results found)
1.
There is a growing body of literature showing that color can convey information, owing to its emotionally meaningful associations. Most research so far has focused on negative hue–meaning associations (e.g., red), with the exception of the positive aspects associated with green. We therefore set out to investigate the positive associations of two colors (green and pink), using an emotional facial expression recognition task in which colors provided the emotional contextual information for face processing. In two experiments, green and pink backgrounds enhanced happy face recognition and impaired sad face recognition, compared with a control color (gray). Our findings therefore suggest that because green and pink both convey positive information, they facilitate the processing of emotionally congruent facial expressions (i.e., faces expressing happiness) and interfere with that of incongruent facial expressions (i.e., faces expressing sadness). Data also revealed a positive association for white. Results are discussed within the theoretical framework of emotional cue processing and color meaning.

2.
The effect of increasing working memory load (introduced as an additional cognitive task in the experimental context) on the recognition of emotional facial expressions was studied in healthy adults using a visual set paradigm. A link between the plasticity of the cognitive set to emotional facial expression and working memory was revealed: an increase in working memory load was associated with delayed set shifting in a modified situation. The set became more rigid, which manifested as a growing number of trials with erroneous assessments of facial expression in the form of contrast or assimilative illusions. The significance of inner states and priming for understanding the psychophysiological mechanisms of these erroneous assessments under working memory load is discussed in terms of the integration of bottom-up and top-down streams.

3.
There is ample evidence that many types of visual information, including emotional information, can be processed in the absence of visual awareness. For example, masked subliminal facial expressions can induce priming and adaptation effects. However, stimuli made invisible in different ways may be processed to different extents and have differential effects. In this study, we adopted a flanker-type behavioral method to investigate whether a flanker rendered invisible through Continuous Flash Suppression (CFS) could induce a congruency effect on the discrimination of a visible target. Specifically, participants judged the expression (happy or fearful) of a visible face in the presence of a nearby invisible face (with a happy or fearful expression). Participants were slower and less accurate in discriminating the expression of the visible face when the expression of the invisible flanker face was incongruent. Thus, facial expression information rendered invisible with CFS and presented at a different spatial location can enhance or interfere with consciously processed facial expression information.

4.
In recent years, real-time face recognition has been a major topic of interest in developing intelligent human-machine interaction systems. Over the past several decades, researchers have proposed different algorithms for facial expression recognition, but there has been little focus on detection in real-time scenarios. The present work proposes a new algorithmic method of automated marker placement used to classify six facial expressions: happiness, sadness, anger, fear, disgust, and surprise. Emotional facial expressions were captured using a webcam, while the proposed algorithm placed a set of eight virtual markers on each subject’s face. Facial feature extraction methods, including marker distance (distance between each marker to the center of the face) and change in marker distance (change in distance between the original and new marker positions), were used to extract three statistical features (mean, variance, and root mean square) from the real-time video sequence. The initial position of each marker was subjected to the optical flow algorithm for marker tracking with each emotional facial expression. Finally, the extracted statistical features were mapped into corresponding emotional facial expressions using two simple non-linear classifiers, K-nearest neighbor and probabilistic neural network.  The results indicate that the proposed automated marker placement algorithm effectively placed eight virtual markers on each subject’s face and gave a maximum mean emotion classification rate of 96.94% using the probabilistic neural network.
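As a rough illustration of the feature-extraction step this abstract describes, the sketch below computes each marker's distance to the face center and the three statistics (mean, variance, root mean square) over a sequence of frames. The function names and input format are illustrative assumptions, not the authors' implementation; the marker tracking itself (optical flow) and the classifiers are omitted.

```python
import math

def marker_distances(markers, face_center):
    """Euclidean distance from each virtual marker (x, y) to the face center."""
    cx, cy = face_center
    return [math.hypot(x - cx, y - cy) for x, y in markers]

def statistical_features(series):
    """Mean, variance, and root mean square of one marker's
    per-frame distance values (the three features in the abstract)."""
    n = len(series)
    mean = sum(series) / n
    var = sum((v - mean) ** 2 for v in series) / n
    rms = math.sqrt(sum(v * v for v in series) / n)
    return mean, var, rms
```

Applied to eight markers, this yields a 24-dimensional feature vector per video sequence, which could then be fed to a K-nearest-neighbor or probabilistic-neural-network classifier.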

5.
Using a cognitive set to emotional facial expression as a model, induced synchronization/desynchronization of cortical theta and alpha activity was studied in healthy adults under increased working memory load (an additional verbal stimulus recognition task). A correlation was found between the behavioral data (increased set rigidity) and the electrophysiological data (decreased induced theta-rhythm synchronization). It is hypothesized that the previously reported increase in tonic prestimulus theta activity and the suppression of poststimulus phasic activation of the cortico-hippocampal system are among the mechanisms underlying the reduced plasticity of emotional facial expression recognition under increased working memory load. Reciprocal relations between two functional systems of brain activity integration (cortico-hippocampal and fronto-thalamic) during the recognition of emotional facial expression are discussed.

6.
The influence of additional working memory load on emotional face recognition was studied in healthy adults. A visual set to emotional facial expression was experimentally formed, and two types of additional task (visual-spatial or semantic) were embedded in the experiment. The additional task made the set less plastic, i.e., slowed set-shifting, which manifested as an increase in erroneous perceptions of facial expression. The character of these erroneous perceptions (assimilative or contrast visual illusions) depended on the type of additional task. Pre-stimulus EEG coherence across experimental trials was analysed in the theta (4-7 Hz), low-alpha (8-10 Hz), and beta (14-20 Hz) bands. The low-alpha and beta coherence data supported the hypothesis that increased memory load reduces the involvement of the frontal lobes in the selective attention mechanisms associated with set formation, resulting in slower set-shifting. Increased memory load also led to a growth of theta-band coherence in the left hemisphere and its decrease in the right hemisphere. The contribution of decreased right-hemisphere theta coherence between prefrontal and temporal areas to slower set-shifting is discussed.

7.
Facial self-resemblance has been proposed to serve as a kinship cue that facilitates cooperation between kin. In the present study, facial resemblance was manipulated by morphing stimulus faces with the participants' own faces or control faces (resulting in self-resemblant or other-resemblant composite faces). A norming study showed that the perceived degree of kinship was higher for the participants and the self-resemblant composite faces than for actual first-degree relatives. Effects of facial self-resemblance on trust and cooperation were tested in a paradigm that has proven to be sensitive to facial trustworthiness, facial likability, and facial expression. First, participants played a cooperation game in which the composite faces were shown. Then, likability ratings were assessed. In a source memory test, participants were required to identify old and new faces, and were asked to remember whether the faces belonged to cooperators or cheaters in the cooperation game. Old-new recognition was enhanced for self-resemblant faces in comparison to other-resemblant faces. However, facial self-resemblance had no effects on the degree of cooperation in the cooperation game, on the emotional evaluation of the faces as reflected in the likability judgments, or on the expectation that a face belonged to a cooperator rather than to a cheater. Therefore, the present results are clearly inconsistent with the assumption of an evolved kin recognition module built into the human face recognition system.

8.
The recognition of basic emotions in everyday communication involves the interpretation of different visual and auditory cues. The ability to recognize emotions is difficult to assess, as emotional displays are usually very brief (micro-expressions), and recognition itself need not be a conscious process. We assumed that recognition of emotions from facial expressions would be favored over recognition of emotions communicated through music. To compare success rates in recognizing emotions presented as facial expressions or in classical music works, we conducted a survey of 90 elementary school and 87 high school students from Osijek (Croatia). The participants had to match 8 photographs of different emotions expressed on the face and 8 pieces of classical music with 8 offered emotions. Recognition of emotions expressed through classical music was significantly less successful than recognition of emotional facial expressions. High school students were significantly better at recognizing facial emotions than elementary school students, and girls were better than boys. The success rate in recognizing emotions from music was associated with higher grades in mathematics. Basic emotions are far better recognized when presented on human faces than in music, possibly because understanding facial emotions is one of the oldest communication skills in human society. The female advantage in emotion recognition may have been selected for because of the need to communicate with newborns during early development. Proficiency in recognizing the emotional content of music and mathematical skills probably share general cognitive resources such as attention, memory, and motivation. Music is probably processed differently in the brain than facial expressions and consequently may be evaluated differently as a relevant emotional cue.

9.
A visual set was used as a model to study the influence of increased memory load on the recognition of facial expression in 70 healthy adults. To additionally load working memory, we lengthened the time gap between target (face) and trigger stimuli. Lengthening this gap from 1 to 8 s increased set plasticity (fewer mistakes in facial expression recognition) and also led to shorter reaction times and fewer contrast illusions in recognition. We analyzed theta- and alpha-band EEG changes during individual segments of the time gap and suggest that repeated trials with a certain fixed interval between stimuli form an inner representation of the interval duration. This inner representation up-regulates visual attention when a relevant event (stimulus) is anticipated and down-regulates attention when the stimulus is not expected. In the case of the plastic set, induced EEG synchronization in the alpha band is stronger in trials with correct recognition in the middle of the inter-stimulus time gap. We think this synchronization reflects top-down cognitive control that suppresses the influence of irrelevant information on brain activity. Theta-band dynamics in the inter-stimulus time gap can be associated with the emotional strain caused by having to retain the result of facial expression recognition in memory for several seconds.

10.
In 5- to 6-, 7- to 8-, and 10- to 11-year-old children, age-related features of the effects of former experience on the recognition of emotional facial expressions were found using a cognitive set model. In 5- to 6-year-old children, an inert set to an angry facial expression was formed and was expressed during testing as a large number of erroneous recognitions of facial expressions of the perseverative (assimilative) type of illusion. Set plasticity increased in 7- to 8-year-old children, and the number of assimilative illusions decreased. In 10- to 11-year-old children, the cognitive set was similar to that of adults in terms of its plasticity and the ratio of assimilative to contrast illusions. Changes in the spatial synchronization of electrical potentials in the theta- and alpha-frequency bands were observed in all age groups, mainly during set formation. In all age groups, we observed a correlation between the bioelectrical data and the effects of former experience on the recognition of facial expression. Based on the data on the coherence of theta- and alpha-band potentials, we propose age-related changes in the involvement of the cortico-hippocampal and fronto-thalamic functional systems of brain activity integration in organizing sets to emotionally negative facial expressions.

11.
Humans utilize facial appearance, gender, expression, aging pattern, and other ancillary information to recognize individuals. It is interesting to observe how humans perceive facial age. Analyzing these properties can help in understanding the phenomenon of facial aging, and incorporating the findings can help in designing effective algorithms. Such a study has two components: facial age estimation and age-separated face recognition. Age estimation involves predicting the age of an individual given his/her facial image. Age-separated face recognition, on the other hand, consists of recognizing an individual given his/her age-separated images. In this research, we investigate which facial cues humans utilize for estimating the age of people belonging to various age groups, along with analyzing the effect of one's gender, age, and ethnicity on age estimation skills. We also analyze how various facial regions, such as the binocular and mouth regions, influence age estimation and recognition capabilities. Finally, we propose an age-invariant face recognition algorithm that incorporates the knowledge learned from these observations. Key observations of our research are: (1) the age group of newborns and toddlers is the easiest to estimate; (2) gender and ethnicity do not affect the judgment of age group estimation; (3) the face, as a global feature, is essential to achieve good performance in age-separated face recognition; and (4) the proposed algorithm yields improved recognition performance compared to existing algorithms and also outperforms a commercial system in the young-image-as-probe scenario.

12.
Seeing fearful body expressions activates the fusiform cortex and amygdala
Darwin's evolutionary approach to organisms' emotional states attributes a prominent role to expressions of emotion in whole-body actions. Researchers in social psychology [1,2] and human development [3] have long emphasized the fact that emotional states are expressed through body movement, but cognitive neuroscientists have almost exclusively considered isolated facial expressions (for review, see [4]). Here we used high-field fMRI to determine the underlying neural mechanisms of perception of body expression of emotion. Subjects were presented with short blocks of body expressions of fear alternating with short blocks of emotionally neutral meaningful body gestures. All images had internal facial features blurred out to avoid confounds due to a face or facial expression. We show that exposure to body expressions of fear, as opposed to neutral body postures, activates the fusiform gyrus and the amygdala. The fact that these two areas have previously been associated with the processing of faces and facial expressions [5-8] suggests synergies between facial and body-action expressions of emotion. Our findings open a new area of investigation of the role of body expressions of emotion in adaptive behavior as well as the relation between processes of emotion recognition in the face and in the body.

13.
Reverse simulation models of facial expression recognition suggest that we recognize the emotions of others by running implicit motor programmes responsible for the production of that expression. Previous work has tested this theory by examining facial expression recognition in participants with Möbius sequence, a condition characterized by congenital bilateral facial paralysis. However, a mixed pattern of findings has emerged, and it has not yet been tested whether these individuals can imagine facial expressions, a process also hypothesized to be underpinned by proprioceptive feedback from the face. We investigated this issue by examining expression recognition and imagery in six participants with Möbius sequence, and also carried out tests assessing facial identity and object recognition, as well as basic visual processing. While five of the six participants presented with expression recognition impairments, only one was impaired at the imagery of facial expressions. Further, five participants presented with other difficulties in the recognition of facial identity or objects, or in lower-level visual processing. We discuss the implications of our findings for the reverse simulation model, and suggest that facial identity recognition impairments may be more severe in the condition than has previously been noted.

14.
There is evidence that women are better at recognizing their own and others' emotions. The female advantage in emotion recognition becomes even more apparent under conditions of rapid stimulus presentation. Affective priming paradigms have been developed to examine empirically whether facial emotion stimuli presented outside of conscious awareness color our impressions. It has been observed that masked emotional facial expressions have an affect-congruent influence on subsequent judgments of neutral stimuli. The aim of the present study was to examine the effect of gender on affective priming based on negative and positive facial expressions. In our priming experiment, a sad, happy, neutral, or no facial expression was briefly presented (for 33 ms) and masked by a neutral face that had to be evaluated. 81 young healthy volunteers (53 women) participated in the study. Subjects had no subjective awareness of the emotional primes. Women did not differ from men with regard to age, education, intelligence, trait anxiety, or depressivity. In the whole sample, happy but not sad facial expressions elicited valence-congruent affective priming. Between-group analyses revealed that women manifested greater affective priming due to happy faces than men. Women seem to have a greater ability than men to perceive and respond to positive facial emotion at an automatic processing level. High perceptual sensitivity to minimal social-affective signals may contribute to women's advantage in understanding other persons' emotional states.

15.
Visual adaptation is a powerful tool for probing the short-term plasticity of the visual system. Adapting to local features such as oriented lines can distort our judgment of subsequently presented lines (the tilt aftereffect). The tilt aftereffect is believed to be processed at low levels of the visual cortex, such as V1. Adaptation to faces, on the other hand, can produce significant aftereffects in high-level traits such as identity, expression, and ethnicity. However, whether face adaptation necessitates awareness of face features is debatable. In the current study, we investigated whether facial expression aftereffects (FEAE) can be generated by partially visible faces. We first generated partially visible faces using the bubbles technique, in which the face was seen through randomly positioned circular apertures, and selected the bubbled faces for which subjects were unable to identify happy or sad expressions. When subjects adapted to static displays of these partial faces, no significant FEAE was found. However, when subjects adapted to a dynamic video display of a series of different partial faces, a significant FEAE was observed. In both conditions, subjects could not identify the facial expression in the individual adapting faces. These results suggest that the visual system can integrate unrecognizable partial faces over a short period of time and that the integrated percept affects our judgment of subsequently presented faces. We conclude that FEAE can be generated by partial faces with few facial expression cues, implying either that the cognitive system fills in the missing parts during adaptation, or that subcortical structures are activated by the bubbled faces without conscious recognition of emotion during adaptation.
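The bubbles masking this abstract describes can be sketched roughly as follows: a boolean mask reveals an image only inside randomly positioned circular apertures. The grid size, bubble count, and aperture radius below are illustrative assumptions; the study's actual stimulus parameters are not given here.

```python
import random

def bubble_mask(height, width, n_bubbles, radius, seed=None):
    """Boolean mask (list of rows) that is True only inside randomly
    positioned circular apertures ("bubbles"); applied to a face image,
    it reveals only the regions under the bubbles."""
    rng = random.Random(seed)
    centers = [(rng.randrange(height), rng.randrange(width))
               for _ in range(n_bubbles)]
    return [[any((r - cr) ** 2 + (c - cc) ** 2 <= radius ** 2
                 for cr, cc in centers)
             for c in range(width)]
            for r in range(height)]
```

A dynamic adapting display, as in the study's video condition, would correspond to regenerating the mask with a fresh seed on each frame so that different face regions are revealed over time.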

16.
Lee TH, Choi JS, Cho YS. PLoS One. 2012;7(3):e32987

Background

Certain facial configurations are believed to be associated with distinct affective meanings (i.e., basic facial expressions), and such associations are common across cultures (i.e., the universality of facial expressions). However, many recent studies suggest that various types of contextual information, rather than the facial configuration itself, are an important factor in facial emotion perception.

Methodology/Principal Findings

To examine systematically how contextual information influences individuals’ facial emotion perception, the present study directly estimated observers’ perceptual thresholds for detecting negative facial expressions via a forced-choice psychophysical procedure using faces embedded in various emotional contexts. We additionally measured individual differences in affective information-processing tendency (BIS/BAS) as a possible factor determining the extent to which contextual information is used in facial emotion perception. Contextual information influenced observers' perceptual thresholds for facial emotion. Importantly, individuals’ affective information-processing tendencies modulated the extent to which they incorporated context information into their facial emotion perceptions.

Conclusions/Significance

The findings of this study suggest that facial emotion perception depends not only on facial configuration but also on the context in which the face appears, and that this contextual influence varies with individuals’ information-processing characteristics. In summary, we conclude that individual character traits, as well as facial configuration and the context in which a face appears, need to be taken into consideration in facial emotion perception.

17.
Many people experience transient difficulties in recognizing faces, but only a small number of them cannot recognize their family members when meeting them unexpectedly. Such face blindness is associated with serious problems in everyday life. A better understanding of the neuro-functional basis of impaired face recognition may be achieved by a careful comparison with an equally unique object category and by adding a more realistic setting involving neutral faces as well as facial expressions. We used event-related functional magnetic resonance imaging (fMRI) to investigate the neuro-functional basis of perceiving faces and bodies in three developmental prosopagnosics (DPs) and matched healthy controls. Our approach involved materials consisting of neutral faces and bodies as well as faces and bodies expressing fear or happiness. The first main result is that the presence of emotional information has a different effect in the patient vs. the control group in the fusiform face area (FFA): neutral faces trigger lower activation in the DP group compared to the control group, while activation for facial expressions is the same in both groups. The second main result is that, compared to controls, DPs have increased activation for bodies in the inferior occipital gyrus (IOG) and for neutral faces in the extrastriate body area (EBA), indicating that body- and face-sensitive processes are less categorically segregated in DP. Taken together, our study shows the importance of using naturalistic emotional stimuli for a better understanding of developmental face deficits.

18.
Temporal allocation of attention is often investigated with a paradigm in which two relevant target items are presented in a rapid sequence of irrelevant distractors. The term Attentional Blink (AB) denotes a transient impairment of awareness for the second of these two target items when they are presented close together in time. Experimental studies have reported that the AB is reduced when the second target is emotionally significant, suggesting a modulation of attention allocation. The aim of the present study was to systematically investigate the influence of target-distractor similarity on AB magnitude for faces with emotional expressions under conditions of limited attention in a series of six rapid serial visual presentation experiments. The task on the first target was either to discriminate the gender of a neutral face (Experiments 1, 3-6) or an indoor/outdoor visual scene (Experiment 2). The task on the second target required either the detection of emotional expressions (Experiments 1-5) or the detection of a face (Experiment 6). The AB was minimal or absent when targets could be easily discriminated from each other. Three successive experiments revealed that insufficient masking and target-distractor similarity could account for the observed immunity of faces to the AB in the first two experiments. An AB was present but not increased when the facial expression was irrelevant to the task, suggesting that target-distractor similarity plays a more important role in eliciting an AB than the attentional set demanded by the specific task. In line with previous work, emotional faces were less affected by the AB.

19.

Background

Previous studies have shown that females and males differ in the processing of emotional facial expressions, including the recognition of emotion, and that emotional facial expressions are detected more rapidly than neutral expressions. However, whether the sexes differ in the rapid detection of emotional facial expressions remains unclear.

Methodology/Principal Findings

We measured reaction times (RTs) during a visual search task in which 44 females and 46 males detected normal facial expressions of anger and happiness or their anti-expressions within crowds of neutral expressions. Anti-expressions expressed neutral emotions with visual changes quantitatively comparable to normal expressions. We also obtained subjective emotional ratings in response to the facial expression stimuli. RT results showed that both females and males detected normal expressions more rapidly than anti-expressions and normal-angry expressions more rapidly than normal-happy expressions. However, females and males showed different patterns in their subjective ratings in response to the facial expressions. Furthermore, sex differences were found in the relationships between subjective ratings and RTs. High arousal was more strongly associated with rapid detection of facial expressions in females, whereas negatively valenced feelings were more clearly associated with the rapid detection of facial expressions in males.

Conclusion

Our data suggest that females and males differ in their subjective emotional reactions to facial expressions and in the emotional processes that modulate the detection of facial expressions.

20.
According to the Darwinian perspective, facial expressions of emotions evolved to quickly communicate emotional states and serve adaptive functions that promote social interactions. Embodied cognition theories suggest that we understand others' emotions by reproducing the perceived expression in our own facial musculature (facial mimicry), and that the mere observation of a facial expression can evoke the corresponding emotion in the perceiver. Consequently, the inability to form facial expressions would affect the experience of emotional understanding. In this review, we aimed to provide an account of the link between the lack of emotion production and the mechanisms of emotion processing. We address this issue by considering Moebius syndrome, a rare neurological disorder that primarily affects the muscles controlling facial expressions. Individuals with Moebius syndrome are born with facial paralysis and the inability to form facial expressions, making them an ideal population for studying whether facial mimicry is necessary for emotion understanding. Here, we discuss the ambiguous/mixed behavioral results on emotion recognition deficits in Moebius syndrome, which suggest the need to investigate further aspects of emotional processing, such as the physiological responses associated with emotional experience during developmental age.
