Similar Literature
20 similar documents found (search time: 15 ms)
1.
The ability to communicate is one of the core aspects of human life. For this, we use not only verbal but also nonverbal signals of remarkable complexity. Among the latter, facial expressions belong to the most important information channels. Despite the large variety of facial expressions we use in daily life, research on facial expressions has so far mostly focused on the emotional aspect. Consequently, most databases of facial expressions available to the research community also include only emotional expressions, neglecting the largely unexplored aspect of conversational expressions. To fill this gap, we present the MPI facial expression database, which contains a large variety of natural emotional and conversational expressions. The database contains 55 different facial expressions performed by 19 German participants. Expressions were elicited with the help of a method-acting protocol, which guarantees both well-defined and natural facial expressions. The method-acting protocol was based on everyday scenarios, which are used to define the necessary context information for each expression. All facial expressions are available in three repetitions, in two intensities, as well as from three different camera angles. A detailed frame annotation is provided, from which a dynamic and a static version of the database have been created. In addition to describing the database in detail, we also present the results of an experiment with two conditions that serve to validate the context scenarios as well as the naturalness and recognizability of the video sequences. Our results provide clear evidence that conversational expressions can be recognized surprisingly well from visual information alone. The MPI facial expression database will enable researchers from different research fields (including the perceptual and cognitive sciences, but also affective computing, as well as computer vision) to investigate the processing of a wider range of natural facial expressions.

2.
Seeing fearful body expressions activates the fusiform cortex and amygdala   (Total citations: 8; self-citations: 0; citations by others: 8)
Darwin's evolutionary approach to organisms' emotional states attributes a prominent role to expressions of emotion in whole-body actions. Researchers in social psychology [1,2] and human development [3] have long emphasized the fact that emotional states are expressed through body movement, but cognitive neuroscientists have almost exclusively considered isolated facial expressions (for review, see [4]). Here we used high-field fMRI to determine the underlying neural mechanisms of perception of body expression of emotion. Subjects were presented with short blocks of body expressions of fear alternating with short blocks of emotionally neutral meaningful body gestures. All images had internal facial features blurred out to avoid confounds due to a face or facial expression. We show that exposure to body expressions of fear, as opposed to neutral body postures, activates the fusiform gyrus and the amygdala. The fact that these two areas have previously been associated with the processing of faces and facial expressions [5-8] suggests synergies between facial and body-action expressions of emotion. Our findings open a new area of investigation of the role of body expressions of emotion in adaptive behavior as well as the relation between processes of emotion recognition in the face and in the body.

3.
Little is known about the spread of emotions beyond dyads. Yet it is important for explaining the emergence of crowd behaviors. Here, we experimentally addressed whether emotional homogeneity within a crowd might result from a cascade of local emotional transmissions, where the perception of another's emotional expression produces, in the observer's face and body, sufficient information to allow for the transmission of the emotion to a third party. We reproduced a minimal element of a crowd situation and recorded the facial electromyographic activity and the skin conductance response of an individual C observing the face of an individual B, who was watching an individual A displaying full-body expressions of either joy or fear. Critically, individual B did not know that she was being watched. We show that the emotions of joy and fear displayed by A were spontaneously transmitted to C through B, even when the emotional information available in B's face could not be explicitly recognized. These findings demonstrate that one is tuned to react to others' emotional signals and to unintentionally produce subtle but sufficient emotional cues to induce emotional states in others. This phenomenon could be the mark of a spontaneous cooperative behavior whose function is to communicate survival-value information to conspecifics.

4.
One of the main characteristics of Autism Spectrum Disorder (ASD) is difficulty with social interaction and communication. Here, we explored ASD-related alterations in 'reading' the body language of other humans. Accuracy and reaction times were assessed in two observational tasks involving the recognition of 'biological motion' and 'emotions' from point-light displays (PLDs). Eye movements were recorded during the completion of the tests. Results indicated that typically developed participants were more accurate than ASD subjects in recognizing biological motion or emotions from PLDs. No accuracy differences were revealed on two control tasks (involving the indication of color changes in the moving point lights). Group differences in reaction times existed on all tasks, but effect sizes were larger for the biological-motion and emotion recognition tasks. Biological motion recognition abilities were related to a person's ability to recognize emotions from PLDs. However, ASD-related atypicalities in emotion recognition could not be entirely attributed to more basic deficits in biological motion recognition, suggesting an additional ASD-specific deficit in recognizing the emotional dimension of the point-light displays. The eye-movement data indicated that ASD participants generally produced more saccades and shorter fixation durations than the control group. Especially for emotion recognition, these altered eye movements were associated with reductions in task performance.

5.
Facial expression of emotions is a powerful vehicle for communicating information about others’ emotional states and it normally induces facial mimicry in the observers. The aim of this study was to investigate if early aversive experiences could interfere with emotion recognition, facial mimicry, and with the autonomic regulation of social behaviors. We conducted a facial emotion recognition task in a group of “street-boys” and in an age-matched control group. We recorded facial electromyography (EMG), a marker of facial mimicry, and respiratory sinus arrhythmia (RSA), an index of the recruitment of autonomic system promoting social behaviors and predisposition, in response to the observation of facial expressions of emotions. Results showed an over-attribution of anger, and reduced EMG responses during the observation of both positive and negative expressions only among street-boys. Street-boys also showed lower RSA after observation of facial expressions and ineffective RSA suppression during presentation of non-threatening expressions. Our findings suggest that early aversive experiences alter not only emotion recognition but also facial mimicry of emotions. These deficits affect the autonomic regulation of social behaviors inducing lower social predisposition after the visualization of facial expressions and an ineffective recruitment of defensive behavior in response to non-threatening expressions.

6.
Folk psychology advocates the existence of gender differences in socio-cognitive functions such as 'reading' the mental states of others or discerning subtle differences in body language. A female advantage has been demonstrated for emotion recognition from facial expressions, but virtually nothing is known about gender differences in recognizing bodily stimuli or body language. The aim of the present study was to investigate potential gender differences in a series of tasks involving the recognition of distinct features from point-light displays (PLDs) depicting bodily movements of a male and a female actor. Although recognition scores were considerably high at the overall group level, female participants were more accurate than males in recognizing the depicted actions from PLDs. Response times were significantly longer for males than for females on PLD recognition tasks involving (i) the general recognition of 'biological' versus 'non-biological' (or 'scrambled') motion, or (ii) the recognition of the 'emotional state' of the PLD figures. No gender differences were revealed for a control test (involving the identification of a color change in one of the dots) or for recognizing the gender of the PLD figure. In addition, previous findings of a female advantage on a facial emotion recognition test (the 'Reading the Mind in the Eyes Test'; Baron-Cohen, 2001) were replicated in this study. Interestingly, a strong correlation was revealed between emotion recognition from bodily PLDs and from facial cues. This relationship indicates that inter-individual or gender-dependent differences in recognizing emotions are relatively generalized across facial and bodily emotion perception. Moreover, the tight correlation between a subject's ability to discern subtle emotional cues from PLDs and the same subject's ability to discriminate biological from non-biological motion indicates that differences in emotion recognition may - at least to some degree - be related to more basic differences in processing biological motion per se.

7.
Rigoulot S, Pell MD. PLoS ONE. 2012;7(1):e30740
Interpersonal communication involves the processing of multimodal emotional cues, particularly facial expressions (visual modality) and emotional speech prosody (auditory modality) which can interact during information processing. Here, we investigated whether the implicit processing of emotional prosody systematically influences gaze behavior to facial expressions of emotion. We analyzed the eye movements of 31 participants as they scanned a visual array of four emotional faces portraying fear, anger, happiness, and neutrality, while listening to an emotionally-inflected pseudo-utterance (Someone migged the pazing) uttered in a congruent or incongruent tone. Participants heard the emotional utterance during the first 1250 milliseconds of a five-second visual array and then performed an immediate recall decision about the face they had just seen. The frequency and duration of first saccades and of total looks in three temporal windows ([0-1250 ms], [1250-2500 ms], [2500-5000 ms]) were analyzed according to the emotional content of faces and voices. Results showed that participants looked longer and more frequently at faces that matched the prosody in all three time windows (emotion congruency effect), although this effect was often emotion-specific (with greatest effects for fear). Effects of prosody on visual attention to faces persisted over time and could be detected long after the auditory information was no longer present. These data imply that emotional prosody is processed automatically during communication and that these cues play a critical role in how humans respond to related visual cues in the environment, such as facial expressions.

8.
The visual system has a remarkable ability to extract categorical information from complex natural scenes. In order to elucidate the role of low-level image features in the recognition of objects in natural scenes, we recorded saccadic eye movements and event-related potentials (ERPs) in two experiments in which human subjects had to detect animals in previously unseen natural images. We used a new natural image database (ANID) that is free of some of the potential artifacts that have plagued the widely used COREL images. Color and grayscale images picked from the ANID and COREL databases were used. In all experiments, color images induced a greater N1 EEG component at earlier time points than grayscale images. We suggest that this influence of color on animal detection may be masked by later processes when measuring reaction times. The ERP results of the go/nogo and forced-choice tasks were similar to those reported earlier. The non-animal stimuli induced a larger N1 than the animal stimuli in both the COREL and ANID databases. This result indicates that ultra-fast processing of animal images is possible irrespective of the particular database. With the ANID images, the difference between color and grayscale images was more pronounced than with the COREL images. The earlier use of the COREL images might therefore have led to an underestimation of the contribution of color. We conclude that the ANID image database is better suited for the investigation of the processing of natural scenes than other commonly used databases.

9.
According to the Darwinian perspective, facial expressions of emotion evolved to communicate emotional states quickly and serve adaptive functions that promote social interactions. Embodied cognition theories suggest that we understand others' emotions by reproducing the perceived expression in our own facial musculature (facial mimicry), and the mere observation of a facial expression can evoke the corresponding emotion in the perceiver. Consequently, the inability to form facial expressions would affect the experience of emotional understanding. In this review, we aimed to provide an account of the link between the lack of emotion production and the mechanisms of emotion processing. We address this issue by considering Moebius syndrome, a rare neurological disorder that primarily affects the muscles controlling facial expressions. Individuals with Moebius syndrome are born with facial paralysis and the inability to form facial expressions. This makes them the ideal population for studying whether facial mimicry is necessary for emotion understanding. Here, we discuss the ambiguous and mixed behavioral results on emotion recognition deficits in Moebius syndrome, which suggest the need to investigate further aspects of emotional processing, such as the physiological responses associated with emotional experience during developmental age.

10.
Emotion processing has been shown to acquire priority by biasing allocation of attentional resources. Aversive images or fearful expressions are processed quickly and automatically. Many existing findings suggested that processing of emotional information was pre-attentive, largely immune from attentional control. Other studies argued that attention gated the processing of emotion. To tackle this controversy, the current study examined whether and to what degrees attention modulated processing of emotion using a stimulus-response-compatibility (SRC) paradigm. We conducted two flanker experiments using color scale faces in neutral expressions or gray scale faces in emotional expressions. We found SRC effects for all three dimensions (color, gender, and emotion) and SRC effects were larger when the conflicts were task relevant than when they were task irrelevant, suggesting that conflict processing of emotion was modulated by attention, similar to those of color and face identity (gender). However, task modulation on color SRC effect was significantly greater than that on gender or emotion SRC effect, indicating that processing of salient information was modulated by attention to a lesser degree than processing of non-emotional stimuli. We proposed that emotion processing can be influenced by attentional control, but at the same time salience of emotional information may bias toward bottom-up processing, rendering less top-down modulation than that on non-emotional stimuli.

11.
Towards the neurobiology of emotional body language   (Total citations: 1; self-citations: 0; citations by others: 1)
People's faces show fear in many different circumstances. However, when people are terrified, as well as showing emotion, they run for cover. When we see a bodily expression of emotion, we immediately know what specific action is associated with a particular emotion, leaving little need for interpretation of the signal, as is the case for facial expressions. Research on emotional body language is rapidly emerging as a new field in cognitive and affective neuroscience. This article reviews how whole-body signals are automatically perceived and understood, and their role in emotional communication and decision-making.

12.
Multisensory integration may occur independently of visual attention as previously shown with compound face-voice stimuli. We investigated in two experiments whether the perception of whole body expressions and the perception of voices influence each other when observers are not aware of seeing the bodily expression. In the first experiment participants categorized masked happy and angry bodily expressions while ignoring congruent or incongruent emotional voices. The onset between target and mask varied from -50 to +133 ms. Results show that the congruency between the emotion in the voice and the bodily expressions influences audiovisual perception independently of the visibility of the stimuli. In the second experiment participants categorized the emotional voices combined with masked bodily expressions as fearful or happy. This experiment showed that bodily expressions presented outside visual awareness still influence prosody perception. Our experiments show that audiovisual integration between bodily expressions and affective prosody can take place outside and independent of visual awareness.

13.
Previous research has suggested that the spontaneous display of positive emotion may be a reliable signal of cooperative tendency in humans. Consistent with this proposition, several studies have found that self-reported cooperators indeed display higher levels of positive emotions than non-cooperators. In this study, we defined cooperators and non-cooperators in terms of their behavior as the proposer in an ultimatum game, and video-taped their facial expressions as they faced unfair offers as a responder. A detailed analysis of the facial expressions displayed by participants revealed that cooperators displayed greater amounts of emotional expressions, not limited to positive emotional expression, when responding to unfair offers in the ultimatum game. These results suggest that cooperators may be more emotionally expressive than non-cooperators. We speculate that emotional expressivity can be a more reliable signal of cooperativeness than the display of positive emotion alone.

14.
When people speak with one another, they tend to adapt their head movements and facial expressions in response to each other's head movements and facial expressions. We present an experiment in which confederates' head movements and facial expressions were motion tracked during videoconference conversations, an avatar face was reconstructed in real time, and naive participants spoke with the avatar face. No naive participant guessed that the computer-generated face was not video. Confederates' facial expressions, vocal inflections, and head movements were attenuated at 1 min intervals in a fully crossed experimental design. Attenuated head movements led to increased head nods and lateral head turns, and attenuated facial expressions led to increased head nodding in both naive participants and confederates. Together, these results are consistent with the hypothesis that the dynamics of head movements in dyadic conversation include a shared equilibrium. Although both conversational partners were blind to the manipulation, when the apparent head movement of one conversant was attenuated, both partners responded by increasing the velocity of their head movements.

15.
Overview of the LiMB database.   (Total citations: 4; self-citations: 3; citations by others: 1)
The rapidly increasing number of databases relevant to molecular biology has given rise to a need for a coordinated effort to identify, characterize, and link them. The LiMB database, which contains information about molecular biology and related databases, is a step in that direction. It serves molecular biologists seeking data sets containing information relevant to their research, and is also intended to anticipate the needs of database designers and managers building software links for related data sets. We present an abbreviated version of the database here; the full database is available free of charge as described below.

16.
Knowing no fear   (Total citations: 2; self-citations: 0; citations by others: 2)
People with brain injuries involving the amygdala are often poor at recognizing facial expressions of fear, but the extent to which this impairment compromises other signals of the emotion of fear has not been clearly established. We investigated N.M., a person with bilateral amygdala damage and a left thalamic lesion, who was impaired at recognizing fear from facial expressions. N.M. showed an equivalent deficit affecting fear recognition from body postures and emotional sounds. His deficit of fear recognition was not linked to evidence of any problem in recognizing anger (a common feature in other reports), but for his everyday experience of emotion N.M. reported reduced anger and fear compared with neurologically normal controls. These findings show a specific deficit compromising the recognition of the emotion of fear from a wide range of social signals, and suggest a possible relationship of this type of impairment with alterations of emotional experience.

17.
A four-dimensional spherical emotional space was obtained by multidimensional scaling of subjective differences between emotional expressions in sound samples (the words "Yes" and "No" pronounced in different emotional states). The Euclidean axes are interpreted in terms of the following neural mechanisms. The first two dimensions relate to the estimation of the sign of the emotional state: dimension 1, pleasant/unpleasant (useful or not); dimension 2, the degree of informational certainty. The third and fourth axes are associated with incentive: dimension 3 encodes an active (anger) or passive (fear) defensive reaction, and dimension 4 corresponds to achievement. Three angles of the four-dimensional hypersphere (between axes 1 and 2, between axes 3 and 4, and between these two planes) determine subjectively experienced emotion characteristics such as those described by Wundt: emotion modality (pleasure-unpleasure), excitation-quietness-suppression, and tension-relaxation, respectively. Thus, the first and second angles regulate the modality of ten basic emotions: five determined by the situation and five determined by personal activity. With another system of angular parameters (the three angles between axes 4 and 1, between axes 3 and 2, and between the respective planes), another system of emotion classification can be realized, of the kind usually described in studies of facial expressions (the circular systems of Schlosberg and Izmailov) and semantics (Osgood): emotion modality or sign (regulating six basic emotions), emotion activity or brightness (excitation-rest), and emotion saturation (strength of emotion expression).

18.
Several studies have investigated the encoding and perception of emotional expressivity in music performance. A relevant question concerns how the ability to communicate emotions in music performance is acquired. In accordance with recent theories on the embodiment of emotion, we suggest here that both the expression and recognition of emotion in music might at least in part rely on knowledge about the sounds of expressive body movements. We test this hypothesis by drawing parallels between musical expression of emotions and expression of emotions in sounds associated with a non-musical motor activity: walking. In a combined production-perception design, two experiments were conducted, and expressive acoustical features were compared across modalities. An initial performance experiment tested for similar feature use in walking sounds and music performance, and revealed that strong similarities exist. Features related to sound intensity, tempo, and tempo regularity were identified as being used similarly in both domains. Participants in a subsequent perception experiment were able to recognize both non-emotional and emotional properties of the sound-generating walkers. An analysis of the acoustical correlates of behavioral data revealed that variations in sound intensity, tempo, and tempo regularity were likely used to recognize expressed emotions. Taken together, these results lend support to the motor origin hypothesis for the musical expression of emotions.

19.
Two experiments were run to examine the effects of dynamic displays of facial expressions of emotions on time judgments. The participants were given a temporal bisection task with emotional facial expressions presented in a dynamic or a static display. Two emotional facial expressions and a neutral expression were tested and compared. Each of the emotional expressions had the same affective valence (unpleasant), but one was high-arousing (expressing anger) and the other low-arousing (expressing sadness). Our results showed that time judgments are highly sensitive to movements in facial expressions and the emotions expressed. Indeed, longer perceived durations were found in response to the dynamic faces and the high-arousing emotional expressions compared to the static faces and low-arousing expressions. In addition, the facial movements amplified the effect of emotions on time perception. Dynamic facial expressions are thus interesting tools for examining variations in temporal judgments in different social contexts.

20.
The method of Klára Kokas facilitates deep musical understanding through repeated listening to selected short, high-quality classical masterpieces. The Kokas method offers personality-oriented, complex forms of creative artistic expression to explore music. Participants share their emotions through freely improvised movements of their body, which Kokas defines as dances. Kokas started her sessions with young children, but later extended her approach to many different groups. To accommodate adults with severe disabilities, the original structure of the Kokas sessions was slightly modified and adapted to the abilities and needs of the participants. This approach, based on bodily responses to music through spontaneous movement, forms connections between music and bodily expression and opens a new avenue for providing meaningful and enjoyable pathways of communication that strengthen the sense of belonging. In addition to fulfilling their frequently forgotten aesthetic needs, these sessions help individuals with severe disabilities to develop an emotional relationship to music, reduce challenging behavior, and encourage social interaction and self-awareness through bodily responses to the music. The optimal multisensory environment supports their understanding, promotes their physical and emotional well-being, and keeps their motivation at a high level. Close analysis of the body movements responding to music may lead to a deeper understanding of their cognitive processing.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号