Similar Documents
20 similar documents found (search time: 31 ms)
1.
Visual remapping of touch (VRT) is a phenomenon in which seeing a human face being touched enhances detection of tactile stimuli on the observer's own face, especially when the observed face expresses fear. This study tested whether VRT would occur when seeing touch on monkey faces and whether it would be similarly modulated by facial expressions. Human participants detected near-threshold tactile stimulation on their own cheeks while watching fearful, happy, and neutral human or monkey faces being concurrently touched or merely approached by fingers. We predicted minimal VRT for neutral and happy monkey faces but greater VRT for fearful monkey faces. The results with human faces replicated previous findings, demonstrating stronger VRT for fearful expressions than for happy or neutral expressions. However, there was no VRT (i.e., no difference between accuracy in touch and no-touch trials) for any of the monkey faces, regardless of facial expression, suggesting that touch on a non-human face is not remapped onto the somatosensory system of the human observer.

2.
Emotional signals are perceived whether or not we are aware of them. The evidence so far has come mostly from studies of facial expressions. Here, we investigated whether the pattern of non-conscious facial expression perception also holds for whole-body expressions. Continuous flash suppression (CFS) was used to measure the time for neutral, fearful, and angry facial or bodily expressions to break from suppression. We observed different suppression-time patterns for emotions depending on whether the stimuli were faces or bodies. The suppression time for anger was shortest for bodily expressions but longest for facial expressions. This pattern indicates different processing and detection mechanisms for faces and bodies outside awareness, and suggests that awareness mechanisms associated with dorsal structures might play a role in becoming conscious of angry bodily expressions.

3.
Perception, cognition, and emotion do not operate along segregated pathways; rather, their adaptive interaction is supported by various sources of evidence. For instance, the aesthetic appraisal of powerful mood inducers such as music can bias the facial expression of emotions towards mood congruency. In four experiments we showed similar mood-congruency effects elicited by the comfort or discomfort of body actions. Using a novel Motor Action Mood Induction Procedure, we had participants perform comfortable or uncomfortable visually guided reaches and tested them in a facial emotion identification task. Through the putative mediation of motor-action-induced mood, action comfort enhanced the quality of the participant's global experience (a neutral face appeared happy and a slightly angry face neutral), while action discomfort made a neutral face appear angry and a slightly happy face neutral. Furthermore, uncomfortable (but not comfortable) reaching improved sensitivity in the identification of emotional faces and reduced the identification time of facial expressions, possibly an effect of hyper-arousal from an unpleasant bodily experience.

4.
The development of explicit recognition of facial expressions of emotion can be affected by childhood maltreatment experiences. A previous study demonstrated an explicit recognition bias for angry facial expressions in a population of adolescent Sierra Leonean street-boys exposed to high levels of maltreatment. In the present study, the recognition bias for angry facial expressions was investigated in a younger population of street-children and age-matched controls. Participants performed a forced-choice facial expression recognition task. Recognition bias was measured as participants' tendency to over-attribute the anger label to other negative facial expressions. Participants' heart rate was assessed and related to their behavioral performance, as an index of their stress-related physiological responses. Results demonstrated a recognition bias for angry facial expressions among street-children, and pinpointed a similar, although significantly less pronounced, tendency among controls. Participants' performance was controlled for age, cognitive and educational levels, and naming skills; none of these variables influenced the recognition bias for angry facial expressions. In contrast, a significant effect of heart rate on participants' tendency to use the anger label was evidenced. Taken together, these results suggest that childhood exposure to maltreatment experiences amplifies children's "pre-existing bias" for anger labeling in a forced-choice emotion recognition task. Moreover, they strengthen the thesis that the recognition bias for angry facial expressions is a manifestation of a functional adaptive mechanism that tunes the victim's perceptive and attentive focus on salient environmental social stimuli.

5.
Antisocial individuals are characterized by self-serving and inconsiderate behavior during social interaction. Furthermore, recognition deficits for fearful facial expressions have been observed in antisocial populations. These observations raise the question of whether antisocial behavioral tendencies are associated with deficits in basic processing of social cues. The present study investigated early visual processing of social stimuli in a group of healthy female individuals with antisocial behavioral tendencies, compared to individuals without these tendencies, while measuring event-related potentials (P1, N170). To this end, happy and angry faces served as feedback stimuli embedded in a gambling task. Results showed processing differences as early as 88–120 ms after feedback onset: participants low on antisocial traits displayed larger P1 amplitudes than participants high on antisocial traits. No group differences emerged for N170 amplitudes. Attention allocation processes, individual arousal levels, and face processing are discussed as possible causes of the observed group differences in P1 amplitudes. In summary, the current data suggest that sensory processing of facial stimuli is functionally intact but less ready to respond in healthy individuals with antisocial tendencies.

6.
Facial expressions of emotion play a key role in guiding social judgements, including deciding whether or not to approach another person. However, no research has examined how situational context modulates the approachability judgements assigned to emotional faces, or the relationship between perceived threat and approachability judgements. Fifty-two participants provided approachability judgements for angry, disgusted, fearful, happy, neutral, and sad faces across three situational contexts: no context, giving help, and receiving help. Participants also rated the emotional faces for level of perceived threat and labelled the facial expressions. Results indicated that context modulated approachability judgements for faces depicting negative emotions. Specifically, faces depicting distress-related emotions (i.e., sadness and fear) were considered more approachable in the giving-help context than in either the receiving-help or the no-context condition. Furthermore, higher ratings of threat were associated with the assessment of angry, happy, and neutral faces as less approachable. These findings are the first to demonstrate the significant role that context plays in the evaluation of an individual's approachability and illustrate the important relationship between perceived threat and the evaluation of approachability.

7.
Research in both infants and adults has demonstrated that attachment expectations are associated with the attentional processing of attachment-related information. However, this research suffered from methodological issues and has not been validated across ages. Employing a more ecologically valid paradigm to measure attentional processes by means of eye tracking, the current study tested the defensive exclusion hypothesis in late childhood. According to this hypothesis, insecurely attached children are assumed to defensively exclude attachment-related information. We hypothesized that securely attached children process attachment-related neutral and emotional information in a more open manner than insecurely attached children. Sixty-two children (59.7% girls, 8–12 years) completed two tasks while eye movements were recorded: task one presented an array of neutral faces including the mother and unfamiliar women, and task two presented the same array with happy and angry faces. Results indicated that more securely attached children looked longer at the mother's face regardless of its emotional expression, and tended to show more sustained attention to the mother's neutral face. Furthermore, more attachment avoidance was related to a reduced total viewing time of the mother's neutral, happy, and angry faces. Attachment anxiety was not consistently related to the processing of the mother's face. The findings support the theoretical assumption that securely attached children have an open manner of processing all attachment-related information.

8.
Facial expressions play an important role in successful social interactions, with previous research suggesting that facial expressions may be processed involuntarily. In the current study, we investigate whether involuntary processing of facial expressions would also occur when facial expression distractors are simultaneously presented in the same spatial location as facial expression targets. Targets and distractors from another stimulus class (lions) were also used. Results indicated that angry facial expression distractors interfered more than neutral face distractors with the ability to respond to both face and lion targets. These findings suggest that information from angry facial expressions can be extracted rapidly from a very brief presentation (50 ms), providing compelling evidence that angry facial expressions are processed involuntarily.

9.
Jiang Y, He S. Current Biology: CB, 2006, 16(20): 2023–2029
Perceiving faces is critical for social interaction. Evidence suggests that different neural pathways may be responsible for processing face identity and expression information. By using functional magnetic resonance imaging (fMRI), we measured brain responses when observers viewed neutral, fearful, and scrambled faces, either visible or rendered invisible through interocular suppression. The right fusiform face area (FFA), the right superior temporal sulcus (STS), and the amygdala responded strongly to visible faces. However, when face images became invisible, activity in FFA to both neutral and fearful faces was much reduced, although still measurable; activity in the STS was robust only to invisible fearful faces but not to neutral faces. Activity in the amygdala was equally strong in both the visible and invisible conditions to fearful faces but much weaker in the invisible condition for the neutral faces. In the invisible condition, amygdala activity was highly correlated with that of the STS but not with that of the FFA. The results in the invisible condition support the existence of dissociable neural systems specialized for processing facial identity and expression information. When images are invisible, cortical responses may reflect primarily feed-forward visual-information processing and thus allow us to reveal the distinct functions of FFA and STS.

10.

Background

Little is known about the neural basis of elite performers and their optimal performance in extreme environments. The purpose of this study was to examine brain processing differences between elite warfighters and comparison subjects in brain structures that are important for emotion processing and interoception.

Methodology/Principal Findings

Off-duty Navy Sea, Air, and Land forces (SEALs; n = 11) were compared with healthy male volunteers (n = 23) while performing a simple emotion face-processing task during functional magnetic resonance imaging. Irrespective of the target emotion, elite warfighters relative to comparison subjects showed relatively greater right-sided, but attenuated left-sided, insula activation. Navy SEALs also showed selectively greater bilateral insula activation to angry target faces relative to fearful or happy target faces; this was not accounted for by a contrast of positive versus negative emotions. Finally, these individuals showed slower response latencies to fearful and happy target faces than did comparison subjects.

Conclusions/Significance

These findings support the hypothesis that elite warfighters deploy greater processing resources toward potentially threat-related facial expressions and reduced processing resources toward non-threat-related facial expressions. Moreover, rather than expending more effort in general, elite warfighters show more focused neural and performance tuning: greater neural processing resources are directed toward threat stimuli, and processing resources are conserved in non-threat situations.

11.
Infant emotional expressions, such as distress cries, evoke maternal physiological reactions, most of which involve accelerated sympathetic nervous activity. Comparatively little is known about the effects of positive infant expressions, such as happy smiles, on maternal physiological responses. This study investigated how mothers' physiological and psychological states change in response to their infants' emotional expressions. Thirty first-time mothers viewed films of their own 6- to 7-month-old infants' affective behavior. Each observed a video of a distress cry followed by a video showing one of two expressions (randomly assigned): a happy smiling face (smile condition) or a calm neutral face (neutral condition). Both before and after the session, participants completed a self-report inventory assessing their emotional states. The self-report inventory revealed no effects of exposure to the infant videos. However, the mothers in the smile condition, but not in the neutral condition, showed deceleration of skin conductance, demonstrating that mothers who observed their infants smiling showed decreased sympathetic activity. We propose that an infant's positive emotional expression may affect the branch of the maternal stress-response system that modulates the homeostatic balance of the sympathetic and parasympathetic nervous systems.

12.
In a dual-task paradigm, participants performed a spatial-location working memory task and a two-alternative forced-choice perceptual decision task (neutral vs. fearful) with gradually morphed emotional faces (neutral ∼ fearful). Task-irrelevant word distractors (negative, neutral, and control) were experimentally manipulated during spatial working memory encoding. We hypothesized that, if affective perception is influenced by concurrent cognitive load from a working memory task, task-irrelevant emotional distractors would bias subsequent perceptual decisions about ambiguous facial expressions. We found that when either neutral or negative emotional words were presented as task-irrelevant working-memory distractors, participants more frequently reported perceiving a fearful face, but only at the higher emotional intensity levels of the morphed faces. Moreover, the affective perception bias due to negative emotional distractors correlated with a decrease in working memory performance. Taken together, our findings suggest that concurrent working memory load from task-irrelevant distractors has an impact on the affective perception of facial expressions.

13.
This work builds on the enfacement effect, which occurs when one experiences rhythmic stimulation on one's own cheek while seeing someone else's face being touched in a synchronous way; this typically leads to cognitive and social-cognitive effects similar to self-other merging. In two studies, we demonstrate that this multisensory stimulation can change the evaluation of the other's face. In the first study, participants judged the stranger's face and similar faces as more trustworthy after synchronous, but not after asynchronous, stimulation. Synchrony interacted with the order of the stroking: trustworthiness only changed when the synchronous stimulation occurred before the asynchronous one. In the second study, synchronous stimulation caused participants to remember the stranger's face as more trustworthy, but again only when the synchronous stimulation came before the asynchronous one. The results of both studies show that the order of stroking creates a context in which multisensory synchrony can affect the trustworthiness of faces.

14.
It has been shown that dominant individuals sustain eye contact when non-consciously confronted with angry faces, suggesting reflexive mechanisms underlying dominance behaviors. However, dominance and submission can be conveyed and provoked by means of not only facial but also bodily features. So far, few studies have investigated the interplay of body postures with personality traits and behavior, despite the biological relevance and ecological validity of these postures. Here we investigated whether non-conscious exposure to bodily expressions of anger evokes reflex-like dominance behavior. In an interactive eye-tracking experiment, thirty-two participants completed three social dominance tasks with angry, happy, and neutral facial, bodily, and compound face-and-body expressions that were masked from consciousness. We confirmed our prediction of slower gaze aversion from both non-conscious bodily and compound expressions of anger, compared to happiness, in highly dominant individuals. Results from a follow-up experiment suggest that the dominance behavior triggered by exposure to bodily anger occurs with basic detection of the stimulus category, but not recognition of its emotional content. Together these results suggest that dominant staring behavior is reflexively driven by non-conscious perception of emotional content and is triggered by not only facial but also bodily expressions of anger.

15.
16.
Despite extensive research on face perception, few studies have investigated individuals’ knowledge about the physical features of their own face. In this study, 50 participants indicated the location of key features of their own face, relative to an anchor point corresponding to the tip of the nose, and the results were compared to the true location of the same individual’s features from a standardised photograph. Horizontal and vertical errors were analysed separately. An overall bias to underestimate vertical distances revealed a distorted face representation, with reduced face height. Factor analyses were used to identify separable subconfigurations of facial features with correlated localisation errors. Independent representations of upper and lower facial features emerged from the data pattern. The major source of variation across individuals was in representation of face shape, with a spectrum from tall/thin to short/wide representation. Visual identification of one’s own face is excellent, and facial features are routinely used for establishing personal identity. However, our results show that spatial knowledge of one’s own face is remarkably poor, suggesting that face representation may not contribute strongly to self-awareness.

17.
The relationship between human fluid intelligence and social-emotional abilities has been a topic of considerable interest. The current study investigated whether adolescents with different intellectual levels differ in the automatic neural processing of facial expressions. Two groups of adolescent males were enrolled: a high-IQ group and an average-IQ group, matched on age and parental socioeconomic status. Participants counted the number of changes of a central cross while paired facial expressions were presented bilaterally in an oddball paradigm. There were two experimental conditions: a happy condition, in which neutral expressions were standard stimuli (p = 0.8) and happy expressions were deviant stimuli (p = 0.2), and a fearful condition, in which neutral expressions were standard stimuli (p = 0.8) and fearful expressions were deviant stimuli (p = 0.2). Participants were required to concentrate on the primary task of counting the central cross changes and to ignore the expressions, to ensure that facial expression processing was automatic. Event-related potentials (ERPs) were recorded during the tasks, and the visual mismatch negativity (vMMN) components were analyzed to index the automatic neural processing of facial expressions. For the early vMMN (50–130 ms), the high-IQ group showed more negative vMMN amplitudes than the average-IQ group in the happy condition. For the late vMMN (320–450 ms), the high-IQ group showed greater vMMN responses than the average-IQ group over frontal and occipito-temporal areas in the fearful condition, while the average-IQ group evoked larger vMMN amplitudes than the high-IQ group over occipito-temporal areas in the happy condition. The present study elucidates the close relationship between fluid intelligence and pre-attentive change detection of social-emotional information.

18.
Appropriate response to companions’ emotional signals is important for all social creatures. The emotional expressions of humans and non-human animals have analogies in form and function, suggesting shared evolutionary roots, but very little is known about how animals other than primates view and process facial expressions. In primates, threat-related facial expressions evoke exceptional viewing patterns compared with neutral or positive stimuli. Here, we explore whether domestic dogs (Canis familiaris) have such an attentional bias toward threatening social stimuli and whether observed emotional expressions affect dogs’ gaze fixation distribution among the facial features (eyes, midface, and mouth). We recorded the voluntary eye gaze of 31 domestic dogs during viewing of facial photographs of humans and dogs with three emotional expressions (threatening, pleasant, and neutral). We found that dogs’ gaze fixations spread systematically among the facial features. The distribution of fixations was altered by the seen expression, but the eyes were the most probable targets of first fixations and gathered longer looking durations than the mouth regardless of the viewed expression. Examination of the inner facial features as a whole revealed more pronounced scanning differences among expressions, suggesting that dogs do not base their perception of facial expressions on the viewing of single structures but on the interpretation of the composition formed by the eyes, midface, and mouth. Dogs evaluated social threat rapidly, and this evaluation led to an attentional bias that depended on the depicted species: threatening conspecifics’ faces evoked heightened attention, whereas threatening human faces instead evoked an avoidance response. We propose that threatening signals carrying differential biological validity are processed via distinct neurocognitive pathways; both of these mechanisms may have adaptive significance for domestic dogs. The findings provide a novel perspective on understanding the processing of emotional expressions and sensitivity to social threat in non-primates.

19.
Although most people can identify facial expressions of emotion well, they still differ in this ability. According to embodied simulation theories, understanding the emotions of others is fostered by involuntarily mimicking the perceived expressions, causing a “reactivation” of the corresponding mental state. Some studies suggest automatic facial mimicry during expression viewing; however, findings on the relationship between mimicry and emotion perception abilities are equivocal. The present study investigated individual differences in emotion perception and their relationship to facial muscle responses, recorded with electromyography (EMG), to emotional facial expressions. N = 269 participants completed multiple tasks measuring face and emotion perception. EMG recordings were taken from a subsample (N = 110) in an independent emotion classification task using short videos displaying six emotions. Confirmatory factor analyses of m. corrugator supercilii activity in response to angry, happy, sad, and neutral expressions showed that individual differences in corrugator activity can be separated into a general response to all faces and an emotion-related response. Structural equation modeling revealed a substantial relationship between the emotion-related response and emotion perception ability, providing evidence for the role of facial muscle activation in emotion perception from an individual-differences perspective.

20.
Previous studies have shown that early posterior components of event-related potentials (ERPs) are modulated by facial expressions. The goal of the current study was to investigate individual differences in the recognition of facial expressions by examining the relationship between ERP components and the discrimination of facial expressions. Pictures of three facial expressions (angry, happy, and neutral) were presented to 36 young adults during ERP recording. Participants were asked to respond with a button press as soon as they recognized the expression depicted. A multiple regression analysis used the ERP components as predictor variables and hits and reaction times in response to the facial expressions as dependent variables. The N170 amplitudes significantly predicted accuracy for angry and happy expressions, and the N170 latencies predicted accuracy for neutral expressions. The P2 amplitudes significantly predicted reaction times, and the P2 latencies predicted reaction times only for neutral faces. These results suggest that individual differences in the recognition of facial expressions emerge from early components of visual processing.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号