Similar Articles
20 similar articles found (search time: 31 ms)
1.
Multisensory integration may occur independently of visual attention, as previously shown with compound face-voice stimuli. In two experiments we investigated whether the perception of whole-body expressions and the perception of voices influence each other when observers are unaware of seeing the bodily expression. In the first experiment, participants categorized masked happy and angry bodily expressions while ignoring congruent or incongruent emotional voices; the onset asynchrony between target and mask varied from -50 to +133 ms. Results show that congruency between the emotion in the voice and in the bodily expression influences audiovisual perception independently of the visibility of the stimuli. In the second experiment, participants categorized emotional voices combined with masked bodily expressions as fearful or happy. This experiment showed that bodily expressions presented outside visual awareness still influence prosody perception. Together, our experiments show that audiovisual integration of bodily expressions and affective prosody can take place outside of, and independently of, visual awareness.

2.
The ability to recognize emotions in facial expressions is affected by both affective traits and states and varies widely between individuals. While affective traits are stable over time, affective states can be regulated more rapidly by environmental stimuli, such as music, that indirectly modulate brain state. Here, we tested whether a relaxing or irritating sound environment affects implicit processing of facial expressions, and investigated whether and how individual traits of anxiety and emotional control interact with this process. Thirty-two healthy subjects performed an implicit emotion-processing task (presented to subjects as a gender-discrimination task) while the sound environment was defined by either (a) a therapeutic music sequence (MusiCure), (b) a noise sequence, or (c) silence. Individual changes in mood were sampled before and after the task by a computerized questionnaire; emotional control and trait anxiety were assessed in a separate session by paper-and-pencil questionnaires. Results showed better mood after the MusiCure condition than after the other experimental conditions, and faster responses to happy faces during MusiCure than to angry faces during noise. Moreover, individuals with higher trait anxiety performed the implicit emotion-processing task faster during MusiCure than during silence. These findings suggest that sound-induced affective states are associated with differential responses to angry and happy faces at an implicit stage of processing, and that a relaxing sound environment facilitates implicit emotional processing in anxious individuals.

3.
A large body of research on memory, attention, and decision-making has found that older adults show a positivity bias, or an avoidance of negative material, when processing emotional stimuli. The present study used an oddball variant in which emotional face pictures were presented as distractors. EEG was recorded during the experiment to examine the influence of emotional valence on brain potentials and to trace the time course of emotion processing and emotion regulation in older adults under task-irrelevant conditions. In a relatively early time window (270–460 ms), ERPs in the young group were unaffected by emotional valence, whereas in the older group sad faces elicited a larger positive component (P3a) than happy and neutral faces. In a late time window (500–850 ms), sad faces attracted more attention in the young group and elicited a larger positive slow wave; in contrast, the valence effect disappeared in the older group at this late processing stage. The study thus reveals a difference in the time course with which older and younger adults process task-irrelevant emotional stimuli: the age-related positivity effect emerged in the late time window, manifesting as a negativity bias in the young group and no emotional bias in the older group. These results provide electrophysiological support for socioemotional selectivity theory.

4.
Feng C, Wang L, Liu C, Zhu X, Dai R, Mai X, Luo YJ. PLoS ONE. 2012;7(1):e29668.
In the current study, we investigated the time course of the implicit processing of affective pictures with an orthogonal design of valence (negative vs. positive) by arousal (low vs. high). Previous studies with explicit tasks suggested that valence mainly modulates early event-related potential (ERP) components, whereas arousal mainly modulates late components. However, in this study with an implicit task, we observed significant interactions between valence and arousal at both early and late stages over both parietal and frontal sites, which were reflected by three different ERP components: P2a (100-200 ms), N2 (200-300 ms), and P3 (300-400 ms). Furthermore, there was also a significant main effect of arousal on P2b (200-300 ms) over parieto-occipital sites. Our results suggest that valence and arousal effects on implicit affective processing are more complicated than previous ERP studies with explicit tasks have revealed.
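For readers less familiar with ERP methodology, component effects such as the P2a (100–200 ms) or P3 (300–400 ms) reported above are typically quantified as mean amplitudes over a post-stimulus time window and a set of electrodes. The sketch below illustrates that computation on a plain NumPy array; the array layout, sampling rate, baseline length, and channel indices are illustrative assumptions, not details from the paper.

```python
import numpy as np

def mean_amplitude(epochs, sfreq, tmin, tmax, channels, baseline_samples):
    """Mean ERP amplitude in a post-stimulus window.

    epochs : ndarray (n_trials, n_channels, n_samples), time-locked so that
             sample `baseline_samples` is stimulus onset (assumption).
    sfreq  : sampling rate in Hz.
    tmin, tmax : window in seconds after stimulus onset.
    channels   : indices of the electrodes of interest.
    """
    start = baseline_samples + int(tmin * sfreq)
    stop = baseline_samples + int(tmax * sfreq)
    # Average over trials first (the ERP), then over window and channels.
    erp = epochs.mean(axis=0)                      # (n_channels, n_samples)
    return erp[channels, start:stop].mean()

# Hypothetical data: 40 trials, 64 channels, 1000 Hz, 200 ms baseline.
rng = np.random.default_rng(0)
epochs = rng.standard_normal((40, 64, 1200))
p3 = mean_amplitude(epochs, sfreq=1000, tmin=0.3, tmax=0.4,
                    channels=[30, 31], baseline_samples=200)
print(f"P3 mean amplitude: {p3:.3f} (arbitrary units)")
```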

5.
There is evidence that women are better than men at recognizing their own and others' emotions, and this female advantage becomes even more apparent under conditions of rapid stimulus presentation. Affective priming paradigms have been developed to examine empirically whether facial emotion stimuli presented outside of conscious awareness color our impressions; masked emotional facial expressions have been observed to exert an affect-congruent influence on subsequent judgments of neutral stimuli. The aim of the present study was to examine the effect of gender on affective priming by negative and positive facial expressions. In our priming experiment, a sad, happy, neutral, or no facial expression was briefly presented (for 33 ms) and masked by a neutral face that had to be evaluated. Eighty-one young healthy volunteers (53 women) participated in the study. Subjects had no subjective awareness of the emotional primes. Women did not differ from men with regard to age, education, intelligence, trait anxiety, or depressiveness. In the whole sample, happy but not sad facial expressions elicited valence-congruent affective priming. Between-group analyses revealed that women showed greater affective priming by happy faces than men did. Women thus seem to have a greater ability than men to perceive and respond to positive facial emotion at an automatic processing level. High perceptual sensitivity to minimal social-affective signals may contribute to women's advantage in understanding other people's emotional states.

6.
Skin conductance responses (SCR) measure objective arousal in response to emotionally relevant stimuli. Central nervous system influence on SCR is exerted differentially by the two hemispheres, so differences between SCR recordings from the left and right hands may be expected. This study focused on emotionally expressive faces, which are known to be processed differently by the two hemispheres. Faces depicting neutral, happy, sad, angry, fearful, or disgusted expressions were presented in two tasks, one requiring an explicit emotion judgment and the other an age judgment. We found stronger responses to sad and happy faces compared with neutral faces from the left hand during the implicit task, and stronger responses to negative emotions compared with neutral faces from the right hand during the explicit task. Our results suggest that basic social stimuli generate distinct responses on the two hands, no doubt related to the lateralization of social function in the brain.

7.
The effect of cannabis on emotional processing was investigated using event-related potential (ERP) paradigms. ERPs were recorded from cannabis users and non-using controls during implicit and explicit emotional expression recognition and empathy tasks, and mean P3 amplitudes were compared between groups. Results showed a significant decrease in P3 amplitude in cannabis users compared with controls. Specifically, cannabis users showed reduced P3 amplitudes for implicit relative to explicit processing over centro-parietal sites, an effect that reversed, and was enhanced, at fronto-central sites. Cannabis users also showed a decreased P3 to happy faces and an increased P3 to angry faces compared with controls. These effects were largest in participants who self-reported the highest levels of cannabis consumption: the heaviest users showed the largest P3 deficits for explicit processing and negative emotions. These data suggest a complex relationship between cannabis consumption and emotion processing that appears to be modulated by attention.

8.
The perception of emotions is often suggested to be multimodal in nature, and bimodal presentation of emotional stimuli can lead to superior emotion recognition compared with unimodal (auditory or visual) presentation. Previous studies have shown contrastive aftereffects in emotion perception caused by perceptual adaptation for faces and for auditory affective vocalizations when adaptors were of the same modality; crossmodal aftereffects in the perception of emotional vocalizations, by contrast, had not yet been demonstrated. In three experiments we investigated the influence of emotional voice adaptors as well as dynamic facial video adaptors on the perception of emotion-ambiguous voices morphed along an angry-to-happy continuum. Contrastive aftereffects were found for unimodal (voice) adaptation conditions: test voices were perceived as happier after adaptation to angry voices, and vice versa. Bimodal (voice + dynamic face) adaptors tended to elicit larger contrastive aftereffects. Importantly, crossmodal (dynamic face) adaptors also elicited substantial aftereffects in male, but not in female, participants. Our results (1) support the idea of contrastive processing of emotions, (2) show for the first time crossmodal adaptation effects under certain conditions, consistent with the idea that emotion processing is multimodal in nature, and (3) suggest gender differences in the sensory integration of facial and vocal emotional stimuli.

9.
There is extensive evidence for an association between an attentional bias towards emotionally negative stimuli and vulnerability to stress-related psychopathology. Less is known about whether selective attention towards emotionally positive stimuli relates to mental health and stress resilience. The current study used a modified dot-probe task to investigate whether individual differences in attentional biases towards happy or angry emotional stimuli, or an interaction between these biases, are related to self-reported trait stress resilience. In a nonclinical sample (N = 43), we indexed attentional biases as individual differences in reaction time for probes preceded by either happy or angry (compared to neutral) face stimuli. Participants with a greater attentional bias towards happy faces (but not angry faces) reported higher trait resilience. However, an attentional bias towards angry stimuli moderated this effect: the attentional bias towards happy faces predicted resilience only in individuals who also showed an attentional bias towards angry stimuli. An attentional bias towards positive emotional stimuli may thus be a protective factor contributing to stress resilience, specifically in individuals who also show an attentional bias towards negative emotional stimuli. Our findings therefore suggest a novel target for prevention and treatment interventions addressing stress-related psychopathology.
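For context, the attentional bias index in a dot-probe task is conventionally the mean reaction time when the probe replaces the neutral face minus the mean reaction time when it replaces the emotional face, so positive scores indicate attention drawn toward the emotion. A minimal sketch of that computation follows; the column names and toy data are assumptions, not the authors' materials.

```python
import pandas as pd

# Hypothetical trial-level data; column names are assumptions.
trials = pd.DataFrame({
    "subject": [1, 1, 1, 1, 2, 2, 2, 2],
    "emotion": ["happy", "happy", "angry", "angry"] * 2,
    "probe_at": ["emotional", "neutral"] * 4,   # which face the probe replaced
    "rt_ms": [480, 510, 495, 505, 520, 545, 530, 525],
})

def bias_scores(df):
    """Attentional bias per subject and emotion:
    mean RT(probe at neutral) - mean RT(probe at emotional).
    Positive values = faster when the probe replaces the emotional face,
    i.e. attention was drawn toward it."""
    means = df.groupby(["subject", "emotion", "probe_at"])["rt_ms"].mean().unstack()
    return means["neutral"] - means["emotional"]

print(bias_scores(trials))
```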

10.
Chen X, Yang J, Gan S, Yang Y. PLoS ONE. 2012;7(1):e30278.
Although its role in the acoustic profile of vocal emotion is frequently stressed, sound intensity is often treated as a mere control parameter in neurocognitive studies of vocal emotion, leaving its role and neural underpinnings unclear. To investigate these issues, in Experiment 1 we asked participants to rate the anger level of neutral and angry prosodies before and after sound-intensity modification; in Experiment 2 we recorded the electroencephalogram (EEG) for mismatching emotional prosodies with and without sound-intensity modification and for matching emotional prosodies, while participants performed emotional-feature or sound-intensity congruity judgments. Sound-intensity modification significantly affected the rated anger level of angry prosodies, but not of neutral ones. Moreover, mismatching emotional prosodies, relative to matching ones, induced an enhanced N2/P3 complex and theta-band synchronization irrespective of sound-intensity modification and task demands. However, mismatching emotional prosodies with reduced sound intensity showed prolonged peak latencies and decreased amplitudes in the N2/P3 complex, and weaker theta-band synchronization. These findings suggest that although sound intensity does not categorically change the emotion conveyed by prosody, it contributes quantitatively to emotional significance, implying that sound intensity should not simply be treated as a control parameter and that its unique role needs to be specified in studies of vocal emotion.

11.
There appears to be a significant disconnect between symptomatic and functional recovery in bipolar disorder (BD), and some evidence points to interepisode cognitive dysfunction. We tested the hypothesis that some of this dysfunction is related to emotional reactivity, which in euthymic bipolar subjects may affect cognitive processing. A modified emotional gender-categorization oddball task was used: the target was face gender (25% probability), and the faces carried negative, positive, or neutral emotional expressions. The experiment comprised 720 trials (3 blocks × 240 trials each). Each stimulus was presented for 150 ms, and EEG/ERP responses were recorded for 1,000 ms; the inter-trial interval was jittered over the 1,100–1,500 ms range to avoid expectancy effects. The task took about 35 min to complete. Nine BD subjects and nine controls matched for age and gender participated. Reaction time (RT) was globally slower in BD subjects. Centro-parietal amplitudes at N170 and N200, and at P200 and P300, were generally smaller in the BD group than in controls, and latencies to neutral and negative targets were shorter in BD. Frontal P200 amplitude was higher to negative emotional facial non-targets in BD subjects, the frontal N200 in response to positive facial emotion was less negative, and the frontal P300 to emotionally neutral targets was lower. ERP responses to facial emotion in BD subjects thus differed significantly from those of normal controls, in a pattern consistent with the depressive symptomatology commonly seen in long-term studies of bipolar subjects.
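To make the reported design parameters concrete, here is a minimal sketch of how such a trial sequence could be generated: 25% gender targets, 150 ms stimuli, 1,000 ms recording epochs, and inter-trial intervals jittered uniformly over 1,100–1,500 ms to avoid expectancy effects. This is an illustration of the described parameters, not the authors' presentation code; treating the epoch and the ITI as sequential is our assumption.

```python
import numpy as np

rng = np.random.default_rng(42)

N_TRIALS = 720      # 3 blocks x 240 trials
STIM_MS = 150       # stimulus duration
EPOCH_MS = 1000     # EEG/ERP recording window per trial

# 25% of trials carry the target gender; emotion is orthogonal to the task.
n_targets = N_TRIALS // 4
is_target = rng.permutation(
    np.repeat([True, False], [n_targets, N_TRIALS - n_targets]))
emotion = rng.choice(["negative", "positive", "neutral"], size=N_TRIALS)

# Jitter the inter-trial interval uniformly over 1100-1500 ms.
iti_ms = rng.uniform(1100, 1500, size=N_TRIALS)

# Assuming epoch and ITI are sequential, trials alone take ~28 min;
# with breaks between blocks this is consistent with the reported ~35 min.
total_min = (EPOCH_MS + iti_ms).sum() / 60000
print(f"Target trials: {is_target.sum()}, approx. run time: {total_min:.1f} min")
```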

12.
Conflict control is an important cognitive-control ability, and it is crucial for human beings to exert conflict control over affective information. To address the neural correlates of cognitive control over affective conflicts, the present study recorded event-related potentials (ERPs) during a revised Eriksen flanker task. Participants indicated the valence of a central target expression while ignoring flanker expressions in an affectively congruent condition, an affectively incongruent condition, and a neutral condition (target expressions flanked by scrambled blocks). Behaviorally, participants identified a neutral target face faster when it was flanked by neutral distractors than by happy distractors. Electrophysiologically, during conflict monitoring, happy target expressions induced larger N2 amplitudes when flanked by sad distractors than by happy distractors or scrambled blocks. During attentional control, happy target expressions elicited a shorter-latency P3 when flanked by happy distractors than by sad distractors, and sad target expressions evoked larger P3 amplitudes when flanked by happy distractors compared with sad distractors. Taken together, these findings on the temporal dynamics of brain activity during cognitive control over affective conflicts shed light on the essential relationship between cognitive control and affective information processing.

13.
Emotional intelligence-related differences in oscillatory responses to emotional facial expressions were investigated in 48 subjects (26 men and 22 women) aged 18–30 years. Participants were instructed to evaluate the emotional expression (angry, happy, or neutral) of each presented face on an analog scale ranging from −100 (very hostile) to +100 (very friendly). Participants high in emotional intelligence (EI) were found to be more sensitive to the emotional content of the stimuli. This was evident both in their subjective evaluations of the stimuli and in stronger EEG theta synchronization at an early (100–500 ms after face presentation) processing stage. Source localization using sLORETA localized this effect to the fusiform gyrus for angry faces and to the posterior cingulate gyrus for happy faces. At a later processing stage (500–870 ms), event-related theta synchronization in high-EI subjects was higher in the left prefrontal cortex for happy faces but lower in the anterior cingulate cortex for angry faces. This suggests the existence of a mechanism that can selectively enhance positive emotions and dampen negative ones.

14.
Ni J, Jiang H, Jin Y, Chen N, Wang J, Wang Z, Luo Y, Ma Y, Hu X. PLoS ONE. 2011;6(4):e18262.
Emotional stimuli have evolutionary significance for the survival of organisms; they are therefore attention-grabbing and processed preferentially. The neural underpinnings of the two principal emotional dimensions of affective space, valence (degree of pleasantness) and arousal (intensity of evoked emotion), have been shown to be dissociable in the olfactory, gustatory, and memory systems. However, the separable roles of valence and arousal in scene perception are poorly understood. In this study, we asked how these two emotional dimensions modulate overt visual attention. Twenty-two healthy volunteers freely viewed images from the International Affective Picture System (IAPS) graded for affective levels of valence and arousal (high, medium, and low). Subjects' heads were immobilized, and eye movements were recorded by camera to track overt shifts of visual attention. Algebraic graph-based approaches were introduced to model scan paths as weighted undirected path graphs, generating global topology metrics that characterize the algebraic connectivity of scan paths. Our data suggest that human subjects show different scanning patterns for stimuli with different affective ratings. Valence-salient stimuli (with neutral arousal) elicited faster and larger shifts of attention, while arousal-salient stimuli (with neutral valence) elicited local scanning, dense attention allocation, and deep processing. Furthermore, our model revealed that the modulatory effect of valence was linearly related to valence level, whereas the relation between the modulatory effect and the level of arousal was nonlinear. Hence, visual attention seems to be modulated by separate mechanisms for valence and arousal.
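For reference, the "algebraic connectivity" the authors derive from scan paths is the second-smallest eigenvalue of the graph Laplacian (the Fiedler value). The sketch below builds a weighted undirected path graph from an ordered fixation sequence and computes that value; weighting edges by saccade amplitude is purely an illustrative assumption, since the paper's exact weighting scheme is not given here.

```python
import numpy as np

def algebraic_connectivity(fixations):
    """Fiedler value of a weighted undirected path graph built from a scan path.

    fixations : ndarray (n, 2) of fixation coordinates in order of occurrence.
    Edge i--(i+1) is weighted by the Euclidean saccade amplitude (assumption;
    any positive edge weighting yields a valid Laplacian)."""
    n = len(fixations)
    W = np.zeros((n, n))
    for i in range(n - 1):
        w = np.linalg.norm(fixations[i + 1] - fixations[i])
        W[i, i + 1] = W[i + 1, i] = w
    L = np.diag(W.sum(axis=1)) - W          # graph Laplacian
    eigvals = np.linalg.eigvalsh(L)         # ascending; eigvals[0] is ~0
    return eigvals[1]                       # second-smallest = Fiedler value

# Hypothetical four-fixation scan path in pixel coordinates.
scan_path = np.array([[100, 120], [340, 180], [360, 400], [150, 390]], float)
print(f"Algebraic connectivity: {algebraic_connectivity(scan_path):.4f}")
```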

15.
Jessen S, Obleser J, Kotz SA. PLoS ONE. 2012;7(4):e36070.
Successful social communication draws strongly on the correct interpretation of others' body and vocal expressions. Both can provide emotional information and often occur simultaneously, yet their interplay has hardly been studied. Using electroencephalography, we investigated the temporal development of their neural interaction in auditory and visual perception. In particular, we tested whether this interaction qualifies as true integration according to multisensory integration principles such as inverse effectiveness. Emotional vocalizations were embedded in either low or high levels of noise and presented with or without video clips of matching emotional body expressions. In both high- and low-noise conditions, a reduction in auditory N100 amplitude was observed for audiovisual stimuli. However, only under high noise did the N100 peak earlier in the audiovisual than in the auditory condition, suggesting facilitation as predicted by the inverse effectiveness principle. Similarly, we observed earlier N100 peaks in response to emotional compared with neutral audiovisual stimuli; this was not the case in the unimodal auditory condition. Furthermore, suppression of beta-band oscillations (15–25 Hz), primarily reflecting biological motion perception, was modulated 200–400 ms after the vocalization. For emotional stimuli, the difference in suppression between audiovisual and auditory stimuli was larger under high than under low noise, whereas no such difference was observed for neutral stimuli. This observation accords with the inverse effectiveness principle and suggests that integration is modulated by emotional content. Overall, the results show that ecologically valid, complex stimuli such as combined body and vocal expressions are effectively integrated very early in processing.
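As an illustration of the beta-band analysis described above, suppression is commonly computed by band-pass filtering to 15–25 Hz, taking the Hilbert envelope, and expressing post-stimulus power (here 200–400 ms after vocalization onset) relative to a pre-stimulus baseline. This is a minimal single-channel sketch under assumed timings, sampling rate, and baseline window; it is not the authors' pipeline.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def beta_suppression(signal, sfreq, onset_s, base=(-0.5, -0.1), win=(0.2, 0.4)):
    """Baseline-relative beta power change (dB) in a post-stimulus window.
    Negative values indicate suppression. Window timings are assumptions."""
    b, a = butter(4, [15, 25], btype="bandpass", fs=sfreq)
    power = np.abs(hilbert(filtfilt(b, a, signal))) ** 2

    def mean_power(t0, t1):
        i0 = int((onset_s + t0) * sfreq)
        i1 = int((onset_s + t1) * sfreq)
        return power[i0:i1].mean()

    return 10 * np.log10(mean_power(*win) / mean_power(*base))

# Hypothetical single-channel trial: 2 s at 500 Hz, stimulus onset at 1.0 s.
rng = np.random.default_rng(1)
x = rng.standard_normal(1000)
print(f"Beta change: {beta_suppression(x, sfreq=500, onset_s=1.0):.2f} dB")
```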

16.
Appearance-based trustworthiness inferences may reflect the misinterpretation of emotional expression cues. Children and adults typically perceive faces that look happy as relatively trustworthy and those that look angry as relatively untrustworthy. Given reports of atypical expression perception in children with Autism Spectrum Disorder (ASD), the current study aimed to determine whether the modulation of trustworthiness judgments by emotional expression cues is also atypical in children with ASD. Cognitively able children with and without ASD, aged 6–12 years, rated the trustworthiness of faces showing happy, angry, and neutral expressions. Trust judgments in children with ASD were significantly modulated by overt happy and angry expressions, like those of typically developing children. Furthermore, subtle emotion cues in neutral faces also influenced trust ratings in both groups. These findings support a powerful influence of emotion cues on perceived trustworthiness, one that extends even to children with social-cognitive impairments.

17.
Emotion and reward have been proposed to be closely linked to conscious experience, but empirical data are lacking. The anterior cingulate cortex (ACC) plays a central role in the hedonic dimension of conscious experience and is thus potentially a key region for interactions between emotion and consciousness. Here we tested the impact of emotion on conscious experience and directly investigated the role of the ACC. We used a masked paradigm that measures conscious reportability in terms of subjective confidence and objective accuracy in identifying a briefly presented stimulus in a forced-choice test. By manipulating emotional valence (positive, neutral, negative) and presentation time (16 ms, 32 ms, 80 ms), we measured the impact of these variables on conscious and subliminal (i.e., below-threshold) processing. First, we tested normal participants using face and word stimuli. Participants were more confident and accurate when consciously seeing happy versus sad/neutral faces and words; when stimuli were presented subliminally, there was no effect of emotion. To investigate the neural basis of this effect, we recorded local field potentials (LFPs) directly in the ACC of a chronic pain patient. The behavioural findings were replicated: the patient was more confident and accurate when (consciously) seeing happy versus sad faces, with no effect in subliminal trials. Mirroring the behavioural findings, LFPs in conscious trials differed significantly between happy and sad faces from around 500 ms (lasting 30 ms), while no effect was found in subliminal trials. We thus demonstrate a striking impact of emotion on conscious experience, with positive emotional stimuli enhancing conscious reportability. In line with previous studies, the data indicate a key role for the ACC, but they go beyond earlier work by providing the first direct evidence of an interaction between emotion and conscious experience in the human ACC.

18.
The present study explored the effect of speaker prosody on the representation of words in memory. To this end, participants were presented with a series of words and asked to remember them for a subsequent recognition test. During study, words were presented auditorily with an emotional or neutral prosody, whereas during test, words were presented visually. Recognition performance was comparable for words studied with emotional and neutral prosody. However, subsequent valence ratings indicated that study prosody changed the affective representation of words in memory: compared to words with neutral prosody, words with sad prosody were later rated as more negative and words with happy prosody as more positive. Interestingly, the participants' ability to remember study prosody failed to predict this effect, suggesting that changes in word valence were implicit and associated with initial word processing rather than word retrieval. Taken together, these results identify a mechanism by which speakers can have sustained effects on listener attitudes towards word referents.

19.
A plethora of research demonstrates that the processing of emotional faces is prioritised over non-emotive stimuli when cognitive resources are limited (known as 'emotional superiority'). However, there is debate as to whether competition for processing resources produces emotional superiority per se or, more specifically, threat superiority. To investigate the prioritisation of emotional stimuli for storage in visual short-term memory (VSTM), we devised an original VSTM report procedure using schematic (angry, happy, neutral) faces in which processing competition was manipulated. In Experiment 1, display exposure time was manipulated to create competition between stimuli: participants (n = 20) had to recall a probed stimulus from a set size of four under high (150 ms array exposure) and low (400 ms array exposure) perceptual processing competition. For the high-competition condition (150 ms exposure), results revealed an emotional superiority effect per se. In Experiment 2 (n = 20), we increased competition by manipulating set size (three versus five stimuli) whilst maintaining a constrained array exposure duration of 150 ms. Here, for the five-stimulus set size (i.e., maximal competition), only threat superiority emerged. These findings demonstrate attentional prioritisation of emotional faces for storage in VSTM. We argue that task demands modulated the availability of processing resources and consequently the relative magnitude of the emotional/threat superiority effect, with only threatening stimuli prioritised for storage in VSTM under more demanding processing conditions. Our results are discussed in light of models and theories of visual selection; they not only combine the two strands of research (visual selection and emotion) but also highlight that a critical factor in the processing of emotional stimuli is the availability of processing resources, which is further constrained by task demands.

20.
Perception, cognition, and emotion do not operate along segregated pathways; rather, various sources of evidence support their adaptive interaction. For instance, the aesthetic appraisal of powerful mood inducers like music can bias the facial expression of emotions towards mood congruency. In four experiments we showed similar mood-congruency effects elicited by the comfort or discomfort of body actions. Using a novel Motor Action Mood Induction Procedure, we had participants perform comfortable or uncomfortable visually guided reaches and tested them in a facial emotion identification task. Through the putative mediation of motor-action-induced mood, action comfort enhanced the quality of the participant's global experience (a neutral face appeared happy and a slightly angry face neutral), while action discomfort made a neutral face appear angry and a slightly happy face neutral. Furthermore, uncomfortable (but not comfortable) reaching improved sensitivity in identifying emotional faces and reduced the identification time of facial expressions, possibly an effect of hyper-arousal from an unpleasant bodily experience.
