Similar literature
20 similar records found
1.
The aim of the present study was to question untested assumptions about the nature of the expression of attentional bias (AB) towards and away from threat stimuli. We tested the idea that high trait-anxious individuals (N = 106; mean (SD) age = 23.9 (3.2) years; 68% women) show a stable AB towards multiple categories of threatening information using the emotional visual dot-probe task. AB with respect to five categories of threat stimuli (angry faces, attacking dogs, attacking snakes, pointed weapons, violent scenes) was evaluated. In contrast to current theories, we found that 34% of participants expressed AB towards threat stimuli, 20.8% expressed AB away from threat stimuli, and 34% expressed AB towards some categories of threat stimuli and away from others. The multiple observed expressions of AB were not an artifact of a specific criterion AB-score cut-off; were not specific to certain categories of threat stimuli; were not an artifact of differences in within-subject variability in reaction time; and were not accounted for by individual differences in anxiety-related variables. Findings are conceptualized as reflecting the understudied dynamics of AB expression, with implications for AB measurement and quantification, etiology, correlates, and intervention research.
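The bias scores underlying these dot-probe analyses are conventionally computed as the mean reaction-time difference between incongruent trials (probe replacing the neutral stimulus) and congruent trials (probe replacing the threat stimulus). A minimal sketch of that convention, with hypothetical reaction times; the function name and data are illustrative, not taken from the study:

```python
from statistics import mean

def attentional_bias_score(rt_congruent, rt_incongruent):
    """Dot-probe attentional bias score in ms.

    Congruent trials: probe appears at the threat location.
    Incongruent trials: probe appears at the neutral location.
    Positive scores indicate a bias toward threat (faster responses
    when the probe replaces the threat); negative scores indicate
    avoidance of threat.
    """
    return mean(rt_incongruent) - mean(rt_congruent)

# Hypothetical reaction times (ms) for one participant and one
# threat category -- illustrative values only.
congruent = [512, 498, 505, 520, 490]
incongruent = [540, 532, 528, 551, 519]
print(attentional_bias_score(congruent, incongruent))  # 29.0
```

A per-category score like this, computed separately for each of the five threat categories, is what allows a participant to show vigilance for some categories and avoidance for others.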

2.
Rapid detection of evolutionarily relevant threats (e.g., fearful faces) is important for human survival, and the ability to rapidly detect fearful faces varies considerably across individuals. The present study aimed to investigate the relationship between behavioral detection ability and brain activity, using both event-related potential (ERP) and event-related oscillation (ERO) measurements. Faces with fearful or neutral expressions were presented for 17 ms or 200 ms in a backward masking paradigm, and forty-two participants were required to discriminate the facial expressions of the masked faces. The behavioral sensitivity index d′ showed that the ability to detect rapidly presented, masked fearful faces varied across participants. ANOVA showed that facial expression, hemisphere, and presentation duration affected grand-mean ERP (N1, P1, and N170) and ERO (below 20 Hz, lasting from 100 to 250 ms post-stimulus, mainly in the theta band) brain activity. More importantly, the overall detection ability of the 42 participants was significantly correlated with the emotion effect (fearful vs. neutral) on both ERP (r = 0.403) and ERO (r = 0.552) measurements: a higher d′ value corresponded to a larger emotional effect (fearful minus neutral) on N170 amplitude and on the ERO spectral power over the right hemisphere. These results suggest a close link between behavioral detection ability and both the N170 amplitude and the ERO spectral power below 20 Hz. The size of the emotional effect between fearful and neutral faces in brain activity may reflect the level of conscious awareness of fearful faces.
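The sensitivity index d′ used here is the standard signal-detection measure, d′ = z(hit rate) − z(false-alarm rate). A stdlib-only sketch with illustrative rates (note that hit/false-alarm rates of exactly 0 or 1 need a correction, e.g. log-linear, before this formula applies):

```python
from statistics import NormalDist

def d_prime(hit_rate, false_alarm_rate):
    """Signal-detection sensitivity index d'.

    d' = z(H) - z(FA), where z is the inverse of the standard
    normal CDF. Higher values mean better discrimination of fearful
    from neutral faces, independent of response bias.
    """
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(false_alarm_rate)

# A participant who detects 85% of masked fearful faces but
# false-alarms on 20% of neutral faces (illustrative rates):
print(round(d_prime(0.85, 0.20), 2))  # 1.88
```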

3.

Background

Little is known about the neural basis of elite performers and their optimal performance in extreme environments. The purpose of this study was to examine brain processing differences between elite warfighters and comparison subjects in brain structures that are important for emotion processing and interoception.

Methodology/Principal Findings

Off-duty Navy Sea, Air, and Land forces (SEALs; n = 11) were compared with 23 healthy male volunteers performing a simple emotion face-processing task during functional magnetic resonance imaging. Irrespective of the target emotion, elite warfighters relative to comparison subjects showed relatively greater right-sided insula activation but attenuated left-sided insula activation. Navy SEALs showed selectively greater bilateral insula activation to angry target faces relative to fearful or happy target faces; this was not accounted for by contrasting positive versus negative emotions. Finally, these individuals also showed slower response latencies to fearful and happy target faces than did comparison subjects.

Conclusions/Significance

These findings support the hypothesis that elite warfighters deploy greater processing resources toward potential threat-related facial expressions and reduced processing resources to non-threat-related facial expressions. Moreover, rather than expending more effort in general, elite warfighters show more focused neural and performance tuning. In other words, greater neural processing resources are directed toward threat stimuli and processing resources are conserved when facing a nonthreat stimulus situation.

4.
Rigoulot S, Pell MD. PLoS ONE 2012, 7(1): e30740
Interpersonal communication involves the processing of multimodal emotional cues, particularly facial expressions (visual modality) and emotional speech prosody (auditory modality), which can interact during information processing. Here, we investigated whether the implicit processing of emotional prosody systematically influences gaze behavior to facial expressions of emotion. We analyzed the eye movements of 31 participants as they scanned a visual array of four emotional faces portraying fear, anger, happiness, and neutrality, while listening to an emotionally inflected pseudo-utterance (“Someone migged the pazing”) spoken in a congruent or incongruent tone. Participants heard the emotional utterance during the first 1250 ms of a five-second visual array and then performed an immediate recall decision about the face they had just seen. The frequency and duration of first saccades and of total looks in three temporal windows (0–1250 ms, 1250–2500 ms, 2500–5000 ms) were analyzed according to the emotional content of faces and voices. Results showed that participants looked longer and more frequently at faces that matched the prosody in all three time windows (an emotion congruency effect), although this effect was often emotion-specific (with the greatest effects for fear). Effects of prosody on visual attention to faces persisted over time and could be detected long after the auditory information was no longer present. These data imply that emotional prosody is processed automatically during communication and that these cues play a critical role in how humans respond to related visual cues in the environment, such as facial expressions.

5.

Background

It is well known that facial expressions represent important social cues. In humans expressing facial emotion, fear may be configured to maximize sensory exposure (e.g., increased visual input) whereas disgust can reduce sensory exposure (e.g., decreased visual input). To investigate whether such effects also extend to the attentional system, we used the “attentional blink” (AB) paradigm. Many studies have documented that the second target (T2) of a pair is typically missed when presented within a time window of about 200–500 ms from the first to-be-detected target (T1; the AB effect). It has recently been proposed that the AB effect depends on the efficiency of a gating system which facilitates the entrance of relevant input into working memory while inhibiting irrelevant input. Following the inhibitory response to post-T1 distractors, prolonged inhibition of the subsequent T2 is observed. In the present study, we hypothesized that processing facial expressions of emotion would influence this attentional gating: fearful faces would increase, but disgust faces would decrease, inhibition of the second target.

Methodology/Principal Findings

We showed that processing fearful versus disgust faces has different effects on these attentional processes: processing fear faces impaired the detection of T2 to a greater extent than did processing disgust faces. This finding implies emotion-specific modulation of attention.

Conclusions/Significance

Based on the recent literature on attention, our finding suggests that processing fear-related stimuli exerts greater inhibitory responses on distractors relative to processing disgust-related stimuli. This finding is of particular interest for researchers examining the influence of emotional processing on attention and memory in both clinical and normal populations. For example, future research could extend upon the current study to examine whether inhibitory processes invoked by fear-related stimuli may be the mechanism underlying the enhanced learning of fear-related stimuli.

6.

Background

Previous research on the reward system in autism spectrum disorders (ASD) suggests that children with ASD anticipate and process social rewards differently than typically developing (TD) children, but it has focused on the reward value of unfamiliar face stimuli. Children with ASD also process faces differently than their TD peers, and here too previous work has examined unfamiliar faces; less is known about how children with ASD process familiar faces. The current study investigated how children with ASD anticipate rewards accompanied by familiar versus unfamiliar faces.

Methods

The stimulus preceding negativity (SPN) of the event-related potential (ERP) was utilized to measure reward anticipation. Participants were 6- to 10-year-olds with (N = 14) and without (N = 14) ASD. Children were presented with rewards accompanied by incidental face or non-face stimuli that were either familiar (caregivers) or unfamiliar. All non-face stimuli were composed of scrambled face elements in the shape of arrows, controlling for visual properties.

Results

No significant differences between familiar versus unfamiliar faces were found for either group. When collapsing across familiarity, TD children showed larger reward anticipation to face versus non-face stimuli, whereas children with ASD did not show differential responses to these stimulus types. Magnitude of reward anticipation to faces was significantly correlated with behavioral measures of social impairment in the ASD group.

Conclusions

The findings do not provide evidence for differential reward anticipation for familiar versus unfamiliar face stimuli in children with or without ASD. These findings replicate previous work suggesting that TD children anticipate rewards accompanied by social stimuli more than rewards accompanied by non-social stimuli. The results do not support the idea that familiarity normalizes reward anticipation in children with ASD. Our findings also suggest that magnitude of reward anticipation to faces is correlated with levels of social impairment for children with ASD.

7.

Background

Classic work on visual short-term memory (VSTM) suggests that people store a limited amount of items for subsequent report. However, when human observers are cued to shift attention to one item in VSTM during retention, it seems as if there is a much larger representation, which keeps additional items in a more fragile VSTM store. Thus far, it is not clear whether the capacity of this fragile VSTM store indeed exceeds the traditional capacity limits of VSTM. The current experiments address this issue and explore the capacity, stability, and duration of fragile VSTM representations.

Methodology/Principal Findings

We presented cues in a change-detection task either just after offset of the memory array (iconic cue), 1,000 ms after offset of the memory array (retro-cue), or after onset of the probe array (post-cue). We observed three stages in visual information processing: 1) iconic memory with unlimited capacity; 2) a fragile VSTM store lasting about four seconds, with a capacity at least a factor of two higher than that of 3) the robust and capacity-limited form of VSTM. Iconic memory seemed to depend on the strength of the positive after-image resulting from the memory display and was virtually absent under conditions of isoluminance or when intervening light masks were presented, suggesting that iconic memory is driven by prolonged retinal activation beyond stimulus duration. Fragile VSTM representations were not affected by light masks, but were completely overwritten by irrelevant pattern masks that spatially overlapped the memory array.

Conclusions/Significance

We find that immediately after a stimulus has disappeared from view, subjects can still access information from iconic memory because they can see an after-image of the display. After that period, human observers can still access a substantial, but somewhat more limited amount of information from a high-capacity, but fragile VSTM that is overwritten when new items are presented to the eyes. What is left after that is the traditional VSTM store, with a limit of about four objects. We conclude that human observers store more sustained representations than is evident from standard change detection tasks and that these representations can be accessed at will.
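Capacity in change-detection tasks of this kind is commonly summarized with Cowan's K, computed as K = N × (hit rate − false-alarm rate) for a set size of N items. A minimal sketch of that estimator; the numbers are illustrative, not the study's data:

```python
def cowans_k(set_size, hit_rate, false_alarm_rate):
    """Cowan's K capacity estimate for single-probe change detection:
    K = N * (H - FA). Roughly, the number of items held in the store
    at the moment the probe arrives."""
    return set_size * (hit_rate - false_alarm_rate)

# Illustrative: a retro-cue condition outperforming a post-cue
# condition at the same set size of 8 items, as when a fragile
# store is still accessible at cue time.
print(cowans_k(8, 0.90, 0.10))  # 6.4
print(cowans_k(8, 0.65, 0.15))  # 4.0
```

Comparing K across cue conditions at a fixed set size is one way to express the claim that the fragile store's capacity exceeds the roughly four-object limit of robust VSTM.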

8.
Snakes have posed a serious threat to primates throughout evolution, and bites by venomous snakes still cause significant morbidity and mortality in tropical regions of the world. According to the Snake Detection Theory (SDT; Isbell, 2006, 2009), the vital need to detect camouflaged snakes provided strong evolutionary pressure to develop astute perceptual capacity in animals that were potential targets of snake attacks. We performed a series of behavioral tests that assessed snake detection under conditions that may have been critical for survival. We used spiders as the control stimulus because they are also a common object of phobias and rated negatively by the general population, and are thus commonly lumped together with snakes as “evolutionarily fear-relevant”. Across four experiments (N = 205) we demonstrate an advantage in snake detection that was particularly obvious under visual conditions known to impede detection of a wide array of common stimuli, for example brief stimulus exposures, presentation in the visual periphery, and camouflage in a cluttered environment. Our results demonstrate a striking independence of snake detection from ecological factors that impede the detection of other stimuli, suggesting that, consistent with the SDT, it reflects a specific biological adaptation. Nonetheless, the empirical tests we report are limited to only one aspect of this rich theory, which integrates findings across a wide array of scientific disciplines.

9.
Congenital prosopagnosia is a lifelong face-recognition impairment in the absence of evidence for structural brain damage. To study its neural correlates, we measured the face-sensitive N170 component of the event-related potential in three members of the same family (a father, 56 y; son, 25 y; and daughter, 22 y) and in age-matched neurotypical participants (young controls: n = 14, 24.5 ± 2.1 y; old controls: n = 6, 57.3 ± 5.4 y). To compare the face sensitivity of the N170 in congenital prosopagnosic and neurotypical participants, we measured event-related potentials to faces and to phase-scrambled random-noise stimuli. In neurotypicals we found significantly larger N170 amplitudes for faces than for noise stimuli, reflecting normal early face processing. The congenital prosopagnosic participants, by contrast, showed reduced face sensitivity of the N170, and this was due to a larger-than-normal noise-elicited N170 rather than a smaller face-elicited N170. Interestingly, single-trial analysis revealed that the lack of face sensitivity in congenital prosopagnosia is related to larger oscillatory power and phase-locking in the theta frequency band (4–7 Hz, 130–190 ms), as well as to lower intertrial jitter of the response latency, for the noise stimuli. Altogether, these results suggest that congenital prosopagnosia reflects a deficit in the early, structural-encoding stage of face perception that filters between face and non-face stimuli.

10.

Background

Major depressive disorder (MDD) is associated with a mood-congruent processing bias in the amygdala toward face stimuli portraying sad expressions that is evident even when such stimuli are presented below the level of conscious awareness. The extended functional anatomical network that maintains this response bias has not been established, however.

Aims

To identify neural network differences in the hemodynamic response to implicitly presented facial expressions between depressed and healthy control participants.

Method

Unmedicated depressed participants with MDD (n = 22) and healthy controls (HC; n = 25) underwent functional MRI as they viewed face stimuli showing sad, happy, or neutral expressions, presented using a backward masking design. The blood-oxygen-level-dependent (BOLD) signal was measured to identify regions where the hemodynamic response to the emotionally valenced stimuli differed between groups.

Results

The MDD subjects showed greater BOLD responses than the controls to masked-sad versus masked-happy faces in the hippocampus, amygdala and anterior inferotemporal cortex. While viewing both masked-sad and masked-happy faces relative to masked-neutral faces, the depressed subjects showed greater hemodynamic responses than the controls in a network that included the medial and orbital prefrontal cortices and anterior temporal cortex.

Conclusions

Depressed and healthy participants showed distinct hemodynamic responses to masked-sad and masked-happy faces in neural circuits known to support the processing of emotionally valenced stimuli and to integrate the sensory and visceromotor aspects of emotional behavior. Altered function within these networks in MDD may establish and maintain illness-associated differences in the salience of sensory/social stimuli, such that attention is biased toward negative and away from positive stimuli.

11.
It is now apparent that the visual system reacts to stimuli very fast, with many brain areas activated within 100 ms. It is, however, unclear how much detail is extracted about stimulus properties in the early stages of visual processing. Here, using magnetoencephalography we show that the visual system separates different facial expressions of emotion well within 100 ms after image onset, and that this separation is processed differently depending on where in the visual field the stimulus is presented. Seven right-handed males participated in a face affect recognition experiment in which they viewed happy, fearful and neutral faces. Blocks of images were shown either at the center or in one of the four quadrants of the visual field. For centrally presented faces, the emotions were separated fast, first in the right superior temporal sulcus (STS; 35–48 ms), followed by the right amygdala (57–64 ms) and medial pre-frontal cortex (83–96 ms). For faces presented in the periphery, the emotions were separated first in the ipsilateral amygdala and contralateral STS. We conclude that amygdala and STS likely play a different role in early visual processing, recruiting distinct neural networks for action: the amygdala alerts sub-cortical centers for appropriate autonomic system response for fight or flight decisions, while the STS facilitates more cognitive appraisal of situations and links appropriate cortical sites together. It is then likely that different problems may arise when either network fails to initiate or function properly.

12.
Appropriate response to companions’ emotional signals is important for all social creatures. The emotional expressions of humans and non-human animals have analogies in form and function, suggesting shared evolutionary roots, but very little is known about how animals other than primates view and process facial expressions. In primates, threat-related facial expressions evoke exceptional viewing patterns compared with neutral or positive stimuli. Here, we explore whether domestic dogs (Canis familiaris) have such an attentional bias toward threatening social stimuli and whether observed emotional expressions affect dogs’ gaze fixation distribution among the facial features (eyes, midface, and mouth). We recorded the voluntary eye gaze of 31 domestic dogs during viewing of facial photographs of humans and dogs with three emotional expressions (threatening, pleasant, and neutral). We found that dogs’ gaze fixations spread systematically among facial features. The distribution of fixations was altered by the seen expression, but the eyes were the most probable targets of the first fixations and gathered longer looking durations than the mouth regardless of the viewed expression. Examination of the inner facial features as a whole revealed more pronounced scanning differences among expressions, suggesting that dogs do not base their perception of facial expressions on the viewing of single structures but on the interpretation of the composition formed by the eyes, midface, and mouth. Dogs evaluated social threat rapidly, and this evaluation led to an attentional bias that depended on the depicted species: threatening conspecific faces evoked heightened attention, whereas threatening human faces instead evoked an avoidance response. We propose that threatening signals carrying differential biological validity are processed via distinctive neurocognitive pathways, and both of these mechanisms may have adaptive significance for domestic dogs. The findings provide a novel perspective on the processing of emotional expressions and sensitivity to social threat in non-primates.

13.
The positional-specificity effect refers to enhanced performance in visual short-term memory (VSTM) when the recognition probe is presented at the same location as the sample had been, even though location is irrelevant to the match/nonmatch decision. We investigated the mechanisms underlying this effect with behavioral and fMRI studies of object change-detection performance. To test whether the positional-specificity effect is a direct consequence of active storage in VSTM, we varied memory load, reasoning that it should be observed for all objects presented in a sub-span array of items. The results, however, indicated that although robust with a memory load of 1, the positional-specificity effect was restricted to the second of two sequentially presented sample stimuli in a load-of-2 experiment. An additional behavioral experiment showed that this disruption was not due to the increased load per se, because actively processing a second object, even in the absence of a storage requirement, also eliminated the effect. These behavioral findings suggest that, during tests of object memory, position-related information is not actively stored in VSTM, but may be retained in a passive tag that marks the most recent site of selection. The fMRI data were consistent with this interpretation: they failed to reveal a location-specific bias in sustained delay-period activity, but revealed an enhanced response to recognition probes that matched the location of that trial’s sample stimulus.

14.
Criminal investigations often use photographic evidence to identify suspects. Here we combined robust face perception and high-resolution photography to mine face photographs for hidden information. By zooming in on high-resolution face photographs, we were able to recover images of unseen bystanders from reflections in the subjects' eyes. To establish whether these bystanders could be identified from the reflection images, we presented them as stimuli in a face matching task (Experiment 1). Accuracy in the face matching task was well above chance (50%), despite the unpromising source of the stimuli. Participants who were unfamiliar with the bystanders' faces (n = 16) performed at 71% accuracy [t(15) = 7.64, p < .0001, d = 1.91], and participants who were familiar with the faces (n = 16) performed at 84% accuracy [t(15) = 11.15, p < .0001, d = 2.79]. In a test of spontaneous recognition (Experiment 2), observers could reliably name a familiar face from an eye-reflection image. For crimes in which the victims are photographed (e.g., hostage taking, child sex abuse), reflections in the eyes of the photographic subject could help to identify perpetrators.
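For a one-sample design like Experiment 1, the reported test statistic and effect size are linked by d = t / √n, so the summary values above can be cross-checked directly. A minimal sketch of the accuracy-versus-chance test; the function names are ours, and only the paper's reported summary values are used:

```python
from math import sqrt

def one_sample_t(mean_acc, sd_acc, n, chance=0.5):
    """One-sample t statistic for mean accuracy against chance level:
    t = (m - chance) / (s / sqrt(n))."""
    return (mean_acc - chance) / (sd_acc / sqrt(n))

def cohens_d_from_t(t, n):
    """For a one-sample t-test, Cohen's d = t / sqrt(n)."""
    return t / sqrt(n)

# Consistency check on the reported unfamiliar-viewer result:
# t(15) = 7.64 with n = 16 implies d = 7.64 / 4 = 1.91,
# matching the reported effect size.
print(round(cohens_d_from_t(7.64, 16), 2))  # 1.91
```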

15.
Novel stimuli often require a rapid reallocation of sensory processing resources to determine the significance of the event and the appropriate behavioral response. Both the amygdala and the visual cortex are central elements of the neural circuitry responding to novelty, demonstrating increased activity to new as compared to highly familiarized stimuli. Further, these brain areas are intimately connected, and the amygdala may thus be a key region for directing sensory processing resources to novel events. Although knowledge regarding the neurocircuitry of novelty detection is gradually increasing, we still lack a basic understanding of the conditions that are necessary and sufficient for novelty-specific responses in the human amygdala and visual cortices, and of whether these brain areas interact during novelty detection. In the present study, we investigated the response of the amygdala and the visual cortex to novelty by comparing functional MRI activity between first- and second-time presentation of a series of emotional faces in an event-related task. We observed a significant decrease in amygdala and visual cortex activity after only a single stimulus exposure. Interestingly, this decrease in responsiveness was smaller for subjects with a high state-anxiety score. Further, novel face stimuli were associated with a relative increase in the functional coupling between the amygdala and the inferior occipital gyrus (BA 18). We therefore suggest that the amygdala is involved in fast sensory boosting that may be important for attention reallocation to novel events, and that the strength of this response depends on individual state anxiety.

16.
Pell MD, Kotz SA. PLoS ONE 2011, 6(11): e27256
How quickly do listeners recognize emotions from a speaker's voice, and does the time course for recognition vary by emotion type? To address these questions, we adapted the auditory gating paradigm to estimate how much vocal information is needed for listeners to categorize five basic emotions (anger, disgust, fear, sadness, happiness) and neutral utterances produced by male and female speakers of English. Semantically anomalous pseudo-utterances (e.g., “The rivix jolled the silling”) conveying each emotion were divided into seven gate intervals according to the number of syllables that listeners heard from sentence onset. Participants (n = 48) judged the emotional meaning of stimuli presented at each gate duration interval in a successive, blocked presentation format. Analyses examined how recognition of each emotion evolves as an utterance unfolds and estimated the “identification point” for each emotion. Results showed that anger, sadness, fear, and neutral expressions are recognized more accurately at short gate intervals than happiness, and particularly disgust; however, as speech unfolds, recognition of happiness improves significantly towards the end of the utterance (and fear is recognized more accurately than other emotions). When the gate associated with the emotion identification point of each stimulus was calculated, the data indicated that fear (M = 517 ms), sadness (M = 576 ms), and neutral (M = 510 ms) expressions were identified from shorter acoustic events than the other emotions. These data reveal differences in the underlying time course for conscious recognition of basic emotions from vocal expressions, which should be accounted for in studies of emotional speech processing.
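In gating studies, the "identification point" is typically the earliest gate at which the listener gives the target response and never changes it at any later gate. A minimal sketch of that scoring rule, assuming responses are coded per gate (the function and data names are illustrative, not from the study):

```python
def identification_point(responses, target):
    """Return the 0-based index of the earliest gate at which the
    response equals the target emotion and stays correct at every
    subsequent gate; None if the item is never stably identified."""
    point = None
    for i, resp in enumerate(responses):
        if resp == target:
            if point is None:
                point = i
        else:
            point = None  # a later wrong answer resets the point
    return point

# Illustrative gate-by-gate responses for one fear stimulus across
# seven gates: identified stably from the third gate onward.
gates = ["neutral", "sadness", "fear", "fear", "fear", "fear", "fear"]
print(identification_point(gates, "fear"))  # 2
```

Multiplying the identified gate index by the gate's acoustic duration gives the millisecond estimates (e.g., M = 517 ms for fear) reported above.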

17.
The amygdala has been regarded as a key substrate for emotion processing. However, the engagement of the left and right amygdala during the early perceptual processing of different emotional faces remains unclear. We investigated the temporal profiles of oscillatory gamma activity in the amygdala, and the effective connectivity of the amygdala with the thalamus and cortical areas, during implicit emotion-perception tasks using event-related magnetoencephalography (MEG). We found that within 100 ms after stimulus onset the right amygdala habituated to emotional faces rapidly (with a duration of around 20–30 ms), whereas activity in the left amygdala (with a duration of around 50–60 ms) was sustained longer than that in the right. Our data suggest that the right amygdala could be linked to autonomic arousal generated by facial emotions, while the left amygdala might be involved in decoding or evaluating expressive faces during early perceptual emotion processing. The effective-connectivity results provide evidence that only negative emotional processing engages both cortical and subcortical pathways connected to the right amygdala, reflecting its evolutionary (survival) significance. These findings demonstrate the asymmetric engagement of the bilateral amygdala in emotional face processing, as well as the capability of MEG for assessing thalamo-cortico-limbic circuitry.

18.
Recent decades have provided evidence of auditory laterality across vertebrates, offering important new insights into the origin of human language. Factors such as the social value of sounds (e.g., specificity, familiarity) and their emotional value have been shown to influence hemispheric specialization, but little is known about the crossed effect of these two factors in animals, and human-animal comparative studies using the same methodology are rare. In our study, we adapted the head-turn paradigm, a widely used non-invasive method, to 8–9-year-old schoolgirls and to adult female Campbell's monkeys, focusing on head and/or eye orientations in response to sound playbacks. We broadcast communicative signals (monkeys: calls; humans: speech) emitted by familiar individuals presenting distinct degrees of social value (monkeys: conspecific group members vs. heterospecific neighbours; girls: from the same vs. a different classroom) and emotional value (monkeys: contact vs. threat calls; humans: friendly vs. aggressive intonation). We found a crossed-categorical effect of social and emotional values in both species, since only “negative” voices from same-class/group members elicited significant auditory laterality (Wilcoxon tests: monkeys, T = 0, p = 0.03; girls, T = 4.5, p = 0.03). Moreover, we found differences between species, as a left- and a right-hemisphere preference was found in humans and monkeys respectively. Furthermore, while monkeys almost exclusively responded by turning their heads, the girls sometimes also just moved their eyes. This study supports theories proposing differential roles for the two hemispheres in primates' auditory laterality, and indicates that more systematic species comparisons are needed before an evolutionary scenario can be proposed. Moreover, the choice of sound stimuli and behavioural measures in such studies deserves careful attention.
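The Wilcoxon T values reported above (e.g., T = 0 for monkeys) come from the signed-rank test on per-subject orientation differences, such as right-turn minus left-turn counts. A stdlib-only sketch of the T statistic with hypothetical data; real analyses should use a vetted implementation such as scipy.stats.wilcoxon:

```python
def wilcoxon_t(differences):
    """Wilcoxon signed-rank T: rank the absolute non-zero
    differences (average ranks for ties), then return the smaller of
    the positive-rank and negative-rank sums. T = 0 means every
    subject shifted in the same direction."""
    d = [x for x in differences if x != 0]        # drop zero differences
    order = sorted(range(len(d)), key=lambda i: abs(d[i]))
    ranks = [0.0] * len(d)
    i = 0
    while i < len(order):                          # average tied ranks
        j = i
        while j < len(order) and abs(d[order[j]]) == abs(d[order[i]]):
            j += 1
        avg = (i + 1 + j) / 2                      # mean of ranks i+1..j
        for k in range(i, j):
            ranks[order[k]] = avg
        i = j
    w_pos = sum(r for r, x in zip(ranks, d) if x > 0)
    w_neg = sum(r for r, x in zip(ranks, d) if x < 0)
    return min(w_pos, w_neg)

# Hypothetical right-minus-left orientation differences for 6
# subjects; all positive, so T = 0, as in the significant conditions.
print(wilcoxon_t([2, 1, 3, 2, 4, 1]))  # 0
```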

19.

Background

In ecological situations, threatening stimuli often emerge in peripheral vision. Such aggressive signals must trigger rapid attention to the periphery to allow a fast and well-adapted motor reaction. Several lines of evidence converge on the hypothesis that peripheral presentation of danger can trigger a fast arousal network that is potentially independent of conscious awareness.

Methodology/Principal Findings

In the present MEG study, the spatio-temporal dynamics of the neural processing of danger-related stimuli were explored as a function of the stimuli's position in the visual field. Fearful and neutral faces were briefly presented in the central or peripheral visual field and were followed by target face stimuli. An event-related beamformer source analysis model was applied in three time windows following the first face presentation: 80–130 ms, 140–190 ms, and 210–260 ms. The frontal lobe and the medial part of the right temporal lobe, including the amygdala, responded with latencies as short as 80 ms to fear occurring in peripheral vision. For central presentation, fearful faces evoked the classical neuronal activity along the occipito-temporal visual pathway between 140 and 190 ms.

Conclusions

Thus, the high spatio-temporal resolution of MEG revealed a fast response of a network involving medial temporal and frontal structures in the processing of fear-related stimuli occurring unconsciously in the peripheral visual field. Whereas centrally presented stimuli are precisely processed by the ventral occipito-temporal cortex, danger-related stimuli appearing in the peripheral visual field are more efficient at producing a fast automatic alert response, possibly conveyed by subcortical structures.

20.
Emotional intelligence-related differences in oscillatory responses to emotional facial expressions were investigated in 48 subjects (26 men and 22 women) aged 18–30 years. Participants were instructed to evaluate the emotional expression (angry, happy, or neutral) of each presented face on an analog scale ranging from −100 (very hostile) to +100 (very friendly). Participants high in emotional intelligence (EI) were found to be more sensitive to the emotional content of the stimuli. This was evident both in their subjective evaluation of the stimuli and in a stronger EEG theta synchronization at an earlier processing stage (between 100 and 500 ms after face presentation). Source localization using sLORETA showed that this effect was localized in the fusiform gyrus for angry faces and in the posterior cingulate gyrus for happy faces. At a later processing stage (500–870 ms), event-related theta synchronization in high-EI subjects was higher in the left prefrontal cortex for happy faces but lower in the anterior cingulate cortex for angry faces. This suggests the existence of a mechanism that can selectively enhance positive emotions and reduce negative emotions.
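Event-related theta synchronization of the kind reported in this last study is conventionally quantified as the percentage change of band power relative to a pre-stimulus baseline (the classic ERS/ERD measure). A minimal sketch with illustrative power values; the study's own analysis pipeline is not specified here:

```python
def event_related_synchronization(power_event, power_baseline):
    """Event-related (de)synchronization in percent:
    ERS% = (P_event - P_baseline) / P_baseline * 100.
    Positive values indicate synchronization (a band-power increase
    relative to the pre-stimulus baseline); negative values indicate
    desynchronization."""
    return (power_event - power_baseline) / power_baseline * 100

# Illustrative theta-band (4-7 Hz) power values, arbitrary units:
# post-stimulus power 6.0 vs. baseline power 4.0.
print(event_related_synchronization(6.0, 4.0))  # 50.0
```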
