Similar Literature
 20 similar articles found (search time: 250 ms)
1.
A mismatch negativity study of cross-modal attention   Cited by: 1 (self-citations: 0, other: 1)
Using a "cross-modal delayed response" paradigm, which improves the purity of the unattended condition and removes target-task factors from the deviant stimuli, ERPs were recorded from 12 healthy young adults under attended and unattended conditions. The experiment had two parts: (1) attend to the visual channel while ignoring the auditory channel; (2) attend to the auditory channel while ignoring the visual channel. Analysis focused on the deviance-related component obtained by subtracting the standard-stimulus ERP from the deviant-stimulus ERP. Both the visual and auditory channels produced MMN, N2b, and P3 under attention, but mainly MMN when unattended. Auditory and visual MMN shared the following features: under attention, the maximal MMN amplitude was distributed over the respective primary sensory projection areas, whereas under inattention the maximal MMN amplitude for both modalities was distributed over frontocentral sites. Visual and auditory MMN amplitudes did not differ significantly. MMN amplitude and its scalp distribution were unaffected by attention, suggesting that MMN amplitude is an important ERP index of automatic processing; however, both auditory and visual MMN latencies were affected by attention, indicating that MMN reflects not only automatic processing but is also related to controlled processing.
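The deviant-minus-standard subtraction used in this line of work can be sketched in a few lines of NumPy. The waveforms below are toy Gaussians, and the 100-300 ms peak-search window is an illustrative assumption, not a parameter from the study:

```python
import numpy as np

# Hypothetical single-channel ERP averages (microvolts) at 500 Hz;
# in a real study these come from averaging many EEG epochs per condition.
fs = 500                            # sampling rate (Hz)
t = np.arange(-0.1, 0.5, 1 / fs)    # epoch from -100 ms to +500 ms

# Toy waveforms: the deviant response carries an extra negativity ~200 ms.
standard_erp = 2.0 * np.exp(-((t - 0.10) / 0.05) ** 2)
deviant_erp = standard_erp - 3.0 * np.exp(-((t - 0.20) / 0.04) ** 2)

# MMN is the deviant-minus-standard difference wave.
mmn = deviant_erp - standard_erp

# Quantify the MMN as the most negative amplitude in a 100-300 ms window.
window = (t >= 0.10) & (t <= 0.30)
peak_amp = mmn[window].min()                      # peak amplitude (negative)
peak_lat = t[window][mmn[window].argmin()]        # peak latency (s)
```

The same subtraction is applied per electrode to obtain the scalp distribution of the MMN.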

2.
杨洁敏, 袁加锦, 李红. Science in China Series C, 2009, 39(10): 995-1004
Using event-related potentials, this study examined how emotional expectancy influences human sensitivity to fearful faces. ERPs were recorded while participants made gender judgments on fearful and neutral faces under cued and uncued conditions. Behaviorally, accuracy for gender judgments of fearful faces was significantly lower than for neutral faces in the unpredictable condition, indicating that unpredictable fearful emotion markedly interfered with the task. The ERP data showed that, in the unpredictable condition, fearful faces elicited larger amplitudes than neutral faces at P2 and in the 200-250 ms interval, a significant emotion effect; in the predictable condition, the brain responded similarly to fearful and neutral faces, and the emotion effect was markedly reduced. Thus, the negativity bias in human emotion processing is modulated by emotional expectancy: people are sensitive to unpredictable negative emotion, whereas neural sensitivity to fearful emotion decreases when its occurrence can be anticipated in advance.

3.
Numerous studies of memory, attention, and decision-making have found that older adults show a positivity bias, or avoidance of negative emotion, when processing emotional stimuli. This study used an oddball variant in which emotional face pictures served as task-irrelevant distractors. EEG was recorded to examine how emotional valence modulated brain potentials and to trace the time course of emotion processing and emotion regulation in older adults under task-irrelevant conditions. In a relatively early time window (270-460 ms), ERPs in the young group were unaffected by valence, whereas in the older group sad faces elicited a larger positive component (P3a) than happy and neutral faces. In a late time window (500-850 ms), sad faces captured more attention and elicited a larger positive slow wave in the young group; in contrast, the valence effect disappeared at this late stage in the older group. The results reveal age differences in the time course of processing task-irrelevant emotional stimuli: the age-related positivity effect emerged in the late time window, as a negativity bias in the young group and no emotional bias in the older group. These findings provide electrophysiological support for socioemotional selectivity theory.
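An oddball design like the one above interleaves rare deviants among frequent standards. A minimal sketch of a sequence generator; the 20% deviant probability and the minimum gap of two standards between deviants are illustrative choices, not parameters reported in the study:

```python
import random

def oddball_sequence(n_trials, p_deviant=0.2, min_gap=2, seed=0):
    """Pseudo-random oddball sequence: 'S' = standard, 'D' = deviant,
    with at least min_gap standards between successive deviants."""
    rng = random.Random(seed)
    seq, since_dev = [], min_gap      # allow a deviant from the first trial
    for _ in range(n_trials):
        if since_dev >= min_gap and rng.random() < p_deviant:
            seq.append('D')
            since_dev = 0
        else:
            seq.append('S')
            since_dev += 1
    return seq

seq = oddball_sequence(1000)
```

The gap constraint prevents back-to-back deviants, which would otherwise blur the standard's memory trace that the deviant is compared against.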

5.
Acoustic communication plays an important role in the survival and reproduction of vocalizing animals, but the biological significance of the different temporal components of animal calls remains unresolved. Anuran calls generally consist of notes separated by gaps; for example, the advertisement call of the male music frog Babina daunchina contains one to more than ten notes separated by gaps of about 150 ms, which makes it convenient for studying differences in the biological significance of individual notes. Using an optimized mismatch negativity (MMN) paradigm, EEG was recorded while a standard stimulus (white noise) and deviant stimuli (the five notes of a single advertisement call) were played, and the MMN was obtained by averaging across trials. The MMN amplitude was largest for the first note and showed a left-hemisphere dominance. Since MMN amplitude indexes the difference between a stimulus and the memory trace, and reflects the brain resources engaged, we infer that the first note plays a crucial role in anuran acoustic communication.

6.
Objective: To examine how the amplitude of the mismatch negativity (MMN) in a phase-difference paradigm varies with stimulus difference, and to identify the phase difference yielding the maximal amplitude, providing a theoretical basis for increasing the MMN effect size. Methods: Twenty-five undergraduates were randomly recruited. Stimuli were presented in an auditory oddball paradigm in which the phase difference between the left and right channels was 0° for the standard stimulus and 2.63°, 45°, 90°, 135°, or 180° for the deviants. Mean amplitudes at Fz, FCz, and Cz were submitted to repeated-measures ANOVA. Results: After excluding outliers, 24 participants remained. The main effect of stimulus difference was significant (P<0.01): amplitudes at phase differences of 180°, 135°, and 90° were significantly larger than at 2.63° and 45° (P<0.05), whereas 2.63° and 45° did not differ significantly (P>0.05), nor did the conditions beyond 90° (P>0.05). Conclusion: In the phase-difference MMN paradigm, amplitude grows with stimulus difference over the 0°-90° range and then plateaus; a 180° phase difference yields the maximal amplitude and may be the optimal setting for this paradigm.
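In such a paradigm, the standard and deviant differ only in the interaural phase of an otherwise identical tone. A minimal sketch of the stimulus construction; the 500 Hz carrier and 100 ms duration are illustrative assumptions, not the study's reported parameters:

```python
import numpy as np

def dichotic_tone(freq_hz, phase_deg, dur_s=0.1, fs=44100):
    """Stereo tone whose right channel is phase-shifted by phase_deg
    relative to the left channel (0 deg = the 'standard' stimulus)."""
    t = np.arange(int(dur_s * fs)) / fs
    left = np.sin(2 * np.pi * freq_hz * t)
    right = np.sin(2 * np.pi * freq_hz * t + np.deg2rad(phase_deg))
    return np.column_stack([left, right])   # shape (n_samples, 2)

# Standard (0 deg) and the deviant phase differences used in the study.
standard = dichotic_tone(500, 0)
deviants = {d: dichotic_tone(500, d) for d in (2.63, 45, 90, 135, 180)}
```

At 180° the right channel is simply the inverted left channel, which is why this condition maximizes the interaural difference.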

7.
Objective: To study the brain activation underlying category learning under different time constraints as a function of within-category similarity. Methods: Event-related potentials (ERPs) were recorded from 15 participants in a 2 (similarity: high vs. low) × 3 (presentation time: 10 ms vs. 200 ms vs. 600 ms) design. Results: High-similarity categories produced more complex activation involving more brain regions. At the 10 ms presentation time, participants' waveforms resembled those at 200 ms and 600 ms but with shorter latencies and faster decisions. An N400 was elicited under all three time conditions, suggesting semantic processing. Conclusion: Category learning begins as early as the visual processing stage; it is not an all-or-none competitive process but the joint outcome of implicit and explicit processing.

8.
Human time perception is subjectively distorted by emotion, a modulation that can operate through either the experience or the anticipation of emotion. This paper distinguishes the mechanisms by which experienced versus anticipated emotion modulates time perception, proposes a cognitive model of emotional modulation at different processing stages based on scalar timing theory, and reviews the neurophysiological and brain-imaging evidence. Future research should pay more attention to the modulation of time perception by anticipated emotion, examine the interactions of attention, arousal, and valence, and further probe the underlying neural mechanisms.

9.
This study examined the temporal dynamics of negative-emotion regulation by expressive suppression and cognitive reappraisal in Chinese participants. Event-related potentials (ERPs) were recorded while participants viewed emotional pictures under free-viewing, expressive-suppression, and cognitive-reappraisal instructions. Self-reported negative emotion decreased to a similar degree under both strategies. In addition, expressive suppression elicited a larger frontocentral P3 component (340-480 ms) than cognitive reappraisal. More importantly, relative to free viewing, suppression significantly reduced the late positive potential (LPP) amplitude in the 800-1000, 1000-1200, 1200-1400, and 1400-1600 ms windows, whereas reappraisal reduced the LPP only in the 1400-1600 ms window, with no significant differences from free viewing in the other windows. LPP amplitude correlated positively with negative-emotion ratings, and P3 amplitude predicted self-reported suppression. These results suggest that, for Chinese participants, expressive suppression reduces negative emotional arousal faster than cognitive reappraisal, but at a greater cost in cognitive resources.

10.
People categorize other-race faces better than own-race faces, a phenomenon known as the other-race categorization advantage. Whether this advantage, like the other-race effect, is modulated by emotion remains unknown. Two experiments examined the influence of emotion on race categorization of faces. In Experiment 1, 40 undergraduates judged the race of faces bearing neutral, negative, or positive expressions. Positive and negative emotion slowed young adults' categorization of other-race faces, weakening the other-race categorization advantage, with positive emotion having the larger influence. Experiment 2 tested older adults, who have distinct emotion-processing characteristics, and found markedly poorer categorization of negative other-race faces, with negative emotion having the larger influence on the advantage. These results provide further support for the assumption, embedded in theories of the other-race categorization advantage, that race categorization and individual-identity processing constrain each other.

11.
Differences in oscillatory responses to emotional facial expressions were studied in 40 subjects (19 men and 21 women aged from 18 to 30 years) varying in severity of depressive symptoms. Compared with perception of angry and neutral faces, perception of happy faces was accompanied by lower Δ synchronization in subjects with a low severity of depressive symptoms (Group 2) and higher Δ synchronization in subjects with a high severity of depressive symptoms (Group 1). Because synchronization of Δ oscillations is usually observed in aversive states, it was assumed that happy faces were perceived as negative stimuli by the Group 1 subjects. Perception of angry faces was accompanied by α desynchronization in Group 2 and α synchronization in Group 1. Based on Klimesch’s theory, the effect was assumed to indicate that the Group 1 subjects were initially set up for perception of negative emotional information. The effect of the emotional stimulus category was significant in Group 2 and nonsignificant in Group 1, testifying that the recognition of emotional information is hindered in depression-prone individuals.

12.
In a dual-task paradigm, participants performed a spatial location working memory task and a forced two-choice perceptual decision task (neutral vs. fearful) with gradually morphed emotional faces (neutral ∼ fearful). Task-irrelevant word distractors (negative, neutral, and control) were experimentally manipulated during spatial working memory encoding. We hypothesized that, if affective perception is influenced by concurrent cognitive load using a working memory task, task-irrelevant emotional distractors would bias subsequent perceptual decision-making on ambiguous facial expression. We found that when either neutral or negative emotional words were presented as task-irrelevant working-memory distractors, participants more frequently reported fearful face perception - but only at the higher emotional intensity levels of morphed faces. Also, the affective perception bias due to negative emotional distractors correlated with a decrease in working memory performance. Taken together, our findings suggest that concurrent working memory load by task-irrelevant distractors has an impact on affective perception of facial expressions.

13.
In the past few years, important contributions have been made to the study of emotional visual perception. Researchers have reported responses to emotional stimuli in the human amygdala under some unattended conditions (i.e. conditions in which the focus of attention was diverted away from the stimuli due to task instructions), during visual masking and during binocular suppression. Taken together, these results reveal the relative degree of autonomy of emotional processing. At the same time, however, important limitations to the notion of complete automaticity have been revealed. Effects of task context and attention have been shown, as well as large inter-subject differences in sensitivity to the detection of masked fearful faces (whereby briefly presented, target fearful faces are immediately followed by a neutral face that 'masks' the initial face). A better understanding of the neural basis of emotional perception and how it relates to visual attention and awareness is likely to require further refinement of the concepts of automaticity and awareness.

14.
Two experiments were run to examine the effects of dynamic displays of facial expressions of emotions on time judgments. The participants were given a temporal bisection task with emotional facial expressions presented in a dynamic or a static display. Two emotional facial expressions and a neutral expression were tested and compared. Each of the emotional expressions had the same affective valence (unpleasant), but one was high-arousing (expressing anger) and the other low-arousing (expressing sadness). Our results showed that time judgments are highly sensitive to movements in facial expressions and the emotions expressed. Indeed, longer perceived durations were found in response to the dynamic faces and the high-arousing emotional expressions compared to the static faces and low-arousing expressions. In addition, the facial movements amplified the effect of emotions on time perception. Dynamic facial expressions are thus interesting tools for examining variations in temporal judgments in different social contexts.

15.
The perception of emotions is often suggested to be multimodal in nature, and bimodal as compared to unimodal (auditory or visual) presentation of emotional stimuli can lead to superior emotion recognition. In previous studies, contrastive aftereffects in emotion perception caused by perceptual adaptation have been shown for faces and for auditory affective vocalization, when adaptors were of the same modality. By contrast, crossmodal aftereffects in the perception of emotional vocalizations have not been demonstrated yet. In three experiments we investigated the influence of emotional voice as well as dynamic facial video adaptors on the perception of emotion-ambiguous voices morphed on an angry-to-happy continuum. Contrastive aftereffects were found for unimodal (voice) adaptation conditions, in that test voices were perceived as happier after adaptation to angry voices, and vice versa. Bimodal (voice + dynamic face) adaptors tended to elicit larger contrastive aftereffects. Importantly, crossmodal (dynamic face) adaptors also elicited substantial aftereffects in male, but not in female participants. Our results (1) support the idea of contrastive processing of emotions (2), show for the first time crossmodal adaptation effects under certain conditions, consistent with the idea that emotion processing is multimodal in nature, and (3) suggest gender differences in the sensory integration of facial and vocal emotional stimuli.

16.
Cognitive theories of depression posit that perception is negatively biased in depressive disorder. Previous studies have provided empirical evidence for this notion, but left open the question whether the negative perceptual bias reflects a stable trait or the current depressive state. Here we investigated the stability of negatively biased perception over time. Emotion perception was examined in patients with major depressive disorder (MDD) and healthy control participants in two experiments. In the first experiment subjective biases in the recognition of facial emotional expressions were assessed. Participants were presented with faces that were morphed between sad and neutral and happy expressions and had to decide whether the face was sad or happy. The second experiment assessed automatic emotion processing by measuring the potency of emotional faces to gain access to awareness using interocular suppression. A follow-up investigation using the same tests was performed three months later. In the emotion recognition task, patients with major depression showed a shift in the criterion for the differentiation between sad and happy faces: In comparison to healthy controls, patients with MDD required a greater intensity of the happy expression to recognize a face as happy. After three months, this negative perceptual bias was reduced in comparison to the control group. The reduction in negative perceptual bias correlated with the reduction of depressive symptoms. In contrast to previous work, we found no evidence for preferential access to awareness of sad vs. happy faces. Taken together, our results indicate that MDD-related perceptual biases in emotion recognition reflect the current clinical state rather than a stable depressive trait.
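The criterion shift reported in this abstract can be quantified as a change in the point of subjective equality (PSE): the morph level at which "happy" responses cross 50%. A minimal sketch with hypothetical response proportions for a control-like and an MDD-like observer (the numbers are illustrative, not the study's data):

```python
import numpy as np

# Hypothetical proportion of "happy" responses at 7 morph levels
# (0 = fully sad, 1 = fully happy).
levels = np.linspace(0, 1, 7)
p_control = np.array([0.02, 0.05, 0.20, 0.50, 0.80, 0.95, 0.98])
p_mdd     = np.array([0.01, 0.03, 0.10, 0.30, 0.60, 0.90, 0.97])

def pse(levels, p):
    """Morph level where p('happy') crosses 0.5, by linear interpolation
    between the two bracketing levels (assumes p is monotone increasing)."""
    i = np.searchsorted(p, 0.5)          # first index with p >= 0.5
    x0, x1, y0, y1 = levels[i - 1], levels[i], p[i - 1], p[i]
    return x0 + (0.5 - y0) * (x1 - x0) / (y1 - y0)

# Positive shift = the MDD-like observer needs a happier morph to say "happy".
shift = pse(levels, p_mdd) - pse(levels, p_control)
```

A published analysis would typically fit a full psychometric function rather than interpolate, but the PSE difference captures the same criterion shift.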

17.
Adults show reciprocal influences between the perception of gaze direction and emotional expression. These facilitate the understanding of facial signals, because the meaning of one cue can vary considerably depending on the value of the other. Here we ask whether children show similar reciprocal influences in the perception of gaze and expression. A previous study has demonstrated that gaze direction affects the perception of emotional expression in children. Here we demonstrate the opposite direction of influence, showing that expression affects the perception of gaze direction. Specifically, we show that the cone of gaze, i.e., range of gaze deviations perceived as direct, is larger for angry than neutral or fearful faces in 8 year-old children. Therefore, we conclude that children, like adults, show reciprocal influences in the perception of gaze and expression. An unexpected finding was that, compared with adults, children showed larger effects of expression on gaze perception. This finding raises the possibility that it is the ability to process cues independently, rather than sensitivity to combinations, that matures during development. Alternatively, children may be particularly sensitive to anger in adult faces.

18.
Appearance-based trustworthiness inferences may reflect the misinterpretation of emotional expression cues. Children and adults typically perceive faces that look happy to be relatively trustworthy and those that look angry to be relatively untrustworthy. Given reports of atypical expression perception in children with Autism Spectrum Disorder (ASD), the current study aimed to determine whether the modulation of trustworthiness judgments by emotional expression cues in children with ASD is also atypical. Cognitively-able children with and without ASD, aged 6–12 years, rated the trustworthiness of faces showing happy, angry and neutral expressions. Trust judgments in children with ASD were significantly modulated by overt happy and angry expressions, like those of typically-developing children. Furthermore, subtle emotion cues in neutral faces also influenced trust ratings of the children in both groups. These findings support a powerful influence of emotion cues on perceived trustworthiness, which even extends to children with social cognitive impairments.

19.
Emotive faces elicit neural responses even when they are not consciously perceived. We used faces hybridized from spatial frequency-filtered individual stimuli to study processing of facial emotion. Employing event-related functional magnetic resonance imaging (fMRI), we show enhanced fusiform cortex responses to hybrid faces containing fearful expressions when such emotional cues are present in the low-spatial frequency (LSF) range. Critically, this effect is independent of whether subjects use LSF or high-spatial frequency (HSF) information to make gender judgments on the hybridized faces. The magnitude of this fusiform enhancement predicts behavioral slowing in response times when participants report HSF information of the hybrid stimulus in the presence of fear in the unreported LSF components. Thus, emotional modulation of a face-responsive region of fusiform is driven by the low-frequency components of the stimulus, an effect independent of subjects' reported perception but evident in an incidental measure of behavioral performance.

20.
Cognitive neuroscience research on facial expression recognition and face evaluation has proliferated over the past 15 years. Nevertheless, large questions remain unanswered. In this overview, we discuss the current understanding in the field, and describe what is known and what remains unknown. In §2, we describe three types of behavioural evidence that the perception of traits in neutral faces is related to the perception of facial expressions, and may rely on the same mechanisms. In §3, we discuss cortical systems for the perception of facial expressions, and argue for a partial segregation of function in the superior temporal sulcus and the fusiform gyrus. In §4, we describe the current understanding of how the brain responds to emotionally neutral faces. To resolve some of the inconsistencies in the literature, we perform a large group analysis across three different studies, and argue that one parsimonious explanation of prior findings is that faces are coded in terms of their typicality. In §5, we discuss how these two lines of research--perception of emotional expressions and face evaluation--could be integrated into a common, cognitive neuroscience framework.


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号