Similar literature
20 related documents found.
1.
In this review of current data and ideas concerning the neurophysiological mechanisms and morphological foundations of one of the most essential communicative functions of humans and monkeys, the recognition of faces and their emotional expressions, attention is focused on its dynamic realization and structural provision. On the basis of literature data on hemodynamic and metabolic mapping of the brain, the author analyses the role of different zones of the ventral and dorsal visual cortical pathways, the frontal neocortex and the amygdala in the processing of facial features, as well as the specificity of this processing at each level. Special attention is given to the modular principle of face processing in the temporal cortex. The dynamic characteristics of face recognition are discussed on the basis of electrical evoked-response data in healthy and diseased humans and in monkeys. Current evidence on the role of different brain structures in the generation of successive evoked-response waves, corresponding to successive stages of face processing, is analyzed. The similarities and differences between the mechanisms of recognition of faces and of their emotional expressions are also considered.

2.
Faces convey a wealth of social signals. A dominant view in face-perception research has been that the recognition of facial identity and facial expression involves separable visual pathways at the functional and neural levels, and data from experimental, neuropsychological, functional imaging and cell-recording studies are commonly interpreted within this framework. However, the existing evidence supports this model less strongly than is often assumed. Alongside this two-pathway framework, other possible models of facial identity and expression recognition, including one that has emerged from principal component analysis techniques, should be considered.

3.
4.
The influence of an additional working-memory load on emotional face recognition was studied in healthy adults. A visual set to emotional facial expression was formed experimentally, and one of two types of additional task, visual-spatial or semantic, was embedded in the experiment. The additional task made the set less plastic, i.e., slowed set-shifting. This effect manifested itself as an increase in erroneous perceptions of facial expression. The character of these erroneous perceptions (assimilative or contrast visual illusions) depended on the type of additional task. Pre-stimulus EEG coherence across experimental trials in the theta (4-7 Hz), low-alpha (8-10 Hz) and beta (14-20 Hz) bands was analysed. The low-alpha and beta coherence data supported the hypothesis that the increased memory load reduced the involvement of the frontal lobes in the selective-attention mechanisms associated with set formation, which results in slower set-shifting. The increased memory load also led to a growth of theta-band coherence in the left hemisphere and its decrease in the right hemisphere. The contribution of the decrease in right-hemisphere theta coherence between prefrontal and temporal areas to slower set-shifting is discussed.
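The band-averaged coherence analysis described in this abstract can be illustrated with a short sketch. This is a minimal example, not the authors' pipeline: the synthetic signals, sampling rate, and Welch segment length are assumptions; only the band limits follow the abstract.

```python
# Minimal sketch of band-averaged EEG coherence between two channels (assumed data).
import numpy as np
from scipy.signal import coherence

BANDS = {"theta": (4, 7), "low_alpha": (8, 10), "beta": (14, 20)}  # Hz, from the abstract

def band_coherence(x, y, fs):
    """Mean magnitude-squared coherence of x and y within each frequency band."""
    f, cxy = coherence(x, y, fs=fs, nperseg=fs * 2)  # 2-second Welch segments
    return {name: float(cxy[(f >= lo) & (f <= hi)].mean()) for name, (lo, hi) in BANDS.items()}

# Toy usage with synthetic signals sharing a 6-Hz (theta) component
rng = np.random.default_rng(0)
fs = 250
t = np.arange(0, 10, 1 / fs)
common = np.sin(2 * np.pi * 6 * t)
x = common + 0.5 * rng.standard_normal(t.size)
y = common + 0.5 * rng.standard_normal(t.size)
print(band_coherence(x, y, fs))  # theta coherence should come out highest
```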

5.

Background  

Facial expressions are important in facilitating human communication and interaction. They are also used as an important tool in behavioural studies and in medical rehabilitation. Facial-image-based mood detection may provide a fast and practical approach to non-invasive mood detection. The purpose of the present study was to develop an intelligent system for facial-image-based expression classification using committee neural networks.
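A committee of neural networks combines the outputs of several independently trained networks, typically by voting. The sketch below is a generic illustration of that idea, not the study's system: the synthetic features, network sizes, and number of committee members are assumptions.

```python
# Minimal sketch of a committee of neural networks with majority voting (scikit-learn).
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Toy stand-in for facial-image feature vectors labelled with one of four expressions
X, y = make_classification(n_samples=600, n_features=40, n_informative=10,
                           n_classes=4, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Committee members: several MLPs that differ only in their random initialisation
members = [(f"mlp{i}", MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=i))
           for i in range(5)]
committee = VotingClassifier(estimators=members, voting="hard")  # hard = majority vote
committee.fit(X_tr, y_tr)
print("committee accuracy:", committee.score(X_te, y_te))
```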

6.
A correlation between some characteristics of the visual evoked potentials and individual personality traits (measured with the Cattell scale) was revealed in 40 healthy subjects as they recognized facial expressions of anger and fear. Compared with emotionally stable subjects, emotionally unstable subjects had shorter evoked-potential latencies and suppressed late negativity in the occipital and temporal areas, whereas the amplitude of these waves in the frontal areas was increased. In the emotionally stable group, differences in the evoked potentials related to emotional expressions were evident throughout signal processing, beginning from the early sensory stage (the P1 wave). In the emotionally unstable group, differences in the evoked potentials related to the recognized emotional expressions developed later. Sensitivity of the evoked potentials to the emotional salience of faces was also more pronounced in the emotionally stable group. The involvement of the frontal cortex, amygdala, and anterior cingulate cortex in the development of individual features of the recognition of facial expressions of anger and fear is discussed.

7.
Benson & Perrett's (1991b) computer-based caricature procedure was used to alter the positions of anatomical landmarks in photographs of emotional facial expressions with respect to their locations in a reference norm face (e.g. a neutral expression). Exaggerating the differences between an expression and its norm produces caricatured images, whereas reducing the differences produces 'anti-caricatures'. Experiment 1 showed that caricatured (+50% different from neutral) expressions were recognized significantly faster than the veridical (0%, undistorted) expressions. This held for all six basic emotions from the Ekman & Friesen (1976) series, and the effect generalized across different posers. For experiment 2, caricatured (+50%) and anti-caricatured (-50%) images were prepared using two types of reference norm: a neutral-expression norm, which would be optimal if facial expression recognition involves monitoring changes in the positioning of underlying facial muscles, and a perceptually-based norm involving an average of the expressions of six basic emotions (excluding neutral) in the Ekman & Friesen (1976) series. The results showed that the caricatured images were identified significantly faster, and the anti-caricatured images significantly slower, than the veridical expressions. Furthermore, the neutral-expression and average-expression norm caricatures produced the same pattern of results.
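The caricature transform itself is a simple linear extrapolation of landmark positions away from the norm. The sketch below shows that operation on illustrative coordinates; the landmark values are invented for the example and are not Benson & Perrett's data.

```python
# Minimal sketch of the caricature/anti-caricature transform on facial landmarks.
import numpy as np

def caricature(landmarks, norm, level):
    """level=+0.5 gives a +50% caricature, level=-0.5 a -50% anti-caricature, 0 the veridical face."""
    return norm + (1.0 + level) * (landmarks - norm)

# Illustrative (x, y) landmark positions for an expressive face and a neutral norm face
expression = np.array([[10.0, 12.0], [30.0, 11.5], [20.0, 25.0]])
neutral_norm = np.array([[10.0, 12.5], [30.0, 12.5], [20.0, 22.0]])

print(caricature(expression, neutral_norm, +0.5))  # exaggerated expression
print(caricature(expression, neutral_norm, -0.5))  # anti-caricature
```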

8.
In Western music, the major mode is typically used to convey excited, happy, bright or martial emotions, whereas the minor mode typically conveys subdued, sad or dark emotions. Recent studies indicate that the differences between these modes parallel differences between the prosodic and spectral characteristics of voiced speech sounds uttered in corresponding emotional states. Here we ask whether tonality and emotion are similarly linked in an Eastern musical tradition. The results show that the tonal relationships used to express positive/excited and negative/subdued emotions in classical South Indian music are much the same as those used in Western music. Moreover, tonal variations in the prosody of English and Tamil speech uttered in different emotional states are parallel to the tonal trends in music. These results are consistent with the hypothesis that the association between musical tonality and emotion is based on universal vocal characteristics of different affective states.

9.
The effect of previous experience on the recognition of emotional facial expression was studied with the model of an unconscious visual set. It was found that repeated perception of pictures of a face with an angry expression substantially affected the subsequent recognition of emotional facial expression. Recognition could be distorted, and a face with a "neutral" expression could be erroneously perceived as emotionally negative. Both contrast and assimilative illusions were observed. Evidence is presented that the described effect results from the formation of a set to the emotional facial expression. The involvement of the prefrontal cortex in the structural-functional system of facial expression recognition is discussed. Cattell's test revealed significant correlations between the rigidity of the set to emotional facial expression and personality-trait scores: social boldness-shyness on the H scale, on the one hand, and the level of anxiety on the other.

10.
11.
PD Ross, L Polson, MH Grosbras. PLoS ONE, 2012, 7(9): e44815
To date, research on the development of emotion recognition has been dominated by studies of facial expression interpretation; very little is known about children's ability to recognize affective meaning from body movements. In the present study, we acquired simultaneous video and motion-capture recordings of two actors portraying four basic emotions (happiness, sadness, fear and anger). One hundred and seven primary and secondary school children (aged 4-17) and 14 adult volunteers participated in the study. Each participant viewed the full-light and point-light video clips and made a forced choice as to which emotion was being portrayed. As a group, children performed worse than adults in both the point-light and full-light conditions. Linear regression showed that both age and lighting condition were significant predictors of performance in children. Using piecewise regression, we found that the data were best explained by a bilinear model with a steep improvement in performance until 8.5 years of age, followed by a much slower improvement through late childhood and adolescence. These findings confirm that, as for facial expressions, adolescents' recognition of basic emotions from body language is not fully mature and seems to follow a non-linear development. This is in line with observations of non-linear developmental trajectories for different aspects of the processing of human stimuli (voices and faces), perhaps suggesting a shift from one perceptual or cognitive strategy to another during adolescence. These results have important implications for understanding the maturation of social cognition.
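The bilinear (two-segment piecewise) regression mentioned here can be fitted by estimating a breakpoint together with the two slopes. The sketch below uses synthetic accuracy-by-age data and a generic least-squares fit; it is not the authors' analysis code.

```python
# Minimal sketch of fitting a bilinear (two-segment) model with an estimated breakpoint.
import numpy as np
from scipy.optimize import curve_fit

def bilinear(age, bp, a, b1, b2):
    """Two connected line segments that change slope at breakpoint bp."""
    return np.where(age < bp, a + b1 * (age - bp), a + b2 * (age - bp))

rng = np.random.default_rng(1)
age = np.linspace(4, 17, 107)
acc = bilinear(age, 8.5, 0.75, 0.08, 0.01) + rng.normal(0, 0.03, age.size)  # toy accuracies

params, _ = curve_fit(bilinear, age, acc, p0=[8.0, 0.7, 0.05, 0.01])
print("estimated breakpoint (years):", round(params[0], 2))
```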

12.
Lin MT, Huang KH, Huang CL, Huang YJ, Tsai GE, Lane HY. PLoS ONE, 2012, 7(4): e36143

Background

Facial emotion perception is a major social skill, but its molecular signal pathway remains unclear. The MET/AKT cascade affects neurodevelopment in general populations and face recognition in patients with autism. This study explores the possible role of MET/AKT cascade in facial emotion perception.

Methods

One hundred and eighty-two unrelated healthy volunteers (82 men and 100 women) were recruited. Four single-nucleotide polymorphisms (SNPs) of MET (rs2237717, rs41735, rs42336, and rs1858830) and AKT rs1130233 were genotyped and tested for their effects on facial emotion perception. Facial emotion perception was assessed with the face task of the Mayer-Salovey-Caruso Emotional Intelligence Test (MSCEIT). Neurocognitive functions were also thoroughly assessed.

Results

Regarding MET rs2237717, individuals with the CT genotype performed better in facial emotion perception than those with TT (p = 0.016 by ANOVA, 0.018 by a general linear regression model [GLM] controlling for age, gender, and education duration), and did not differ from those with CC. Carriers of the most common MET CGA haplotype (frequency = 50.5%) performed better in facial emotion perception than non-carriers of CGA (p = 0.018, df = 1, F = 5.69; p = 0.009 by GLM). For the MET rs2237717/AKT rs1130233 interaction, the C carrier/G carrier group showed better facial emotion perception than the TT/AA genotype group (p = 0.035 by ANOVA, 0.015 by GLM), even when neurocognitive functions were controlled for (p = 0.046 by GLM).
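The GLM used here tests a genotype effect on task performance while adjusting for covariates. The sketch below shows the general form of such a model on synthetic data with hypothetical column names; it is not the study's analysis or data.

```python
# Minimal sketch of a genotype-effect regression with covariates (synthetic data).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 182
df = pd.DataFrame({
    "score": rng.normal(100, 15, n),                       # hypothetical face-task score
    "genotype": rng.choice(["CC", "CT", "TT"], n),
    "age": rng.integers(20, 60, n),
    "gender": rng.choice(["M", "F"], n),
    "education_years": rng.integers(9, 20, n),
})

# Genotype coded as a categorical factor; age, gender and education entered as covariates
model = smf.ols("score ~ C(genotype) + age + C(gender) + education_years", data=df).fit()
print(model.summary())
```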

Conclusions

To our knowledge, this is the first study to suggest that genetic factors can affect performance in facial emotion perception. The findings indicate that MET variants and the MET/AKT interaction may affect facial emotion perception, implying that the MET/AKT cascade plays a significant role in this ability. Further replication studies are needed.

13.
IRBM, 2014, 35(3): 109-118
Several approaches have been proposed to recognize human emotions from facial expressions or physiological signals, but relatively little work has been done on fusing these two (and other) modalities to improve the accuracy and robustness of emotion recognition systems. In this paper, we propose two methods, one at the feature level and one at the decision level, for fusing the facial and physiological modalities. For feature-level fusion, we tested a mutual-information approach for selecting the most relevant features and principal component analysis for reducing their dimensionality. For decision-level fusion, we implemented two methods: the first is based on a voting process and the second on dynamic Bayesian networks. The system is validated using data obtained through an emotion-elicitation experiment based on the International Affective Picture System. Results show that feature-level fusion performs better than decision-level fusion.
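The two fusion strategies can be contrasted in a few lines. The sketch below uses synthetic feature blocks and generic classifiers (logistic regression, probability averaging instead of dynamic Bayesian networks), so it illustrates the idea rather than the authors' implementation.

```python
# Minimal sketch: feature-level vs decision-level fusion of two modalities (toy data).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Toy trials: one feature block stands in for facial features, the other for physiological ones
X, y = make_classification(n_samples=400, n_features=80, n_informative=16, random_state=0)
X_face, X_phys = X[:, :60], X[:, 60:]
Xf_tr, Xf_te, Xp_tr, Xp_te, y_tr, y_te = train_test_split(X_face, X_phys, y, random_state=0)

# Feature-level fusion: concatenate modalities, select features by mutual information,
# reduce dimensionality with PCA, then classify.
feature_fusion = make_pipeline(
    SelectKBest(mutual_info_classif, k=30),
    PCA(n_components=10),
    LogisticRegression(max_iter=1000),
)
feature_fusion.fit(np.hstack([Xf_tr, Xp_tr]), y_tr)
print("feature-level accuracy:", feature_fusion.score(np.hstack([Xf_te, Xp_te]), y_te))

# Decision-level fusion: one classifier per modality, decisions combined by averaging
# class probabilities (a simple soft-voting rule).
clf_face = LogisticRegression(max_iter=1000).fit(Xf_tr, y_tr)
clf_phys = LogisticRegression(max_iter=1000).fit(Xp_tr, y_tr)
proba = (clf_face.predict_proba(Xf_te) + clf_phys.predict_proba(Xp_te)) / 2
print("decision-level accuracy:", (proba.argmax(axis=1) == y_te).mean())
```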

14.
Folk psychology posits gender differences in socio-cognitive functions such as 'reading' the mental states of others or discerning subtle differences in body language. A female advantage has been demonstrated for emotion recognition from facial expressions, but virtually nothing is known about gender differences in recognizing bodily stimuli or body language. The aim of the present study was to investigate potential gender differences in a series of tasks involving the recognition of distinct features from point-light displays (PLDs) depicting bodily movements of a male and a female actor. Although recognition scores were high at the overall group level, female participants were more accurate than males in recognizing the depicted actions from PLDs. Response times were significantly longer for males than for females on PLD recognition tasks involving (i) the general recognition of 'biological' versus 'non-biological' (or 'scrambled') motion, or (ii) the recognition of the 'emotional state' of the PLD figures. No gender differences were revealed for a control test (involving the identification of a color change in one of the dots) or for recognizing the gender of the PLD figure. In addition, previous findings of a female advantage on a facial emotion recognition test (the 'Reading the Mind in the Eyes Test' (Baron-Cohen, 2001)) were replicated in this study. Interestingly, a strong correlation was revealed between emotion recognition from bodily PLDs and from facial cues. This relationship indicates that inter-individual or gender-dependent differences in recognizing emotions generalize across facial and bodily emotion perception. Moreover, the tight correlation between a subject's ability to discern subtle emotional cues from PLDs and the same subject's ability to discriminate biological from non-biological motion indicates that differences in emotion recognition may, at least to some degree, be related to more basic differences in processing biological motion per se.

15.
16.
17.
Rapid identification of facial expressions can profoundly affect social interactions, yet most research to date has focused on static rather than dynamic expressions. In four experiments, we show that when a non-expressive face becomes expressive, happiness is detected more rapidly than anger. When the change occurs peripheral to the focus of attention, however, dynamic anger is better detected when it appears in the left visual field (LVF), whereas dynamic happiness is better detected in the right visual field (RVF), consistent with hemispheric differences in the processing of approach- and avoidance-relevant stimuli. The central advantage for happiness is nevertheless the more robust effect, persisting even when information of either high or low spatial frequency is eliminated. Indeed, a survey of past research on visual search for emotional expressions finds better support for a happiness detection advantage, and the explanation may lie in the coevolution of the signal and the receiver.

18.
Pell MD, Kotz SA. PLoS ONE, 2011, 6(11): e27256
How quickly do listeners recognize emotions from a speaker's voice, and does the time course for recognition vary by emotion type? To address these questions, we adapted the auditory gating paradigm to estimate how much vocal information is needed for listeners to categorize five basic emotions (anger, disgust, fear, sadness, happiness) and neutral utterances produced by male and female speakers of English. Semantically anomalous pseudo-utterances (e.g., The rivix jolled the silling) conveying each emotion were divided into seven gate intervals according to the number of syllables that listeners heard from sentence onset. Participants (n = 48) judged the emotional meaning of stimuli presented at each gate duration interval, in a successive, blocked presentation format. Analyses looked at how recognition of each emotion evolves as an utterance unfolds and estimated the "identification point" for each emotion. Results showed that anger, sadness, fear, and neutral expressions are recognized more accurately at short gate intervals than happiness, and particularly disgust; however, as speech unfolds, recognition of happiness improves significantly towards the end of the utterance (and fear is recognized more accurately than other emotions). When the gate associated with the emotion identification point of each stimulus was calculated, data indicated that fear (M = 517 ms), sadness (M = 576 ms), and neutral (M = 510 ms) expressions were identified from shorter acoustic events than the other emotions. These data reveal differences in the underlying time course for conscious recognition of basic emotions from vocal expressions, which should be accounted for in studies of emotional speech processing.
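The "identification point" in a gating paradigm is commonly taken as the earliest gate from which the listener's response is correct and remains correct through the final gate. The sketch below shows that rule on a hypothetical response vector; the exact criterion used by the authors may differ.

```python
# Minimal sketch of locating an identification point from gated responses (assumed criterion).
def identification_gate(correct_by_gate):
    """correct_by_gate: booleans, one per successive gate interval, in presentation order."""
    ident = None
    for gate, ok in enumerate(correct_by_gate, start=1):
        if ok and ident is None:
            ident = gate       # candidate identification point
        elif not ok:
            ident = None       # a later error resets the candidate
    return ident

# Example: the listener is correct from gate 3 onward -> identification point is gate 3
print(identification_gate([False, False, True, True, True, True, True]))
```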

19.
Lee TH, Choi JS, Cho YS. PLoS ONE, 2012, 7(3): e32987

Background

Certain facial configurations are believed to be associated with distinct affective meanings (i.e. basic facial expressions), and such associations are common across cultures (i.e. the universality of facial expressions). Recently, however, many studies have suggested that various types of contextual information, rather than the facial configuration itself, are important factors in facial emotion perception.

Methodology/Principal Findings

To examine systematically how contextual information influences individuals' facial emotion perception, the present study directly estimated observers' perceptual thresholds for detecting negative facial expressions via a forced-choice psychophysical procedure using faces embedded in various emotional contexts. We additionally measured individual differences in affective information-processing tendency (BIS/BAS) as a possible factor determining the extent to which contextual information is used in facial emotion perception. It was found that contextual information influenced observers' perceptual thresholds for facial emotion. Importantly, individuals' affective-information tendencies modulated the extent to which they incorporated context information into their facial emotion perception.
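A perceptual threshold from a forced-choice procedure is typically obtained by fitting a psychometric function to the proportion of correct detections at each stimulus level. The sketch below fits a cumulative-Gaussian function to synthetic two-alternative forced-choice data; the stimulus axis, lapse rate, and threshold criterion are assumptions, not the study's method.

```python
# Minimal sketch of threshold estimation from forced-choice data (synthetic proportions).
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def psychometric(intensity, mu, sigma, lapse=0.02, guess=0.5):
    """2AFC psychometric function: chance level 0.5 plus a small lapse rate."""
    return guess + (1 - guess - lapse) * norm.cdf(intensity, mu, sigma)

intensity = np.linspace(0.0, 1.0, 9)                      # e.g. strength of the negative expression
p_correct = np.array([0.50, 0.52, 0.55, 0.62, 0.74, 0.86, 0.93, 0.97, 0.98])

(mu, sigma), _ = curve_fit(lambda x, m, s: psychometric(x, m, s),
                           intensity, p_correct, p0=[0.5, 0.2])
print("threshold (intensity at roughly 75% correct):", round(mu, 3))
```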

Conclusions/Significance

The findings of this study suggest that facial emotion perception depends not only on facial configuration but also on the context in which the face appears. This contextual influence varied with individuals' information-processing characteristics. In summary, we conclude that individual character traits, as well as facial configuration and the context in which a face appears, need to be taken into consideration in facial emotion perception.

20.