Similar Documents
20 similar documents found (search time: 312 ms).
1.
A two-process probabilistic theory of emotion perception based on a non-linear combination of facial features is presented. Assuming that the upper and lower parts of the face function as the building blocks of emotion perception, an empirical test is provided with fear and happiness as target emotions. Subjects were presented with prototypical fearful and happy faces and with computer-generated chimerical expressions that combined happy and fearful halves. Subjects were asked to indicate the emotions they perceived using an extensive list of emotions. We show that some emotions require a conjunction of the two halves of a face to be perceived, whereas for others one half is sufficient. We demonstrate that chimerical faces give rise to the perception of genuine emotions. The findings provide evidence that different combinations of the two halves of a fearful and a happy face, whether congruent or not, generate the perception of emotions other than fear and happiness.

2.
East Asian and white Western observers employ different eye movement strategies for a variety of visual processing tasks, including face processing. Recent eye-tracking studies on face recognition found that East Asians tend to integrate information holistically by focusing on the nose, while white Westerners perceive faces featurally by moving between the eyes and mouth. The current study examines the eye movement strategy that Malaysian Chinese participants employ when recognizing East Asian, white Western, and African faces. Rather than adopting the Eastern or Western fixation pattern, Malaysian Chinese participants use a mixed strategy, focusing on the eyes and nose more than the mouth. The combination of Eastern and Western strategies proved advantageous for participants' ability to recognize East Asian and white Western faces, suggesting that individuals learn to use fixation patterns that are optimized for recognizing the faces with which they are more familiar.

3.
It is generally agreed that some features of a face, namely the eyes, are more salient than others, as indexed by behavioral diagnosticity, gaze-fixation patterns, and evoked neural responses. However, because previous studies used unnatural stimuli, there is no evidence so far that the early encoding of a whole face in the human brain is based on the eyes or other facial features. To address this issue, scalp electroencephalogram (EEG) and eye gaze fixations were recorded simultaneously in a gaze-contingent paradigm while observers viewed faces. We found that the N170, indexing the earliest face-sensitive response in the human brain, was largest when the fixation position was located around the nasion. Interestingly, for inverted faces this optimal fixation position was more variable, but mainly clustered in the upper part of the visual field (around the mouth). These observations extend the findings of recent behavioral studies, suggesting that the early encoding of a face, as indexed by the N170, is not driven by the eyes per se, but rather arises from a general perceptual setting (upper-visual-field advantage) coupled with the alignment of a face stimulus to a stored face template.
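For readers unfamiliar with how a fixation-contingent N170 effect like this can be quantified, here is a minimal NumPy sketch of the analysis step: epochs are grouped by the fixated facial feature and the most negative deflection in a typical N170 window is extracted. The function name, sampling rate, and time window are illustrative assumptions, not the authors' pipeline.

```python
import numpy as np

def n170_amplitude(epochs, fixation_zone, fs=512, t0=-0.2, window=(0.13, 0.20)):
    """Mean N170 amplitude per fixation zone.

    epochs        : (n_trials, n_samples) EEG from an occipito-temporal channel
    fixation_zone : (n_trials,) array of labels, e.g. "nasion", "mouth"
    fs, t0        : sampling rate (Hz) and epoch start relative to onset (s)
    window        : assumed N170 search window in seconds
    """
    times = t0 + np.arange(epochs.shape[1]) / fs
    in_win = (times >= window[0]) & (times <= window[1])
    result = {}
    for zone in np.unique(fixation_zone):
        erp = epochs[fixation_zone == zone].mean(axis=0)  # per-zone average ERP
        result[zone] = erp[in_win].min()                  # N170 is negative-going
    return result
```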

4.

Background

Face processing, amongst many basic visual skills, is thought to be invariant across all humans. From as early as 1965, studies of eye movements have consistently revealed a systematic triangular sequence of fixations over the eyes and the mouth, suggesting that faces elicit a universal, biologically-determined information extraction pattern.

Methodology/Principal Findings

Here we monitored the eye movements of Western Caucasian and East Asian observers while they learned, recognized, and categorized by race Western Caucasian and East Asian faces. Western Caucasian observers reproduced a scattered triangular pattern of fixations for faces of both races and across tasks. Contrary to intuition, East Asian observers focused more on the central region of the face.

Conclusions/Significance

These results demonstrate that face processing can no longer be considered to arise from a universal series of perceptual events. The strategy employed to extract visual information from faces differs across cultures.

5.

Background

Eye movement strategies employed by humans to identify conspecifics are not universal. Westerners predominantly fixate the eyes during face recognition, whereas Easterners fixate more on the nose region, yet recognition accuracy is comparable. However, natural fixations do not unequivocally represent information extraction, so the question of whether humans universally use identical facial information to recognize faces remains unresolved.

Methodology/Principal Findings

We monitored eye movements during face recognition of Western Caucasian (WC) and East Asian (EA) observers with a novel technique that parametrically restricts information outside central vision. We used 'Spotlights' with Gaussian apertures of 2°, 5°, or 8° dynamically centered on observers' fixations. Strikingly, in the constrained Spotlight conditions (2° and 5°), observers of both cultures actively fixated the same facial information: the eyes and mouth. When information from both the eyes and mouth was simultaneously available while fixating the nose (8°), EA observers, as expected, shifted their fixations towards this region.

Conclusions/Significance

Social experience and cultural factors shape the strategies used to extract information from faces, but these results suggest that such external forces do not modulate information use. Human beings rely on identical facial information to recognize conspecifics, a universal law that might be dictated by the evolutionary constraints of nature rather than nurture.
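As a concrete illustration of the 'Spotlight' manipulation, the sketch below applies a Gaussian aperture centred on the current fixation so that information fades to uniform grey outside central vision. The degrees-to-pixels factor and the grey background value are assumptions; in the actual experiments the aperture was updated in real time from the eye tracker.

```python
import numpy as np

def spotlight(image, fix_x, fix_y, aperture_deg=2.0, px_per_deg=35.0, bg=128):
    """Gaze-contingent Gaussian aperture on a grayscale image (2-D uint8 array).

    aperture_deg : nominal aperture size in degrees (2, 5, or 8 in the study)
    px_per_deg   : assumed monitor/viewing-distance conversion factor
    """
    h, w = image.shape
    y, x = np.mgrid[0:h, 0:w]
    sigma = aperture_deg * px_per_deg / 2.0   # crude mapping from size to sigma
    mask = np.exp(-((x - fix_x) ** 2 + (y - fix_y) ** 2) / (2.0 * sigma ** 2))
    return (mask * image + (1.0 - mask) * bg).astype(np.uint8)
```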

6.
The current study examined the time course of implicit processing of distinct facial features and the associated event-related potential (ERP) components. To this end, we used a masked priming paradigm to investigate implicit processing of the eyes and mouth in upright and inverted faces, with a prime duration of 33 ms. Two types of prime-target pairs were used: 1. congruent (e.g., open eyes only in both prime and target, or open mouth only in both); 2. incongruent (e.g., open mouth only in the prime and open eyes only in the target, or vice versa). The identity of the faces changed between prime and target. Participants pressed one button when the target face had the eyes open and another when it had the mouth open. The behavioral results showed faster RTs for the eyes in upright faces than for the eyes in inverted faces or for the mouth in either orientation. They also revealed a congruent priming effect for the mouth in upright faces. The ERP findings showed a face orientation effect across all ERP components studied (P1, N1, N170, P2, N2, P3) starting at about 80 ms, and a congruency/priming effect on late components (P2, N2, P3) starting at about 150 ms. Crucially, the results showed that the orientation effect was driven by the eye region (N170, P2) and that the congruency effect started earlier for the eyes (P2) than for the mouth (N2). These findings mark the time course of the processing of internal facial features and provide further evidence that the eyes are automatically processed and are very salient facial features that strongly affect the amplitude, latency, and distribution of neural responses to faces.
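A hedged sketch of how a congruency priming effect like the one reported for the mouth can be tested on reaction times: per-participant mean RTs in the two conditions compared with a paired t-test. The numbers are invented placeholders purely to make the snippet runnable.

```python
import numpy as np
from scipy import stats

# Invented per-participant mean RTs (ms); real data would come from the task.
rt_congruent = np.array([512, 498, 530, 505, 521, 489, 540, 515])
rt_incongruent = np.array([534, 510, 557, 519, 543, 502, 561, 538])

t, p = stats.ttest_rel(rt_congruent, rt_incongruent)  # paired comparison
print(f"priming effect = {(rt_incongruent - rt_congruent).mean():.1f} ms, "
      f"t = {t:.2f}, p = {p:.4f}")
```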

7.
Knowing where people look when viewing faces provides an objective measure of the information entering the visual system and of the cognitive strategy involved in facial perception. In the present study, we recorded the eye movements of 20 congenitally deaf (10 male and 10 female) and 23 normal-hearing (11 male and 12 female) Japanese participants while they evaluated the emotional valence of static face stimuli. While no difference was found in the evaluation scores, eye movements during facial observation differed between the participant groups. The deaf group looked at the eyes more frequently and for longer durations than at the nose, whereas the hearing group focused on the nose (or the central region of the face) more than the eyes. These results suggest that the strategy employed to extract visual information when viewing static faces may differ between deaf and hearing people.

8.
Fu G, Hu CS, Wang Q, Quinn PC, Lee K. PLoS ONE. 2012;7(6):e37688.
It is well established that individuals show an other-race effect (ORE) in face recognition: they recognize own-race faces better than other-race faces. The present study tested the hypothesis that individuals also scan own- and other-race faces differently. We asked Chinese participants to remember Chinese and Caucasian faces and tested their memory of the faces over five testing blocks. The participants' eye movements were recorded with an eye tracker. The data were analyzed with an Area of Interest (AOI) approach using the key AOIs of a face (eyes, nose, and mouth). We also used the iMap toolbox to analyze the raw fixation data across every pixel of the face. Results from both types of analyses strongly supported the hypothesis. When viewing target Chinese or Caucasian faces, Chinese participants spent a significantly greater proportion of fixation time on the eyes of other-race Caucasian faces than on the eyes of own-race Chinese faces. In contrast, they spent a significantly greater proportion of fixation time on the nose and mouth of Chinese faces than on those of Caucasian faces. This pattern of differential fixation, for own- and other-race eyes and noses in particular, was consistent even as participants became increasingly familiar with the target faces of both races. The results could not be explained by the perceptual salience of the Chinese nose or Caucasian eyes, because these features were not differentially salient across the races. Our results are discussed in terms of the facial morphological differences between Chinese and Caucasian faces and the enculturation of mutual gaze norms in East Asian cultures.
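The Area-of-Interest analysis described above boils down to assigning each fixation to a region and normalizing looking time; a minimal sketch follows. The AOI rectangles are hypothetical pixel coordinates, not the ones used in the study.

```python
import numpy as np

# Hypothetical AOI boxes as (x_min, x_max, y_min, y_max) in stimulus pixels.
AOIS = {
    "eyes":  (60, 220, 90, 150),
    "nose":  (110, 170, 150, 210),
    "mouth": (100, 180, 210, 260),
}

def fixation_proportions(fix_x, fix_y, durations):
    """Proportion of total fixation time falling inside each AOI."""
    fix_x, fix_y, durations = map(np.asarray, (fix_x, fix_y, durations))
    total = durations.sum()
    props = {}
    for name, (x0, x1, y0, y1) in AOIS.items():
        inside = (fix_x >= x0) & (fix_x <= x1) & (fix_y >= y0) & (fix_y <= y1)
        props[name] = durations[inside].sum() / total
    return props
```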

9.
In a dual-task paradigm, participants performed a spatial location working memory task and a two-alternative forced-choice perceptual decision task (neutral vs. fearful) with gradually morphed emotional faces (neutral ∼ fearful). Task-irrelevant word distractors (negative, neutral, and control) were experimentally manipulated during spatial working memory encoding. We hypothesized that, if affective perception is influenced by concurrent cognitive load from a working memory task, task-irrelevant emotional distractors would bias subsequent perceptual decisions on ambiguous facial expressions. We found that when either neutral or negative emotional words were presented as task-irrelevant working-memory distractors, participants more frequently reported perceiving a fearful face, but only at the higher emotional intensity levels of the morphed faces. The affective perception bias due to negative emotional distractors also correlated with a decrease in working memory performance. Taken together, our findings suggest that concurrent working memory load from task-irrelevant distractors has an impact on the affective perception of facial expressions.
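To make the morphing idea concrete: the simplest stand-in for a neutral-to-fearful continuum is a pixel-wise blend between two aligned images of the same identity. Published morphing software also warps facial geometry, so treat this only as an illustration of how graded intensity levels are generated.

```python
import numpy as np

def blend_morph(neutral, fearful, alpha):
    """Return an image that is `alpha` fearful and (1 - alpha) neutral.

    neutral, fearful : aligned uint8 images of the same identity
    alpha            : morph level in [0, 1]
    """
    mix = (1.0 - alpha) * neutral.astype(float) + alpha * fearful.astype(float)
    return mix.clip(0, 255).astype(np.uint8)

# A ten-step continuum from fully neutral to fully fearful:
# continuum = [blend_morph(neutral, fearful, a) for a in np.linspace(0, 1, 10)]
```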

10.
Scheller E, Büchel C, Gamer M. PLoS ONE. 2012;7(7):e41792.
Diagnostic features of emotional expressions are differentially distributed across the face. The current study examined whether these diagnostic features are preferentially attended even when they are irrelevant to the task at hand or when faces appear at different locations in the visual field. To this aim, fearful, happy, and neutral faces were presented to healthy individuals in two experiments while eye movements were measured. In Experiment 1, participants had to accomplish an emotion classification, a gender discrimination, or a passive viewing task. To differentiate fast, potentially reflexive eye movements from more elaborate scanning of faces, stimuli were presented for either 150 or 2000 ms. In Experiment 2, similar faces were presented at different spatial positions to rule out the possibility that eye movements merely reflect a general bias for certain visual field locations. In both experiments, participants fixated the eye region much longer than any other region of the face. Furthermore, the eye region was attended more strongly when fearful or neutral faces were shown, whereas more attention was directed toward the mouth of happy facial expressions. Since these results were similar across the other experimental manipulations, they indicate that diagnostic features of emotional expressions are preferentially processed irrespective of task demands and spatial locations. Saliency analyses revealed that a computational model of bottom-up visual attention could not explain these results. Furthermore, as these gaze preferences were evident very early after stimulus onset and occurred even when saccades did not allow for extracting further information from the stimuli, they may reflect a preattentive mechanism that automatically detects relevant facial features in the visual field and facilitates the orienting of attention towards them. This mechanism might crucially depend on amygdala functioning, and it is potentially impaired in a number of clinical conditions such as autism or social anxiety disorders.
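The saliency analysis mentioned above asks whether a bottom-up model reproduces the gaze data. The sketch below uses OpenCV's spectral-residual saliency as a convenient stand-in (the study's exact model may differ, and this requires opencv-contrib-python) and correlates the model map with an empirical fixation-density map. File names are placeholders.

```python
import cv2
import numpy as np

image = cv2.imread("face_stimulus.png")              # placeholder stimulus
sal = cv2.saliency.StaticSaliencySpectralResidual_create()
ok, saliency_map = sal.computeSaliency(image)        # float map in [0, 1]

# fixation_density.npy: same-shaped map built by smoothing observed fixations.
fixation_map = np.load("fixation_density.npy")       # placeholder data
r = np.corrcoef(saliency_map.ravel(), fixation_map.ravel())[0, 1]
print(f"model-fixation correlation: r = {r:.2f}")    # low r = model fails
```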

11.
The current study explored the relationship between shyness and face scanning patterns for own- and other-race faces in adults. Participants completed a shyness inventory and a face recognition task in which their eye movements were recorded by a Tobii 1750 eye tracker. We found that: (1) participants' shyness scores were negatively correlated with the proportion of fixation on the eyes, regardless of the race of the face viewed; the shyer the participants were, the less time they spent fixating the eye region. (2) High-shyness participants tended to fixate significantly more than low-shyness participants on the regions just below the eyes, as if to avoid direct eye contact. (3) When participants were recognizing own-race faces, their shyness scores were positively correlated with the normalized criterion: the shyer they were, the more apt they were to judge faces as novel, regardless of whether they were target or foil faces. The present results support an avoidance hypothesis of shyness, suggesting that shy individuals tend to avoid directly fixating others' eyes, regardless of face race.
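The "normalized criterion" in point (3) is the signal-detection statistic c; a positive c indicates a conservative bias toward responding "novel". A minimal sketch, assuming hit and false-alarm rates have already been computed:

```python
from scipy.stats import norm

def criterion(hit_rate, fa_rate):
    """Signal-detection criterion c = -(z(H) + z(FA)) / 2."""
    return -(norm.ppf(hit_rate) + norm.ppf(fa_rate)) / 2.0

# e.g., few "old" responses to both targets and foils -> conservative bias:
print(criterion(0.60, 0.10))  # ~0.51, biased toward judging faces as novel
```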

12.
Perceived age is a psychosocial factor that can influence both with whom and how we choose to interact socially. Though intuition tells us that a smile makes us look younger, surprisingly little empirical evidence exists to explain how age-irrelevant emotional expressions bias the subjective decision threshold for age. We examined the role that emotional expression plays in judging age from a face. College-aged participants were asked to sort emotional and neutral expressions of male facial stimuli, morphed across eight age levels, into the categories "young" or "old." Our results indicated that faces at the lower age levels were more likely to be categorized as old when they showed a sad expression rather than a neutral one. Mirroring that, happy faces were more often judged as young at the higher age levels than neutral faces. Our findings suggest that emotion interacts with age perception such that, in a young adult sample, a happy expression raises the threshold for an "old" decision while a sad expression lowers it.
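One way to quantify the threshold shift described above is to fit a logistic psychometric function to the proportion of "old" responses at each of the eight age levels and compare the 50% points across expressions. The response proportions below are invented solely to make the example runnable.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, x0, k):
    """Psychometric function: p("old") as a function of age level."""
    return 1.0 / (1.0 + np.exp(-k * (x - x0)))

age_levels = np.arange(1, 9)  # the eight morphed age levels
p_old_sad = np.array([0.08, 0.15, 0.30, 0.52, 0.70, 0.85, 0.93, 0.97])
p_old_happy = np.array([0.03, 0.06, 0.14, 0.28, 0.50, 0.68, 0.86, 0.95])

(x0_sad, _), _ = curve_fit(logistic, age_levels, p_old_sad, p0=[4.0, 1.0])
(x0_happy, _), _ = curve_fit(logistic, age_levels, p_old_happy, p0=[4.0, 1.0])
# A lower x0 for sad faces means "old" is reached at younger morph levels.
print(f"'old' threshold: sad = {x0_sad:.2f}, happy = {x0_happy:.2f}")
```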

13.
The experiments described in this study were intended to increase our knowledge of social cognition in primates. Long-tailed macaques (Macaca fascicularis) had to discriminate facial drawings of different emotional expressions. A new experimental approach was used: during the experimental sessions, social interactions within the group were permitted, but the learning behaviour of individual monkeys was analysed. The procedure consisted of a simultaneous discrimination between four visual patterns under continuous reinforcement. It has implications not only for simple tasks of stimulus discrimination but also for complex problems of internal representation and visual communication. The monkeys quickly learned to discriminate faces with different emotional expressions. This discrimination ability was completely invariant under variations of colour, brightness, size, and rotation; rotated and inverted faces were recognized perfectly. A preference test for particular features yielded a graded ranking of facial components: most important for face recognition was the outline, followed by the eye region and the mouth. An asymmetry in recognition of the left and right halves of the face was found. Further tests involving jumbled faces indicated that not only the presence of distinct facial cues but also the specific relation of facial features is essential in recognizing faces. The experiment generally confirms that causal mechanisms of social cognition in non-human primates can be studied experimentally. The behavioural results are highly consistent with findings from neurophysiology and research with human subjects.

14.
Individuation and holistic processing of faces in rhesus monkeys
Despite considerable evidence that neural activity in monkeys reflects various aspects of face perception, relatively little is known about monkeys' face processing abilities. Two characteristics of face processing observed in humans are a subordinate-level entry point, i.e., the default recognition of faces at the subordinate rather than basic level of categorization, and holistic effects, i.e., perception of facial displays as an integrated whole. The present study used an adaptation paradigm to test whether untrained rhesus macaques (Macaca mulatta) display these hallmarks of face processing. In experiments 1 and 2, macaques showed greater rebound from adaptation to conspecific faces than to other animals at the individual, or subordinate, level. In experiment 3, exchanging only the bottom half of a monkey face produced greater rebound in aligned than in misaligned composites, indicating that for normal, aligned faces, the new bottom half may have influenced perception of the whole face. Scan path analysis supported this assertion: during rebound, fixation on the unchanged eye region was renewed, but only for aligned stimuli. These experiments show that macaques naturally display the distinguishing characteristics of face processing seen in humans and provide the first clear demonstration that holistic information guides scan paths for conspecific faces.

15.
Dogs exhibit characteristic looking patterns when viewing human faces, but little is known about the underlying cognitive mechanisms and how much they are influenced by individual experience. In Experiment 1, seven dogs were trained in a simultaneous discrimination procedure to assess whether they could discriminate (a) the owner's face parts (eyes, nose, or mouth) presented in isolation and (b) whole faces in which the same parts were covered. Dogs discriminated all three parts of the owner's face presented in isolation, but needed fewer sessions to reach the learning criterion for the eyes than for the nose or mouth. Moreover, covering the eye region significantly disrupted face discriminability compared to the whole-face condition, whereas no such difference was found when the nose or mouth was hidden. In Experiment 2, dogs were presented with manipulated images of the owner's face (inverted, blurred, scrambled, grey-scale) to test the relative contributions of part-based and configural processing in the discrimination of human faces. Furthermore, by comparing the dogs enrolled in the previous experiment with seven 'naïve' dogs, we examined whether the relative contributions of part-based and configural processing were affected by the dogs' experience with the face stimuli. Naïve dogs discriminated the owner only when configural information was provided, whereas expert dogs could discriminate the owner even when part-based processing was necessary. The present study provides the first evidence that dogs can discriminate isolated internal features of a human face and corroborates previous reports of the salience of the eye region for human face processing. Although reliance on part perception may be increased by specific experience, our findings suggest that human face discrimination by dogs relies mainly on configural rather than part-based elaboration.

16.
An appropriate response to companions' emotional signals is important for all social creatures. The emotional expressions of humans and non-human animals have analogies in form and function, suggesting shared evolutionary roots, but very little is known about how animals other than primates view and process facial expressions. In primates, threat-related facial expressions evoke exceptional viewing patterns compared with neutral or positive stimuli. Here, we explore whether domestic dogs (Canis familiaris) have such an attentional bias toward threatening social stimuli and whether observed emotional expressions affect dogs' gaze fixation distribution among the facial features (eyes, midface, and mouth). We recorded the voluntary eye gaze of 31 domestic dogs while they viewed facial photographs of humans and dogs with three emotional expressions (threatening, pleasant, and neutral). We found that dogs' gaze fixations spread systematically among facial features. The distribution of fixations was altered by the seen expression, but the eyes were the most probable targets of the first fixations and gathered longer looking durations than the mouth regardless of the viewed expression. Examination of the inner facial features as a whole revealed more pronounced scanning differences among expressions. This suggests that dogs do not base their perception of facial expressions on the viewing of single structures, but on the interpretation of the composition formed by the eyes, midface, and mouth. Dogs evaluated social threat rapidly, and this evaluation led to an attentional bias that depended on the depicted species: threatening conspecifics' faces evoked heightened attention, whereas threatening human faces instead evoked an avoidance response. We propose that threatening signals carrying differential biological validity are processed via distinct neurocognitive pathways. Both of these mechanisms may have adaptive significance for domestic dogs. The findings provide a novel perspective on understanding the processing of emotional expressions and sensitivity to social threat in non-primates.

17.
It has been established that the recognition of facial expressions integrates contextual information. In this study, we aimed to clarify the influence of contextual odors. Participants were asked to match a target face varying in expression intensity with non-ambiguous expressive faces. Intensity variations in the target faces were created by morphing expressive faces with neutral faces. In addition, the influence of verbal information was assessed by providing half the participants with the emotion names. Odor cues were manipulated by placing participants in a pleasant (strawberry), aversive (butyric acid), or no-odor control context. The results showed two main effects of the odor context. First, the minimum amount of visual information required to perceive an expression was lowered when the odor context was emotionally congruent: happiness was correctly perceived at lower intensities in faces displayed in the pleasant odor context, and the same occurred for disgust and anger in the aversive odor context. Second, the odor context influenced the false perception of expressions that were not used in the target faces, with distinct patterns depending on the presence of emotion names. When emotion names were provided, the aversive odor context decreased intrusions for ambiguous disgust faces but increased them for anger. When emotion names were not provided, this effect did not occur, and the pleasant odor context elicited an overall increase in intrusions for negative expressions. We conclude that olfaction plays a role in the way facial expressions are perceived, in interaction with other contextual influences such as verbal information.

18.
Previous research has demonstrated that the way human adults look at others' faces is modulated by their cultural background, but very little is known about how such culture-specific patterns of face gaze develop. The current study investigated the role of cultural background in the development of face scanning in young children between the ages of 1 and 7 years, and its modulation by the eye gaze direction of the face. British and Japanese participants' eye movements were recorded while they observed faces moving their eyes towards or away from the participants. British children fixated more on the mouth, whereas Japanese children fixated more on the eyes, replicating the results with adult participants. No cultural differences were observed in the differential responses to direct and averted gaze. The results suggest that different patterns of face scanning exist between cultures from the first years of life, but that the differential scanning of direct and averted gaze associated with different cultural norms develops later in life.

19.

Background

Humans detect faces with direct gazes among those with averted gazes more efficiently than they detect faces with averted gazes among those with direct gazes. We examined whether this “stare-in-the-crowd” effect occurs in chimpanzees (Pan troglodytes), whose eye morphology differs from that of humans (i.e., low-contrast eyes, dark sclera).

Methodology/Principal Findings

An adult female chimpanzee was trained to search for an odd-item target (front view of a human face) among distractors that differed from the target only with respect to the direction of the eye gaze. During visual-search testing, she performed more efficiently when the target was a direct-gaze face than when it was an averted-gaze face. This direct-gaze superiority was maintained when the faces were inverted and when parts of the face were scrambled. Subsequent tests revealed that gaze perception in the chimpanzee was controlled by the contrast between iris and sclera, as in humans, but that the chimpanzee attended only to the position of the iris in the eye, irrespective of head direction.

Conclusion/Significance

These results suggest that the chimpanzee can discriminate among human gaze directions and is more sensitive to direct gazes. However, limitations in the chimpanzee's perception of human gaze are suggested by her inability to completely transfer her performance to faces shown in three-quarter view.

20.
Cognitive theories of depression posit that perception is negatively biased in depressive disorder. Previous studies have provided empirical evidence for this notion but left open the question of whether the negative perceptual bias reflects a stable trait or the current depressive state. Here we investigated the stability of negatively biased perception over time. Emotion perception was examined in patients with major depressive disorder (MDD) and healthy control participants in two experiments. The first experiment assessed subjective biases in the recognition of facial emotional expressions: participants were presented with faces morphed along a continuum from sad through neutral to happy and had to decide whether each face was sad or happy. The second experiment assessed automatic emotion processing by measuring the potency of emotional faces to gain access to awareness under interocular suppression. A follow-up investigation using the same tests was performed three months later. In the emotion recognition task, patients with major depression showed a shift in the criterion for differentiating sad from happy faces: compared with healthy controls, patients with MDD required a greater intensity of the happy expression to recognize a face as happy. After three months, this negative perceptual bias was reduced relative to the control group, and the reduction correlated with the reduction in depressive symptoms. In contrast to previous work, we found no evidence for preferential access to awareness of sad versus happy faces. Taken together, our results indicate that MDD-related perceptual biases in emotion recognition reflect the current clinical state rather than a stable depressive trait.

