Similar Documents
20 similar documents retrieved (search time: 453 ms)
1.
Dogs exhibit characteristic looking patterns when looking at human faces, but little is known about the underlying cognitive mechanisms and how much these are influenced by individual experience. In Experiment 1, seven dogs were trained in a simultaneous discrimination procedure to assess whether they could discriminate a) the owner's face parts (eyes, nose or mouth) presented in isolation and b) whole faces in which the same parts were covered. Dogs discriminated all three parts of the owner's face presented in isolation, but needed fewer sessions to reach the learning criterion for the eyes than for the nose and mouth. Moreover, covering the eye region significantly disrupted face discriminability compared to the whole-face condition, whereas no such difference was found when the nose or mouth was hidden. In Experiment 2, dogs were presented with manipulated images of the owner's face (inverted, blurred, scrambled, grey-scale) to test the relative contribution of part-based and configural processing to the discrimination of human faces. Furthermore, by comparing the dogs enrolled in the previous experiment with seven ‘naïve’ dogs, we examined whether the relative contribution of part-based and configural processing was affected by dogs' experience with the face stimuli. Naïve dogs discriminated the owner only when configural information was provided, whereas expert dogs could discriminate the owner even when part-based processing was necessary. The present study provides the first evidence that dogs can discriminate isolated internal features of a human face and corroborates previous reports of the salience of the eye region in human face processing. Although reliance on part-based perception may be increased by specific experience, our findings suggest that human face discrimination by dogs relies mainly on configural rather than part-based processing.

2.
J Zhang, X Li, Y Song, J Liu. PLoS ONE 2012, 7(7): e40390
Numerous studies with functional magnetic resonance imaging have shown that the fusiform face area (FFA) in the human brain plays a key role in face perception. Recent studies have found that both the featural information of faces (e.g., eyes, nose, and mouth) and the configural information of faces (i.e., spatial relations among features) are encoded in the FFA. However, little is known about whether the featural information is encoded independently of or in combination with the configural information in the FFA. Here we used multi-voxel pattern analysis to examine holistic representation of faces in the FFA by correlating spatial patterns of activation with behavioral performance in discriminating face parts with face configurations either present or absent. Behaviorally, the absence of face configurations (versus presence) impaired discrimination of face parts, suggesting a holistic representation in the brain. Neurally, spatial patterns of activation in the FFA were more similar among correct than incorrect trials only when face parts were presented in a veridical face configuration. In contrast, spatial patterns of activation in the occipital face area, as well as the object-selective lateral occipital complex, were more similar among correct than incorrect trials regardless of the presence of veridical face configurations. This finding suggests that in the FFA faces are represented not on the basis of individual parts but in terms of the whole that emerges from the parts.

3.
The current study examined the time course of implicit processing of distinct facial features and the associated event-related potential (ERP) components. To this end, we used a masked priming paradigm to investigate implicit processing of the eyes and mouth in upright and inverted faces, using a prime duration of 33 ms. Two types of prime-target pairs were used: 1. congruent (e.g., open eyes only in both prime and target, or open mouth only in both prime and target); 2. incongruent (e.g., open mouth only in the prime and open eyes only in the target, or vice versa). The identity of the faces changed between prime and target. Participants pressed one button when the target face had the eyes open and another button when the target face had the mouth open. The behavioral results showed faster RTs for the eyes in upright faces than for the eyes in inverted faces and for the mouth in either orientation. They also revealed a congruent priming effect for the mouth in upright faces. The ERP findings showed a face orientation effect across all ERP components studied (P1, N1, N170, P2, N2, P3) starting at about 80 ms, and a congruency/priming effect on late components (P2, N2, P3) starting at about 150 ms. Crucially, the results showed that the orientation effect was driven by the eye region (N170, P2) and that the congruency effect started earlier for the eyes (P2) than for the mouth (N2). These findings mark the time course of the processing of internal facial features and provide further evidence that the eyes are automatically processed and are very salient facial features that strongly affect the amplitude, latency, and distribution of neural responses to faces.

4.
Tabak JA, Zayas V. PLoS ONE 2012, 7(5): e36671
Research has shown that people are able to judge sexual orientation from faces with above-chance accuracy, but little is known about how these judgments are formed. Here, we investigated the importance of well-established face processing mechanisms in such judgments: featural processing (e.g., an eye) and configural processing (e.g., spatial distance between eyes). Participants judged sexual orientation from faces presented for 50 milliseconds either upright, which recruits both configural and featural processing, or upside-down, when configural processing is strongly impaired and featural processing remains relatively intact. Although participants judged women's and men's sexual orientation with above-chance accuracy for upright faces and for upside-down faces, accuracy for upside-down faces was significantly reduced. The reduced judgment accuracy for upside-down faces indicates that configural face processing significantly contributes to accurate snap judgments of sexual orientation.

5.
Balas BJ, Sinha P. Spatial Vision 2007, 21(1-2): 119-135
Configural information has long been considered important for face recognition. However, traditional portraiture instruction encourages the artist to use a 'generic' configuration for faces rather than attempting to replicate precise feature positions. We examine this intriguing paradox with two tasks designed to test the extent to which configural information is incorporated into face representations. In Experiment 1, we use a simplified face production task to examine how accurately feature configuration can be incorporated in the generated likenesses. In Experiment 2, we ask if the 'portraits' created in Experiment 1 are discriminable from veridical images. The production and recognition results from these experiments show a consistent pattern. Subjects are quite poor at arranging facial features (eyes, nose and mouth) in their correct locations, and at distinguishing erroneous configurations from correct ones. This seeming insensitivity to configural relations is consistent with artists' practice of creating portraits based on a generic geometric template. Interestingly, the frame of reference artists implicitly use for this generic template - the external face contour - emerges as a significant modulator of performance in our experimental results. Production errors are reduced and recognition performance is enhanced in the presence of outer contours. We discuss the implications of these results for face recognition models, as well as some possible perceptual reasons why portraits are so difficult to create.

6.
Recognition and individuation of conspecifics by their face is essential for primate social cognition. This ability is driven by a mechanism that integrates the appearance of facial features with subtle variations in their configuration (i.e., second-order relational properties) into a holistic representation. So far, there is little evidence of whether our evolutionary ancestors show sensitivity to featural spatial relations and hence holistic processing of faces as shown in humans. Here, we directly compared macaques with humans in their sensitivity to configurally altered faces in upright and inverted orientations using a habituation paradigm and eye tracking technologies. In addition, we tested for differences in processing of conspecific faces (human faces for humans, macaque faces for macaques) and non-conspecific faces, addressing aspects of perceptual expertise. In both species, we found sensitivity to second-order relational properties for conspecific (expert) faces, when presented in upright, not in inverted, orientation. This shows that macaques possess the requirements for holistic processing, and thus show similar face processing to that of humans.

7.
Atypical face processing plays a key role in social interaction difficulties encountered by individuals with autism. In the current fMRI study, the Thatcher illusion was used to investigate several aspects of face processing in 20 young adults with high-functioning autism spectrum disorder (ASD) and 20 matched neurotypical controls. "Thatcherized" stimuli were modified at either the eyes or the mouth and participants discriminated between pairs of faces while cued to attend to either of these features in upright and inverted orientation. Behavioral data confirmed sensitivity to the illusion and intact configural processing in ASD. Directing attention towards the eyes vs. the mouth in upright faces in ASD led to (1) improved discrimination accuracy; (2) increased activation in areas involved in social and emotional processing; (3) increased activation in subcortical face-processing areas. Our findings show that when explicitly cued to attend to the eyes, activation of cortical areas involved in face processing, including its social and emotional aspects, can be enhanced in autism. This suggests that impairments in face processing in autism may be caused by a deficit in social attention, and that giving specific cues to attend to the eye-region when performing behavioral therapies aimed at improving social skills may result in a better outcome.

8.
A previous experiment showed that a chimpanzee performed better in searching for a target human face that differed in orientation from distractors when the target had an upright orientation than when targets had an inverted or horizontal orientation [Tomonaga (1999a) Primate Res 15:215–229]. This upright superiority effect was also seen when chimpanzee faces were used as targets, but not when photographs of a house were used. The present study sought to extend these results and explore the factors affecting the face-specific upright superiority effect. Upright superiority was shown in a visual search for orientation when caricatured human faces and dog faces were used as stimuli for the chimpanzee, but not when shapes of a hand and chairs were presented. Thus, the configural properties of facial features, which cause an inversion effect in face recognition in humans and chimpanzees, were thought to be a source of the upright superiority effect in the visual search process. To examine this possibility, various stimulus manipulations were introduced in subsequent experiments. The results clearly show that the configuration of facial features plays a critical role in the upright superiority effect, and strongly suggest similarity in face processing between humans and chimpanzees.

9.
This study investigated schematic face preferences in infant macaque monkeys. We also examined the roles of whole and partial features in facial recognition and the related developmental change. Sixteen infant monkeys, all less than two months old, were presented with two stimulus pairs. Pair A consisted of "face" and "parts," with the components representing facial parts (i.e., eyes, mouth, and nose). Pair B consisted of "configuration" and "linear," each including three black squares. In each pair, one of the two stimuli represented a facial configuration, namely "face" and "configuration." Visual following responses toward each stimulus were analyzed. The results revealed an early preference for schematic faces in these nonhuman primates. Infants less than one month of age showed a preference only for the stimulus containing the whole facial configuration (i.e., "configuration" in Pair B). One-month-old macaque infants showed a preference for "face" but not for "configuration," meaning that their preference at that age was affected by both the shape of the components and the overall configuration. As the developmental change and the contribution of both facial features are similar to those in human infants, primates may share common cognitive processes in early schematic face recognition.

10.
It is generally agreed that some features of a face, namely the eyes, are more salient than others, as indexed by behavioral diagnosticity, gaze-fixation patterns and evoked neural responses. However, because previous studies used unnatural stimuli, there is no evidence so far that the early encoding of a whole face in the human brain is based on the eyes or other facial features. To address this issue, scalp electroencephalogram (EEG) and eye gaze-fixations were recorded simultaneously in a gaze-contingent paradigm while observers viewed faces. We found that the N170, indexing the earliest face-sensitive response in the human brain, was largest when the fixation position was located around the nasion. Interestingly, for inverted faces, this optimal fixation position was more variable, but mainly clustered in the upper part of the visual field (around the mouth). These observations extend the findings of recent behavioral studies, suggesting that the early encoding of a face, as indexed by the N170, is not driven by the eyes per se, but rather arises from a general perceptual setting (upper-visual-field advantage) coupled with the alignment of a face stimulus to a stored face template.

11.
‘Infant shyness’, in which infants react shyly to adult strangers, presents during the third quarter of the first year. Researchers claim that shy children over the age of three years are experiencing approach-avoidance conflicts. Counter-intuitively, shy children do not avoid the eyes when scanning faces; rather, they spend more time looking at the eye region than non-shy children do. It is currently unknown whether young infants show this conflicted shyness and its corresponding characteristic pattern of face scanning. Here, using infant behavioral questionnaires and an eye-tracking system, we found that highly shy infants had high scores for both approach and fear temperaments (i.e., approach-avoidance conflict) and that they showed longer dwell times in the eye regions than less shy infants during their initial fixations to facial stimuli. This initial hypersensitivity to the eyes was independent of whether the viewed faces were of their mothers or strangers. Moreover, highly shy infants preferred strangers with an averted gaze and face to strangers with a directed gaze and face. This initial scanning of the eye region and the overall preference for averted gaze faces were not explained solely by the infants’ age or temperament (i.e., approach or fear). We suggest that infant shyness involves a conflict in temperament between the desire to approach and the fear of strangers, and this conflict is the psychological mechanism underlying infants’ characteristic behavior in face scanning.

12.
Visual remapping of touch (VRT) is a phenomenon in which seeing a human face being touched enhances detection of tactile stimuli on the observer's own face, especially when the observed face expresses fear. This study tested whether VRT would occur when seeing touch on monkey faces and whether it would be similarly modulated by facial expressions. Human participants detected near-threshold tactile stimulation on their own cheeks while watching fearful, happy, and neutral human or monkey faces being concurrently touched or merely approached by fingers. We predicted minimal VRT for neutral and happy monkey faces but greater VRT for fearful monkey faces. The results with human faces replicated previous findings, demonstrating stronger VRT for fearful expressions than for happy or neutral expressions. However, there was no VRT (i.e., no difference between accuracy in touch and no-touch trials) for any of the monkey faces, regardless of facial expression, suggesting that touch on a non-human face is not remapped onto the somatosensory system of the human observer.

13.
Fu G, Hu CS, Wang Q, Quinn PC, Lee K. PLoS ONE 2012, 7(6): e37688
It is well established that individuals show an other-race effect (ORE) in face recognition: they recognize own-race faces better than other-race faces. The present study tested the hypothesis that individuals would also scan own- and other-race faces differently. We asked Chinese participants to remember Chinese and Caucasian faces and tested their memory of the faces over five testing blocks. The participants' eye movements were recorded using an eye tracker. The data were analyzed with an Area of Interest (AOI) approach using the key AOIs of a face (eyes, nose, and mouth). We also used the iMap toolbox to analyze the raw data of participants' fixations on each pixel of the entire face. Results from both types of analyses strongly supported the hypothesis. When viewing target Chinese or Caucasian faces, Chinese participants spent a significantly greater proportion of fixation time on the eyes of other-race Caucasian faces than on the eyes of own-race Chinese faces. In contrast, they spent a significantly greater proportion of fixation time on the nose and mouth of Chinese faces than on the nose and mouth of Caucasian faces. This pattern of differential fixation, for own- and other-race eyes and nose in particular, was consistent even as participants became increasingly familiar with the target faces of both races. The results could not be explained by the perceptual salience of the Chinese nose or Caucasian eyes because these features were not differentially salient across the races. Our results are discussed in terms of the facial morphological differences between Chinese and Caucasian faces and the enculturation of mutual-gaze norms in East Asian cultures.
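The Area-of-Interest analysis described above can be sketched as follows. This is an illustrative reconstruction only: the fixation format, coordinates, and AOI boundaries are assumptions, not values from the study.

```python
# Hypothetical sketch of an AOI fixation-time analysis: each fixation is
# (x, y, duration_ms); each AOI is an axis-aligned box. All coordinates
# below are illustrative, not taken from the study's stimuli.

AOIS = {
    "eyes":  (100, 300, 120, 180),   # (x_min, x_max, y_min, y_max)
    "nose":  (170, 230, 180, 260),
    "mouth": (150, 250, 260, 320),
}

def aoi_proportions(fixations):
    """Return the proportion of total fixation time spent in each AOI."""
    totals = {name: 0.0 for name in AOIS}
    grand_total = 0.0
    for x, y, dur in fixations:
        grand_total += dur
        for name, (x0, x1, y0, y1) in AOIS.items():
            if x0 <= x <= x1 and y0 <= y <= y1:
                totals[name] += dur
                break
    return {name: t / grand_total for name, t in totals.items()}

# Toy trial: one fixation each on the eyes, nose, and mouth.
fix = [(200, 150, 400), (210, 220, 300), (190, 280, 300)]
props = aoi_proportions(fix)
```

Comparing such proportions between own-race and other-race target faces, per block, would reproduce the kind of differential-fixation pattern the abstract reports.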

14.
Phase information is a fundamental aspect of visual stimuli. However, the nature of the binocular combination of stimuli defined by modulations in contrast, so-called second-order stimuli, is presently not clear. To address this issue, we measured binocular combination for first-order (luminance-modulated) and second-order (contrast-modulated) stimuli using a binocular phase combination paradigm in seven normal adults. We found that the binocular perceived phase of second-order gratings depends on the interocular signal ratio, as has been previously shown for their first-order counterparts; the interocular signal ratio at which the two eyes were balanced was close to 1 for both first- and second-order phase combination. However, second-order combination is more linear than previously found for first-order combination. Furthermore, binocular combination of second-order stimuli was similar regardless of whether the carriers in the two eyes were correlated, anti-correlated, or uncorrelated. This suggests that, in normal adults, the binocular phase combination of second-order stimuli occurs after monocular extraction of the second-order modulations. The sensory balance associated with this second-order combination can be obtained from binocular phase combination measurements.
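As a rough illustration of the phase-combination paradigm, a purely linear binocular summation model predicts the perceived phase of the fused grating directly from the interocular contrast ratio. This sketch is a simplifying assumption, not the study's full model (real binocular combination also involves interocular gain control); it only shows the quantity being measured.

```python
import cmath
import math

def perceived_phase(c_left, c_right, phi_deg=45.0):
    """Perceived phase (degrees) of the linear sum of two sinusoids of the
    same spatial frequency shown to the two eyes with phases +phi/2 and
    -phi/2 and contrasts c_left and c_right. Each grating is treated as a
    complex phasor; the fused percept's phase is the phase of their sum."""
    phi = math.radians(phi_deg)
    s = c_left * cmath.exp(1j * phi / 2) + c_right * cmath.exp(-1j * phi / 2)
    return math.degrees(cmath.phase(s))
```

Under this linear model, equal contrasts in the two eyes (ratio 1) yield a perceived phase of 0, i.e. the balance point the abstract describes, while an imbalanced ratio pulls the perceived phase toward the stronger eye's grating.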

15.
Human observers are remarkably proficient at recognizing expressions of emotion and at readily grouping them into distinct categories. When morphing one facial expression into another, the linear changes in low-level features are insufficient to describe the changes in perception, which instead follow an s-shaped function. Important questions are whether there are single diagnostic regions in the face that drive categorical perception for certain pairings of emotion expressions, and how information in those regions interacts when presented together. We report results from two experiments with morphed fear-anger expressions, where (a) half of the face was masked or (b) composite faces made up of different expressions were presented. When isolated upper and lower halves of faces were shown, the eyes were found to be almost as diagnostic as the whole face, with the response function showing a steep category boundary. In contrast, the mouth yielded substantially lower accuracy, and responses followed a much flatter psychometric function. When a composite face consisting of mismatched upper and lower halves was used and observers were instructed to judge exclusively the expression of either the mouth or the eyes, the to-be-ignored part always influenced perception of the target region. In line with Experiment 1, the eye region exerted a much stronger influence on mouth judgements than vice versa. Again, categorical perception was significantly more pronounced for upper halves of faces. The present study shows that identification of fear and anger in morphed faces relies heavily on information from the upper half of the face, most likely the eye region. Categorical perception is possible when only the upper face half is present, but compromised when only the lower part is shown. Moreover, observers tend to integrate all available features of a face, even when trying to focus on only one part.
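The s-shaped response function mentioned above is commonly modeled with a logistic psychometric function. The boundary and slope values below are illustrative assumptions, not fitted parameters from the study: a small slope parameter stands in for the steep, eye-driven category boundary, a large one for the flatter mouth-only function.

```python
import math

def psychometric(x, boundary=0.5, slope=0.05):
    """Logistic psychometric function: probability of an 'anger' response
    at morph level x (0 = 100% fear, 1 = 100% anger). Smaller `slope`
    values give a steeper category boundary."""
    return 1.0 / (1.0 + math.exp(-(x - boundary) / slope))

# Hypothetical response curves at three morph levels (30%, 50%, 70% anger):
# a steep curve for eyes-only stimuli, a flat one for mouth-only stimuli.
eyes  = [psychometric(x, slope=0.03) for x in (0.3, 0.5, 0.7)]
mouth = [psychometric(x, slope=0.20) for x in (0.3, 0.5, 0.7)]
```

Fitting such a function to each condition and comparing slopes is one standard way to quantify how categorical the perception is in each face region.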

16.
Although a growing number of empirical studies have revealed that activating mate-related motives might exert a specific set of consequences for human cognition and behaviors, such as attention and memory, little is known about whether mate-related motives affect self-regulated learning. The present study examined the effects of mate-related motives (mate-search and mate-guarding) on study-time allocation to faces varying in attractiveness. In two experiments, participants in mate-related priming conditions (Experiment 1: mate-search; Experiment 2: mate-guarding) or control conditions studied 20 female faces (10 highly attractive, 10 less attractive) during a self-paced study task, and then were given a yes/no face recognition task. Experiment 1 showed that activating a mate-search motive led the male participants to allocate more time to highly attractive female faces (i.e., perceived potential mates) than to less attractive ones. In Experiment 2, female participants in the mate-guarding priming condition spent more time studying highly attractive female faces (i.e., perceived potential rivals) than less attractive ones, compared to participants in the control condition. These findings illustrate the highly specific consequences of mate-related motives on study-time allocation, and highlight the value of exploring human cognition and motivation within evolutionary and self-regulated learning frameworks.

17.
Stein T, Peelen MV, Sterzer P. PLoS ONE 2011, 6(12): e29361
From the first days of life, humans preferentially orient towards upright faces, likely reflecting innate subcortical mechanisms. Here, we show that binocular rivalry can reveal face detection mechanisms in adults that are surprisingly similar to inborn face detection mechanisms. We used continuous flash suppression (CFS), a variant of binocular rivalry, to render stimuli invisible at the beginning of each trial and measured the time upright and inverted stimuli needed to overcome such interocular suppression. Critically, specific stimulus properties previously shown to modulate looking preferences in neonates similarly modulated adults' awareness of faces presented during CFS. First, the advantage of upright faces in overcoming CFS was strongly modulated by contrast polarity and direction of illumination. Second, schematic patterns consisting of three dark blobs were suppressed for shorter durations when the arrangement of these blobs respected the face-like configuration of the eyes and the mouth, and this effect was modulated by contrast polarity. No such effects were obtained in a binocular control experiment not involving CFS, suggesting a crucial role for face-sensitive mechanisms operating outside of conscious awareness. These findings indicate that visual awareness of faces in adults is governed by perceptual mechanisms that are sensitive to similar stimulus properties as those modulating newborns' face preferences.

18.
A key to understanding visual cognition is to determine when, how, and with what information the human brain distinguishes between visual categories. So far, the dynamics of information processing for categorization of visual stimuli has not been elucidated. By using an ecologically important categorization task (seven expressions of emotion), we demonstrate, in three human observers, that an early brain event (the N170 Event Related Potential, occurring 170 ms after stimulus onset) integrates visual information specific to each expression, according to a pattern. Specifically, starting 50 ms prior to the ERP peak, facial information tends to be integrated from the eyes downward in the face. This integration stops, and the ERP peaks, when the information diagnostic for judging a particular expression has been integrated (e.g., the eyes in fear, the corners of the nose in disgust, or the mouth in happiness). Consequently, the duration of information integration from the eyes down determines the latency of the N170 for each expression (e.g., with "fear" being faster than "disgust," itself faster than "happy"). For the first time in visual categorization, we relate the dynamics of an important brain event to the dynamics of a precise information-processing function.

19.
There is a growing body of literature to show that color can convey information, owing to its emotionally meaningful associations. Most research so far has focused on negative hue–meaning associations (e.g., red) with the exception of the positive aspects associated with green. We therefore set out to investigate the positive associations of two colors (i.e., green and pink), using an emotional facial expression recognition task in which colors provided the emotional contextual information for the face processing. In two experiments, green and pink backgrounds enhanced happy face recognition and impaired sad face recognition, compared with a control color (gray). Our findings therefore suggest that because green and pink both convey positive information, they facilitate the processing of emotionally congruent facial expressions (i.e., faces expressing happiness) and interfere with that of incongruent facial expressions (i.e., faces expressing sadness). Data also revealed a positive association for white. Results are discussed within the theoretical framework of emotional cue processing and color meaning.

20.
E Scheller, C Büchel, M Gamer. PLoS ONE 2012, 7(7): e41792
Diagnostic features of emotional expressions are differentially distributed across the face. The current study examined whether these diagnostic features are preferentially attended to even when they are irrelevant for the task at hand or when faces appear at different locations in the visual field. To this aim, fearful, happy and neutral faces were presented to healthy individuals in two experiments while measuring eye movements. In Experiment 1, participants had to accomplish an emotion classification, a gender discrimination or a passive viewing task. To differentiate fast, potentially reflexive, eye movements from a more elaborate scanning of faces, stimuli were presented for either 150 or 2000 ms. In Experiment 2, similar faces were presented at different spatial positions to rule out the possibility that eye movements only reflect a general bias for certain visual field locations. In both experiments, participants fixated the eye region much longer than any other region in the face. Furthermore, the eye region was attended to more pronouncedly when fearful or neutral faces were shown, whereas more attention was directed toward the mouth of happy facial expressions. Since these results were similar across the other experimental manipulations, they indicate that diagnostic features of emotional expressions are preferentially processed irrespective of task demands and spatial locations. Saliency analyses revealed that a computational model of bottom-up visual attention could not explain these results. Furthermore, as these gaze preferences were evident very early after stimulus onset and occurred even when saccades did not allow for extracting further information from these stimuli, they may reflect a preattentive mechanism that automatically detects relevant facial features in the visual field and facilitates the orientation of attention towards them. This mechanism might crucially depend on amygdala functioning and is potentially impaired in a number of clinical conditions such as autism or social anxiety disorder.

