Similar Articles
Found 20 similar articles (search time: 15 ms)
1.
‘Infant shyness’, in which infants react shyly to adult strangers, presents during the third quarter of the first year. Researchers claim that shy children over the age of three years are experiencing approach-avoidance conflicts. Counter-intuitively, shy children do not avoid the eyes when scanning faces; rather, they spend more time looking at the eye region than non-shy children do. It is currently unknown whether young infants show this conflicted shyness and its corresponding characteristic pattern of face scanning. Here, using infant behavioral questionnaires and an eye-tracking system, we found that highly shy infants had high scores for both approach and fear temperaments (i.e., approach-avoidance conflict) and that they showed longer dwell times in the eye regions than less shy infants during their initial fixations to facial stimuli. This initial hypersensitivity to the eyes was independent of whether the viewed faces were of their mothers or strangers. Moreover, highly shy infants preferred strangers with an averted gaze and face to strangers with a directed gaze and face. This initial scanning of the eye region and the overall preference for averted gaze faces were not explained solely by the infants’ age or temperament (i.e., approach or fear). We suggest that infant shyness involves a conflict in temperament between the desire to approach and the fear of strangers, and this conflict is the psychological mechanism underlying infants’ characteristic behavior in face scanning.

2.
Previous research has demonstrated that the way human adults look at others’ faces is modulated by their cultural background, but very little is known about how such a culture-specific pattern of face gaze develops. The current study investigated the role of cultural background in the development of face scanning in young children between the ages of 1 and 7 years, and its modulation by the eye gaze direction of the face. British and Japanese participants’ eye movements were recorded while they observed faces moving their eyes towards or away from the participants. British children fixated more on the mouth whereas Japanese children fixated more on the eyes, replicating the results with adult participants. No cultural differences were observed in the differential responses to direct and averted gaze. The results suggest that different patterns of face scanning exist between cultures from the first years of life, but that the differential scanning of direct and averted gaze associated with different cultural norms develops later in life.

3.
Research in both infants and adults has demonstrated that attachment expectations are associated with the attentional processing of attachment-related information. However, this research suffered from methodological issues and has not been validated across ages. Employing a more ecologically valid paradigm to measure attentional processes by means of eye tracking, the current study tested the defensive exclusion hypothesis in late childhood. According to this hypothesis, insecurely attached children are assumed to defensively exclude attachment-related information. We hypothesized that securely attached children process attachment-related neutral and emotional information in a more open manner than insecurely attached children. Sixty-two children (59.7% girls, 8–12 years) completed two tasks while their eye movements were recorded: task one presented an array of neutral faces including the mother and unfamiliar women, and task two presented the same array with happy and angry faces. Results indicated that more securely attached children looked longer at the mother’s face regardless of its emotional expression, and they also tended to maintain attention longer on the mother’s neutral face. Furthermore, more attachment avoidance was related to a reduced total viewing time of the mother’s neutral, happy, and angry faces. Attachment anxiety was not consistently related to the processing of the mother’s face. These findings support the theoretical assumption that securely attached children process all attachment-related information in an open manner.

4.
Fu G, Hu CS, Wang Q, Quinn PC, Lee K. PLoS ONE. 2012;7(6):e37688
It is well established that individuals show an other-race effect (ORE) in face recognition: they recognize own-race faces better than other-race faces. The present study tested the hypothesis that individuals would also scan own- and other-race faces differently. We asked Chinese participants to remember Chinese and Caucasian faces and we tested their memory of the faces over five testing blocks. The participants' eye movements were recorded with the use of an eye tracker. The data were analyzed with an Area of Interest approach using the key AOIs of a face (eyes, nose, and mouth). Also, we used the iMap toolbox to analyze the raw data of participants' fixation on each pixel of the entire face. Results from both types of analyses strongly supported the hypothesis. When viewing target Chinese or Caucasian faces, Chinese participants spent a significantly greater proportion of fixation time on the eyes of other-race Caucasian faces than the eyes of own-race Chinese faces. In contrast, they spent a significantly greater proportion of fixation time on the nose and mouth of Chinese faces than the nose and mouth of Caucasian faces. This pattern of differential fixation, for own- and other-race eyes and nose in particular, was consistent even as participants became increasingly familiar with the target faces of both races. The results could not be explained by the perceptual salience of the Chinese nose or Caucasian eyes because these features were not differentially salient across the races. Our results are discussed in terms of the facial morphological differences between Chinese and Caucasian faces and the enculturation of mutual gaze norms in East Asian cultures.
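The Area-of-Interest analysis described above, assigning fixations to key facial regions and computing each region's share of total fixation time, can be sketched as follows. This is a minimal illustration rather than the study's actual pipeline; the AOI rectangles, their coordinates, and the sample fixations are all invented for demonstration.

```python
# Hedged sketch of an AOI (Area of Interest) analysis: each fixation is
# assigned to a rectangular AOI (eyes, nose, mouth), and the proportion of
# total fixation time per AOI is computed. Coordinates are hypothetical.

from dataclasses import dataclass

@dataclass
class Fixation:
    x: float           # gaze position in pixels
    y: float
    duration_ms: float

# Hypothetical AOIs as (x_min, y_min, x_max, y_max) rectangles on a face image.
AOIS = {
    "eyes":  (100, 80, 300, 140),
    "nose":  (160, 140, 240, 220),
    "mouth": (140, 220, 260, 280),
}

def proportion_fixation_time(fixations):
    """Return each AOI's share of the total fixation time."""
    totals = {name: 0.0 for name in AOIS}
    grand_total = sum(f.duration_ms for f in fixations)
    for f in fixations:
        for name, (x0, y0, x1, y1) in AOIS.items():
            if x0 <= f.x <= x1 and y0 <= f.y <= y1:
                totals[name] += f.duration_ms
                break  # count each fixation in at most one AOI
    return {name: t / grand_total for name, t in totals.items()}

if __name__ == "__main__":
    sample = [
        Fixation(200, 100, 300),   # lands in the eyes AOI
        Fixation(200, 180, 200),   # nose
        Fixation(200, 250, 100),   # mouth
        Fixation(50, 50, 400),     # outside all AOIs
    ]
    print(proportion_fixation_time(sample))
```

Comparing such proportions between own-race and other-race face trials is the kind of contrast the abstract reports; pixel-wise approaches such as the iMap toolbox mentioned above dispense with predefined rectangles entirely.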

5.
Knowing where people look when viewing faces provides an objective measure of the information entering the visual system and of the cognitive strategies involved in face perception. In the present study, we recorded the eye movements of 20 congenitally deaf (10 male and 10 female) and 23 normal-hearing (11 male and 12 female) Japanese participants while they evaluated the emotional valence of static face stimuli. While no difference was found in the evaluation scores, eye movements during face observation differed between the groups. The deaf group looked at the eyes more frequently and for longer durations than at the nose, whereas the hearing group focused on the nose (or the central region of the face) more than on the eyes. These results suggest that the strategy employed to extract visual information when viewing static faces may differ between deaf and hearing people.

6.

Background

Eye movement strategies employed by humans to identify conspecifics are not universal. Westerners predominantly fixate the eyes during face recognition, whereas Easterners predominantly fixate the nose region, yet recognition accuracy is comparable. However, natural fixations do not unequivocally reflect information extraction, so the question of whether humans universally use identical facial information to recognize faces remains unresolved.

Methodology/Principal Findings

We monitored eye movements during face recognition in Western Caucasian (WC) and East Asian (EA) observers using a novel technique that parametrically restricts the information available outside central vision: ‘Spotlights’ with Gaussian apertures of 2°, 5°, or 8° dynamically centered on observers’ fixations. Strikingly, in the constrained Spotlight conditions (2° and 5°), observers of both cultures actively fixated the same facial information: the eyes and mouth. When fixating the nose made information from both the eyes and mouth simultaneously available (8°), EA observers, as expected, shifted their fixations towards this region.
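The gaze-contingent ‘Spotlight’ described above can be approximated with a Gaussian transparency mask centered on the current fixation, so that the face is fully visible at the gaze point and attenuated outside the aperture. The sketch below is a simplified illustration of that idea, not the study's implementation; the image size, pixels-per-degree conversion, and the treatment of the aperture as roughly two standard deviations are all assumptions.

```python
# Hedged sketch of a gaze-contingent Gaussian "Spotlight" mask: values near 1
# reveal the underlying face image; values near 0 hide it. Multiplying the
# face image by this mask each frame yields a moving-window display.

import numpy as np

def spotlight_mask(height, width, gaze_xy, aperture_deg, px_per_deg=40):
    """Gaussian transparency mask (peak 1 at gaze_xy) for a given aperture."""
    # Assumption: the stated aperture corresponds to ~2 standard deviations.
    sigma_px = aperture_deg * px_per_deg / 2.0
    ys, xs = np.mgrid[0:height, 0:width]
    gx, gy = gaze_xy
    dist_sq = (xs - gx) ** 2 + (ys - gy) ** 2
    return np.exp(-dist_sq / (2.0 * sigma_px ** 2))

if __name__ == "__main__":
    mask = spotlight_mask(480, 640, gaze_xy=(320, 240), aperture_deg=2)
    print(mask.shape)   # full-frame mask; peak value 1.0 at the gaze point
```

In a real gaze-contingent setup the mask would be recomputed from the eye tracker's latest sample on every frame, which places tight latency demands on the tracker and display loop.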

Conclusions/Significance

Social experience and cultural factors shape the strategies used to extract information from faces, but these results suggest that such external forces do not modulate which information is used. Human beings rely on identical facial information to recognize conspecifics, a universal law that might be dictated by the evolutionary constraints of nature and not nurture.

7.
East Asian and white Western observers employ different eye movement strategies for a variety of visual processing tasks, including face processing. Recent eye tracking studies on face recognition found that East Asians tend to integrate information holistically by focusing on the nose while white Westerners perceive faces featurally by moving between the eyes and mouth. The current study examines the eye movement strategy that Malaysian Chinese participants employ when recognizing East Asian, white Western, and African faces. Rather than adopting the Eastern or Western fixation pattern, Malaysian Chinese participants use a mixed strategy by focusing on the eyes and nose more than the mouth. The combination of Eastern and Western strategies proved advantageous in participants' ability to recognize East Asian and white Western faces, suggesting that individuals learn to use fixation patterns that are optimized for recognizing the faces with which they are more familiar.

8.
This work builds on the enfacement effect, which occurs when one experiences rhythmic stimulation on one’s own cheek while watching someone else’s face being touched in synchrony; this typically produces cognitive and social-cognitive effects similar to self-other merging. In two studies, we demonstrate that this multisensory stimulation can change the evaluation of the other’s face. In the first study, participants judged the stranger’s face and similar faces as more trustworthy after synchronous, but not asynchronous, stimulation. Synchrony interacted with the order of the stroking: trustworthiness only changed when the synchronous stimulation occurred before the asynchronous one. In the second study, synchronous stimulation caused participants to remember the stranger’s face as more trustworthy, but again only when the synchronous stimulation came before the asynchronous one. The results of both studies show that the order of stroking creates a context in which multisensory synchrony can affect the perceived trustworthiness of faces.

9.
In perceptual decision-making, a person’s response on a given trial is influenced by their response on the immediately preceding trial. This sequential effect was initially demonstrated in psychophysical tasks, but has now been found in more complex, real-world judgements. The similarity of the current and previous stimuli determines the nature of the effect, with more similar items producing assimilation in judgements, while less similarity can cause a contrast effect. Previous research found assimilation in ratings of facial attractiveness, and here, we investigated whether this effect is influenced by the social categories of the faces presented. Over three experiments, participants rated the attractiveness of own- (White) and other-race (Chinese) faces of both sexes that appeared successively. Through blocking trials by race (Experiment 1), sex (Experiment 2), or both dimensions (Experiment 3), we could examine how sequential judgements were altered by the salience of different social categories in face sequences. For sequences that varied in sex alone, own-race faces showed significantly less opposite-sex assimilation (male and female faces perceived as dissimilar), while other-race faces showed equal assimilation for opposite- and same-sex sequences (male and female faces were not differentiated). For sequences that varied in race alone, categorisation by race resulted in no opposite-race assimilation for either sex of face (White and Chinese faces perceived as dissimilar). For sequences that varied in both race and sex, same-category assimilation was significantly greater than opposite-category. Our results suggest that the race of a face represents a superordinate category relative to sex. These findings demonstrate the importance of social categories when considering sequential judgements of faces, and also highlight a novel approach for investigating how multiple social dimensions interact during decision-making.

10.
The term “own-race bias” refers to the phenomenon that humans are typically better at recognizing faces of their own race than of a different race. The perceptual expertise account assumes that our face perception system has adapted to the faces we are typically exposed to, equipping it poorly for the processing of other-race faces. Sociocognitive theories assume that other-race faces are initially categorized as out-group, decreasing the motivation to individuate them. Supporting sociocognitive accounts, a recent study reported improved recognition for other-race faces when they were categorized as belonging to the participants’ in-group on a second social dimension, i.e., their university affiliation. Faces were studied in groups containing both own-race and other-race faces, half of each labeled as in-group and half as out-group. When study faces were spatially grouped by race, participants showed a clear own-race bias. When faces were grouped by university affiliation, recognition of other-race faces from the social in-group was indistinguishable from own-race face recognition. The present study aimed to extend this singular finding to other races of faces and participants. Forty Asian and 40 European Australian participants studied Asian and European faces for a recognition test. Faces were presented in groups containing equal numbers of own-university and other-university Asian and European faces. Between participants, faces were grouped either by race or by university affiliation. Eye tracking was used to study the distribution of spatial attention to individual faces in the display. The race of the study faces significantly affected participants’ memory, with better recognition of own-race than other-race faces. However, memory was unaffected by the university affiliation of the faces and by the criterion for their spatial grouping on the display. Eye tracking revealed strong looking biases towards both own-race and own-university faces. Results are discussed in light of the theoretical accounts of the own-race bias.

11.
Typically developing (TD) infants enhance their learning of spoken language by observing speakers’ mouth movements. Given the fact that word learning is seriously delayed in most children with neurodevelopmental disorders, we hypothesized that this delay partly results from differences in visual face scanning, e.g., focusing attention away from the mouth. To test this hypothesis, we used an eye tracker to measure visual attention in 95 infants and toddlers with Down syndrome (DS), fragile X syndrome (FXS), and Williams syndrome (WS), and compared their data to 25 chronological- and mental-age matched 16-month-old TD controls. We presented participants with two talking faces (one on each side of the screen) and a sound (/ga/). One face (the congruent face) mouthed the syllable that the participants could hear (i.e., /ga/), while the other face (the incongruent face) mouthed a different syllable (/ba/) from the one they could hear. As expected, we found that TD children with a relatively large vocabulary made more fixations to the mouth region of the incongruent face than elsewhere. However, toddlers with FXS or WS who had a relatively large receptive vocabulary made more fixations to the eyes (rather than the mouth) of the incongruent face. In DS, by contrast, fixations to the speaker’s overall face (rather than to her eyes or mouth) predicted vocabulary size. These findings suggest that, at some point in development, different processes or strategies relating to visual attention are involved in language acquisition in DS, FXS, and WS. This knowledge may help further explain why language is delayed in children with neurodevelopmental disorders. It also raises the possibility that syndrome-specific interventions should include an early focus on efficient face-scanning behaviour.

12.
Dogs exhibit characteristic looking patterns when looking at human faces, but little is known about the underlying cognitive mechanisms and how much these are influenced by individual experience. In Experiment 1, seven dogs were trained in a simultaneous discrimination procedure to assess whether they could discriminate a) the owner’s face parts (eyes, nose or mouth) presented in isolation and b) whole faces in which the same parts were covered. Dogs discriminated all three parts of the owner’s face presented in isolation, but needed fewer sessions to reach the learning criterion for the eyes than for the nose and mouth. Moreover, covering the eye region significantly disrupted face discriminability compared to the whole-face condition, whereas no such difference was found when the nose or mouth was hidden. In Experiment 2, dogs were presented with manipulated images of the owner’s face (inverted, blurred, scrambled, grey-scale) to test the relative contributions of part-based and configural processing to the discrimination of human faces. Furthermore, by comparing the dogs enrolled in the previous experiment with seven ‘naïve’ dogs, we examined whether the relative contribution of part-based and configural processing was affected by dogs’ experience with the face stimuli. Naïve dogs discriminated the owner only when configural information was provided, whereas expert dogs could discriminate the owner also when part-based processing was necessary. The present study provides the first evidence that dogs can discriminate isolated internal features of a human face and corroborates previous reports of the salience of the eye region for human face processing. Although reliance on part perception may be increased by specific experience, our findings suggest that human face discrimination by dogs relies mainly on configural rather than part-based elaboration.

13.
Human and nonhuman primates comprehend the actions of other individuals by detecting social cues, including others’ goal-directed motor actions and faces. However, little is known about how this information is integrated with action understanding. Here, we present the ontogenetic and evolutionary foundations of this capacity by comparing face-scanning patterns of chimpanzees and humans as they viewed goal-directed human actions within contexts that differ in whether or not the predicted goal is achieved. Human adults and children attend to the actor’s face during action sequences, and this tendency is particularly pronounced in adults when observing that the predicted goal is not achieved. Chimpanzees rarely attend to the actor’s face during the goal-directed action, regardless of whether the predicted action goal is achieved or not. These results suggest that in humans, but not chimpanzees, attention to actor’s faces conveying referential information toward the target object indicates the process of observers making inferences about the intentionality of an action. Furthermore, this remarkable predisposition to observe others’ actions by integrating the prediction of action goals and the actor’s intention is developmentally acquired.

14.

Background

Face processing, amongst many basic visual skills, is thought to be invariant across all humans. From as early as 1965, studies of eye movements have consistently revealed a systematic triangular sequence of fixations over the eyes and the mouth, suggesting that faces elicit a universal, biologically-determined information extraction pattern.

Methodology/Principal Findings

Here we monitored the eye movements of Western Caucasian and East Asian observers while they learned, recognized, and categorized by race Western Caucasian and East Asian faces. Western Caucasian observers reproduced a scattered triangular pattern of fixations for faces of both races and across tasks. Contrary to intuition, East Asian observers focused more on the central region of the face.

Conclusions/Significance

These results demonstrate that face processing can no longer be considered as arising from a universal series of perceptual events. The strategy employed to extract visual information from faces differs across cultures.

15.

Background

Human core body temperature is kept quasi-constant regardless of varying thermal environments. It is well known that physiological thermoregulatory systems are under the control of central and peripheral sensory organs that are sensitive to thermal energy. If these systems wrongly respond to non-thermal stimuli, it may disturb human homeostasis.

Methods

Fifteen participants viewed video images evoking hot or cold impressions in a thermally constant environment. Cardiovascular indices were recorded during the experiments. Correlations between the ‘hot-cold’ impression scores and cardiovascular indices were calculated.

Results

The changes of heart rate, cardiac output, and total peripheral resistance were significantly correlated with the ‘hot-cold’ impression scores, and the tendencies were similar to those in actual thermal environments corresponding to the impressions.

Conclusions

The present results suggest that visual information without any thermal energy can affect physiological thermoregulatory systems at least superficially. To avoid such ‘virtual’ environments disturbing human homeostasis, further study and more attention are needed.

16.
Appropriate response to companions’ emotional signals is important for all social creatures. The emotional expressions of humans and non-human animals have analogies in their form and function, suggesting shared evolutionary roots, but very little is known about how animals other than primates view and process facial expressions. In primates, threat-related facial expressions evoke exceptional viewing patterns compared with neutral or positive stimuli. Here, we explore whether domestic dogs (Canis familiaris) have such an attentional bias toward threatening social stimuli and whether observed emotional expressions affect dogs’ gaze fixation distribution among the facial features (eyes, midface and mouth). We recorded the voluntary eye gaze of 31 domestic dogs during viewing of facial photographs of humans and dogs with three emotional expressions (threatening, pleasant and neutral). We found that dogs’ gaze fixations spread systematically among facial features. The distribution of fixations was altered by the seen expression, but the eyes were the most probable targets of the first fixations and gathered longer looking durations than the mouth regardless of the viewed expression. Examination of the inner facial features as a whole revealed more pronounced scanning differences among expressions. This suggests that dogs do not base their perception of facial expressions on the viewing of single structures, but rather on the interpretation of the composition formed by the eyes, midface and mouth. Dogs evaluated social threat rapidly, and this evaluation led to an attentional bias that depended on the depicted species: threatening conspecifics’ faces evoked heightened attention, whereas threatening human faces instead evoked an avoidance response. We propose that threatening signals carrying differential biological validity are processed via distinctive neurocognitive pathways. Both of these mechanisms may have an adaptive significance for domestic dogs. The findings provide a novel perspective on understanding the processing of emotional expressions and sensitivity to social threat in non-primates.

17.
Just like other face dimensions, age influences the way faces are processed by adults as well as by children. However, it remains unclear under what conditions exactly such influence occurs at both ages, in that there is some mixed evidence concerning the presence of a systematic processing advantage for peer faces (own-age bias) across the lifespan. Inconsistency in the results may stem from the fact that the individual’s face representation adapts to represent the most predominant age traits of the faces present in the environment, which is reflective of the individual’s specific living conditions and social experience. In the current study we investigated the processing of younger and older adult faces in two groups of adults (Experiment 1) and two groups of 3-year-old children (Experiment 2) who accumulated different amounts of experience with elderly people. Contact with elderly adults influenced the extent to which both adult and child participants showed greater discrimination abilities and stronger sensitivity to configural/featural cues in younger versus older adult faces, as measured by the size of the inversion effect. In children, the size of the inversion effect for older adult faces was also significantly correlated with the amount of contact with elderly people. These results show that, in both adults and children, visual experience with older adult faces can tune perceptual processing strategies to the point of abolishing the discrimination disadvantage that participants typically manifest for those faces in comparison to younger adult faces.

18.
Behavioral choice alters one’s preference rather than simply reflecting it. This tendency to fit preferences to past choices is known as “choice-induced preference change.” After making a choice between two equally attractive options, people tend to rate the chosen option better than they initially did and/or the unchosen option worse. The present study examined how behavioral choice changes subsequent preference, using facial images as the choice options together with blind choice techniques. Participants rated their preference for each face, chose between two equally preferred faces, and subsequently rated their preference again. Results from four experiments demonstrated that randomly chosen faces were more preferred only after participants were required to choose “a preferred face” (Experiment 1), but not “an unpreferred face” (Experiment 2) or “a rounder face” (Experiment 3). Further, the preference change was still observed after participants were informed that the choices were actually random (Experiment 4). Our findings characterize the conditions under which random choice changes preference, and show that people are inclined to make a biased evaluation even after they know that they did not make the choice themselves.

19.
Animals often respond fearfully when encountering eyes or eye-like shapes. Although gaze aversion has been documented in mammals when avoiding group-member conflict, the importance of eye coloration during interactions between conspecifics has yet to be examined in non-primate species. Jackdaws (Corvus monedula) have near-white irides, which are conspicuous against their dark feathers and visible when seen from outside the cavities where they nest. Because jackdaws compete for nest sites, their conspicuous eyes may act as a warning signal to indicate that a nest is occupied and deter intrusions by conspecifics. We tested whether jackdaws’ pale irides serve as a deterrent to prospecting conspecifics by comparing prospectors’ behaviour towards nest-boxes displaying images with bright eyes (BEs) only, a jackdaw face with natural BEs, or a jackdaw face with dark eyes. The jackdaw face with BEs was most effective in deterring birds from making contact with nest-boxes, whereas both BE conditions reduced the amount of time jackdaws spent in proximity to the image. We suggest BEs in jackdaws may function to prevent conspecific competitors from approaching occupied nest sites.

20.
Criminal investigations often use photographic evidence to identify suspects. Here we combined robust face perception and high-resolution photography to mine face photographs for hidden information. By zooming in on high-resolution face photographs, we were able to recover images of unseen bystanders from reflections in the subjects’ eyes. To establish whether these bystanders could be identified from the reflection images, we presented them as stimuli in a face matching task (Experiment 1). Accuracy in the face matching task was well above chance (50%), despite the unpromising source of the stimuli. Participants who were unfamiliar with the bystanders’ faces (n = 16) performed at 71% accuracy [t(15) = 7.64, p<.0001, d = 1.91], and participants who were familiar with the faces (n = 16) performed at 84% accuracy [t(15) = 11.15, p<.0001, d = 2.79]. In a test of spontaneous recognition (Experiment 2), observers could reliably name a familiar face from an eye reflection image. For crimes in which the victims are photographed (e.g., hostage taking, child sex abuse), reflections in the eyes of the photographic subject could help to identify perpetrators.
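The statistics reported above follow the standard form of a one-sample t-test of per-participant accuracy against the 50% chance level, with Cohen's d as the effect size. The sketch below illustrates that computation; the accuracy values are invented and do not reproduce the study's data.

```python
# Hedged sketch of a one-sample t-test against chance (mu = 0.5) with
# Cohen's d, as conventionally reported for face matching accuracy.
# The sample accuracies below are made up for illustration only.

import math
from statistics import mean, stdev

def one_sample_t(scores, mu=0.5):
    """Return (t statistic, Cohen's d) for scores tested against mu."""
    n = len(scores)
    m, s = mean(scores), stdev(scores)  # sample mean and sample SD (n-1)
    t = (m - mu) / (s / math.sqrt(n))
    d = (m - mu) / s
    return t, d

if __name__ == "__main__":
    accuracies = [0.65, 0.70, 0.72, 0.68, 0.75, 0.71, 0.66, 0.73]
    t, d = one_sample_t(accuracies)
    print(f"t({len(accuracies) - 1}) = {t:.2f}, d = {d:.2f}")
```

In practice one would use a library routine such as SciPy's `ttest_1samp` to obtain the p-value as well; the hand computation here only shows where the reported t and d values come from.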


Copyright©北京勤云科技发展有限公司  京ICP备09084417号