Similar articles
Found 20 similar articles (search time: 15 ms)
1.
The language difficulties often seen in individuals with autism might stem from an inability to integrate audiovisual information, a skill important for language development. We investigated whether 9-month-old siblings of older children with autism, who are at an increased risk of developing autism, are able to integrate audiovisual speech cues. We used an eye-tracker to record where infants looked when shown a screen displaying two faces of the same model, one articulating /ba/ and the other /ga/; one face was congruent with the syllable sound presented simultaneously, the other incongruent. This method successfully showed that infants at low risk can integrate audiovisual speech: they looked for the same amount of time at the mouths in both the fusible visual /ga/ - audio /ba/ and the congruent visual /ba/ - audio /ba/ displays, indicating that the auditory and visual streams fuse into a McGurk-type syllabic percept in the incongruent condition. It also showed that low-risk infants could perceive a mismatch between auditory and visual cues: they looked longer at the mouth in the mismatched, non-fusible visual /ba/ - audio /ga/ display than in the congruent visual /ga/ - audio /ga/ display, demonstrating that they perceive an uncommon, and therefore interesting, speech-like percept when looking at the incongruent mouth (repeated-measures ANOVA, displays x fusion/mismatch conditions interaction: F(1,16) = 17.153, p = 0.001). The looking behaviour of high-risk infants did not differ according to the type of display, suggesting difficulties in matching auditory and visual information (repeated-measures ANOVA, displays x conditions interaction: F(1,25) = 0.09, p = 0.767), in contrast to low-risk infants (repeated-measures ANOVA, displays x conditions x low/high-risk groups interaction: F(1,41) = 4.466, p = 0.041). In some cases this reduced ability might lead to the poor communication skills characteristic of autism.
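The interaction statistics above come from 2x2 repeated-measures ANOVAs on looking times. As a minimal sketch (with fabricated looking times, not the study's data), the interaction term in a 2x2 within-subject design reduces to a paired t-test on each infant's difference of differences, with F = t²:

```python
import numpy as np

def interaction_F(a1, a2, b1, b2):
    """Interaction F(1, n-1) for a 2x2 within-subject design.

    a1, a2, b1, b2: per-infant looking times (seconds) in the four
    display x condition cells. For a 2x2 repeated-measures design the
    interaction test reduces to a paired t-test on the per-subject
    difference of differences, with F = t**2.
    """
    d = (np.asarray(a1) - np.asarray(a2)) - (np.asarray(b1) - np.asarray(b2))
    n = d.size
    t = d.mean() / (d.std(ddof=1) / np.sqrt(n))
    return t ** 2, n - 1  # F statistic and its denominator df

# Illustrative (fabricated) looking times for 17 low-risk infants:
rng = np.random.default_rng(0)
base = rng.normal(5.0, 1.0, 17)                  # per-infant baseline looking
fusible = base + rng.normal(0.0, 0.5, 17)        # visual /ga/ - audio /ba/
congruent_ba = base + rng.normal(0.0, 0.5, 17)   # visual /ba/ - audio /ba/
mismatch = base + rng.normal(1.5, 0.5, 17)       # visual /ba/ - audio /ga/ (longer looks)
congruent_ga = base + rng.normal(0.0, 0.5, 17)   # visual /ga/ - audio /ga/

F, df = interaction_F(fusible, congruent_ba, mismatch, congruent_ga)
print(f"interaction F(1,{df}) = {F:.2f}")
```

With the simulated mismatch effect, the interaction F comes out large, mirroring the pattern (but not the values) reported for low-risk infants.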

2.
This study investigated whether an odor can affect infants' attention to visually presented objects and whether it can selectively direct visual gaze at visual targets as a function of their meaning. Four-month-old infants (n = 48) were exposed to their mother's body odors while their visual exploration was recorded with an eye-movement tracking system. Two groups of infants, assigned to either an odor condition or a control condition, looked at a scene composed of still pictures of faces and cars. As expected, infants looked longer at the faces than at the cars, but this spontaneous preference for faces was significantly enhanced in the presence of the odor. Also as expected, when looking at the face, the infants looked longer at the eyes than at any other facial region, but, again, they looked at the eyes significantly longer in the presence of the odor. Thus, 4-month-old infants are sensitive to the contextual effects of odors while looking at faces. This suggests that early social attention to faces is mediated by visual as well as non-visual cues.

3.
Newborns have an innate system for preferentially looking at an upright human face. This face preference behaviour disappears at approximately one month of age and reappears a few months later. However, the neural mechanisms underlying this U-shaped behavioural change remain unclear. Here, we isolate the functional development of the cortical visual pathway for face processing using S-cone-isolating stimulation, which blinds the subcortical visual pathway. Using luminance stimuli, which are conveyed by both the subcortical and cortical visual pathways, the preference for upright faces was not observed in two-month-old infants, but it was observed in four- and six-month-old infants, confirming the recovery phase of the U-shaped development. By contrast, using S-cone stimuli, two-month-old infants already showed a preference for upright faces, as did four- and six-month-old infants, demonstrating that the cortical visual pathway for face processing is already functioning at the bottom of the U-shape at two months of age. The present results suggest that the transient functional deterioration stems from a conflict between the subcortical and cortical functional pathways, and that the recovery thereafter involves establishing a level of coordination between the two pathways.

4.
We report a novel effect in which the visual perception of eye-gaze and arrow cues changes the way we perceive sound. In our experiments, subjects first saw an arrow or gazing face, and then heard a brief sound originating from one of six locations. Perceived sound origins were shifted in the direction indicated by the arrows or eye-gaze. This perceptual shift was equivalent for arrows and gazing faces and was unaffected by facial expression, consistent with a generic, supramodal attentional influence by exogenous cues.

5.
Mature face perception has its origins in the face experiences of infants. However, little is known about the basic statistics of faces in early visual environments. We used head cameras to capture and analyze over 72,000 infant-perspective scenes from 22 infants aged 1-11 months as they engaged in daily activities. The frequency of faces in these scenes declined markedly with age: for the youngest infants, faces were present for 15 minutes of every waking hour, but for the oldest infants only 5 minutes. In general, the available faces were well characterized by three properties: (1) they belonged to relatively few individuals; (2) they were close and visually large; and (3) they presented views showing both eyes. These three properties most strongly characterized the face corpora of our youngest infants and constitute environmental constraints on the early development of the visual system.

6.
Young infants are typically thought to prefer looking at smiling expressions. Although some accounts suggest that this preference is automatic and universal, we hypothesized that it is not rigid and may be influenced by other face dimensions, most notably the face's gender. Infants are sensitive to the gender of faces; for example, 3-month-olds raised by female caregivers typically prefer female over male faces. We presented neutral versus smiling pairs of faces from the same female or male individuals to 3.5-month-old infants (n = 25), controlling for low-level cues. Infants looked longer at the smiling face when the faces were female but longer at the neutral face when the faces were male, i.e., face gender affected the looking preference for smiling. The results indicate that the preference for smiling in 3.5-month-olds is limited to female faces, possibly reflecting differential experience with male and female faces.

7.
Young infants are known to prefer own-race faces to other-race faces and to recognize own-race faces better than other-race faces. However, it is unclear whether infants also attend to different parts of own- and other-race faces differently, which may provide an important clue as to how and why the own-race face recognition advantage emerges so early. The present study used eye tracking methodology to investigate whether 6- to 10-month-old Caucasian infants (N = 37) have differential scanning patterns for dynamically displayed own- and other-race faces. We found that even though infants spent a similar amount of time looking at own- and other-race faces, with increasing age, infants looked longer at the eyes of own-race faces and less at their mouths. These findings suggest experience-based tuning of the infant's face processing system to optimally process own-race faces that differ in physiognomy from other-race faces. In addition, the present results, taken together with recent own- and other-race eye tracking findings with infants and adults, provide strong support for an enculturation hypothesis that East Asians and Westerners may be socialized to scan faces differently due to each culture's conventions regarding mutual gaze during interpersonal communication.

8.
Human observers are remarkably proficient at recognizing expressions of emotion and at readily grouping them into distinct categories. When morphing one facial expression into another, the linear changes in low-level features are insufficient to describe the changes in perception, which instead follow an s-shaped function. Important questions are whether there are single diagnostic regions in the face that drive categorical perception for certain pairings of emotion expressions, and how information in those regions interacts when presented together. We report results from two experiments with morphed fear-anger expressions, in which (a) half of the face was masked or (b) composite faces made up of different expressions were presented. When isolated upper and lower halves of faces were shown, the eyes were found to be almost as diagnostic as the whole face, with the response function showing a steep category boundary. In contrast, the mouth allowed substantially lower accuracy, and responses followed a much flatter psychometric function. When a composite face consisting of mismatched upper and lower halves was used and observers were instructed to judge exclusively the expression of either the mouth or the eyes, the to-be-ignored part always influenced perception of the target region. In line with Experiment 1, the eye region exerted a much stronger influence on mouth judgements than vice versa. Again, categorical perception was significantly more pronounced for upper halves of faces. The present study shows that identification of fear and anger in morphed faces relies heavily on information from the upper half of the face, most likely the eye region. Categorical perception is possible when only the upper face half is present, but compromised when only the lower part is shown. Moreover, observers tend to integrate all available features of a face, even when trying to focus on only one part.
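The steep versus flat response functions described above are conventionally summarized by fitting a logistic psychometric function and reading off the category boundary (the morph level where p = 0.5) and the slope. A minimal sketch with hypothetical response proportions (the data and parameter ranges are illustrative, not from the study):

```python
import numpy as np

def fit_logistic(morph_levels, p_anger):
    """Grid-search fit of a logistic psychometric function.

    morph_levels: morph fraction from pure fear (0) to pure anger (1).
    p_anger: observed proportion of 'anger' responses at each level.
    Returns the (boundary, slope) minimizing squared error; the
    boundary is the morph level where p = 0.5, and a larger slope
    means a steeper, more categorical response function.
    """
    levels = np.asarray(morph_levels)
    p = np.asarray(p_anger)
    best = (None, None, np.inf)
    for x0 in np.linspace(0.0, 1.0, 101):       # candidate boundaries
        for k in np.linspace(1.0, 40.0, 79):    # candidate slopes
            pred = 1.0 / (1.0 + np.exp(-k * (levels - x0)))
            err = np.sum((p - pred) ** 2)
            if err < best[2]:
                best = (x0, k, err)
    return best[0], best[1]

# Hypothetical whole-face data: a steep s-shaped boundary near 0.5
levels = np.linspace(0.0, 1.0, 9)
p_whole = 1.0 / (1.0 + np.exp(-15.0 * (levels - 0.5)))
x0, k = fit_logistic(levels, p_whole)
print(f"boundary = {x0:.2f}, slope = {k:.1f}")
```

A flatter function for mouth-only stimuli would show up in this scheme as a much smaller fitted slope at a similar boundary.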

9.
Dogs exhibit characteristic looking patterns when looking at human faces, but little is known about the underlying cognitive mechanisms and how much these are influenced by individual experience. In Experiment 1, seven dogs were trained in a simultaneous discrimination procedure to assess whether they could discriminate a) the owner's face parts (eyes, nose or mouth) presented in isolation and b) whole faces in which the same parts were covered. Dogs discriminated all three parts of the owner's face presented in isolation, but needed fewer sessions to reach the learning criterion for the eyes than for the nose and mouth. Moreover, covering the eye region significantly disrupted face discriminability compared to the whole-face condition, while no such difference was found when the nose or mouth was hidden. In Experiment 2, dogs were presented with manipulated images of the owner's face (inverted, blurred, scrambled, grey-scale) to test the relative contribution of part-based and configural processing in the discrimination of human faces. Furthermore, by comparing the dogs enrolled in the previous experiment with seven ‘naïve’ dogs, we examined whether the relative contribution of part-based and configural processing was affected by dogs' experience with the face stimuli. Naïve dogs discriminated the owner only when configural information was provided, whereas expert dogs could discriminate the owner also when part-based processing was necessary. The present study provides the first evidence that dogs can discriminate isolated internal features of a human face and corroborates previous reports of the salience of the eye region for human face processing. Although reliance on part perception may be increased by specific experience, our findings suggest that human face discrimination by dogs relies mainly on configural rather than part-based elaboration.

10.
Guellai B, Streri A. PLoS ONE. 2011;6(4):e18610
Previous studies showed that, from birth, speech and eye gaze are two important cues in guiding early face processing and social cognition. These studies tested the role of each cue independently; however, infants normally perceive speech and eye gaze together. Using a familiarization-test procedure, we first familiarized newborn infants (n = 24) with videos of unfamiliar talking faces with either direct gaze or averted gaze. Newborns were then tested with photographs of the previously seen face and of a new one. The newborns looked longer at the face that previously talked to them, but only in the direct gaze condition. These results highlight the importance of both speech and eye gaze as socio-communicative cues by which infants identify others. They suggest that gaze and infant-directed speech, experienced together, are powerful cues for the development of early social skills.

11.
Individuation and holistic processing of faces in rhesus monkeys
Despite considerable evidence that neural activity in monkeys reflects various aspects of face perception, relatively little is known about monkeys' face processing abilities. Two characteristics of face processing observed in humans are a subordinate-level entry point, i.e., the default recognition of faces at the subordinate rather than basic level of categorization, and holistic effects, i.e., the perception of facial displays as an integrated whole. The present study used an adaptation paradigm to test whether untrained rhesus macaques (Macaca mulatta) display these hallmarks of face processing. In experiments 1 and 2, macaques showed greater rebound from adaptation to conspecific faces than to other animals at the individual or subordinate level. In experiment 3, exchanging only the bottom half of a monkey face produced greater rebound in aligned than in misaligned composites, indicating that for normal, aligned faces, the new bottom half may have influenced the perception of the whole face. Scan path analysis supported this assertion: during rebound, fixation to the unchanged eye region was renewed, but only for aligned stimuli. These experiments show that macaques naturally display the distinguishing characteristics of face processing seen in humans and provide the first clear demonstration that holistic information guides scan paths for conspecific faces.

12.
East Asian and white Western observers employ different eye movement strategies for a variety of visual processing tasks, including face processing. Recent eye tracking studies on face recognition found that East Asians tend to integrate information holistically by focusing on the nose while white Westerners perceive faces featurally by moving between the eyes and mouth. The current study examines the eye movement strategy that Malaysian Chinese participants employ when recognizing East Asian, white Western, and African faces. Rather than adopting the Eastern or Western fixation pattern, Malaysian Chinese participants use a mixed strategy by focusing on the eyes and nose more than the mouth. The combination of Eastern and Western strategies proved advantageous in participants' ability to recognize East Asian and white Western faces, suggesting that individuals learn to use fixation patterns that are optimized for recognizing the faces with which they are more familiar.

13.
Experience plays a crucial role in the development of the face processing system. At 6 months of age infants can discriminate individual faces from their own and other races. By 9 months of age this ability to process other-race faces is typically lost, due to minimal experience with other-race faces and vast exposure to own-race faces, for which infants come to manifest expertise [1]. This is known as the Other-Race Effect. In the current study, we demonstrate that exposing Caucasian infants to Chinese faces through perceptual training via picture books, for a total of one hour between 6 and 9 months of age, allows Caucasian infants to maintain the ability to discriminate Chinese faces at 9 months of age. The development of the processing of face race can thus be modified by training, highlighting the importance of early experience in shaping the face representation.

14.
Some people perceive themselves to look more or less attractive than they are in reality. We investigated the role of emotions in these enhancement and derogation effects; specifically, whether the propensity to experience positive and negative emotions affects how healthy we perceive our own face to look and how we judge ourselves against others. A psychophysical method was used to measure healthiness of self-image and social comparisons of healthiness. Participants who self-reported high positive (N = 20) or negative affectivity (N = 20) judged themselves against healthy-looking (red-tinged) and unhealthy-looking (green-tinged) versions of their own and a stranger's face. An adaptive staircase procedure was used to measure perceptual thresholds. Participants high in positive affectivity were unbiased in their face health judgement. Participants high in negative affectivity, on the other hand, judged themselves as equivalent to less healthy-looking versions of their own face and a stranger's face. Affective traits thus modulated self-image and social comparisons of healthiness. Face health judgement was also related to physical symptom perception and self-esteem; high physical symptom reports were associated with a less healthy self-image, and high self-reported (but not implicit) self-esteem was associated with more favourable social comparisons of healthiness. Subject to further validation, our novel face health judgement task could have utility as a perceptual measure of well-being. We are currently investigating whether face health judgement is sensitive to laboratory manipulations of mood.
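The adaptive staircase mentioned above adjusts the stimulus level after each response so that presentations converge on a perceptual threshold. A minimal sketch with a noiseless simulated observer (the parameters and threshold value are illustrative, not from the study):

```python
def staircase_threshold(true_threshold, start=0.5, step=0.05, n_reversals=10):
    """Simulate a simple 1-up/1-down adaptive staircase.

    A noiseless simulated observer responds 'yes' whenever the stimulus
    level exceeds true_threshold. The staircase steps down after each
    'yes' and up after each 'no', so it oscillates around the 50%
    point, estimated here as the mean of the later reversal levels.
    """
    level, direction = start, -1
    reversals = []
    while len(reversals) < n_reversals:
        response = level > true_threshold       # deterministic observer
        new_direction = -1 if response else 1   # down after 'yes', up after 'no'
        if new_direction != direction:          # direction change = reversal
            reversals.append(level)
        direction = new_direction
        level += step * direction
    late = reversals[2:]                        # discard early reversals
    return sum(late) / len(late)

estimate = staircase_threshold(true_threshold=0.28)
print(f"estimated threshold: {estimate:.3f}")
```

Real procedures add response noise and often shrink the step size after each reversal; this sketch keeps only the core up-down logic.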

15.
Typically developing (TD) infants enhance their learning of spoken language by observing speakers' mouth movements. Given that word learning is seriously delayed in most children with neurodevelopmental disorders, we hypothesized that this delay partly results from differences in visual face scanning, e.g., focusing attention away from the mouth. To test this hypothesis, we used an eye tracker to measure visual attention in 95 infants and toddlers with Down syndrome (DS), fragile X syndrome (FXS), or Williams syndrome (WS), and compared their data to those of 25 chronological- and mental-age-matched 16-month-old TD controls. We presented participants with two talking faces (one on each side of the screen) and a sound (/ga/). One face (the congruent face) mouthed the syllable that the participants could hear (i.e., /ga/), while the other face (the incongruent face) mouthed a different syllable (/ba/) from the one they could hear. As expected, we found that TD children with a relatively large vocabulary made more fixations to the mouth region of the incongruent face than elsewhere. However, toddlers with FXS or WS who had a relatively large receptive vocabulary made more fixations to the eyes (rather than the mouth) of the incongruent face. In DS, by contrast, fixations to the speaker's overall face (rather than to her eyes or mouth) predicted vocabulary size. These findings suggest that, at some point in development, different processes or strategies relating to visual attention are involved in language acquisition in DS, FXS, and WS. This knowledge may help further explain why language is delayed in children with neurodevelopmental disorders. It also raises the possibility that syndrome-specific interventions should include an early focus on efficient face-scanning behaviour.

16.
There is a large literature on the color perception of matte surfaces. However, recent research has shown that the specular component of surface reflection, which gives rise to glossiness, also affects categorical color perception. For instance, the color term “gold” was used to name highly specular stimuli within a specific range of chromaticity, which overlaps with the ranges of yellow and orange for low-specular stimuli. In the present study, we investigated whether the specular component of surface reflectance affects the color perception of 5- to 8-month-old infants using the preferential looking technique. In Experiment 1, we conducted a simple test of whether infants perceive yellow and gold as the same color by comparing their preference for these colors over green. If infants perceive yellow and gold as the same color, they should show similar preference scores over green; different preference scores would indicate that they do not. Only the 7- to 8-month-old infants showed different preference scores for gold and yellow over green, indicating that they perceive gold and yellow as different colors. In Experiment 2, we eliminated the specular component of the gold surface and presented it against green; a preference score similar to that of yellow over green was obtained. This result suggests that the difference between the preference scores for gold and yellow over green in Experiment 1 was based on representations of glossiness.

17.
Human beings do not passively perceive important social features of others, such as race and age, in social interactions. Instead, it is proposed that humans continuously generate predictions about these social features based on prior similar experiences. Pre-awareness of racial information conveyed by others' faces enables individuals to act in “culturally appropriate” ways, which is useful for interpersonal relations across ethnic groups. However, little is known about the effects of prediction on the perception of own-race and other-race faces. Here, we addressed this issue using high-temporal-resolution event-related potential techniques. In total, data from 24 participants (13 women and 11 men) were analyzed. The N170 amplitudes elicited by other-race faces, but not own-race faces, were significantly smaller in the predictable condition than in the unpredictable condition, reflecting a switch to holistic processing of other-race faces when those faces were predictable. In this respect, top-down prediction about face race might contribute to the elimination of the other-race effect (a face recognition impairment). Furthermore, smaller P300 amplitudes were observed in the predictable than in the unpredictable condition, suggesting that the prediction of race reduced neural responses.

18.
‘Infant shyness’, in which infants react shyly to adult strangers, presents during the third quarter of the first year. Researchers claim that shy children over the age of three years are experiencing approach-avoidance conflicts. Counter-intuitively, shy children do not avoid the eyes when scanning faces; rather, they spend more time looking at the eye region than non-shy children do. It is currently unknown whether young infants show this conflicted shyness and its corresponding characteristic pattern of face scanning. Here, using infant behavioral questionnaires and an eye-tracking system, we found that highly shy infants had high scores for both approach and fear temperaments (i.e., approach-avoidance conflict) and that they showed longer dwell times in the eye regions than less shy infants during their initial fixations to facial stimuli. This initial hypersensitivity to the eyes was independent of whether the viewed faces were of their mothers or strangers. Moreover, highly shy infants preferred strangers with an averted gaze and face to strangers with a directed gaze and face. This initial scanning of the eye region and the overall preference for averted gaze faces were not explained solely by the infants’ age or temperament (i.e., approach or fear). We suggest that infant shyness involves a conflict in temperament between the desire to approach and the fear of strangers, and this conflict is the psychological mechanism underlying infants’ characteristic behavior in face scanning.

19.
A two-process probabilistic theory of emotion perception based on a non-linear combination of facial features is presented. Assuming that the upper and the lower part of the face function as the building blocks at the basis of emotion perception, an empirical test is provided with fear and happiness as target emotions. Subjects were presented with prototypical fearful and happy faces and with computer-generated chimerical expressions that were a combination of happy and fearful. Subjects were asked to indicate the emotions they perceive using an extensive list of emotions. We show that some emotions require a conjunction of the two halves of a face to be perceived, whereas for some other emotions only one half is sufficient. We demonstrate that chimerical faces give rise to the perception of genuine emotions. The findings provide evidence that different combinations of the two halves of a fearful and a happy face, either congruent or not, do generate the perception of emotions other than fear and happiness.

20.
Adaptation aftereffects have been found for low-level visual features such as colour, motion and shape perception, as well as higher-level features such as gender, race and identity in domains such as faces and biological motion. It is not yet clear whether adaptation effects in humans extend beyond this set of higher-order features. The aim of this study was to investigate whether objects highly associated with one gender, e.g. high heels for females or electric shavers for males, can modulate gender perception of a face. In two separate experiments, we adapted subjects to a series of objects highly associated with one gender and subsequently asked them to judge the gender of an ambiguous face. Results showed that participants are more likely to perceive an ambiguous face as male after being exposed to objects highly associated with females, and vice versa. A gender adaptation aftereffect was obtained despite the adaptor and test stimuli being from different global categories (objects and faces, respectively). These findings show that our perception of gender from faces is strongly affected by our environment and recent experience. This suggests two possible mechanisms: (a) perception of the gender associated with an object shares at least some brain areas with those responsible for gender perception of faces, and (b) adaptation to gender, which is a high-level concept, can modulate brain areas involved in facial gender perception through top-down processes.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号