Similar Articles
20 similar articles found (search time: 15 ms)
1.
The visual system is tuned for rapid detection of faces, with the fastest choice saccade to a face at 100 ms. Familiar faces have a more robust representation than do unfamiliar faces, and are detected faster in the absence of awareness and with reduced attentional resources. Faces of family and close friends become familiar over a protracted period involving learning the unique visual appearance, including a view-invariant representation, as well as person knowledge. We investigated the effect of personal familiarity on the earliest stages of face processing by using a saccadic-choice task to measure how quickly familiar faces can be detected. Subjects made correct and reliable saccades to familiar faces at 180 ms when unfamiliar faces were distractors: very rapid saccades, 30 to 70 ms earlier than the earliest evoked potential modulated by familiarity. By contrast, accuracy of saccades to unfamiliar faces with familiar faces as distractors did not exceed chance. Saccades to faces with object distractors were even faster (110 to 120 ms) and equivalent for familiar and unfamiliar faces, indicating that familiarity does not affect ultra-rapid saccades. We propose that detectors of diagnostic facial features for familiar faces develop in visual cortices through learning and allow rapid detection that precedes explicit recognition of identity.

2.
Habibi R, Khurana B. PLoS ONE 2012;7(2):e32377
Facial recognition is key to social interaction; with unfamiliar faces, however, only generic information, in the form of facial stereotypes such as gender and age, is available. Is generic information therefore more prominent in unfamiliar than in familiar face processing? To address this question, we tapped into two relatively disparate stages of face processing. At the early stages of encoding, we employed perceptual masking to reveal that only the perception of unfamiliar face targets is affected by the gender of the facial masks. At the semantic end, using a priming paradigm, we found that while to-be-ignored unfamiliar faces prime lexical decisions to gender-congruent stereotypic words, familiar faces do not. Our findings indicate that gender is a more salient dimension in unfamiliar than in familiar face processing, both in early perceptual and in later semantic stages of person construal.

3.

Background

Previous research on the reward system in autism spectrum disorders (ASD) suggests that children with ASD anticipate and process social rewards differently than typically developing (TD) children, but it has focused on the reward value of unfamiliar face stimuli. Children with ASD also process faces differently than their TD peers; here too, previous research has focused on unfamiliar faces, and less is known about how children with ASD process familiar faces. The current study investigated how children with ASD anticipate rewards accompanied by familiar versus unfamiliar faces.

Methods

The stimulus-preceding negativity (SPN) of the event-related potential (ERP) was utilized to measure reward anticipation. Participants were 6- to 10-year-olds with (N = 14) and without (N = 14) ASD. Children were presented with rewards accompanied by incidental face or non-face stimuli that were either familiar (caregivers) or unfamiliar. All non-face stimuli were composed of scrambled face elements in the shape of arrows, controlling for visual properties.

Results

No significant differences between familiar versus unfamiliar faces were found for either group. When collapsing across familiarity, TD children showed larger reward anticipation to face versus non-face stimuli, whereas children with ASD did not show differential responses to these stimulus types. Magnitude of reward anticipation to faces was significantly correlated with behavioral measures of social impairment in the ASD group.

Conclusions

The findings do not provide evidence for differential reward anticipation for familiar versus unfamiliar face stimuli in children with or without ASD. These findings replicate previous work suggesting that TD children anticipate rewards accompanied by social stimuli more than rewards accompanied by non-social stimuli. The results do not support the idea that familiarity normalizes reward anticipation in children with ASD. Our findings also suggest that magnitude of reward anticipation to faces is correlated with levels of social impairment for children with ASD.

4.
The current study examined the time course of implicit processing of distinct facial features and the associated event-related potential (ERP) components. To this end, we used a masked priming paradigm to investigate implicit processing of the eyes and mouth in upright and inverted faces, using a prime duration of 33 ms. Two types of prime-target pairs were used: 1. congruent (e.g., open eyes only in both prime and target, or open mouth only in both prime and target); 2. incongruent (e.g., open mouth only in prime and open eyes only in target, or vice versa). The identity of the faces changed between prime and target. Participants pressed one button when the target face had the eyes open and another button when the target face had the mouth open. The behavioral results showed faster RTs for the eyes in upright faces than for the eyes in inverted faces or for the mouth in either upright or inverted faces. They also revealed a congruent priming effect for the mouth in upright faces. The ERP findings showed a face orientation effect across all ERP components studied (P1, N1, N170, P2, N2, P3) starting at about 80 ms, and a congruency/priming effect on late components (P2, N2, P3) starting at about 150 ms. Crucially, the results showed that the orientation effect was driven by the eye region (N170, P2) and that the congruency effect started earlier for the eyes (P2) than for the mouth (N2). These findings mark the time course of the processing of internal facial features and provide further evidence that the eyes are automatically processed and are very salient facial features that strongly affect the amplitude, latency, and distribution of neural responses to faces.

5.
Photographs are often used to establish the identity of an individual or to verify that they are who they claim to be. Yet, recent research shows that it is surprisingly difficult to match a photo to a face. Neither humans nor machines can perform this task reliably. Although human perceivers are good at matching familiar faces, performance with unfamiliar faces is strikingly poor. The situation is no better for automatic face recognition systems. In practical settings, automatic systems have been consistently disappointing. In this review, we suggest that failure to distinguish between familiar and unfamiliar face processing has led to unrealistic expectations about face identification in applied settings. We also argue that a photograph is not necessarily a reliable indicator of facial appearance, and develop our proposal that summary statistics can provide more stable face representations. In particular, we show that image averaging stabilizes facial appearance by diluting aspects of the image that vary between snapshots of the same person. We review evidence that the resulting images can outperform photographs in both behavioural experiments and computer simulations, and outline promising directions for future research.
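The image-averaging idea described above can be illustrated with a minimal sketch. This is not the authors' pipeline: it assumes pre-aligned, equal-sized grayscale arrays and skips the landmark-based shape warping that full face averaging uses; the synthetic "base" pattern and noise level are stand-ins. The point is only that averaging dilutes snapshot-specific variation while preserving what is stable across images.

```python
import numpy as np

def average_face(images):
    """Pixel-wise average of aligned face images of one person.

    `images`: list of equal-shape 2-D grayscale arrays. Averaging dilutes
    image-specific variation (lighting, expression, camera), leaving the
    stable appearance shared across snapshots.
    """
    stack = np.stack([np.asarray(im, dtype=np.float64) for im in images])
    return stack.mean(axis=0)

# Toy demo: the average of noisy variants recovers the shared pattern.
rng = np.random.default_rng(0)
base = rng.random((32, 32))  # stand-in for the "true" stable appearance
snapshots = [base + 0.2 * rng.standard_normal(base.shape) for _ in range(20)]
avg = average_face(snapshots)

# The average lies closer to the base pattern than any single snapshot.
err_avg = np.abs(avg - base).mean()
err_one = np.abs(snapshots[0] - base).mean()
print(err_avg < err_one)  # True
```

With real photographs the images would first be aligned to common landmarks; the averaging step itself is unchanged.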

6.
In rare cases, damage to the temporal lobe causes a selective impairment in the ability to learn new faces, a condition known as prosopamnesia [1]. Here we present the case of an individual with prosopamnesia in the absence of any acquired structural lesion. "C" shows intact processing of simple and complex nonface objects, but her ability to learn new faces is severely impaired. We used a neural marker of perceptual learning known as repetition suppression to examine functioning within C's fusiform face area (FFA), a region of cortex involved in face perception [2]. For comparison, we examined repetition suppression in the scene-selective parahippocampal place area (PPA) [3]. As expected, normal controls showed significant region-specific attenuation of neural activity across repetitions of each stimulus class. C also showed normal attenuation within the PPA to familiar and unfamiliar scenes, and within the FFA to familiar faces. Critically, however, she failed to show any adaptive change within the FFA for repeated unfamiliar faces, despite a face-specific blood-oxygen-level-dependent (BOLD) response in her FFA during viewing of face stimuli. Our findings suggest that in developmental prosopamnesia, the FFA cannot maintain stable representations of new faces for subsequent recall or recognition.
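The repetition-suppression attenuation described above is commonly quantified as a proportional drop in a region's mean response from initial to repeated presentations. The abstract does not report its exact metric, so the index below is an illustrative assumption, and the amplitude values are made up.

```python
import numpy as np

def repetition_suppression_index(first, repeated):
    """Proportional attenuation of a region's mean response across repeats.

    `first`, `repeated`: arrays of per-trial response amplitudes (e.g. mean
    BOLD signal change) for initial and repeated presentations. An index
    near 0 means no adaptation; positive values indicate suppression.
    """
    m_first = np.mean(first)
    m_rep = np.mean(repeated)
    return (m_first - m_rep) / m_first

first = np.array([1.2, 1.0, 1.1, 1.3])    # hypothetical amplitudes
repeated = np.array([0.8, 0.7, 0.9, 0.8])
print(round(repetition_suppression_index(first, repeated), 3))  # 0.304
```

On this reading, controls (and C for familiar faces) would show a positive index, while C's unfamiliar-face index in the FFA would sit near zero.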

7.
Different kinds of known faces activate brain areas to dissimilar degrees. However, the tuning to type of knowledge, and the temporal course of activation, of each area have not been well characterized. Here we measured, with functional magnetic resonance imaging, brain activity elicited by unfamiliar, visually familiar, and personally familiar faces. We assessed response amplitude and duration using flexible hemodynamic response functions, as well as the tuning to face type, of regions within the face processing system. Core face processing areas (occipital and fusiform face areas) responded to all types of faces with only small differences in amplitude and duration. In contrast, most areas of the extended face processing system (medial orbito-frontal, anterior and posterior cingulate) had weak responses to unfamiliar and visually familiar faces, but were highly tuned and exhibited prolonged responses to personally familiar faces. This indicates that the neural processing of different types of familiar faces not only differs in degree, but is probably mediated by qualitatively distinct mechanisms.

8.
Sun D, Chan CC, Lee TM. PLoS ONE 2012;7(2):e31250
Recognizing familiar faces is essential to social functioning, but little is known about how people identify human faces and classify them in terms of familiarity. Face identification involves discriminating familiar faces from unfamiliar faces, whereas face classification involves making an intentional decision to classify faces as "familiar" or "unfamiliar." This study used a directed-lying task to explore the differentiation between the identification and classification processes involved in the recognition of familiar faces. Participants were shown familiar and unfamiliar faces and responded to them (i.e., as familiar or unfamiliar) in accordance with the instructions they were given (i.e., to lie or to tell the truth) while their EEG activity was recorded. Familiar faces (regardless of lying vs. truth) elicited a significantly less negative-going N400f in the middle and right parietal and temporal regions than unfamiliar faces. Regardless of their actual familiarity, faces that participants classified as "familiar" elicited a more negative-going N400f in the central and right temporal regions than those classified as "unfamiliar." The P600 was related primarily to the facial identification process: familiar faces (regardless of lying vs. truth) elicited a more positive-going P600f in the middle parietal and middle occipital regions. The results suggest that the N400f and P600f play different roles in facial recognition. The N400f appears to be associated with both the identification (judgment of familiarity) and the classification of faces, while the P600f is likely associated only with the identification process (recollection of facial information). Future studies should use different experimental paradigms to validate the generalizability of these results.

9.
The processing of faces relies on a specialized neural system comprising bilateral cortical structures with a dominance of the right hemisphere. However, owing to inconsistencies between earlier findings and more recent results, this functional lateralization has become a topic of debate. In particular, studies employing behavioural tasks and electrophysiological methods indicate a right-hemisphere dominance during face perception only in men, whereas women exhibit symmetric, bilateral face processing. The aim of this study was to further investigate such sex differences in the hemispheric processing of personally familiar and opposite-sex faces using whole-head magnetoencephalography (MEG). We found a right-lateralized M170 component in occipito-temporal sensor clusters in men, as opposed to a bilateral response in women. The same pattern was obtained when performing dipole localization and determining dipole strength in the M170 time window. These results suggest asymmetric involvement of face-responsive neural structures in men and allow us to ascribe this asymmetry to the fusiform gyrus. This specifies findings from previous investigations employing event-related potentials (ERPs) and LORETA reconstruction methods, which yielded rather extended bilateral activations with left asymmetry in women and right lateralization in men. We discuss our finding of an asymmetric fusiform activation pattern in men in terms of holistic face processing during face evaluation, and in terms of sex differences in visual strategies in general and interest in opposite-sex faces in particular. Taken together, the pattern of hemispheric specialization observed here yields new insights into sex differences in face perception and raises further questions about interactions between biological sex, psychological gender, and influences that might be stimulus-driven or task-dependent.

10.
The neural mechanisms for the perception of faces and motion were studied using psychophysical threshold measurements, event-related potentials (ERPs), and functional magnetic resonance imaging (fMRI). A face-specific ERP component, the N170, was recorded over the posterior temporal cortex. Removal of the high-spatial-frequency components of the face altered the perception of familiar faces significantly, and familiarity can facilitate the cortico-cortical processing of face perception. Similarly, the high-spatial-frequency components of the face seemed to be crucial for the recognition of facial expressions. Aging and visuospatial impairments affected motion perception significantly. Two distinct components of motion ERPs, N170 and P200, were recorded over the parietal region; the former was related to horizontal motion perception, while the latter reflected the perception of radial optic flow. The fMRI results showed that horizontal object movement and radial optic flow motion were processed differently in V5/MT and the superior parietal lobe. We conclude that an integrated approach can provide useful information on the spatial and temporal processing of faces and motion non-invasively.

11.
A recent functional magnetic resonance imaging (fMRI) study by our group demonstrated that dynamic emotional faces are recognized more accurately and evoke more widespread patterns of hemodynamic brain responses than static emotional faces. Building on that design, the present study investigated the spatio-temporal processing of static and dynamic emotional facial expressions in 19 healthy women by means of multi-channel electroencephalography (EEG), event-related potentials (ERPs), and fMRI-constrained regional source analyses. ERP analysis showed an increased amplitude of the LPP (late posterior positivity) over centro-parietal regions for static facial expressions of disgust compared to neutral faces. In addition, the LPP was more widespread and temporally prolonged for dynamic compared to static faces of disgust and happiness. fMRI-constrained source analysis of static emotional face stimuli indicated spatio-temporal modulation of predominantly posterior regional brain activation, related to the visual processing stream, for both emotional valences compared to the neutral condition in the fusiform gyrus. For dynamic stimuli, source activity was enhanced for emotional compared to neutral conditions in temporal (e.g., fusiform gyrus) and frontal regions (e.g., ventromedial prefrontal cortex, medial and inferior frontal cortex), in early and again in later time windows. The present data support the view that dynamic facial displays convey more information, reflected in complex neural networks, in particular because their changing features potentially trigger sustained activation related to a continuing evaluation of those faces. A combined fMRI and EEG approach thus provides advanced insight into the spatio-temporal characteristics of emotional face processing, revealing additional neural generators not identifiable with fMRI alone.

12.
We investigated whether personally familiar faces are preferentially processed in conditions of reduced attentional resources and in the absence of conscious awareness. In the first experiment, we used Rapid Serial Visual Presentation (RSVP) to test the susceptibility of familiar faces and faces of strangers to the attentional blink. In the second experiment, we used continuous flash interocular suppression to render stimuli invisible and measured face detection time for personally familiar faces as compared to faces of strangers. In both experiments we found an advantage for detection of personally familiar faces as compared to faces of strangers. Our data suggest that the identity of faces is processed with reduced attentional resources and even in the absence of awareness. Our results show that this facilitated processing of familiar faces cannot be attributed to detection of low-level visual features and that a learned unique configuration of facial features can influence preconscious perceptual processing.

13.
To investigate the neural representations of faces in primates, particularly in relation to personal familiarity, neuronal activities were chronically recorded from the ventral portion of the anterior inferior temporal cortex (AITv) of macaque monkeys during a facial identification task using either personally familiar or unfamiliar faces as stimuli. We calculated the correlation coefficients between neuronal responses for all possible pairs of faces in the task and used these coefficients as population-based similarity measures to analyze the similarity/dissimilarity relationships between faces as represented by the activity of the recorded population of face-responsive AITv neurons. The results showed that, for personally familiar faces, different identities were represented by different patterns of population activity irrespective of view (e.g., front, 90° left), while different views were not represented independently of facial identity, consistent with our previous report. For personally unfamiliar faces, by contrast, faces possessing different identities but presented in the same frontal view were represented as similar. Taken together, these results outline the neuronal representations of personally familiar and unfamiliar faces in the AITv neuronal population.
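The population-based similarity measure described above, pairwise correlations between the population's response vectors to different faces, can be sketched as follows. The firing rates here are hypothetical placeholders; the real analysis used recorded AITv responses.

```python
import numpy as np

def population_similarity(responses):
    """Pairwise Pearson correlations between population response vectors.

    `responses`: (n_faces, n_neurons) array; row i holds the responses of
    the recorded neuron population to face i. Entry (i, j) of the result
    is the correlation between faces i and j; high values mean the
    population represents the two faces as similar.
    """
    return np.corrcoef(responses)

# Hypothetical rates: faces 0 and 1 evoke similar patterns, face 2 differs.
rates = np.array([[5.0, 1.0, 3.0, 0.5],
                  [4.5, 1.2, 2.8, 0.6],
                  [0.8, 4.0, 1.0, 3.5]])
sim = population_similarity(rates)
print(sim[0, 1] > sim[0, 2])  # True
```

In the study's terms, familiar faces of different identities would yield low off-diagonal correlations regardless of view, whereas unfamiliar faces sharing a frontal view would correlate highly despite differing identities.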

14.
The theoretical underpinnings of the mechanisms of sociality, e.g. territoriality, hierarchy, and reciprocity, are based on assumptions of individual recognition. While behavioural evidence suggests individual recognition is widespread, the cues that animals use to recognise individuals are established in only a handful of systems. Here, we use digital models to demonstrate that facial features are the visual cue used for individual recognition in the social fish Neolamprologus pulcher. Focal fish were exposed to digital images showing four different combinations of familiar and unfamiliar face and body colorations. Focal fish attended to digital models with unfamiliar faces longer, and from a greater distance, than to models with familiar faces. These results strongly suggest that fish can distinguish individuals accurately using facial colour patterns. Our observations also suggest that fish are able to rapidly (≤ 0.5 s) discriminate between familiar and unfamiliar individuals, a speed of recognition comparable to that of primates, including humans.

15.
The present study tested whether neural sensitivity to salient emotional facial expressions was influenced by emotional expectations induced by a cue that validly predicted the expression of a subsequently presented target face. Event-related potentials (ERPs) elicited by fearful and neutral faces were recorded while participants performed a gender discrimination task under cued (‘expected’) and uncued (‘unexpected’) conditions. The behavioral results revealed that accuracy was lower for fearful compared with neutral faces in the unexpected condition, while accuracy was similar for fearful and neutral faces in the expected condition. ERP data revealed increased amplitudes in the P2 component and 200–250 ms interval for unexpected fearful versus neutral faces. By contrast, ERP responses were similar for fearful and neutral faces in the expected condition. These findings indicate that human neural sensitivity to fearful faces is modulated by emotional expectations. Although the neural system is sensitive to unpredictable emotionally salient stimuli, sensitivity to salient stimuli is reduced when these stimuli are predictable.

16.
There is a growing body of literature to show that color can convey information, owing to its emotionally meaningful associations. Most research so far has focused on negative hue–meaning associations (e.g., red) with the exception of the positive aspects associated with green. We therefore set out to investigate the positive associations of two colors (i.e., green and pink), using an emotional facial expression recognition task in which colors provided the emotional contextual information for the face processing. In two experiments, green and pink backgrounds enhanced happy face recognition and impaired sad face recognition, compared with a control color (gray). Our findings therefore suggest that because green and pink both convey positive information, they facilitate the processing of emotionally congruent facial expressions (i.e., faces expressing happiness) and interfere with that of incongruent facial expressions (i.e., faces expressing sadness). Data also revealed a positive association for white. Results are discussed within the theoretical framework of emotional cue processing and color meaning.

17.
Jiang Y, He S. Current Biology 2006;16(20):2023-2029
Perceiving faces is critical for social interaction. Evidence suggests that different neural pathways may be responsible for processing face identity and expression information. Using functional magnetic resonance imaging (fMRI), we measured brain responses while observers viewed neutral, fearful, and scrambled faces, either visible or rendered invisible through interocular suppression. The right fusiform face area (FFA), the right superior temporal sulcus (STS), and the amygdala responded strongly to visible faces. However, when face images became invisible, activity in the FFA to both neutral and fearful faces was much reduced, although still measurable; activity in the STS was robust only to invisible fearful faces, not to neutral faces. Activity in the amygdala was equally strong in the visible and invisible conditions for fearful faces, but much weaker in the invisible condition for neutral faces. In the invisible condition, amygdala activity was highly correlated with that of the STS but not with that of the FFA. The results in the invisible condition support the existence of dissociable neural systems specialized for processing facial identity and expression information. When images are invisible, cortical responses may reflect primarily feed-forward visual-information processing and thus allow us to reveal the distinct functions of the FFA and STS.

18.
An early orientation to faces is followed by a gradual development of face processing skills. During the course of maturation, children acquire the ability to learn new faces and to deal with facial transformations; some skills are achieved more quickly than others. Moreover, encoding ability in young children differs somewhat from that shown by older children. The younger groups fail to take advantage of increased inspection time and of stimulus characteristics such as facial distinctiveness, and they are more likely to be confused by alterations in background context. Although with familiar faces they show identity priming effects very similar to those of older children and adults, younger children are relatively inefficient at categorizing a face as that of a target unless it is noticeably dissimilar. Young children are also more likely than older people to prefer positive caricatures of certain faces, which is not consistent with the view that caricature effects are simple reflections of a general expertise with faces.

19.
Repeated visual processing of an unfamiliar face suppresses neural activity in face-specific areas of the occipito-temporal cortex. This "repetition suppression" (RS) is a primitive mechanism involved in the learning of unfamiliar faces, which can be detected through amplitude reduction of the N170 event-related potential (ERP). The dorsolateral prefrontal cortex (DLPFC) exerts top-down influence on early visual processing; however, its contribution to N170 RS and the learning of unfamiliar faces remains unclear. Transcranial direct current stimulation (tDCS) transiently increases or decreases cortical excitability as a function of polarity. We hypothesized that modulating DLPFC excitability with tDCS would cause polarity-dependent modulations of N170 RS during the encoding of unfamiliar faces: tDCS-induced N170 RS enhancement would improve long-term recognition reaction time (RT) and/or accuracy, whereas N170 RS impairment would compromise recognition ability. Participants underwent three tDCS conditions in random order at ~72-hour intervals: right anodal/left cathodal, right cathodal/left anodal, and sham. Immediately following each tDCS condition, an EEG was recorded during the encoding of unfamiliar faces for assessment of the P100 and N170 visual ERPs. The P3a component was analyzed to detect modulation of prefrontal function. Recognition tasks were administered ~72 hours after encoding. Results indicate that the right anodal/left cathodal condition facilitated N170 RS and induced larger P3a amplitudes, leading to faster recognition RTs. Conversely, the right cathodal/left anodal condition caused N170 amplitudes and RTs to increase, and delayed P3a latency. These data demonstrate that modulating DLPFC excitability can influence early visual encoding of unfamiliar faces, highlighting the importance of the DLPFC in basic learning mechanisms.

20.
Adaptation-related aftereffects (AEs) show how face perception can be altered by recent perceptual experience. Along with contrastive behavioural biases, modulations of early event-related potentials (ERPs) have typically been reported at the categorical level. Nevertheless, the role of the adaptor stimulus per se in face identity-specific AEs is not completely understood and was therefore investigated in the present study. Participants were adapted to faces (S1s) varying systematically along a morphing continuum between pairs of famous identities (identities A and B), or to Fourier phase-randomized faces, and had to match the subsequently presented ambiguous faces (S2s; 50/50% identity A/B) to one of the respective original faces. We found that S1s identical with or near to the original identities led to strong contrastive biases, with more identity B responses following A adaptation and vice versa. In addition, the closer S1s were to the 50/50% S2 on the morphing continuum, the smaller the magnitude of the AE; the relation between S1 and AE was, however, not linear. Stronger AEs were also accompanied by faster reaction times. Analyses of the simultaneously recorded ERPs revealed categorical adaptation effects starting at 100 ms post-stimulus onset that were most pronounced at around 125–240 ms over occipito-temporal sites in both hemispheres. S1-specific amplitude modulations were found at around 300–400 ms. Response-specific analyses of ERPs showed reduced voltages starting at around 125 ms when the S1 biased perception in a contrastive way compared to when it did not. Our results suggest that face identity AEs depend not only on physical differences between S1 and S2, but also on perceptual factors, such as the ambiguity of S1. Furthermore, short-term plasticity of face identity processing might work in parallel to object-category processing and is reflected in the first 400 ms of the ERP.
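The morphing continuum between identities A and B described above can be approximated, purely for illustration, by a pixel-wise cross-fade between two images. This is a simplification (true face morphing also warps facial geometry), and the random stand-in arrays below are assumptions, not the study's stimuli.

```python
import numpy as np

def morph_continuum(face_a, face_b, weights):
    """Images along a continuum between two identities.

    Simplified pixel-wise cross-fade: weight w gives (1 - w) * A + w * B,
    so w = 0.5 corresponds to the ambiguous 50/50% face used as the
    matching target (S2). Real morphing software also interpolates shape.
    """
    a = np.asarray(face_a, dtype=np.float64)
    b = np.asarray(face_b, dtype=np.float64)
    return [(1.0 - w) * a + w * b for w in weights]

rng = np.random.default_rng(1)
identity_a = rng.random((16, 16))  # stand-in for famous face A
identity_b = rng.random((16, 16))  # stand-in for famous face B
steps = morph_continuum(identity_a, identity_b, [0.0, 0.25, 0.5, 0.75, 1.0])

# The 50/50% morph is equidistant from both originals, pixel by pixel.
mid = steps[2]
print(np.allclose(np.abs(mid - identity_a), np.abs(mid - identity_b)))  # True
```

Adaptor faces (S1s) at different positions on such a continuum are what the study varied; the aftereffect's magnitude depended on how close the S1 sat to the ambiguous midpoint.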


Copyright © Beijing Qinyun Technology Development Co., Ltd. 京ICP备09084417号