Similar Documents
Found 20 similar documents (search time: 15 ms)
1.
A previous experiment showed that a chimpanzee searching for a target human face that differed in orientation from distractors performed better when the target was upright than when it was inverted or horizontal [Tomonaga (1999a) Primate Res 15:215–229]. This upright superiority effect also appeared when chimpanzee faces were used as targets, but not when photographs of a house were used. The present study sought to extend these results and to explore the factors underlying the face-specific upright superiority effect. Upright superiority was shown in a visual search for orientation when caricatured human faces and dog faces were used as stimuli for the chimpanzee, but not when hand shapes and chairs were presented. Thus, the configural properties of facial features, which cause an inversion effect in face recognition in humans and chimpanzees, were thought to be a source of the upright superiority effect in the visual search process. To examine this possibility, various stimulus manipulations were introduced in subsequent experiments. The results clearly show that the configuration of facial features plays a critical role in the upright superiority effect, and they strongly suggest similarity in face processing between humans and chimpanzees.

2.
3.
Previous studies have shown that early posterior components of event-related potentials (ERPs) are modulated by facial expressions. The goal of the current study was to investigate individual differences in the recognition of facial expressions by examining the relationship between ERP components and the discrimination of facial expressions. Pictures of three facial expressions (angry, happy, and neutral) were presented to 36 young adults during ERP recording. Participants were asked to respond with a button press as soon as they recognized the expression depicted. A multiple regression analysis, with ERP components as predictor variables, assessed hits and reaction times in response to the facial expressions as dependent variables. N170 amplitude significantly predicted accuracy for angry and happy expressions, and N170 latency predicted accuracy for neutral expressions. P2 amplitude significantly predicted reaction time, and P2 latency predicted reaction time only for neutral faces. These results suggest that individual differences in the recognition of facial expressions emerge from early components of visual processing.
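To make the regression described above concrete, here is a minimal sketch of ERP components used as predictors of recognition accuracy. All variable names, distributions, and values are hypothetical illustrations, not data or code from the study.

```python
# Illustrative sketch: ERP component measures as predictors of behavioral
# accuracy, in the spirit of the multiple regression described above.
# All data below are synthetic; variable names are hypothetical.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 36  # participants, as in the study

# Hypothetical per-participant ERP measures (amplitudes in microvolts,
# latencies in milliseconds).
n170_amp = rng.normal(-5.0, 1.5, n)
n170_lat = rng.normal(170.0, 10.0, n)
p2_amp = rng.normal(4.0, 1.2, n)
p2_lat = rng.normal(220.0, 15.0, n)

# Hypothetical dependent variable: hit rate for angry faces.
hits_angry = 0.8 + 0.02 * n170_amp + rng.normal(0, 0.05, n)

# Ordinary least squares with the four ERP measures as predictors.
X = sm.add_constant(np.column_stack([n170_amp, n170_lat, p2_amp, p2_lat]))
model = sm.OLS(hits_angry, X).fit()
print(model.summary())  # per-predictor coefficients and p-values
```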

4.
Humans have an impressive ability to discriminate between faces despite their similarity as visual patterns. This expertise relies on configural coding of spatial relations between face features and/or holistic coding of overall facial structure. These expert face-coding mechanisms appear to be engaged most effectively by upright faces, with inverted faces engaging primarily feature-coding mechanisms. We show that opposite figural aftereffects can be induced simultaneously for upright and inverted faces, demonstrating that distinct neural populations code upright and inverted faces. This result also suggests that expert (upright) face-coding mechanisms can be selectively adapted. These aftereffects occur for judgments of face normality and face gender and are robust to changes in face size, ruling out adaptation of low-level, retinotopically organized coding mechanisms. Our results suggest a resolution of a paradox in the face recognition literature. Neuroimaging studies have found surprisingly little orientation selectivity in the fusiform face area (FFA) despite evidence that this region plays a role in expert face coding and that expert face-coding mechanisms are selectively engaged by upright faces. Our results, demonstrating orientation-contingent adaptation of face-coding mechanisms, suggest that the FFA's apparent lack of orientation selectivity may be an artifact of averaging across distinct populations within the FFA that respond to upright and inverted faces.

5.
Atypical face processing plays a key role in the social interaction difficulties encountered by individuals with autism. In the current fMRI study, the Thatcher illusion was used to investigate several aspects of face processing in 20 young adults with high-functioning autism spectrum disorder (ASD) and 20 matched neurotypical controls. “Thatcherized” stimuli were modified at either the eyes or the mouth, and participants discriminated between pairs of faces while cued to attend to either of these features, in upright and inverted orientations. Behavioral data confirmed sensitivity to the illusion and intact configural processing in ASD. Directing attention towards the eyes vs. the mouth in upright faces in ASD led to (1) improved discrimination accuracy; (2) increased activation in areas involved in social and emotional processing; and (3) increased activation in subcortical face-processing areas. Our findings show that when explicitly cued to attend to the eyes, activation of cortical areas involved in face processing, including its social and emotional aspects, can be enhanced in autism. This suggests that impairments in face processing in autism may be caused by a deficit in social attention, and that giving specific cues to attend to the eye region during behavioral therapies aimed at improving social skills may result in better outcomes.

6.
A two-alternative forced-choice discrimination task was used to assess whether baboons (N=7) spontaneously process qualitative (i.e., first-order) or quantitative (i.e., second-order) variations in the configural arrangement of facial features. Experiment 1 used as test stimuli second-order pictorial faces of humans or baboons in which the mouth and the eyes were rotated upside down relative to the normal face. Baboons readily discriminated two different normal faces but did not discriminate a normal face from its second-order modified version. Experiment 2 used human or baboon faces for which the first-order configural properties had been distorted by reversing the locations of the eyes and mouth within the face. Discrimination was prompt with these stimuli. Experiment 3 replicated some of the conditions and results of Experiment 1, ruling out possible effects of learning. It is concluded that baboons are more adept at spontaneously processing first-order than second-order configural facial properties, similar to what is known from the human developmental literature.

7.
The present study tested whether neural sensitivity to salient emotional facial expressions was influenced by emotional expectations induced by a cue that validly predicted the expression of a subsequently presented target face. Event-related potentials (ERPs) elicited by fearful and neutral faces were recorded while participants performed a gender discrimination task under cued (‘expected’) and uncued (‘unexpected’) conditions. The behavioral results revealed that accuracy was lower for fearful compared with neutral faces in the unexpected condition, while accuracy was similar for fearful and neutral faces in the expected condition. ERP data revealed increased amplitudes in the P2 component and 200–250 ms interval for unexpected fearful versus neutral faces. By contrast, ERP responses were similar for fearful and neutral faces in the expected condition. These findings indicate that human neural sensitivity to fearful faces is modulated by emotional expectations. Although the neural system is sensitive to unpredictable emotionally salient stimuli, sensitivity to salient stimuli is reduced when these stimuli are predictable.

8.
Rapid detection of evolutionarily relevant threats (e.g., fearful faces) is important for human survival, and the ability to rapidly detect fearful faces varies considerably across individuals. The present study investigated the relationship between behavioral detection ability and brain activity, using both event-related potential (ERP) and event-related oscillation (ERO) measurements. Faces with fearful or neutral expressions were presented for 17 ms or 200 ms in a backward masking paradigm, and 42 participants were required to discriminate the facial expressions of the masked faces. The behavioral sensitivity index d′ showed that the ability to detect rapidly presented, masked fearful faces varied across participants. ANOVAs showed that facial expression, hemisphere, and presentation duration affected grand-mean ERP (N1, P1, and N170) and ERO (below 20 Hz, lasting from 100 ms to 250 ms post-stimulus, mainly in the theta band) brain activity. More importantly, the overall detection ability of the 42 subjects was significantly correlated with the emotion effect (fearful vs. neutral) on both the ERP (r = 0.403) and ERO (r = 0.552) measurements: a higher d′ value corresponded to a larger emotional effect (fearful − neutral) on N170 amplitude and on the specific ERO spectral power over the right hemisphere. These results suggest a close link between behavioral detection ability and both N170 amplitude and ERO spectral power below 20 Hz; the size of the emotional effect in brain activity may reflect the level of conscious awareness of fearful faces.
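The sensitivity index d′ used above has a standard signal-detection form, d′ = Z(hit rate) − Z(false-alarm rate), where Z is the inverse of the standard normal CDF. A minimal sketch follows, with hypothetical counts and one common correction for extreme rates (not necessarily the correction this paper used):

```python
# Sketch of the signal-detection sensitivity index d'.
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    # Log-linear correction guards against hit/FA rates of exactly 0 or 1,
    # which would make Z infinite (one common convention, assumed here).
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Hypothetical example: 70 "fearful" responses to 80 fearful faces,
# 20 "fearful" responses to 80 neutral faces.
print(d_prime(hits=70, misses=10, false_alarms=20, correct_rejections=60))
```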

9.
Fu G, Hu CS, Wang Q, Quinn PC, Lee K. PLoS ONE 2012;7(6):e37688
It is well established that individuals show an other-race effect (ORE) in face recognition: they recognize own-race faces better than other-race faces. The present study tested the hypothesis that individuals also scan own- and other-race faces differently. We asked Chinese participants to remember Chinese and Caucasian faces and tested their memory of the faces over five testing blocks, recording participants' eye movements with an eye tracker. The data were analyzed with an Area of Interest (AOI) approach using the key AOIs of a face (eyes, nose, and mouth); in addition, we used the iMap toolbox to analyze the raw fixation data across each pixel of the entire face. Results from both types of analyses strongly supported the hypothesis. When viewing target Chinese or Caucasian faces, Chinese participants spent a significantly greater proportion of fixation time on the eyes of other-race Caucasian faces than on the eyes of own-race Chinese faces. In contrast, they spent a significantly greater proportion of fixation time on the nose and mouth of Chinese faces than on the nose and mouth of Caucasian faces. This pattern of differential fixation, for own- and other-race eyes and noses in particular, was consistent even as participants became increasingly familiar with the target faces of both races. The results could not be explained by the perceptual salience of the Chinese nose or the Caucasian eyes, because these features were not differentially salient across the races. Our results are discussed in terms of the facial morphological differences between Chinese and Caucasian faces and the enculturation of mutual gaze norms in East Asian cultures.
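A rough sketch of the AOI analysis described above — the proportion of total fixation time falling inside each facial AOI — assuming hypothetical fixation records and rectangular AOIs (the study's actual AOI shapes and coordinates are not given here):

```python
# Proportion of fixation time per Area of Interest (AOI).
# AOI rectangles and fixation records below are hypothetical.
aois = {  # (x_min, y_min, x_max, y_max) in screen pixels
    "eyes": (100, 80, 300, 140),
    "nose": (160, 140, 240, 210),
    "mouth": (140, 210, 260, 260),
}
# Each fixation: (x, y, duration_ms)
fixations = [(180, 100, 250), (200, 170, 180), (190, 230, 300), (50, 50, 120)]

totals = {name: 0.0 for name in aois}
grand_total = sum(d for _, _, d in fixations)
for x, y, dur in fixations:
    for name, (x0, y0, x1, y1) in aois.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            totals[name] += dur
            break  # each fixation counts toward at most one AOI

for name, t in totals.items():
    print(f"{name}: {t / grand_total:.2%} of total fixation time")
```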

10.
It is generally agreed that some features of a face, namely the eyes, are more salient than others, as indexed by behavioral diagnosticity, gaze-fixation patterns, and evoked neural responses. However, because previous studies used unnatural stimuli, there has been no evidence so far that the early encoding of a whole face in the human brain is based on the eyes or other facial features. To address this issue, scalp electroencephalogram (EEG) and eye gaze fixations were recorded simultaneously in a gaze-contingent paradigm while observers viewed faces. We found that the N170, which indexes the earliest face-sensitive response in the human brain, was largest when the fixation position was located around the nasion. Interestingly, for inverted faces, this optimal fixation position was more variable, but mainly clustered in the upper part of the visual field (around the mouth). These observations extend the findings of recent behavioral studies, suggesting that the early encoding of a face, as indexed by the N170, is not driven by the eyes per se, but rather arises from a general perceptual setting (an upper-visual-field advantage) coupled with the alignment of a face stimulus to a stored face template.

11.
To understand visual cognition, it is imperative to determine when, how, and with what information the human brain categorizes the visual input. Visual categorization consistently involves at least an early and a late stage: the occipito-temporal N170 event-related potential, related to stimulus encoding, and the parietal P300, involved in perceptual decisions. Here we sought to understand how the brain globally transforms its representations of face categories from their early encoding to the later decision stage over the 400 ms time window encompassing the N170 and P300 brain events. We applied classification image techniques to the behavioral and electroencephalographic data of three observers who categorized seven facial expressions of emotion, and we report two main findings: (1) over the 400 ms time course, processing of facial features initially spreads bilaterally across the left and right occipito-temporal regions and dynamically converges onto the centro-parietal region; (2) concurrently, information processing gradually shifts from encoding common face features across all spatial scales (e.g., the eyes) to representing only the finer scales of the diagnostic features that are richer in useful information for behavior (e.g., the wide-open eyes in 'fear'; the detailed mouth in 'happy'). Our findings suggest that the brain refines its diagnostic representations of visual categories over the first 400 ms of processing by trimming a thorough encoding of features over the N170, leaving only the detailed information important for perceptual decisions over the P300.
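The classification-image technique referenced above can be illustrated with a toy reverse-correlation simulation: averaging the noise fields sorted by the observer's response and taking the difference image reveals which pixels drove the decision. Everything below is synthetic and is not the authors' pipeline:

```python
# Toy reverse-correlation classification image. A simulated "observer"
# responds according to a hidden template; the classification image
# recovers the diagnostic region. All values are synthetic.
import numpy as np

rng = np.random.default_rng(1)
size = 32
template = np.zeros((size, size))
template[8:12, 6:14] = 1.0   # pretend the "eyes" region is diagnostic
n_trials = 5000

noise = rng.normal(0, 1, (n_trials, size, size))
# Simulated observer: responds "A" when the noise correlates with the template.
responses = (noise * template).sum(axis=(1, 2)) > 0

# Classification image: mean noise on "A" trials minus mean noise on "B" trials.
ci = noise[responses].mean(axis=0) - noise[~responses].mean(axis=0)
print("peak of classification image at:", np.unravel_index(ci.argmax(), ci.shape))
```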

12.
Stein T, Peelen MV, Sterzer P. PLoS ONE 2011;6(12):e29361
From the first days of life, humans preferentially orient towards upright faces, likely reflecting innate subcortical mechanisms. Here, we show that binocular rivalry can reveal face detection mechanisms in adults that are surprisingly similar to these inborn face detection mechanisms. We used continuous flash suppression (CFS), a variant of binocular rivalry, to render stimuli invisible at the beginning of each trial, and measured the time upright and inverted stimuli needed to overcome such interocular suppression. Critically, specific stimulus properties previously shown to modulate looking preferences in neonates similarly modulated adults' awareness of faces presented during CFS. First, the advantage of upright faces in overcoming CFS was strongly modulated by contrast polarity and direction of illumination. Second, schematic patterns consisting of three dark blobs were suppressed for shorter durations when the arrangement of these blobs respected the face-like configuration of the eyes and the mouth, and this effect was modulated by contrast polarity. No such effects were obtained in a binocular control experiment not involving CFS, suggesting a crucial role for face-sensitive mechanisms operating outside of conscious awareness. These findings indicate that visual awareness of faces in adults is governed by perceptual mechanisms sensitive to stimulus properties similar to those modulating newborns' face preferences.

13.
A key to understanding visual cognition is to determine when, how, and with what information the human brain distinguishes between visual categories. So far, the dynamics of information processing for the categorization of visual stimuli has not been elucidated. Using an ecologically important categorization task (seven expressions of emotion), we demonstrate, in three human observers, that an early brain event (the N170 event-related potential, occurring 170 ms after stimulus onset) integrates visual information specific to each expression, according to a pattern. Specifically, starting 50 ms prior to the ERP peak, facial information tends to be integrated from the eyes downward in the face. This integration stops, and the ERP peaks, when the information diagnostic for judging a particular expression has been integrated (e.g., the eyes in fear, the corners of the nose in disgust, or the mouth in happiness). Consequently, the duration of information integration from the eyes down determines the latency of the N170 for each expression (e.g., "fear" is faster than "disgust," which is itself faster than "happy"). For the first time in visual categorization, we relate the dynamics of an important brain event to the dynamics of a precise information-processing function.

14.

Background

Some studies have reported gender differences in the N170, a face-selective event-related potential (ERP) component. This study investigated gender differences in the N170 elicited under an oddball paradigm, in order to clarify the effect of task demand on gender differences in early facial processing.

Findings

Twelve males and 10 females discriminated targets (emotional faces) from non-targets (emotionally neutral faces) under an oddball paradigm, pressing a button as quickly as possible in response to targets. A clear N170 was elicited in response to both target and non-target stimuli in males and females. However, females showed a more negative N170 amplitude in response to targets than to non-targets, whereas males showed no difference in N170 between targets and non-targets.

Conclusions

The present results suggest that females allocate attention to faces at an early processing stage when responding to them actively (targets) compared with viewing them passively (non-targets). This supports previous findings suggesting that task demand is an important factor in gender differences in the N170.

15.
There appears to be a significant disconnect between symptomatic and functional recovery in bipolar disorder (BD), and some evidence points to interepisode cognitive dysfunction. We tested the hypothesis that emotional reactivity in euthymic bipolar subjects may affect cognitive processing and account for some of this dysfunction. A modified emotional gender-categorization oddball task was used: the target was the gender (probability 25%) of faces bearing negative, positive, or neutral emotional expressions. The experiment comprised 720 trials (3 blocks × 240 trials each). Each stimulus was presented for 150 ms, and EEG/ERP responses were recorded for 1,000 ms. The inter-trial interval varied within the 1,100–1,500 ms range to avoid expectancy effects, and the task took about 35 min to complete. There were 9 BD subjects and 9 control subjects matched for age and gender. Reaction time (RT) was globally slower in BD subjects. Centro-parietal amplitudes at N170, N200, P200, and P300 were generally smaller in the BD group than in controls, and latency was shorter to neutral and negative targets in BD. Frontal P200 amplitude was higher to emotionally negative facial non-targets in BD subjects, the frontal N200 in response to positive facial emotion was less negative in BD subjects, and the frontal P300 of BD subjects was lower to emotionally neutral targets. ERP responses to facial emotion in BD subjects thus differed significantly from those of normal controls, in a manner consistent with the common depressive symptomatology seen in long-term studies of bipolar subjects.

16.

Background

Faces, as socially relevant stimuli, readily capture human visuospatial attention. Although faces also play important roles in the social lives of chimpanzees, the closest living species to humans, the way in which faces are attentionally processed remains unclear from a comparative-cognitive perspective. In the present study, three young chimpanzees (Pan troglodytes) were tested with a simple manual response task in which various kinds of photographs, including faces as non-informative cues, were followed by a target.

Results

When the target appeared at the location that had been occupied by the face immediately before target onset, response times were significantly faster than when the target appeared at the opposite location, which had been occupied by the other object (a sketch of this cueing-effect computation follows this item). No such advantage was observed when a photograph of a banana was paired with the other object. Furthermore, this attentional capture was also observed when upright human faces were presented, indicating that the effect is not limited to own-species faces. By contrast, when the participants were tested with inverted chimpanzee faces, the effect was weakened, suggesting specificity to upright faces.

Conclusion

Chimpanzees' visuospatial attention was easily captured by the face stimuli. This effect was face-specific and stronger for upright faces than for inverted ones. These results are consistent with those from typically developing humans.
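The attentional-capture measure implied by the Results above is the reaction-time advantage for targets appearing at the previously cued (face) location. A minimal sketch with hypothetical RTs:

```python
# Cueing (attentional capture) effect: mean RT difference between targets
# at the uncued and cued locations. RT values below are hypothetical.
import statistics

rt_cued = [310, 295, 305, 290, 300]    # target at the face's prior location (ms)
rt_uncued = [340, 355, 330, 345, 350]  # target at the other object's location (ms)

capture_effect = statistics.mean(rt_uncued) - statistics.mean(rt_cued)
print(f"attentional capture effect: {capture_effect:.0f} ms")
```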

17.
The discrimination of Thatcherized faces from typical faces was explored in two simultaneous alternative forced-choice tasks. Reaction times (RTs) and errors were measured in a behavioural task, and brain activation was measured in an equivalent fMRI task. In both tasks, participants were tested with upright and inverted faces; participants were also tested on churches in the behavioural task. The behavioural task confirmed the face specificity of the illusion (by comparing inversion effects for faces against churches) but also demonstrated that the discrimination was primarily, although not exclusively, driven by attending to the eyes. The fMRI task showed that, relative to inverted faces, upright grotesque faces are discriminated via activation of a network of emotion/social evaluation processing areas. On the other hand, discrimination of inverted Thatcherized faces was associated with increased activation of brain areas that are typically involved in perceptual processing of faces.

18.
Transcranial direct current stimulation (tDCS) is a non-invasive brain stimulation technique that can modulate cortical excitability. Although the clinical value of tDCS has been advocated, the potential of tDCS in cognitive rehabilitation of face processing deficits is less understood. Face processing has been associated with the occipito-temporal cortex (OT). The present study investigated whether face processing in healthy adults can be modulated by applying tDCS over the OT. Experiment 1 investigated whether tDCS can affect N170, a face-sensitive ERP component, with a face orientation judgment task. The N170 in the right hemisphere was reduced in active stimulation conditions compared with the sham stimulation condition for both upright faces and inverted faces. Experiment 2 further demonstrated that tDCS can modulate the composite face effect, a type of holistic processing that reflects the obligatory attention to all parts of a face. The composite face effect was reduced in active stimulation conditions compared with the sham stimulation condition. Additionally, the current polarity did not modulate the effect of tDCS in the two experiments. The present study demonstrates that N170 can be causally manipulated by stimulating the OT with weak currents. Furthermore, our study provides evidence that obligatory attention to all parts of a face can be affected by the commonly used tDCS parameter setting.

19.
Recognition and individuation of conspecifics by their face is essential for primate social cognition. This ability is driven by a mechanism that integrates the appearance of facial features with subtle variations in their configuration (i.e., second-order relational properties) into a holistic representation. So far, there is little evidence of whether our evolutionary ancestors show sensitivity to featural spatial relations and hence holistic processing of faces as shown in humans. Here, we directly compared macaques with humans in their sensitivity to configurally altered faces in upright and inverted orientations using a habituation paradigm and eye tracking technologies. In addition, we tested for differences in processing of conspecific faces (human faces for humans, macaque faces for macaques) and non-conspecific faces, addressing aspects of perceptual expertise. In both species, we found sensitivity to second-order relational properties for conspecific (expert) faces, when presented in upright, not in inverted, orientation. This shows that macaques possess the requirements for holistic processing, and thus show similar face processing to that of humans.

20.
Human observers are remarkably proficient at recognizing expressions of emotion and at readily grouping them into distinct categories. When morphing one facial expression into another, the linear changes in low-level features are insufficient to describe the changes in perception, which instead follow an s-shaped function. Important questions are whether there are single diagnostic regions in the face that drive categorical perception for certain pairings of emotion expressions, and how information in those regions interacts when presented together. We report results from two experiments with morphed fear-anger expressions, in which (a) half of the face was masked or (b) composite faces made up of different expressions were presented. When isolated upper and lower halves of faces were shown, the eyes were found to be almost as diagnostic as the whole face, with the response function showing a steep category boundary. In contrast, the mouth allowed for substantially lower accuracy, and responses followed a much flatter psychometric function. When a composite face consisting of mismatched upper and lower halves was used and observers were instructed to judge exclusively the expression of either the mouth or the eyes, the to-be-ignored part always influenced perception of the target region. In line with Experiment 1, the eye region exerted a much stronger influence on mouth judgments than vice versa. Again, categorical perception was significantly more pronounced for upper halves of faces. The present study shows that identification of fear and anger in morphed faces relies heavily on information from the upper half of the face, most likely the eye region. Categorical perception is possible when only the upper face half is present, but compromised when only the lower part is shown. Moreover, observers tend to integrate all available features of a face, even when trying to focus on only one part.
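The s-shaped response function mentioned above is commonly fit with a logistic psychometric function, whose midpoint gives the category boundary and whose slope indexes how categorical the percept is. A sketch with hypothetical morph-continuum data (the paper's exact fitting procedure is not specified here):

```python
# Fit a logistic psychometric function to hypothetical responses along
# a fear-anger morph continuum.
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, x0, k):
    # x0: category boundary (morph level); k: slope (boundary steepness)
    return 1.0 / (1.0 + np.exp(-k * (x - x0)))

morph_level = np.linspace(0, 1, 11)  # 0 = pure fear, 1 = pure anger
p_anger = np.array([0.02, 0.03, 0.05, 0.10, 0.30,
                    0.55, 0.80, 0.92, 0.97, 0.98, 0.99])  # hypothetical data

(x0, k), _ = curve_fit(logistic, morph_level, p_anger, p0=[0.5, 10.0])
print(f"category boundary at morph level {x0:.2f}, slope {k:.1f}")
```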
