Similar Articles (20 results)
1.
What are the species boundaries of face processing? Using a face-feature morphing algorithm, image series intermediate between human, monkey (macaque), and bovine faces were constructed. Forced-choice judgement of these images showed sharply bounded categories for upright face images of each species. These predicted the perceptual discrimination boundaries for upright monkey-cow and cow-human images, but not human-monkey images. Species categories were also well-judged for inverted face images, but these did not give sharpened discrimination (categorical perception) at the category boundaries. While categorical species judgements are made reliably, only the distinction between primate faces and cow faces appears to be categorically perceived, and only in upright faces. One inference is that humans may judge monkey faces in terms of human characteristics, albeit distinctive ones.
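As a toy illustration of the morph continua these studies use: the simplest morph is a pixelwise linear blend between two aligned images. (This is a sketch only; the feature-morphing algorithm in the study above also warps facial geometry, which plain intensity blending does not capture.)

```python
import numpy as np

def morph(face_a, face_b, alpha):
    """Linearly blend two aligned grayscale face images.

    alpha = 0 returns face_a, alpha = 1 returns face_b; intermediate
    values give the morph continuum used in forced-choice experiments.
    """
    face_a = np.asarray(face_a, dtype=float)
    face_b = np.asarray(face_b, dtype=float)
    return (1.0 - alpha) * face_a + alpha * face_b

# A 5-step morph series between two toy 2x2 "images"
a = np.zeros((2, 2))
b = np.full((2, 2), 100.0)
series = [morph(a, b, t) for t in np.linspace(0.0, 1.0, 5)]
```

The endpoints of `series` reproduce the two source images, and the midpoint is the 50/50 blend presented at the category boundary.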

2.
3.
The amygdala has been regarded as a key substrate for emotion processing. However, the engagement of the left and right amygdala during the early perceptual processing of different emotional faces remains unclear. We investigated the temporal profiles of oscillatory gamma activity in the amygdala and effective connectivity of the amygdala with the thalamus and cortical areas during implicit emotion-perceptual tasks using event-related magnetoencephalography (MEG). We found that within 100 ms after stimulus onset the right amygdala habituated to emotional faces rapidly (with duration around 20–30 ms), whereas activity in the left amygdala (with duration around 50–60 ms) was sustained longer than that in the right. Our data suggest that the right amygdala could be linked to autonomic arousal generated by facial emotions and the left amygdala might be involved in decoding or evaluating expressive faces in early perceptual emotion processing. The results of effective connectivity provide evidence that only negative emotional processing engages both cortical and subcortical pathways connected to the right amygdala, reflecting its evolutionary significance (survival). These findings demonstrate the asymmetric engagement of the bilateral amygdala in emotional face processing as well as the capability of MEG for assessing thalamo-cortico-limbic circuitry.

4.

Background

The neural system of our closest living relative, the chimpanzee, is a topic of increasing research interest. However, electrophysiological examinations of neural activity during visual processing in awake chimpanzees are currently lacking.

Methodology/Principal Findings

In the present report, skin-surface event-related brain potentials (ERPs) were measured while a fully awake chimpanzee observed photographs of faces and objects in two experiments. In Experiment 1, human faces and stimuli composed of scrambled face images were displayed. In Experiment 2, three types of pictures (faces, flowers, and cars) were presented. The waveforms evoked by face stimuli were distinguished from other stimulus types, as reflected by an enhanced early positivity appearing before 200 ms post stimulus, and an enhanced late negativity after 200 ms, around posterior and occipito-temporal sites. Face-sensitive activity was clearly observed in both experiments. However, in contrast to the robustly observed face-evoked N170 component in humans, we found that faces did not elicit a peak in the latency range of 150–200 ms in either experiment.

Conclusions/Significance

Although this pilot study examined a single subject and requires further examination, the observed scalp voltage patterns suggest that selective processing of faces in the chimpanzee brain can be detected by recording surface ERPs. In addition, this non-invasive method for examining an awake chimpanzee can be used to extend our knowledge of the characteristics of visual cognition in other primate species.

5.
In low-level vision, exquisite sensitivity to variation in luminance is achieved by adaptive mechanisms that adjust neural sensitivity to the prevailing luminance level. In high-level vision, adaptive mechanisms contribute to our remarkable ability to distinguish thousands of similar faces [1]. A clear example of this sort of adaptive coding is the face-identity aftereffect [2, 3, 4, 5], in which adaptation to a particular face biases perception toward the opposite identity. Here we investigated face adaptation in children with autism spectrum disorder (ASD) by asking them to discriminate between two face identities, with and without prior adaptation to opposite-identity faces. The ASD group discriminated the identities with the same precision as did the age- and ability-matched control group, showing that face identification per se was unimpaired. However, children with ASD showed significantly less adaptation than did their typical peers: the amount of adaptation correlated significantly with current symptomatology, and the face aftereffects of children with elevated symptoms were only one third those of controls. These results show that although children with ASD can learn a simple discrimination between two identities, adaptive face-coding mechanisms are severely compromised, offering a new explanation for previously reported face-perception difficulties [6, 7, 8] and possibly for some of the core social deficits in ASD [9, 10].

6.
The identity of an object is a fixed property, independent of where it appears, and an effective visual system should capture this invariance [1-3]. However, we now report that the perceived gender of a face is strongly biased toward male or female at different locations in the visual field. The spatial pattern of these biases was distinctive and stable for each individual. Identical neutral faces looked different when they were presented simultaneously at locations maximally biased to opposite genders. A similar effect was observed for perceived age of faces. We measured the magnitude of this perceptual heterogeneity for four other visual judgments: perceived aspect ratio, orientation discrimination, spatial-frequency discrimination, and color discrimination. The effect was sizeable for the aspect ratio task but substantially smaller for the other three tasks. We also evaluated perceptual heterogeneity for facial gender and orientation tasks at different spatial scales. Strong heterogeneity was observed even for the orientation task when tested at small scales. We suggest that perceptual heterogeneity is a general property of visual perception and results from undersampling of the visual signal at spatial scales that are small relative to the size of the receptive fields associated with each visual attribute.

7.
Modularity of face processing is still a controversial issue. Congenital prosopagnosia (cPA), a selective and lifelong impairment in familiar face recognition without evidence of an acquired cerebral lesion, offers a unique opportunity to test this fundamental hypothesis. However, in spite of the pronounced behavioural impairment, identification of a functionally relevant neural alteration in congenital prosopagnosia by electrophysiological methods has not been achieved so far. Here we show that persons with congenital prosopagnosia can be distinguished as a group from unimpaired persons using magnetoencephalography. Early face-selective MEG responses in the range of 140 to 200 ms (the M170) showed prolonged latency and decreased amplitude, whereas responses to another category (houses) were indistinguishable between subjects with congenital prosopagnosia and unimpaired controls. Latency and amplitude of simultaneously recorded face-selective EEG responses (the N170) were statistically indistinguishable between subjects with cPA and healthy controls, which resolves heterogeneous and partly conflicting results from existing studies. The complementary analysis of categorical differences (evoked activity to faces minus evoked activity to houses) revealed that the early part of the 170 ms response to faces is altered in subjects with cPA. This finding can be adequately explained in a common framework of holistic and part-based face processing. Whereas a significant brain-behaviour correlation between face recognition performance and the size of the M170 amplitude is found in controls, a corresponding correlation is not seen in subjects with cPA. This indicates functional relevance of the alteration found for the 170 ms response to faces in cPA and pinpoints the impairment of face processing to early perceptual stages.

8.
Many studies have linked the processing of different object categories to specific event-related potentials (ERPs) such as the face-specific N170. Despite reports showing that object-related ERPs are influenced by visual stimulus features, there is consensus that these components primarily reflect categorical aspects of the stimuli. Here, we re-investigated this idea by systematically measuring the effects of visual feature manipulations on ERP responses elicited by both structure-from-motion (SFM)-defined and luminance-defined object stimuli. SFM objects elicited a novel component at 200-250 ms (N250) over parietal and posterior temporal sites. We found, however, that the N250 amplitude was unaffected by restructuring SFM stimuli into meaningless objects based on identical visual cues. This suggests that this N250 peak was not uniquely linked to categorical aspects of the objects, but is strongly determined by visual stimulus features. We provide strong support for this hypothesis by parametrically manipulating the depth range of both SFM- and luminance-defined object stimuli and showing that the N250 evoked by SFM stimuli as well as the well-known N170 to static faces were sensitive to this manipulation. Importantly, this effect could not be attributed to compromised object categorization in low depth stimuli, confirming a strong impact of visual stimulus features on object-related ERP signals. As ERP components linked with visual categorical object perception are likely determined by multiple stimulus features, this creates an interesting inverse problem when deriving specific perceptual processes from variations in ERP components.

9.
Previous studies have demonstrated a left perceptual bias while looking at faces, due to the fact that observers mainly use information from the left side of a face (from the observer's point of view) to perform a judgment task. Such a bias is consistent with the right hemisphere dominance for face processing and has sometimes been linked to a left gaze bias, i.e., more and/or longer fixations on the left side of the face. Here, we recorded eye movements in two different experiments during a gender judgment task, using normal and chimeric faces which were presented above, below, to the right or left of the central fixation point, or on it (central position). Participants performed the judgment task by remaining fixated on the fixation point or after executing several saccades (up to three). A left perceptual bias was not systematically found, as it depended on the number of allowed saccades and face position. Moreover, the gaze bias clearly depended on the face position, as the initial fixation was guided by face position and landed on the closest half-face, toward the center of gravity of the face. The analysis of the subsequent fixations revealed that observers move their eyes from one side to the other. More importantly, no apparent link between gaze and perceptual biases was found here. This implies that we do not necessarily look toward the side of the face that we use to make a gender judgment. Despite the fact that these results may be limited by the absence of perceptual and gaze biases in some conditions, we emphasize the inter-individual differences observed in terms of perceptual bias, hinting at the importance of performing individual analyses and drawing attention to the influence of the method used to study this bias.

10.
Human observers are remarkably proficient at recognizing expressions of emotions and at readily grouping them into distinct categories. When morphing one facial expression into another, the linear changes in low-level features are insufficient to describe the changes in perception, which instead follow an s-shaped function. Important questions are whether there are single diagnostic regions in the face that drive categorical perception for certain pairings of emotion expressions, and how information in those regions interacts when presented together. We report results from two experiments with morphed fear-anger expressions, where (a) half of the face was masked or (b) composite faces made up of different expressions were presented. When isolated upper and lower halves of faces were shown, the eyes were found to be almost as diagnostic as the whole face, with the response function showing a steep category boundary. In contrast, the mouth allowed for substantially lower accuracy, and responses followed a much flatter psychometric function. When a composite face consisting of mismatched upper and lower halves was used and observers were instructed to exclusively judge either the expression of the mouth or the eyes, the to-be-ignored part always influenced perception of the target region. In line with Experiment 1, the eye region exerted a much stronger influence on mouth judgements than vice versa. Again, categorical perception was significantly more pronounced for upper halves of faces. The present study shows that identification of fear and anger in morphed faces relies heavily on information from the upper half of the face, most likely the eye region. Categorical perception is possible when only the upper face half is present, but compromised when only the lower part is shown. Moreover, observers tend to integrate all available features of a face, even when trying to focus on only one part.
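The s-shaped response function and the contrast between "steep" and "flat" psychometric functions described above are typically quantified by fitting a logistic to the proportion of responses at each morph level. A minimal sketch with hypothetical, noiseless toy data (the morph levels, boundary, and steepness values are illustrative, not from the study):

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, x0, k):
    # Proportion of "anger" responses along a fear-to-anger morph axis:
    # x0 is the category boundary, k the steepness of the transition.
    return 1.0 / (1.0 + np.exp(-k * (x - x0)))

# Hypothetical response proportions at 7 morph levels (0 = fear, 1 = anger)
morph_levels = np.linspace(0.0, 1.0, 7)
p_anger = logistic(morph_levels, 0.5, 12.0)  # noiseless toy responses

# Recover boundary and slope by least-squares fitting
(x0_hat, k_hat), _ = curve_fit(logistic, morph_levels, p_anger, p0=[0.4, 5.0])
```

A steep fitted `k` corresponds to sharp categorical perception (as for the eye region); a shallow `k` corresponds to the flatter function reported for the mouth.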

11.
Whether face adaptation confers any advantages to perceptual processing remains an open question. We investigated whether face adaptation can enhance the ability to make fine discriminations in the vicinity of the adapted face. We compared face discrimination thresholds in three adapting conditions: (i) same-face: where adapting and test faces were the same, (ii) different-face: where adapting and test faces differed, and (iii) baseline: where the adapting stimulus was a blank. Discrimination thresholds for morphed identity changes involving the adapted face (same-face) improved compared with those from both the baseline (no-adaptation) and different-face conditions. Since adapting to a face did not alter discrimination performance for other faces, this effect is selective for the facial identity that is adapted. These results indicate a form of gain control to heighten perceptual sensitivity in the vicinity of a currently viewed face, analogous to forms of adaptive gain control at lower levels of the visual system.

12.
M Latinus, P Belin. PLoS ONE, 2012, 7(7): e41384
Humans can identify individuals from their voice, suggesting the existence of a perceptual representation of voice identity. We used perceptual aftereffects - shifts in perceived stimulus quality after brief exposure to a repeated adaptor stimulus - to further investigate the representation of voice identity in two experiments. Healthy adult listeners were familiarized with several voices until they reached a recognition criterion. They were then tested on identification tasks that used vowel stimuli generated by morphing between the different identities, presented either in isolation (baseline) or following short exposure to different types of voice adaptors (adaptation). Experiment 1 showed that adaptation to a given voice induced categorization shifts away from that adaptor's identity even when the adaptors consisted of vowels different from the probe stimuli. Moreover, original voices and caricatures resulted in comparable aftereffects, ruling out an explanation of identity aftereffects in terms of adaptation to low-level features. Experiment 2 showed that adaptors with a disrupted configuration, i.e., altered fundamental frequency or formant frequencies, failed to produce perceptual aftereffects, demonstrating the importance of the preserved configuration of these acoustical cues in the representation of voices. These two experiments indicate a high-level, dynamic representation of voice identity based on the combination of several lower-level acoustical features into a specific voice configuration.

13.
Cognitive theories of depression posit that perception is negatively biased in depressive disorder. Previous studies have provided empirical evidence for this notion, but left open the question whether the negative perceptual bias reflects a stable trait or the current depressive state. Here we investigated the stability of negatively biased perception over time. Emotion perception was examined in patients with major depressive disorder (MDD) and healthy control participants in two experiments. In the first experiment subjective biases in the recognition of facial emotional expressions were assessed. Participants were presented with faces that were morphed between sad and neutral and happy expressions and had to decide whether the face was sad or happy. The second experiment assessed automatic emotion processing by measuring the potency of emotional faces to gain access to awareness using interocular suppression. A follow-up investigation using the same tests was performed three months later. In the emotion recognition task, patients with major depression showed a shift in the criterion for the differentiation between sad and happy faces: In comparison to healthy controls, patients with MDD required a greater intensity of the happy expression to recognize a face as happy. After three months, this negative perceptual bias was reduced in comparison to the control group. The reduction in negative perceptual bias correlated with the reduction of depressive symptoms. In contrast to previous work, we found no evidence for preferential access to awareness of sad vs. happy faces. Taken together, our results indicate that MDD-related perceptual biases in emotion recognition reflect the current clinical state rather than a stable depressive trait.
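A criterion shift like the one reported above is conventionally expressed with the signal-detection measure c, computed from hit and false-alarm rates. A minimal sketch with hypothetical rates (the numbers are illustrative, not taken from the study):

```python
from statistics import NormalDist

def sdt_criterion(hit_rate, fa_rate):
    """Signal-detection criterion c = -(z(H) + z(FA)) / 2.

    For a happy/sad decision, treat 'happy' responses to happy faces as
    hits and 'happy' responses to sad faces as false alarms; positive c
    means more evidence is required before responding 'happy'.
    """
    z = NormalDist().inv_cdf
    return -0.5 * (z(hit_rate) + z(fa_rate))

# Hypothetical groups: an unbiased observer vs. one needing stronger
# happy-expression intensity before labelling a face as happy.
c_unbiased = sdt_criterion(0.85, 0.15)      # symmetric rates -> c = 0
c_conservative = sdt_criterion(0.60, 0.05)  # c > 0: biased toward 'sad'
```

The positive c for the second observer formalizes the report that patients required greater happy-expression intensity to call a face happy.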

14.
Faces are visual objects that hold special significance as the icons of other minds. Previous researchers using event-related potentials (ERPs) have found that faces are uniquely associated with an increased N170/vertex positive potential (VPP) and a more sustained frontal positivity. Here, we examined the processing of faces as objects vs. faces as cues to minds by contrasting images of faces possessing minds (human faces), faces lacking minds (doll faces), and non-face objects (i.e., clocks). Although both doll and human faces were associated with an increased N170/VPP from 175-200 ms following stimulus onset, only human faces were associated with a sustained positivity beyond 400 ms. Our data suggest that the N170/VPP reflects the object-based processing of faces, whether of dolls or humans; on the other hand, the later positivity appears to uniquely index the processing of human faces, which are more salient and convey information about identity and the presence of other minds.

15.
The visual system is tuned for rapid detection of faces, with the fastest choice saccade to a face at 100 ms. Familiar faces have a more robust representation than do unfamiliar faces, and are detected faster in the absence of awareness and with reduced attentional resources. Faces of family and close friends become familiar over a protracted period involving learning the unique visual appearance, including a view-invariant representation, as well as person knowledge. We investigated the effect of personal familiarity on the earliest stages of face processing by using a saccadic-choice task to measure how fast familiar face detection can happen. Subjects made correct and reliable saccades to familiar faces when unfamiliar faces were distractors at 180 ms: very rapid saccades, 30 to 70 ms earlier than the earliest evoked potential modulated by familiarity. By contrast, accuracy of saccades to unfamiliar faces with familiar faces as distractors did not exceed chance. Saccades to faces with object distractors were even faster (110 to 120 ms) and equivalent for familiar and unfamiliar faces, indicating that familiarity does not affect ultra-rapid saccades. We propose that detectors of diagnostic facial features for familiar faces develop in visual cortices through learning and allow rapid detection that precedes explicit recognition of identity.

16.
Neural manifestations of memory with and without awareness
Paller KA, Hutson CA, Miller BB, Boehm SG. Neuron, 2003, 38(3): 507-516
Neurophysiological events responsible for different types of human memory tend to occur concurrently and are therefore difficult to measure independently. To surmount this problem, we produced perceptual priming (indicated by speeded responses) in the absence of conscious remembering. At encoding, faces appeared briefly while subjects' attention was diverted to other stimuli. Faces appeared again in either an implicit or explicit memory test. Neural correlates of priming were identified as brain potentials beginning 270 ms after face onset with more negative amplitudes for repeated than for new faces. Remembered faces, in contrast, activated a different configuration of intracranial sources producing positive potentials maximal at 600-700 ms. We thus disentangled and characterized distinct neural events associated with memory with and without awareness.

17.
Recent evidence suggests that while reflectance information (including color) may be more diagnostic for familiar face recognition, shape may be more diagnostic for unfamiliar face identity processing. Moreover, event-related potential (ERP) findings suggest an earlier onset for neural processing of facial shape compared to reflectance. In the current study, we aimed to explore specifically the roles of facial shape and color in a familiarity decision task using pre-experimentally familiar (famous) and unfamiliar faces that were caricatured either in shape-only, color-only, or both (full; shape + color) by 15%, 30%, or 45%. We recorded accuracies, mean reaction times, and face-sensitive ERPs. Performance data revealed that shape caricaturing facilitated identity processing for unfamiliar faces only. In the ERP data, such effects of shape caricaturing emerged earlier than those of color caricaturing. Unsurprisingly, ERP effects were accentuated for larger levels of caricaturing. Overall, our findings corroborate the importance of shape for identity processing of unfamiliar faces and demonstrate an earlier onset of neural processing for facial shape compared to color.

18.
Faces are processed by a neural system with distributed anatomical components, but the roles of these components remain unclear. A dominant theory of face perception postulates independent representations of invariant aspects of faces (e.g., identity) in ventral temporal cortex including the fusiform gyrus, and changeable aspects of faces (e.g., emotion) in lateral temporal cortex including the superior temporal sulcus. Here we recorded neuronal activity directly from the cortical surface in 9 neurosurgical subjects undergoing epilepsy monitoring while they viewed static and dynamic facial expressions. Applying novel decoding analyses to the power spectrogram of electrocorticograms (ECoG) from over 100 contacts in ventral and lateral temporal cortex, we found better representation of both invariant and changeable aspects of faces in ventral than lateral temporal cortex. Critical information for discriminating faces from geometric patterns was carried by power modulations between 50 and 150 Hz. For both static and dynamic face stimuli, we obtained a higher decoding performance in ventral than lateral temporal cortex. For discriminating fearful from happy expressions, critical information was carried by power modulation between 60–150 Hz and below 30 Hz, and was again better decoded in ventral than lateral temporal cortex. Task-relevant attention improved decoding accuracy by more than 10% across a wide frequency range in ventral, but not at all in lateral, temporal cortex. Spatial searchlight decoding showed that decoding performance was highest around the middle fusiform gyrus. Finally, we found that the right hemisphere, in general, showed superior decoding to the left hemisphere. Taken together, our results challenge the dominant model of independent face representation of invariant and changeable aspects: information about both face attributes was better decoded from a single region in the middle fusiform gyrus.
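Decoding from a power spectrogram, as above, starts from band-limited power features. A minimal sketch of computing mean power in a frequency band via a periodogram (illustrative only; the study's actual pipeline uses ECoG spectrograms plus a trained classifier, neither of which is shown here):

```python
import numpy as np

def bandpower(signal, fs, f_lo, f_hi):
    """Mean periodogram power of a 1-D signal within [f_lo, f_hi] Hz."""
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / signal.size
    band = (freqs >= f_lo) & (freqs <= f_hi)
    return psd[band].mean()

# A toy 100 Hz sinusoid carries its power in the 50-150 Hz band,
# mimicking the high-gamma modulations the study found informative.
fs = 1000.0
t = np.arange(0, 1.0, 1.0 / fs)
x = np.sin(2 * np.pi * 100.0 * t)
high_gamma = bandpower(x, fs, 50.0, 150.0)
low_freq = bandpower(x, fs, 1.0, 30.0)
```

Features like `high_gamma`, computed per contact and time window, would then feed a classifier that discriminates, e.g., fearful from happy trials.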

19.
The perception of emotions is often suggested to be multimodal in nature, and bimodal as compared to unimodal (auditory or visual) presentation of emotional stimuli can lead to superior emotion recognition. In previous studies, contrastive aftereffects in emotion perception caused by perceptual adaptation have been shown for faces and for auditory affective vocalizations, when adaptors were of the same modality. By contrast, crossmodal aftereffects in the perception of emotional vocalizations have not been demonstrated yet. In three experiments we investigated the influence of emotional voice as well as dynamic facial video adaptors on the perception of emotion-ambiguous voices morphed on an angry-to-happy continuum. Contrastive aftereffects were found for unimodal (voice) adaptation conditions, in that test voices were perceived as happier after adaptation to angry voices, and vice versa. Bimodal (voice + dynamic face) adaptors tended to elicit larger contrastive aftereffects. Importantly, crossmodal (dynamic face) adaptors also elicited substantial aftereffects in male, but not in female, participants. Our results (1) support the idea of contrastive processing of emotions, (2) show for the first time crossmodal adaptation effects under certain conditions, consistent with the idea that emotion processing is multimodal in nature, and (3) suggest gender differences in the sensory integration of facial and vocal emotional stimuli.

20.
Chemosensory communication of anxiety is a common phenomenon in vertebrates and improves perceptual and responsive behaviour in the perceiver in order to optimize ontogenetic survival. A few rating studies reported a similar phenomenon in humans. Here, we investigated whether subliminal face perception changes in the context of chemosensory anxiety signals. Axillary sweat samples were taken from 12 males while they were waiting for an academic examination and while performing ergometric exercise some days later. Sixteen subjects (eight females) participated in an emotional priming study, using happy, fearful, and sad facial expressions as primes (11.7 ms) and neutral faces as targets (47 ms). The pooled chemosensory samples were presented before and during picture presentation (920 ms). In the context of chemosensory stimuli derived from sweat samples taken during the sport condition, subjects judged the targets significantly more positively when they were primed by a happy face than when they were primed by the negative facial expressions (P = 0.02). In the context of the chemosensory anxiety signals, the priming effect of the happy faces was diminished in females (P = 0.02), but not in males. It is discussed whether, in socially relevant ambiguous perceptual conditions, chemosensory signals have a processing advantage and dominate visual signals, or whether fear signals in general have a stronger behavioural impact than positive signals.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司)  京ICP备09084417号