Similar Articles
20 similar articles found (search time: 15 ms)
1.

Background

When viewing complex scenes, East Asians attend more to contexts whereas Westerners attend more to objects, reflecting cultural differences in holistic and analytic visual processing styles respectively. This eye-tracking study investigated more specific mechanisms and the robustness of these cultural biases in visual processing when salient changes in the objects and backgrounds occur in complex pictures.

Methodology/Principal Findings

Chinese Singaporean (East Asian) and Caucasian US (Western) participants passively viewed pictures containing selectively changing objects and background scenes that strongly captured participants' attention in a data-driven manner. We found that although participants from both groups responded to object changes in the pictures, there was still evidence for cultural divergence in eye-movements. The number of object fixations in the US participants was more affected by object change than in the Singapore participants. Additionally, despite the picture manipulations, US participants consistently maintained longer durations for both object and background fixations, with eye-movements that generally remained within the focal objects. In contrast, Singapore participants had shorter fixation durations with eye-movements that alternated more between objects and backgrounds.

Conclusions/Significance

The results demonstrate a robust cultural bias in visual processing even when external stimuli draw attention in an opposite manner to the cultural bias. These findings also extend previous studies by revealing more specific, but consistent, effects of culture on the different aspects of visual attention as measured by fixation duration, number of fixations, and saccades between objects and backgrounds.

2.

Background

Behavioural studies have highlighted irregularities in recognition of facial affect in children and young people with autism spectrum disorders (ASDs). Recent findings from studies utilising electroencephalography (EEG) and magnetoencephalography (MEG) have identified abnormal activation and irregular maintenance of gamma (>30 Hz) range oscillations when ASD individuals attempt basic visual and auditory tasks.

Methodology/Principal Findings

The pilot study reported here is the first to use spatial filtering techniques in MEG to explore face processing in children with ASD. We set out to examine theoretical suggestions that gamma activation underlying face processing may be different in a group of children and young people with ASD (n = 13) compared to typically developing (TD) age-, gender- and IQ-matched controls. Beamforming and virtual electrode techniques were used to assess spatially localised induced and evoked activity. While lower-band (3–30 Hz) responses to faces were similar between groups, the ASD gamma response in occipital areas was largely absent when viewing emotions on faces. Virtual electrode analysis indicated the presence of intact evoked responses but abnormal induced activity in ASD participants.

Conclusions/Significance

These findings lend weight to previous suggestions that specific components of the early visual response to emotional faces are abnormal in ASD. Elucidation of the nature and specificity of these findings is worthy of further research.

3.
Balas B, Cox D, Conwell E. PLoS ONE. 2007;2(11):e1223

Background

Previous studies have explored the effects of familiarity on various kinds of visual face judgments, yet the role of familiarity in face processing is not fully understood. Across different face judgments and stimulus sets, the data are equivocal as to whether familiarity impacts recognition processes.

Methodology/Principal Findings

Here, we examine the effect of real-world personal familiarity in three simple delayed-match-to-sample tasks in which subjects were required to match faces on the basis of orientation (upright v. inverted), gender and identity. We find that subjects had a significant speed advantage with familiar faces in all three tasks, with large effects for the gender and identity matching tasks.

Conclusion/Significance

Our data indicate that real-world experience with a face exerts a powerful influence on face processing in tasks where identity information is irrelevant, even in tasks that could in principle be solved via low-level cues. These results underscore the importance of experience in shaping visual recognition processes.

4.

Background

Experience can alter how objects are represented in the visual cortex. But experience can take different forms. It is unknown whether the kind of visual experience systematically alters the nature of visual cortical object representations.

Methodology/Principal Findings

We take advantage of different training regimens found to produce qualitatively different types of perceptual expertise behaviorally in order to contrast the neural changes that follow different kinds of visual experience with the same objects. Two groups of participants went through training regimens that required either subordinate-level individuation or basic-level categorization of a set of novel, artificial objects, called “Ziggerins”. fMRI activity of a region in the right fusiform gyrus increased after individuation training and was correlated with the magnitude of configural processing of the Ziggerins observed behaviorally. In contrast, categorization training caused distributed changes, with increased activity in the medial portion of the ventral occipito-temporal cortex relative to more lateral areas.

Conclusions/Significance

Our results demonstrate that the kind of experience with a category of objects can systematically influence how those objects are represented in visual cortex. The demands of prior learning experience therefore appear to be one factor determining the organization of activity patterns in visual cortex.

5.

Background

Tool use in humans requires that multisensory information be integrated across different locations: objects are seen at a distance from the hand but felt indirectly at the hand via the tool. We tested the hypothesis that using a simple tool to perceive vibrotactile stimuli results in enhanced processing of visual stimuli presented at the distal, functional part of the tool. Such a finding would be consistent with a shift of spatial attention to the location where the tool is used.

Methodology/Principal Findings

We tested this hypothesis by scanning healthy human participants' brains using functional magnetic resonance imaging, while they used a simple tool to discriminate between target vibrations, accompanied by congruent or incongruent visual distractors, on the same or opposite side to the tool. The attentional hypothesis was supported: BOLD response in occipital cortex, particularly in the right hemisphere lingual gyrus, varied significantly as a function of tool position, increasing contralaterally, and decreasing ipsilaterally to the tool. Furthermore, these modulations occurred despite the fact that participants were repeatedly instructed to ignore the visual stimuli, to respond only to the vibrotactile stimuli, and to maintain visual fixation centrally. In addition, the magnitude of multisensory (visual-vibrotactile) interactions in participants' behavioural responses significantly predicted the BOLD response in occipital cortical areas that were also modulated as a function of both visual stimulus position and tool position.

Conclusions/Significance

These results show that using a simple tool to locate and to perceive vibrotactile stimuli is accompanied by a shift of spatial attention to the location where the functional part of the tool is used, resulting in enhanced processing of visual stimuli at that location, and decreased processing at other locations. This was most clearly observed in the right hemisphere lingual gyrus. Such modulations of visual processing may reflect the functional importance of visuospatial information during human tool use.

6.
Ganel T, Freud E, Chajut E, Algom D. PLoS ONE. 2012;7(4):e36253

Background

Human resolution for object size is typically determined by psychophysical methods that are based on conscious perception. In contrast, grasping of the same objects might be less conscious. It is suggested that grasping is mediated by mechanisms other than those mediating conscious perception. In this study, we compared the visual resolution for object size of the visuomotor and the perceptual system.

Methodology/Principal Findings

In Experiment 1, participants discriminated the size of pairs of objects once through perceptual judgments and once by grasping movements toward the objects. Notably, the actual size differences were set below the Just Noticeable Difference (JND). We found that grasping trajectories reflected the actual size differences between the objects regardless of the JND. This pattern was observed even in trials in which the perceptual judgments were erroneous. The results of an additional control experiment showed that these findings were not confounded by task demands. Participants were not aware, therefore, that their size discrimination via grasp was veridical.
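The JND referred to above is conventionally estimated from a psychometric function as half the distance between its 25% and 75% points. A minimal sketch of that computation, with hypothetical data and a hypothetical function name (not taken from the study):

```python
import numpy as np

def jnd_from_psychometric(size_diffs, p_larger):
    # JND as half the 25%-to-75% spread of the psychometric function,
    # located by linear interpolation (a common psychophysics convention).
    x25 = np.interp(0.25, p_larger, size_diffs)
    x75 = np.interp(0.75, p_larger, size_diffs)
    return (x75 - x25) / 2.0

# Hypothetical data: proportion of "larger" responses vs. size difference (mm)
diffs = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
p = np.array([0.05, 0.20, 0.50, 0.80, 0.95])
jnd = jnd_from_psychometric(diffs, p)  # differences below this tend to go unnoticed
```

Size differences smaller than this value are, by definition, discriminated perceptually at near-chance levels, which is what makes the above-chance grasping performance notable.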

Conclusions/Significance

We conclude that human resolution is not fully tapped by perceptually determined thresholds. Grasping likely exhibits greater resolving power than people usually realize.

7.

Background

The present study sought to clarify the relationship between empathy trait and attention responses to happy, angry, surprised, afraid, and sad facial expressions. As indices of attention, we recorded event-related potentials (ERP) and focused on N170 and late positive potential (LPP) components.

Methods

Twenty-two participants (12 males, 10 females) discriminated facial expressions (happy, angry, surprised, afraid, and sad) from emotionally neutral faces under an oddball paradigm. The empathy trait of participants was measured using the Interpersonal Reactivity Index (IRI, J Pers Soc Psychol 44:113–126, 1983).

Results

Participants with higher IRI scores showed: 1) more negative amplitude of N170 (140 to 200 ms) in the right posterior temporal area elicited by happy, angry, surprised, and afraid faces; 2) more positive amplitude of early LPP (300 to 600 ms) in the parietal area elicited in response to angry and afraid faces; and 3) more positive amplitude of late LPP (600 to 800 ms) in the frontal area elicited in response to happy, angry, surprised, afraid, and sad faces, compared to participants with lower IRI scores.

Conclusions

These results suggest that individuals with high empathy pay more attention to various facial expressions than those with low empathy, from the very early stage (reflected in N170) to the late stage (reflected in LPP) of face processing.

8.

Background

A key aspect of representations for object recognition and scene analysis in the ventral visual stream is the spatial frame of reference, be it a viewer-centered, object-centered, or scene-based coordinate system. Coordinate transforms from retinocentric space to other reference frames involve combining neural visual responses with extraretinal postural information.

Methodology/Principal Findings

We examined whether such spatial information is available to anterior inferotemporal (AIT) neurons in the macaque monkey by measuring the effect of eye position on responses to a set of simple 2D shapes. We report, for the first time, a significant eye position effect in over 40% of recorded neurons with small gaze angle shifts from central fixation. Although eye position modulates responses, it does not change shape selectivity.

Conclusions/Significance

These data demonstrate that spatial information is available in AIT for the representation of objects and scenes within a non-retinocentric frame of reference. More generally, the availability of spatial information in AIT calls into question the classic dichotomy in visual processing that associates object shape processing with ventral structures such as AIT but places spatial processing in a separate anatomical stream projecting to dorsal structures.

9.

Background

Since the pioneering study by Rosch and colleagues in the 1970s, it has been commonly agreed that basic-level perceptual categories (dog, chair…) are accessed faster than superordinate ones (animal, furniture…). Nevertheless, the speed at which objects presented in natural images can be processed in a rapid go/no-go superordinate visual categorization task has challenged this “basic level advantage”.

Principal Findings

Using the same task, we compared human processing speed when categorizing natural scenes as containing either an animal (superordinate level) or a specific animal (bird or dog, basic level). Human subjects require an additional 40–65 ms to decide whether an animal is a bird or a dog, and most errors are induced by non-target animals. Indeed, processing time is tightly linked with the type of non-target objects. Without any exemplar of the same superordinate category to ignore, the basic-level category is accessed as fast as the superordinate category, whereas the presence of animal non-targets induces both an increase in reaction time and a decrease in accuracy.

Conclusions and Significance

These results support the parallel distributed processing (PDP) theory and might reconcile recently published controversial studies. The visual system can quickly access a coarse/abstract visual representation that allows fast decisions for superordinate categorization of objects, but additional time-consuming visual analysis would be necessary for a decision at the basic level based on more detailed representations.

10.

Background

Do peripersonal space for acting on objects and interpersonal space for interacting with conspecifics share common mechanisms and reflect the social valence of stimuli? To answer this question, we investigated whether these spaces refer to a similar or different physical distance.

Methodology

Participants provided reachability-distance (for potential action) and comfort-distance (for social processing) judgments towards human and non-human virtual stimuli while standing still (passive) or walking toward stimuli (active).

Principal Findings

Comfort-distance was larger than reachability-distance when participants were passive, but the two distances were similar when participants were active. Both spaces were modulated by the social valence of stimuli (reduction with virtual females vs. males, expansion with a cylinder vs. a robot) and by the gender of participants.

Conclusions

These findings reveal that peripersonal reaching and interpersonal comfort spaces share a common motor nature and are sensitive, to different degrees, to social modulation. Therefore, social processing seems embodied and grounded in the body acting in space.

11.

Background

The humanoid robot WE4-RII was designed to express human emotions in order to improve human-robot interaction. We can read the emotions depicted in its gestures, yet we might rely on different neural processes than those used for reading emotions in human agents.

Methodology

Here, fMRI was used to assess how brain areas activated by the perception of human basic emotions (facial expression of Anger, Joy, Disgust) and silent speech respond to a humanoid robot impersonating the same emotions, while participants were instructed to attend either to the emotion or to the motion depicted.

Principal Findings

Increased responses to robot compared to human stimuli in the occipital and posterior temporal cortices suggest additional visual processing when perceiving a mechanical anthropomorphic agent. In contrast, activity in cortical areas endowed with mirror properties, like the left Broca's area for the perception of speech, and in areas involved in the processing of emotions, like the left anterior insula for the perception of disgust and the orbitofrontal cortex for the perception of anger, is reduced for robot stimuli, suggesting lesser resonance with the mechanical agent. Finally, instructions to explicitly attend to the emotion significantly increased the response to robot, but not human, facial expressions in the anterior part of the left inferior frontal gyrus, a neural marker of motor resonance.

Conclusions

Motor resonance towards a humanoid robot's, but not a human's, display of facial emotion is increased when attention is directed towards judging emotions.

Significance

Artificial agents can be used to assess how factors like anthropomorphism affect neural response to the perception of human actions.

12.

Background

Some studies have reported gender differences in N170, a face-selective event-related potential (ERP) component. This study investigated gender differences in N170 elicited under oddball paradigm in order to clarify the effect of task demand on gender differences in early facial processing.

Findings

Twelve males and 10 females discriminated targets (emotional faces) from non-targets (emotionally neutral faces) under an oddball paradigm, pressing a button as quickly as possible in response to the target. Clear N170 was elicited in response to target and non-target stimuli in both males and females. However, females showed more negative amplitude of N170 in response to target compared with non-target, while males did not show different N170 responses between target and non-target.

Conclusions

The present results suggest that females allocate attention to faces at an early processing stage when responding to them actively (targets) compared with viewing them passively (non-targets). This supports previous findings suggesting that task demand is an important factor in gender differences in N170.

13.

Background

Major depressive disorder (MDD) is associated with a mood-congruent processing bias in the amygdala toward face stimuli portraying sad expressions that is evident even when such stimuli are presented below the level of conscious awareness. The extended functional anatomical network that maintains this response bias has not been established, however.

Aims

To identify neural network differences in the hemodynamic response to implicitly presented facial expressions between depressed and healthy control participants.

Method

Unmedicated depressed participants with MDD (n = 22) and healthy controls (HC; n = 25) underwent functional MRI as they viewed face stimuli showing sad, happy or neutral facial expressions, presented using a backward masking design. The blood-oxygen-level-dependent (BOLD) signal was measured to identify regions where the hemodynamic response to the emotionally valenced stimuli differed between groups.

Results

The MDD subjects showed greater BOLD responses than the controls to masked-sad versus masked-happy faces in the hippocampus, amygdala and anterior inferotemporal cortex. While viewing both masked-sad and masked-happy faces relative to masked-neutral faces, the depressed subjects showed greater hemodynamic responses than the controls in a network that included the medial and orbital prefrontal cortices and anterior temporal cortex.

Conclusions

Depressed and healthy participants showed distinct hemodynamic responses to masked-sad and masked-happy faces in neural circuits known to support the processing of emotionally valenced stimuli and to integrate the sensory and visceromotor aspects of emotional behavior. Altered function within these networks in MDD may establish and maintain illness-associated differences in the salience of sensory/social stimuli, such that attention is biased toward negative and away from positive stimuli.

14.

Background

Schizophrenia is associated with impairments of the perception of objects, but how this affects higher cognitive functions, whether this impairment is already present after recent onset of psychosis, and whether it is specific to schizophrenia-related psychosis, is not clear. We therefore tested the hypothesis that, because schizophrenia is associated with impaired object perception, schizophrenia patients should differ from healthy controls in shifting attention between objects. To test this hypothesis, we used a task that allowed us to separately observe space-based and object-based covert orienting of attention. To examine whether impairment of object-based visual attention is related to higher-order cognitive functions, standard neuropsychological tests were also administered.

Method

Patients with recent onset psychosis and normal controls performed the attention task, in which space- and object-based attention shifts were induced by cue-target sequences that required reorienting of attention within an object, or reorienting attention between objects.

Results

Patients with and without schizophrenia showed slower than normal spatial attention shifts, but the object-based component of attention shifts in patients was smaller than normal. Schizophrenia was specifically associated with slowed right-to-left attention shifts. Reorienting speed was significantly correlated with verbal memory scores in controls, and with visual attention scores in patients, but not with speed-of-processing scores in either group.

Conclusions

Deficits of object perception and spatial attention shifting are not only associated with schizophrenia, but are common to all psychosis patients. Schizophrenia patients differed only by having abnormally slow right-to-left visual field reorienting. Deficits of object perception and spatial attention shifting are already present after recent onset of psychosis. Studies investigating visual spatial attention should take into account the separable effects of space-based and object-based shifting of attention. Impaired reorienting in patients was related to impaired visual attention, but not to deficits of processing speed and verbal memory.

15.

Background

Theories of categorization make different predictions about the underlying processes used to represent categories. Episodic theories suggest that categories are represented by storing previously encountered exemplars in memory. Prototype theories suggest that categories are represented in the form of a prototype, independently of memory. A number of studies showing dissociations between categorization and recognition are often cited as evidence for the prototype account. These dissociations have compared recognition judgements made to one set of items with categorization judgements made to a different set of items, making clear interpretation difficult. Instead of using different stimuli for different tests, this experiment compares the processes by which participants make decisions about category membership in a prototype-distortion task with recognition decisions about the same set of stimuli, by examining the event-related potentials (ERPs) associated with them.

Method

Sixty-three participants were asked to make categorization or recognition decisions about stimuli that either formed an artificial category or that were category non-members. We examined the ERP components associated with both kinds of decision for pre-exposed and control participants.

Conclusion

In contrast to studies using different items, we observed no behavioural differences between the two kinds of decision; participants were equally able to distinguish category members from non-members, regardless of whether they were performing a recognition or categorization judgement. Interestingly, this did not interact with prior exposure. However, the ERP data demonstrated that the early visual evoked response that discriminated category members from non-members was modulated by which judgement participants performed and by whether they had been pre-exposed to category members. We conclude from this that any differences between categorization and recognition reflect differences in the information that participants focus on in the stimuli to make the judgements at test, rather than any differences in encoding or process.

16.

Background

One common criterion for classifying electrophysiological brain responses is based on the distinction between transient (i.e. event-related potentials, ERPs) and steady-state responses (SSRs). The generation of SSRs is usually attributed to the entrainment of a neural rhythm driven by the stimulus train. However, a more parsimonious account suggests that SSRs might result from the linear addition of the transient responses elicited by each stimulus. This study aimed to investigate this possibility.

Methodology/Principal Findings

We recorded brain potentials elicited by a checkerboard stimulus reversing at different rates. We modeled SSRs by sequentially shifting and linearly adding rate-specific ERPs. Our results show a strong resemblance between recorded and synthetic SSRs, supporting the superposition hypothesis. Furthermore, we did not find evidence of entrainment of a neural oscillation at the stimulation frequency.
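The linear-addition model described above can be sketched in a few lines: a synthetic SSR is built by shifting one transient ERP by the inter-stimulus interval and summing the copies. A toy illustration with a hypothetical damped-sinusoid ERP (the waveform and names are illustrative, not the recorded data):

```python
import numpy as np

def synthesize_ssr(erp, stim_interval, n_stim, fs):
    # Superposition model: one time-shifted copy of the transient ERP
    # per stimulus in the train, linearly added together.
    shift = int(round(stim_interval * fs))   # samples between stimuli
    ssr = np.zeros(shift * n_stim + len(erp))
    for k in range(n_stim):
        ssr[k * shift : k * shift + len(erp)] += erp
    return ssr

fs = 1000.0                                            # sampling rate (Hz)
t = np.arange(0, 0.3, 1 / fs)                          # 300 ms transient
erp = np.exp(-t / 0.05) * np.sin(2 * np.pi * 10 * t)   # toy damped sinusoid

# An 8 Hz stimulus train (125 ms between reversals)
ssr = synthesize_ssr(erp, stim_interval=0.125, n_stim=8, fs=fs)
```

Once the overlapping transients have built up, the synthetic trace repeats at the stimulation rate, which is the defining periodicity of an SSR; the study's comparison is between such a synthetic trace and the actually recorded SSR.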

Conclusions/Significance

This study provides evidence that visual SSRs can be explained as a superposition of transient ERPs. These findings have critical implications for our current understanding of brain oscillations. Contrary to the idea that neural networks can be tuned to a wide range of frequencies, our findings suggest that the oscillatory response of a given neural network is constrained within its natural frequency range.

17.

Background

Visually determining what is reachable in peripersonal space requires information about the egocentric location of objects but also information about the possibilities of action with the body, which are context dependent. The aim of the present study was to test the role of motor representations in the visual perception of peripersonal space.

Methodology

Seven healthy participants underwent a TMS study while performing a right-left decision (control) task or perceptually judging whether a visual target was reachable with their right hand. An actual grasping movement task was also included. Single-pulse TMS was delivered on 80% of the trials over the left motor and premotor cortex and over a control site (the temporo-occipital area), at 90% of the resting motor threshold and at different SOAs (50 ms, 100 ms, 200 ms or 300 ms).

Principal Findings

Results showed a facilitation effect of TMS on reaction times in all tasks, whatever the site stimulated, up to 200 ms after stimulus presentation. However, the facilitation effect was on average 34 ms smaller when stimulating the motor cortex in the perceptual judgement task, especially for stimuli located at the boundary of peripersonal space.

Conclusion

This study provides the first evidence that brain motor areas participate in the visual determination of what is reachable. We discuss how motor representations may feed the perceptual system with information about possible interactions with nearby objects and thus may contribute to the perception of the boundary of peripersonal space.

18.

Background

The regulation of energy intake is a complex process involving the integration of homeostatic signals and both internal and external sensory inputs. The objective of this study was to examine the effects of short-term overfeeding on the neuronal response to food-related visual stimuli in individuals prone and resistant to weight gain.

Methodology/Principal Findings

Twenty-two thin and 19 reduced-obese (RO) individuals were studied. Functional magnetic resonance imaging (fMRI) was performed in the fasted state after two days of eucaloric energy intake and after two days of 30% overfeeding, in a counterbalanced design. fMRI was performed while subjects viewed images of foods of high hedonic value and of neutral non-food objects. In the eucaloric state, food images, as compared with non-food images, elicited significantly greater activation of the insula and inferior visual cortex in thin as compared to RO individuals. Two days of overfeeding led to significant attenuation not only of insula and visual cortex responses but also of the hypothalamus response in thin as compared to RO individuals.

Conclusions/Significance

These findings emphasize the important role of food-related visual cues in ingestive behavior and suggest that there are important phenotypic differences in the interactions between external visual sensory inputs, energy balance status, and brain regions involved in the regulation of energy intake. Furthermore, alterations in the neuronal response to food cues may relate to the propensity to gain weight.

19.

Background

How does the brain estimate object stability? Objects fall over when the gravity-projected centre-of-mass lies outside the point or area of support. To estimate an object's stability visually, the brain must integrate information across the shape and compare its orientation to gravity. When observers lie on their sides, gravity is perceived as tilted toward body orientation, consistent with a representation of gravity derived from multisensory information. We exploited this to test whether vestibular and kinesthetic information affect this visual task or whether the brain estimates object stability solely from visual information.
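The tipping rule stated above yields a simple geometric critical angle: for an object whose centre of mass sits at height h above a support base of half-width b, the object tips once tilted beyond atan(b/h). A minimal sketch of that geometry, with hypothetical dimensions (not the study's stimuli):

```python
import math

def critical_angle_deg(com_height, half_base):
    # Tilt at which the gravity-projected centre of mass reaches the
    # edge of the support base; beyond this angle the object falls over.
    return math.degrees(math.atan(half_base / com_height))

# Hypothetical box: centre of mass 10 cm high, base 10 cm wide (half-width 5 cm)
angle = critical_angle_deg(com_height=0.10, half_base=0.05)  # ≈ 26.6 degrees
```

Taller objects (larger com_height relative to half_base) have smaller critical angles, which is why the physically correct answer in the task depends on the object's shape and not just its tilt.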

Methodology/Principal Findings

In three body orientations, participants viewed images of objects close to a table edge. We measured the critical angle at which each object appeared equally likely to fall over or right itself. Perceived gravity was measured using the subjective visual vertical. The results show that the perceived critical angle was significantly biased in the same direction as the subjective visual vertical (i.e., towards the multisensory estimate of gravity).

Conclusions/Significance

Our results rule out a general explanation that the brain depends solely on visual heuristics and assumptions about object stability. Instead, they suggest that multisensory estimates of gravity govern the perceived stability of objects, resulting in objects appearing more stable than they are when the head is tilted in the same direction in which they fall.

20.

Copyright © Beijing Qinyun Technology Development Co., Ltd. (京ICP备09084417号)