Similar Articles

20 similar articles found (search time: 31 ms)
1.
Kim J, Park S, Blake R. PLoS ONE. 2011;6(5):e19971

Background

Anomalous visual perception is a common feature of schizophrenia plausibly associated with impaired social cognition that, in turn, could affect social behavior. Past research suggests impairment in biological motion perception in schizophrenia. Behavioral and functional magnetic resonance imaging (fMRI) experiments were conducted to verify the existence of this impairment, to clarify its perceptual basis, and to identify accompanying neural concomitants of those deficits.

Methodology/Findings

In Experiment 1, we measured ability to detect biological motion portrayed by point-light animations embedded within masking noise. Experiment 2 measured discrimination accuracy for pairs of point-light biological motion sequences differing in the degree of perturbation of the kinematics portrayed in those sequences. Experiment 3 measured BOLD signals using event-related fMRI during a biological motion categorization task. Compared to healthy individuals, schizophrenia patients performed significantly worse on both the detection (Experiment 1) and discrimination (Experiment 2) tasks. Consistent with the behavioral results, the fMRI study revealed that healthy individuals exhibited strong activation to biological motion, but not to scrambled motion, in the posterior portion of the superior temporal sulcus (STSp). Interestingly, strong STSp activation was also observed for scrambled or partially scrambled motion when the healthy participants perceived it as normal biological motion. On the other hand, STSp activation in schizophrenia patients was not selective to biological or scrambled motion.

Conclusion

Schizophrenia is accompanied by difficulties discriminating biological from non-biological motion, and associated with those difficulties are altered patterns of neural responses within brain area STSp. The perceptual deficits exhibited by schizophrenia patients may be an exaggerated manifestation of neural events within STSp associated with perceptual errors made by healthy observers on these same tasks. The present findings fit within the context of theories of delusion involving perceptual and cognitive processes.

2.

Background

Several studies provide evidence that action observation elicits contagious responses during social interactions. However, automatic imitative tendencies are generally inhibited, and it remains unclear under which conditions mere action observation triggers motor behaviours. In this study, we addressed the question of contagious postural responses when observing human imbalance.

Methodology/Principal Findings

We recorded participants' body sway while they observed a fixation cross (control condition), an upright point-light display of a gymnast balancing on a rope, and the same point-light display presented upside down. Our results showed that, when the upright stimulus was displayed prior to the inverted one, centre of pressure area and antero-posterior path length were significantly greater in the upright condition compared to the control and upside down conditions.

Conclusions/Significance

These results demonstrate a contagious postural reaction suggesting a partial inefficiency of inhibitory processes. Further, kinematic information was sufficient to trigger this reaction. The difference recorded between the upright and upside down conditions indicates that the contagion effect was dependent on the integration of gravity constraints by body kinematics. Interestingly, the postural response was sensitive to habituation, and seemed to disappear when the observer was previously shown an inverted display. The motor contagion recorded here is consistent with previous work showing vegetative output during observation of an effortful movement and could indicate that lower level control facilitates contagion effects.

3.

Background

Congenital prosopagnosia (CP) describes an impairment in face processing that is presumably present from birth. The neuronal correlates of this dysfunction are still under debate. In the current paper, we investigate high-frequency oscillatory activity in response to faces in persons with CP. Such neuronal activity is thought to reflect higher-level representations for faces.

Methodology

Source localization of induced Gamma-Band Responses (iGBR) measured by magnetoencephalography (MEG) was used to establish the origin of oscillatory activity in response to famous and unknown faces which were presented in upright and inverted orientation. Persons suffering from congenital prosopagnosia (CP) were compared to matched controls.

Principal Findings

Corroborating earlier research, both groups revealed amplified iGBR in response to upright compared to inverted faces, predominantly in a time interval between 170 and 330 ms and in a frequency range from 50–100 Hz. Oscillatory activity in response to known faces was smaller than for unknown faces, suggesting a “sharpening” effect reflecting more efficient processing of familiar stimuli. These effects were seen in a wide cortical network encompassing temporal and parietal areas involved in the disambiguation of homogeneous stimuli such as faces, and in the retrieval of semantic information. Importantly, participants suffering from CP displayed a strongly reduced iGBR in the left fusiform area compared to control participants.

Conclusions

In sum, these data stress the crucial role of oscillatory activity for face representation and demonstrate the involvement of a distributed occipito-temporo-parietal network in generating iGBR. This study also provides the first evidence that persons suffering from an agnosia actually display reduced gamma-band activity. Finally, the results argue strongly against the view that oscillatory activity is a mere epiphenomenon brought forth by rapid eye movements (microsaccades).

4.

Background

Tracking moving objects in space is important for the maintenance of spatiotemporal continuity in everyday visual tasks. In the laboratory, this ability is tested using the Multiple Object Tracking (MOT) task, where participants track a subset of moving objects with attention over an extended period of time. The ability to track multiple objects with attention is severely limited. Recent research has shown that this ability may improve with extensive practice (e.g., from action videogame playing). However, whether tracking also improves in a short training session with repeated trajectories has rarely been investigated. In this study we examine the role of visual learning in multiple-object tracking and characterize how varieties of attention interact with visual learning.

Methodology/Principal Findings

Participants first performed attentive tracking on trials with repeated motion trajectories in a short session. In a transfer phase we used the same motion trajectories but changed the roles of tracking targets and nontargets. We found that, compared with novel trials, tracking was enhanced only when the target subset was the same as that used during training. Learning did not transfer when the previously trained targets and nontargets switched roles or were mixed up. However, learning was not specific to the trained temporal order, as it transferred to trials where the motion was played backwards.

Conclusions/Significance

These findings suggest that a demanding task of tracking multiple objects can benefit from learning of repeated motion trajectories. Such learning potentially facilitates tracking in natural vision, although learning is largely confined to the trajectories of attended objects. Furthermore, we showed that learning in attentive tracking relies on relational coding of all target trajectories. Surprisingly, learning was not specific to the trained temporal context, probably because observers have learned motion paths of each trajectory independently of the exact temporal order.

5.
Saygin AP, Cook J, Blakemore SJ. PLoS ONE. 2010;5(10):e13491

Background

Perception of biological motion is linked to the action perception system in the human brain, abnormalities within which have been suggested to underlie impairments in social domains observed in autism spectrum conditions (ASC). However, the literature on biological motion perception in ASC is heterogeneous and it is unclear whether deficits are specific to biological motion, or might generalize to form-from-motion perception.

Methodology and Principal Findings

We compared psychophysical thresholds for both biological and non-biological form-from-motion perception in adults with ASC and controls. Participants viewed point-light displays depicting a walking person (Biological Motion), a translating rectangle (Structured Object) or a translating unfamiliar shape (Unstructured Object). The figures were embedded in noise dots that moved similarly and the task was to determine direction of movement. The number of noise dots varied on each trial and perceptual thresholds were estimated adaptively. We found no evidence for an impairment in biological or non-biological object motion perception in individuals with ASC. Perceptual thresholds in the three conditions were almost identical between the ASC and control groups.
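The adaptive estimation of perceptual thresholds described above is given only at a high level; the exact procedure the authors used is not specified here. Purely as an illustration, a minimal 2-down/1-up staircase of the kind commonly used for such psychophysical thresholds might look like the following sketch (the function name and parameter values are hypothetical):

```python
def two_down_one_up(trial_fn, start_level=50, step=5, n_trials=60):
    """Run a simple 2-down/1-up adaptive staircase over noise-dot count.

    trial_fn(level) -> True if the observer responds correctly at that level.
    Two consecutive correct responses make the task harder (more noise dots);
    one error makes it easier. This rule converges near the ~70.7%-correct
    point of the psychometric function. Returns the list of levels visited;
    a threshold estimate is typically the mean of the last few reversals.
    """
    level, correct_streak, levels = start_level, 0, []
    for _ in range(n_trials):
        levels.append(level)
        if trial_fn(level):
            correct_streak += 1
            if correct_streak == 2:       # two correct in a row -> harder
                level += step             # more noise dots = harder
                correct_streak = 0
        else:
            level = max(0, level - step)  # one error -> easier
            correct_streak = 0
    return levels
```

With a deterministic simulated observer who is correct whenever there are 40 or fewer noise dots, the visited levels settle into a narrow band just above that threshold.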

Discussion and Conclusions

Impairments in biological motion and non-biological form-from-motion perception are not found across the board in ASC, but only for some stimuli and tasks. We discuss our results in relation to other findings in the literature, the heterogeneity of which likely relates to the different tasks performed. It appears that individuals with ASC are unimpaired in perceptual processing of form-from-motion, but may exhibit impairments in higher-order judgments such as emotion processing. It is important to identify more specifically which processes of motion perception are affected in ASC before a link can be made between perceptual deficits and the higher-level features of the disorder.

6.

Background

The neural system of our closest living relative, the chimpanzee, is a topic of increasing research interest. However, electrophysiological examinations of neural activity during visual processing in awake chimpanzees are currently lacking.

Methodology/Principal Findings

In the present report, skin-surface event-related brain potentials (ERPs) were measured while a fully awake chimpanzee observed photographs of faces and objects in two experiments. In Experiment 1, human faces and stimuli composed of scrambled face images were displayed. In Experiment 2, three types of pictures (faces, flowers, and cars) were presented. The waveforms evoked by face stimuli were distinguished from other stimulus types, as reflected by an enhanced early positivity appearing before 200 ms post-stimulus, and an enhanced late negativity after 200 ms, around posterior and occipito-temporal sites. Face-sensitive activity was clearly observed in both experiments. However, in contrast to the robustly observed face-evoked N170 component in humans, we found that faces did not elicit a peak in the latency range of 150–200 ms in either experiment.

Conclusions/Significance

Although this pilot study examined a single subject and requires further examination, the observed scalp voltage patterns suggest that selective processing of faces in the chimpanzee brain can be detected by recording surface ERPs. In addition, this non-invasive method for examining an awake chimpanzee can be used to extend our knowledge of the characteristics of visual cognition in other primate species.

7.
Balas B, Cox D, Conwell E. PLoS ONE. 2007;2(11):e1223

Background

Previous studies have explored the effects of familiarity on various kinds of visual face judgments, yet the role of familiarity in face processing is not fully understood. Across different face judgments and stimulus sets, the data are equivocal as to whether or not familiarity impacts recognition processes.

Methodology/Principal Findings

Here, we examine the effect of real-world personal familiarity in three simple delayed-match-to-sample tasks in which subjects were required to match faces on the basis of orientation (upright vs. inverted), gender and identity. We find that subjects had a significant speed advantage with familiar faces in all three tasks, with large effects for the gender and identity matching tasks.

Conclusion/Significance

Our data indicate that real-world experience with a face exerts a powerful influence on face processing in tasks where identity information is irrelevant, even in tasks that could in principle be solved via low-level cues. These results underscore the importance of experience in shaping visual recognition processes.

8.
Park E, Schöner G, Scholz JP. PLoS ONE. 2012;7(8):e41583

Background

Studies of human upright posture typically have stressed the need to control ankle and hip joints to achieve postural stability. Recent studies, however, suggest that postural stability involves multi degree-of-freedom (DOF) coordination, especially when performing supra-postural tasks. This study investigated kinematic synergies related to control of the body’s position in space (two, four and six DOF models) and changes in the head’s orientation (six DOF model).

Methodology/Principal Findings

Subjects either tracked a vertically moving target with a head-mounted laser pointer or fixated a stationary point during 4-min trials. Uncontrolled manifold (UCM) analysis was performed across tracking cycles at each point in time to determine the structure of joint configuration variance related to postural stability or tracking consistency. The effect of simulated removal of covariance among joints on that structure was investigated to further determine the role of multijoint coordination. Results indicated that cervical joint motion was poorly coordinated with other joints to stabilize the position of the body center of mass (CM). However, cervical joints were coordinated in a flexible manner with more caudal joints to achieve consistent changes in head orientation.
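The core of a UCM analysis like the one described above is a partition of joint-configuration variance into a component lying in the null space of the Jacobian of the performance variable (joint covariation that leaves, e.g., the CM position unchanged) and an orthogonal component (variance that perturbs it). As a rough sketch only, assuming a known constant Jacobian and ignoring the per-time-point cycle alignment the authors performed, the decomposition could be computed as follows (names and normalization conventions are illustrative):

```python
import numpy as np

def ucm_variance(joint_configs, J):
    """Partition joint-configuration variance relative to a performance
    variable with Jacobian J (d x n joints).

    joint_configs: (n_cycles x n_joints) array of joint angles at one
    normalized time point. Returns (v_ucm, v_ort): per-DOF variance within
    the null space of J (does not affect the performance variable) and
    within its orthogonal complement (does affect it).
    """
    dev = joint_configs - joint_configs.mean(axis=0)  # deviations from mean config
    # Orthonormal bases of the null space and row space of J via SVD
    _, s, Vt = np.linalg.svd(J)
    rank = int(np.sum(s > 1e-10))
    null_basis = Vt[rank:].T    # n x (n - rank): the UCM directions
    range_basis = Vt[:rank].T   # n x rank: directions that change the variable
    n_cycles = dev.shape[0]
    v_ucm = np.sum((dev @ null_basis) ** 2) / (n_cycles * null_basis.shape[1])
    v_ort = np.sum((dev @ range_basis) ** 2) / (n_cycles * range_basis.shape[1])
    return v_ucm, v_ort
```

A synergy stabilizing the performance variable corresponds to v_ucm exceeding v_ort; in the study above, cervical joints contributed little to this structure for the CM but did for head orientation.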

Conclusions/Significance

An understanding of multijoint coordination requires reference to the stability/control of important performance variables. The nature of that coordination differs depending on the reference variable. Stability of upright posture primarily involved multijoint coordination of lower extremity and lower trunk joints. Consistent changes in the orientation of the head, however, required flexible coordination of those joints with motion of the cervical spine. A two-segment model of postural control was unable to account for the observed stability of the CM position during the tracking task, further supporting the need to consider multijoint coordination to understand postural stability.

9.

Background

Pharmacological studies suggest that cholinergic neurotransmission mediates increases in attentional effort in response to high processing load during attention demanding tasks [1].

Methodology/Principal Findings

In the present study we tested whether individual variation in CHRNA4, a gene coding for a subcomponent in α4β2 nicotinic receptors in the human brain, interacted with processing load in multiple-object tracking (MOT) and visual search (VS). We hypothesized that the impact of genotype would increase with greater processing load in the MOT task. Similarly, we predicted that genotype would influence performance under high but not low load in the VS task. Two hundred and two healthy persons (age range = 39–77, Mean = 57.5, SD = 9.4) performed the MOT task in which twelve identical circular objects moved about the display in an independent and unpredictable manner. Two to six objects were designated as targets and the remaining objects were distracters. The same observers also performed a visual search for a target letter (i.e. X or Z) presented together with five non-targets while ignoring centrally presented distracters (i.e. X, Z, or L). Targets differed from non-targets by a unique feature in the low load condition, whereas they shared features in the high load condition. CHRNA4 genotype interacted with processing load in both tasks. Homozygotes for the T allele (N = 62) had better tracking capacity in the MOT task and identified targets faster in the high load trials of the VS task.

Conclusion

The results support the hypothesis that the cholinergic system modulates attentional effort, and that common genetic variation can be used to study the molecular biology of cognition.

10.

Background

Human vision is vital in determining our interaction with the outside world. In this study we characterize our ability to judge changes in the direction of motion of objects, a common task that allows us either to intercept moving objects or to avoid them if they pose a threat.

Methodology/Principal Findings

Observers were presented with objects which moved across a computer monitor on a linear path until the midline, at which point they changed their direction of motion, and observers were required to judge the direction of change. In keeping with the variety of objects we encounter in the real world, we varied characteristics of the moving stimuli such as velocity, extent of motion path and the object size. Furthermore, we compared performance for moving objects with the ability of observers to detect a deviation in a line which formed the static trace of the motion path, since it has been suggested that a form of static memory trace may form the basis for these types of judgment. The static line judgments were well described by a ‘scale invariant’ model in which any two stimuli which possess the same two-dimensional geometry (length/width) result in the same level of performance. Performance for the moving objects was entirely different. Irrespective of the path length, object size or velocity of motion, path deviation thresholds depended simply upon the duration of the motion path in seconds.

Conclusions/Significance

Human vision has long been known to integrate information across space in order to solve spatial tasks such as judgment of orientation or position. Here we demonstrate an intriguing mechanism which integrates direction information across time in order to optimize the judgment of path deviation for moving objects.

11.

Background

The focus in the research on biological motion perception traditionally has been restricted to the visual modality. Recent neurophysiological and behavioural evidence, however, supports the idea that actions are not represented merely visually but rather audiovisually. The goal of the present study was to test whether the perceived in-depth orientation of depth-ambiguous point-light walkers (plws) is affected by the presentation of looming or receding sounds synchronized with the footsteps.

Methodology/Principal Findings

In Experiment 1, orthographic frontal/back projections of plws were presented either without sound or with sounds whose intensity was rising (looming), falling (receding) or constant. Despite instructions to ignore the sounds and to report only the visually perceived in-depth orientation, plws accompanied by looming sounds were more often judged to be facing the viewer, whereas plws paired with receding sounds were more often judged to be facing away from the viewer. To test whether the effects observed in Experiment 1 act at a perceptual rather than a decisional level, in Experiment 2 observers perceptually compared orthographic plws (without sound or paired with either looming or receding sounds) to plws without sound but with perspective cues making them objectively face either towards or away from the viewer. Judging whether an orthographic plw or a plw with looming (receding) perspective cues is visually more looming becomes harder (easier) when the orthographic plw is paired with looming sounds.

Conclusions/Significance

The present results suggest that looming and receding sounds alter the judgements of the in-depth orientation of depth-ambiguous point-light walkers. While looming sounds are demonstrated to act at a perceptual level and make plws look more looming, it remains a challenge for future research to clarify at what level in the processing hierarchy receding sounds affect how observers judge the in-depth perception of plws.

12.

Background

Beyond providing cues about an agent's intention, communicative actions convey information about the presence of a second agent towards whom the action is directed (second-agent information). In two psychophysical studies we investigated whether the perceptual system makes use of this information to infer the presence of a second agent when dealing with impoverished and/or noisy sensory input.

Methodology/Principal Findings

Participants observed point-light displays of two agents (A and B) performing separate actions. In the Communicative condition, agent B's action was performed in response to a communicative gesture by agent A. In the Individual condition, agent A's communicative action was replaced with a non-communicative action. Participants performed a simultaneous masking yes-no task, in which they were asked to detect the presence of agent B. In Experiment 1, we investigated whether criterion c was lowered in the Communicative condition compared to the Individual condition, thus reflecting a variation in perceptual expectations. In Experiment 2, we manipulated the congruence between A's communicative gesture and B's response, to ascertain whether the lowering of c in the Communicative condition reflected a truly perceptual effect. Results demonstrate that information extracted from communicative gestures influences the concurrent processing of biological motion by prompting perception of a second agent (second-agent effect).
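Criterion c in a yes-no task is the standard signal-detection-theory measure of response bias: a lower (more negative) c reflects a greater readiness to report "agent present", independent of sensitivity d′. A minimal sketch of how both are computed from response counts follows; the log-linear correction shown is one common convention for avoiding infinite z-scores, not necessarily the one the authors used:

```python
from statistics import NormalDist

def sdt_measures(hits, misses, false_alarms, correct_rejections):
    """Compute sensitivity d' and criterion c for a yes-no detection task.

    d' = z(hit rate) - z(false-alarm rate); c = -(z(HR) + z(FAR)) / 2.
    Rates are corrected with a log-linear rule (add 0.5 to each count's
    numerator, 1 to its denominator) so z() stays finite at 0 or 1.
    """
    z = NormalDist().inv_cdf
    hr = (hits + 0.5) / (hits + misses + 1)
    far = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    d_prime = z(hr) - z(far)
    c = -0.5 * (z(hr) + z(far))
    return d_prime, c
```

Under this convention, an observer who says "present" more often on signal-absent trials than an unbiased observer would (as hypothesized for the Communicative condition) shows a negative c even when d′ is unchanged.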

Conclusions/Significance

We propose that this finding is best explained within a Bayesian framework, which gives a powerful rationale for the pervasive role of prior expectations in visual perception.

13.

Background

Previous research has shown that individuals with Alzheimer's disease (AD) develop visuospatial difficulties that affect their ability to mentally rotate objects. Surprisingly, the existing literature has generally ignored the impact of this mental rotation deficit on the ability of AD patients to recognize faces from different angles. Instead, the devastating loss of the ability to recognize friends and family members in AD has primarily been attributed to memory loss and agnosia in later stages of the disorder. The impact of AD on areas of the brain important for mental rotation should not be overlooked by face processing investigations – even in early stages of the disorder.

Methodology/Principal Findings

This study investigated the sensitivity of face processing in AD, young controls and older non-neurological controls to two changes of the stimuli – a rotation in depth and an inversion. The control groups showed a systematic effect of depth rotation, with errors increasing with the angle of rotation, and with inversion. The majority of the AD group were not impaired when faces were presented upright and no transformation in depth was required, and they were most accurate when all faces were presented in frontal views; accuracy, however, was severely impaired by any rotation or inversion.

Conclusions/Significance

These results suggest that with the onset of AD, mental rotation difficulties arise that affect the ability to recognize faces presented at different angles. The finding that a frontal view is “preferred” by these patients provides a valuable communication strategy for health care workers.

14.

Background

While own-age faces have been reported to be better recognized than other-age faces, the underlying cause of this phenomenon remains unclear. One potential cause is holistic face processing, a special kind of perceptual and cognitive processing reserved for perceiving upright faces. Previous studies have indeed found that adults show stronger holistic processing when looking at adult faces compared to child faces, but whether a similar own-age bias exists in children remains to be shown.

Methodology/Principal Findings

Here we used the composite face task – a standard test of holistic face processing – to investigate if, for child faces, holistic processing is stronger for children than adults. Results showed child participants (8–13 years) had a larger composite effect than adult participants (22–65 years).

Conclusions/Significance

Our finding suggests that differences in strength of holistic processing may underlie the own-age bias on recognition memory. We discuss the origin of own-age biases in terms of relative experience, face-space tuning, and social categorization.

15.

Background

Observers misperceive the location of points within a scene as compressed towards the goal of a saccade. However, recent studies suggest that saccadic compression does not occur for discrete elements such as dots when they are perceived as unified objects like a rectangle.

Methodology/Principal Findings

We investigated the magnitude of horizontal vs. vertical compression for Kanizsa figures (collections of discrete elements unified into single perceptual objects by illusory contours) and control rectangle figures. Participants were presented with Kanizsa and control figures and had to decide whether the horizontal or vertical extent of the stimulus was longer, using the two-alternative forced-choice method. Our findings show that large but not small Kanizsa figures are perceived as compressed, and that such compression is large in the horizontal dimension and small or nil in the vertical dimension. In contrast to recent findings, we found no saccadic compression for control rectangles.

Conclusions

Our data suggest that compression of Kanizsa figures has been overestimated in previous research due to methodological artifacts, and highlight the importance of studying perceptual phenomena with multiple methods.

16.

Background

The simultaneous tracking and identification of multiple moving objects encountered in everyday life requires one to correctly bind identities to objects. In the present study, we investigated the role of spatial configuration made by multiple targets when observers are asked to track multiple moving objects with distinct identities.

Methodology/Principal Findings

The overall spatial configuration made by the targets was manipulated: In the constant condition, the configuration remained as a virtual convex polygon throughout the tracking, and in the collapsed condition, one of the moving targets (critical target) crossed over an edge of the virtual polygon during tracking, destroying it. Identification performance was higher when the configuration remained intact than when it collapsed (Experiments 1a, 1b, and 2). Moreover, destroying the configuration affected the allocation of dynamic attention: the critical target captured more attention than did the other targets. However, observers were worse at identifying the critical target and were more likely to confuse it with the targets that formed the virtual crossed edge (Experiments 3–5). Experiment 6 further showed that the visual system constructs an overall configuration only by using the targets (and not the distractors); identification performance was not affected by whether the distractor violated the spatial configuration.

Conclusions/Significance

In sum, these results suggest that the visual system may integrate targets (but not distractors) into a spatial configuration during multiple identity tracking, which affects the distribution of dynamic attention and the updating of identity-location binding.

17.
Kim RS, Seitz AR, Shams L. PLoS ONE. 2008;3(1):e1532

Background

Studies of perceptual learning have largely focused on unisensory stimuli. However, multisensory interactions are ubiquitous in perception, even at early processing stages, and thus can potentially play a role in learning. Here, we examine the effect of auditory-visual congruency on visual learning.

Methodology/Principal Findings

Subjects were trained over five days on a visual motion coherence detection task with either congruent or incongruent audiovisual stimuli. Comparing performance on visual-only trials, we find that training with congruent audiovisual stimuli produces significantly better learning than training with incongruent audiovisual stimuli or with visual stimuli alone.

Conclusions/Significance

This advantage from stimulus congruency during training suggests that the benefits of multisensory training may result from audiovisual interactions at a perceptual rather than cognitive level.

18.

Background

Major depressive disorder (MDD) is associated with a mood-congruent processing bias in the amygdala toward face stimuli portraying sad expressions that is evident even when such stimuli are presented below the level of conscious awareness. The extended functional anatomical network that maintains this response bias has not been established, however.

Aims

To identify neural network differences in the hemodynamic response to implicitly presented facial expressions between depressed and healthy control participants.

Method

Unmedicated-depressed participants with MDD (n = 22) and healthy controls (HC; n = 25) underwent functional MRI as they viewed face stimuli showing sad, happy or neutral face expressions, presented using a backward masking design. The blood-oxygen-level dependent (BOLD) signal was measured to identify regions where the hemodynamic response to the emotionally valenced stimuli differed between groups.

Results

The MDD subjects showed greater BOLD responses than the controls to masked-sad versus masked-happy faces in the hippocampus, amygdala and anterior inferotemporal cortex. While viewing both masked-sad and masked-happy faces relative to masked-neutral faces, the depressed subjects showed greater hemodynamic responses than the controls in a network that included the medial and orbital prefrontal cortices and anterior temporal cortex.

Conclusions

Depressed and healthy participants showed distinct hemodynamic responses to masked-sad and masked-happy faces in neural circuits known to support the processing of emotionally valenced stimuli and to integrate the sensory and visceromotor aspects of emotional behavior. Altered function within these networks in MDD may establish and maintain illness-associated differences in the salience of sensory/social stimuli, such that attention is biased toward negative and away from positive stimuli.

19.

Background

Understanding social interactions requires the ability to accurately interpret conspecifics' actions, sometimes only on the basis of subtle body language analysis. Here we address an important issue that has not yet received much attention in social neuroscience, that of an interaction between two agents. We attempted to isolate brain responses to two individuals interacting compared to two individuals acting independently.

Methodology/Principal Findings

We used minimalistic point-light displays to depict the characters, as they provide the most straightforward way to isolate mechanisms used to extract information from motion per se, without interference from other visual information. Functional magnetic resonance imaging (fMRI) was used to determine which brain regions were recruited during the observation of two interacting agents, mimicking everyday social scenes. While the mirror and mentalizing networks are rarely concurrently active, we found that both of them might be needed to catch the social intentions carried by whole-body motion.

Conclusions/Significance

These findings shed light on how motor cognition contributes to social cognition when social information is embedded in whole-body motion only. Finally, the approach described here provides a valuable and original tool for investigating the brain networks responsible for social understanding, in particular in psychiatric disorders.

20.

Background

The question of how the brain encodes letter position in written words has attracted increasing attention in recent years. A number of models have recently been proposed to accommodate the fact that transposed-letter stimuli like jugde or caniso are perceptually very close to their base words.

Methodology

Here we examined how letter position coding is attained in the tactile modality via Braille reading. The idea is that Braille word recognition may provide more serial processing than the visual modality, and this may produce differences in the input coding schemes employed to encode letters in written words. To that end, we conducted a lexical decision experiment with adult Braille readers in which the pseudowords were created by transposing/replacing two letters.

Principal Findings

We found a word-frequency effect for words. In addition, unlike parallel experiments in the visual modality, we failed to find any clear signs of transposed-letter confusability effects. This dissociation highlights the differences between modalities.

Conclusions

The present data argue against models of letter position coding that assume that transposed-letter effects (in the visual modality) occur at a relatively late, abstract locus.
