20 similar documents were found (search time: 554 ms)
Background
Anomalous visual perception is a common feature of schizophrenia, plausibly associated with impaired social cognition that, in turn, could affect social behavior. Past research suggests impaired biological motion perception in schizophrenia. Behavioral and functional magnetic resonance imaging (fMRI) experiments were conducted to verify the existence of this impairment, to clarify its perceptual basis, and to identify the neural concomitants of those deficits.
Methodology/Findings
In Experiment 1, we measured the ability to detect biological motion portrayed by point-light animations embedded in masking noise. Experiment 2 measured discrimination accuracy for pairs of point-light biological motion sequences differing in the degree of perturbation of the portrayed kinematics. Experiment 3 measured BOLD signals with event-related fMRI during a biological motion categorization task. Compared to healthy individuals, schizophrenia patients performed significantly worse on both the detection (Experiment 1) and discrimination (Experiment 2) tasks. Consistent with the behavioral results, the fMRI study revealed that healthy individuals exhibited strong activation to biological motion, but not to scrambled motion, in the posterior portion of the superior temporal sulcus (STSp). Interestingly, strong STSp activation was also observed for scrambled or partially scrambled motion when healthy participants perceived it as normal biological motion. In contrast, STSp activation in schizophrenia patients was not selective for biological over scrambled motion.
Conclusion
Schizophrenia is accompanied by difficulty discriminating biological from non-biological motion, and associated with that difficulty are altered patterns of neural response within brain area STSp. The perceptual deficits exhibited by schizophrenia patients may be an exaggerated manifestation of the neural events within STSp that accompany perceptual errors made by healthy observers on these same tasks. The present findings fit within the context of theories of delusion involving perceptual and cognitive processes.
Background
Humans are able to track multiple simultaneously moving objects. A number of factors have been identified that can influence the ease with which objects can be attended and tracked. Here, we explored the possibility that object tracking abilities may be specialized for tracking biological targets such as people.
Methodology/Principal Findings
We used the Multiple Object Tracking (MOT) paradigm to explore whether the high-level biological status of the targets affects the efficiency of attentional selection and tracking. In Experiment 1, we assessed the tracking of point-light biological motion figures. As controls, we used either the same stimuli or point-light letters, presented in upright, inverted, or scrambled configurations. While scrambling significantly affected performance for both letters and point-light figures, an inversion effect was restricted to biological motion, with inverted figures harder to track. In Experiment 2, we found that tracking performance was equivalent for natural point-light walkers and ‘moon-walkers’, whose implied direction was incongruent with their actual direction of motion. In Experiment 3, we found higher tracking accuracy for inverted faces than for upright faces. Thus, there was a double dissociation between inversion effects for biological motion and faces, with no inversion effect for our non-biological stimuli (letters, houses).
Conclusions/Significance
MOT is sensitive to some, but not all, naturalistic aspects of biological stimuli. There does not appear to be a highly specialized role for tracking people. However, MOT appears constrained by principles of object segmentation and grouping: effectively grouped, coherent objects, but not necessarily biological objects, are tracked most successfully.
Background
Human resolution for object size is typically determined by psychophysical methods that are based on conscious perception. In contrast, grasping of the same objects might be less conscious. It has been suggested that grasping is mediated by mechanisms other than those mediating conscious perception. In this study, we compared the visual resolution for object size of the visuomotor and perceptual systems.
Methodology/Principal Findings
In Experiment 1, participants discriminated the size of pairs of objects, once through perceptual judgments and once by grasping movements toward the objects. Notably, the actual size differences were set below the Just Noticeable Difference (JND). We found that grasping trajectories reflected the actual size differences between the objects regardless of the JND. This pattern was observed even in trials in which the perceptual judgments were erroneous. The results of an additional control experiment showed that these findings were not confounded by task demands. Participants were not aware, therefore, that their size discrimination via grasp was veridical.
Conclusions/Significance
We conclude that human resolution is not fully tapped by perceptually determined thresholds. Grasping likely exhibits greater resolving power than people usually realize.
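The JND logic this abstract relies on can be sketched with a standard equal-variance Gaussian observer model. This is only an illustration of the concept: the internal noise level (sigma) and the 75%-correct criterion are assumed values, not parameters from the study.

```python
from statistics import NormalDist

def p_correct(delta_mm, sigma_mm=2.0):
    """Probability of correctly judging which object is larger, for a
    physical size difference delta_mm, given Gaussian internal noise."""
    return NormalDist(0.0, sigma_mm).cdf(delta_mm)

def jnd(sigma_mm=2.0, criterion=0.75):
    """Size difference at which accuracy reaches the conventional
    75%-correct 'just noticeable difference' criterion."""
    return NormalDist(0.0, sigma_mm).inv_cdf(criterion)

print(round(jnd(), 2))           # JND in mm for the assumed noise level
print(round(p_correct(1.0), 3))  # accuracy for a 1 mm (sub-JND) difference
```

On this model a 1 mm difference is "below the JND" (accuracy under 75%), yet the study found that grasp apertures still tracked such differences.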
Background
Studies of perceptual learning have largely focused on unisensory stimuli. However, multisensory interactions are ubiquitous in perception, even at early processing stages, and thus can potentially play a role in learning. Here, we examine the effect of auditory-visual congruency on visual learning.
Methodology/Principal Findings
Subjects were trained over five days on a visual motion coherence detection task with either congruent or incongruent audiovisual stimuli. Comparing performance on visual-only trials, we find that training with congruent audiovisual stimuli produces significantly better learning than training with incongruent audiovisual stimuli or with visual stimuli alone.
Conclusions/Significance
This advantage from stimulus congruency during training suggests that the benefits of multisensory training may result from audiovisual interactions at a perceptual rather than cognitive level.
Background
Certain facial configurations are believed to be associated with distinct affective meanings (i.e. basic facial expressions), and such associations are common across cultures (i.e. universality of facial expressions). Recently, however, many studies have suggested that various types of contextual information, rather than the facial configuration itself, are an important factor in facial emotion perception.
Methodology/Principal Findings
To examine systematically how contextual information influences individuals' facial emotion perception, the present study directly estimated observers' perceptual thresholds for detecting negative facial expressions via a forced-choice psychophysical procedure using faces embedded in various emotional contexts. We additionally measured individual differences in affective information-processing tendency (BIS/BAS) as a possible factor determining the extent to which contextual information is used in facial emotion perception. We found that contextual information influenced observers' perceptual thresholds for facial emotion. Importantly, individuals' affective information-processing tendencies modulated the extent to which they incorporated context information into their facial emotion perceptions.
Conclusions/Significance
The findings of this study suggest that facial emotion perception depends not only on facial configuration but also on the context in which the face appears. This contextual influence varied with individuals' information-processing characteristics. In summary, we conclude that individual character traits, as well as facial configuration and the context in which a face appears, need to be taken into consideration in facial emotion perception.
Background
The timing at which sensory input reaches the level of conscious perception is an intriguing question still awaiting an answer. It is often assumed that both visual and auditory percepts have a modality-specific processing delay and that their difference determines the perceptual temporal offset.
Methodology/Principal Findings
Here, we show that the perception of audiovisual simultaneity can change flexibly and fluctuates over a short period of time while subjects observe a constant stimulus. We investigated the mechanisms underlying the spontaneous alternations in this audiovisual illusion and found that attention plays a crucial role. When attention was distracted from the stimulus, the perceptual transitions disappeared. When attention was directed to a visual event, the perceived timing of an auditory event was attracted towards that event.
Conclusions/Significance
This multistable display illustrates how flexible perceived timing can be, and at the same time offers a paradigm to dissociate perceptual from stimulus-driven factors in crossmodal feature binding. Our findings suggest that the perception of crossmodal synchrony depends on perceptual binding of audiovisual stimuli as a common event.
The second-agent effect: communicative gestures increase the likelihood of perceiving a second agent
Background
Beyond providing cues about an agent's intention, communicative actions convey information about the presence of a second agent towards whom the action is directed (second-agent information). In two psychophysical studies, we investigated whether the perceptual system makes use of this information to infer the presence of a second agent when dealing with impoverished and/or noisy sensory input.
Methodology/Principal Findings
Participants observed point-light displays of two agents (A and B) performing separate actions. In the Communicative condition, agent B's action was performed in response to a communicative gesture by agent A. In the Individual condition, agent A's communicative action was replaced with a non-communicative action. Participants performed a simultaneous masking yes-no task, in which they were asked to detect the presence of agent B. In Experiment 1, we investigated whether the criterion c was lowered in the Communicative condition compared to the Individual condition, reflecting a variation in perceptual expectations. In Experiment 2, we manipulated the congruence between A's communicative gesture and B's response to ascertain whether the lowering of c in the Communicative condition reflected a truly perceptual effect. Results demonstrate that information extracted from communicative gestures influences the concurrent processing of biological motion by prompting perception of a second agent (second-agent effect).
Conclusions/Significance
We propose that this finding is best explained within a Bayesian framework, which gives a powerful rationale for the pervasive role of prior expectations in visual perception.
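The criterion measure in this abstract comes from standard signal detection theory. As a minimal sketch (with made-up hit and false-alarm rates, not data from the study), sensitivity d′ and criterion c are computed from probit-transformed rates; a lower c means a more liberal tendency to report "agent B present".

```python
from statistics import NormalDist

Z = NormalDist().inv_cdf  # probit (inverse standard normal CDF)

def sdt_indices(hit_rate, fa_rate):
    """Yes-no signal detection indices: sensitivity d' and criterion c."""
    d_prime = Z(hit_rate) - Z(fa_rate)
    c = -0.5 * (Z(hit_rate) + Z(fa_rate))
    return d_prime, c

# Illustrative rates only: roughly equal sensitivity, but a lower
# (more liberal) criterion in the Communicative condition.
d_ind, c_ind = sdt_indices(0.80, 0.20)  # Individual condition
d_com, c_com = sdt_indices(0.90, 0.35)  # Communicative condition
print(round(c_ind, 2), round(c_com, 2))  # → 0.0 -0.45
```

A shift of this kind, with d′ held constant, is what distinguishes a change in perceptual expectation from a change in discrimination ability.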
Background
Vision provides the most salient information with regard to stimulus motion, but audition can also provide important cues that affect visual motion perception. Here, we show that sounds containing no motion or positional cues can induce illusory visual motion perception for static visual objects.
Methodology/Principal Findings
Two circles placed side by side were presented in alternation, producing apparent motion perception, and each onset was accompanied by a tone burst of a specific and unique frequency. After exposure to this visual apparent motion with tones for a few minutes, the tones became drivers of illusory motion perception. When the flash onset was synchronized to tones of alternating frequencies, a circle blinking at a fixed location was perceived as moving laterally in the same direction as the previously exposed apparent motion. Furthermore, the effect lasted for at least a few days. The effect was clearly observed at the retinal position that had previously been exposed to apparent motion with tone bursts.
Conclusions/Significance
The present results indicate that a strong association between a sound sequence and visual motion is easily formed within a short period and that, after the association is formed, sounds are able to trigger visual motion perception for a static visual object.
Background
Vision provides the most salient information with regard to stimulus motion. However, it has recently been demonstrated that static visual stimuli are perceived as moving laterally when paired with alternating left-right sound sources. The underlying mechanism of this phenomenon remains unclear; it has not yet been determined whether auditory motion signals, rather than auditory positional signals, can directly contribute to visual motion perception.
Methodology/Principal Findings
Static visual flashes were presented at retinal locations outside the fovea together with lateral auditory motion produced by a virtual stereo noise source smoothly shifting in the horizontal plane. The flashes appeared to move with the auditory motion when their spatiotemporal position was in the middle of the auditory motion trajectory. Furthermore, the lateral auditory motion altered visual motion perception in a global motion display in which localized motion signals from multiple visual stimuli were combined to produce a coherent visual motion percept.
Conclusions/Significance
These findings suggest that direct interactions exist between auditory and visual motion signals, and that there might be common neural substrates for auditory and visual motion processing.
Jing Shang, Su Lui, Yajing Meng, Hongru Zhu, Changjian Qiu, Qiyong Gong, Wei Liao, Wei Zhang. PLoS ONE 2014, 9(5)
Background
Several task-based functional MRI (fMRI) studies have highlighted abnormal activation in specific regions of the low-level perceptual (auditory, visual, and somato-motor) networks in posttraumatic stress disorder (PTSD) patients. However, little is known about whether the functional connectivity of the low-level perceptual and higher-order cognitive (attention, central-executive, and default-mode) networks changes in medication-naïve PTSD patients during the resting state.
Methods
We investigated resting state networks (RSNs) using independent component analysis (ICA) in 18 patients with chronic Wenchuan earthquake-related PTSD versus 20 healthy survivors (HSs).
Results
Compared to the HSs, PTSD patients displayed both increased and decreased functional connectivity within the salience network (SN), central executive network (CEN), default mode network (DMN), somato-motor network (SMN), auditory network (AN), and visual network (VN). Furthermore, strengthened connectivity involving the inferior temporal gyrus (ITG) and supplementary motor area (SMA) was negatively correlated with clinical severity in PTSD patients.
Limitations
Given the absence of a healthy control group that never experienced the earthquake, our results cannot be used to compare alterations between PTSD patients, physically healthy trauma survivors, and healthy controls. In addition, breathing and heart rates were not monitored, and the sample size was small. In future studies, specific task paradigms should be used to reveal perceptual impairments.
Conclusions
These findings suggest that PTSD patients have widespread deficits in both the low-level perceptual and higher-order cognitive networks. Decreased connectivity within the low-level perceptual networks was related to clinical symptoms; traumatic reminders may cause an attentional bias to negative emotion in response to threatening stimuli, resulting in emotional dysregulation.
Background
Contrast discrimination for an image is usually harder if another image is superimposed on top of it. We asked whether such contrast masking may be enhanced or relieved depending on cues promoting integration of both images into a single pattern versus segmentation into two independent components.
Methodology & Principal Findings
Contrast discrimination thresholds for a foveal test grating were sharply elevated in the presence of a perfectly overlapping, orthogonally oriented mask grating. However, thresholds returned to the unmasked baseline when a surround grating was added that had the same orientation and phase as either the test or the mask grating. Both the masking and ‘unmasking’ effects were much stronger for moving than for static stimuli.
Conclusions & Significance
Our results suggest that common-fate motion reinforces the perception of a single coherent plaid pattern, while the surround helps to identify each component independently, thus peeling the plaid apart again. These results challenge current models of early vision, suggesting that higher-level surface organization influences contrast encoding, determining whether the contrast of a grating may be recovered independently from that of its mask.
Background
The observation of conspecifics influences our bodily perceptions and actions: contagious yawning, contagious itching, and empathy for pain are all examples of mechanisms based on resonance between our own body and the bodies of others. While there is evidence for the involvement of the mirror neuron system in the processing of motor, auditory, and tactile information, it has not yet been associated with the perception of self-motion.
Methodology/Principal Findings
We investigated whether viewing our own body, the body of another, or an object in motion influences self-motion perception. We found a visual-vestibular congruency effect for self-motion perception when observing self and object motion, and a reduction in this effect when observing someone else's body in motion. The congruency effect was correlated with empathy scores, revealing the importance of empathy in mirroring mechanisms.
Conclusions/Significance
The data show that vestibular perception is modulated by agent-specific mirroring mechanisms. The observation of conspecifics in motion is an essential component of social life, and self-motion perception is crucial for the distinction between the self and the other. Finally, our results hint at the presence of a “vestibular mirror neuron system”.
Background
The processing mechanisms of visual working memory (VWM) have been extensively explored in the past decade. However, how perceptual information is extracted into VWM remains largely unclear. The current study investigated this issue by testing whether perceptual information is extracted into VWM in an object-based manner, such that even task-irrelevant information is extracted (object hypothesis); in a feature-based manner, such that only task-relevant information is extracted (feature hypothesis); or in a manner analogous to processing in visual perception (analogy hypothesis).
Methodology/Principal Findings
High-discriminable information, which is processed at the parallel stage of visual perception, and fine-grained information, which is processed via focal attention, were selected as representatives of perceptual information. The analogy hypothesis predicts that whereas high-discriminable information is extracted into VWM automatically, fine-grained information will be extracted only if it is task-relevant. By manipulating the information type of the irrelevant dimension in a change-detection task, we found that performance was affected and the ERP component N270 was enhanced when a change between the probe and the memorized stimulus consisted of irrelevant high-discriminable information, but not when it consisted of irrelevant fine-grained information.
Conclusions/Significance
We conclude that dissociated extraction mechanisms exist in VWM for information resolved via dissociated processes in visual perception (at least for the information tested in the current study), supporting the analogy hypothesis.
Atsushi Noritake, Bob Uttl, Masahiko Terao, Masayoshi Nagai, Junji Watanabe, Akihiro Yagi. PLoS ONE 2009, 4(7)
Background
Observers misperceive the location of points within a scene as compressed towards the goal of a saccade. However, recent studies suggest that saccadic compression does not occur for discrete elements such as dots when they are perceived as a unified object like a rectangle.
Methodology/Principal Findings
We investigated the magnitude of horizontal vs. vertical compression for Kanizsa figures (collections of discrete elements unified into a single perceptual object by illusory contours) and for control rectangle figures. Participants were presented with Kanizsa and control figures and had to decide whether the horizontal or vertical extent of the stimulus was longer using the two-alternative forced-choice method. Our findings show that large, but not small, Kanizsa figures are perceived as compressed, and that such compression is large in the horizontal dimension and small or nil in the vertical dimension. In contrast to recent findings, we found no saccadic compression for control rectangles.
Conclusions
Our data suggest that compression of Kanizsa figures has been overestimated in previous research due to methodological artifacts, and they highlight the importance of studying perceptual phenomena with multiple methods.
Background
There are few clinical tools that assess decision-making under risk. Tests that characterize sensitivity and bias in decisions between prospects varying in magnitude and probability of gain may provide insights into conditions with anomalous reward-related behaviour.
Objective
We designed a simple test of how subjects integrate information about the magnitude and probability of reward, which can determine discriminative thresholds and choice bias in decisions under risk.
Design/Methods
Twenty subjects were required to choose between two explicitly described prospects, one with higher probability but lower magnitude of reward than the other, with the difference in expected value between the two prospects varying from 3 to 23%.
Results
Subjects showed a mean threshold sensitivity of a 43% difference in expected value. Regarding choice bias, there was a ‘risk premium’ of 38%, indicating a tendency to choose higher probability over higher reward. An analysis using prospect theory showed that this risk premium is the predicted outcome of hypothesized non-linearities in the subjective perception of reward value and probability.
Conclusions
This simple test provides a robust measure of discriminative value thresholds and of biases in decisions under risk. Prospect theory can also make predictions about decisions when the subjective perception of reward or probability is anomalous, as may occur in populations with dopaminergic or striatal dysfunction, such as Parkinson's disease and schizophrenia.
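The prospect-theory account of such a risk premium can be sketched with the standard Tversky-Kahneman (1992) functional forms. The parameter values below (alpha = 0.88, gamma = 0.61) are conventional textbook estimates, and the two prospects are invented for illustration; neither comes from this study's fitted model.

```python
# Minimal prospect-theory sketch: concave power value function and an
# inverse-S probability weighting function (Tversky & Kahneman, 1992).
def value(x, alpha=0.88):
    return x ** alpha

def weight(p, gamma=0.61):
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

def subjective_value(magnitude, p):
    return weight(p) * value(magnitude)

# Two prospects with identical expected value (19): the model assigns a
# higher subjective value to the high-probability, low-magnitude option,
# i.e. a 'risk premium' favouring probability over magnitude.
safe = subjective_value(20, 0.95)   # EV = 19
risky = subjective_value(38, 0.50)  # EV = 19
print(round(safe, 2), round(risky, 2), safe > risky)  # → 11.07 10.33 True
```

With these parameters the preference is driven mainly by the concave value function, which discounts the larger magnitude; at small probabilities the characteristic overweighting of rare events can reverse the pattern toward risk seeking.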
Background
Research on biological motion perception has traditionally been restricted to the visual modality. Recent neurophysiological and behavioural evidence, however, supports the idea that actions are represented not merely visually but audiovisually. The goal of the present study was to test whether the perceived in-depth orientation of depth-ambiguous point-light walkers (plws) is affected by the presentation of looming or receding sounds synchronized with the footsteps.
Methodology/Principal Findings
In Experiment 1, orthographic frontal/back projections of plws were presented either without sound or with sounds whose intensity level was rising (looming), falling (receding), or stationary. Despite instructions to ignore the sounds and report only the visually perceived in-depth orientation, plws accompanied by looming sounds were more often judged to be facing the viewer, whereas plws paired with receding sounds were more often judged to be facing away from the viewer. To test whether the effects observed in Experiment 1 act at a perceptual rather than a decisional level, in Experiment 2 observers perceptually compared orthographic plws, presented without sound or paired with either looming or receding sounds, to plws without sound but with perspective cues making them objectively face either towards or away from the viewer. Judging whether an orthographic plw or a plw with looming (receding) perspective cues is visually more looming becomes harder (easier) when the orthographic plw is paired with looming sounds.
Conclusions/Significance
The present results suggest that looming and receding sounds alter judgements of the in-depth orientation of depth-ambiguous point-light walkers. While looming sounds are demonstrated to act at a perceptual level, making plws look more looming, it remains a challenge for future research to clarify at what level in the processing hierarchy receding sounds affect how observers judge the in-depth orientation of plws.
Background
Visually determining what is reachable in peripersonal space requires information about the egocentric location of objects, but also information about the possibilities of action with the body, which are context dependent. The aim of the present study was to test the role of motor representations in the visual perception of peripersonal space.
Methodology
Seven healthy participants underwent a TMS study while performing a right-left decision (control) task or perceptually judging whether a visual target was reachable with their right hand. An actual grasping movement task was also included. Single-pulse TMS was delivered on 80% of the trials over the left motor and premotor cortex and over a control site (the temporo-occipital area), at 90% of the resting motor threshold and at different SOAs (50, 100, 200, or 300 ms).
Principal Findings
Results showed a facilitation effect of TMS on reaction times in all tasks, whatever the site stimulated, for stimulation up to 200 ms after stimulus presentation. However, the facilitation effect was on average 34 ms smaller when stimulating the motor cortex in the perceptual judgement task, especially for stimuli located at the boundary of peripersonal space.
Conclusion
This study provides the first evidence that motor areas of the brain participate in the visual determination of what is reachable. We discuss how motor representations may feed the perceptual system with information about possible interactions with nearby objects and may thus contribute to the perception of the boundary of peripersonal space.
Natália Bezerra Mota Quental, Sonia Maria Dozzi Brucki, Orlando Francisco Amodeo Bueno. PLoS ONE 2013, 8(7)
Alzheimer’s disease (AD) is the most frequent cause of dementia. The clinical symptoms of AD begin with impairment of memory and executive function followed by the gradual involvement of other functions, such as language, semantic knowledge, abstract thinking, attention, and visuospatial abilities. Visuospatial function involves the identification of a stimulus and its location and can be impaired at the beginning of AD. The Visual Object and Space Perception (VOSP) battery evaluates visuospatial function, while minimizing the interference of other cognitive functions.
Objectives
To evaluate visuospatial function in early AD patients using the VOSP and to determine cutoff scores that differentiate between cognitively healthy individuals and AD patients.
Methods
Thirty-one patients with mild AD and forty-four healthy elderly adults were evaluated using a neuropsychological battery and the VOSP.
Results
In the VOSP, the AD patients performed more poorly in all subtests examining object perception and in two subtests examining space perception (Number Location and Cube Analysis). The VOSP showed good accuracy and correlated well with tests measuring visuospatial function.
Conclusion
Visuospatial function is impaired in the early stages of AD. The VOSP is a sensitive battery for visuospatial deficits, with minimal interference from other cognitive functions.
Brain Response to a Humanoid Robot in Areas Implicated in the Perception of Human Emotional Gestures
Thierry Chaminade, Massimiliano Zecca, Sarah-Jayne Blakemore, Atsuo Takanishi, Chris D. Frith, Silvestro Micera, Paolo Dario, Giacomo Rizzolatti, Vittorio Gallese, Maria Alessandra Umiltà. PLoS ONE 2010, 5(7)
Background
The humanoid robot WE4-RII was designed to express human emotions in order to improve human-robot interaction. We can read the emotions depicted in its gestures, yet we might use neural processes different from those used to read emotions in human agents.
Methodology
Here, fMRI was used to assess how brain areas activated by the perception of human basic emotions (facial expressions of anger, joy, and disgust) and silent speech respond to a humanoid robot impersonating the same emotions, while participants were instructed to attend either to the emotion or to the motion depicted.
Principal Findings
Increased responses to robot compared to human stimuli in the occipital and posterior temporal cortices suggest additional visual processing when perceiving a mechanical anthropomorphic agent. In contrast, activity in cortical areas endowed with mirror properties, like left Broca's area for the perception of speech, and in areas involved in the processing of emotions, like the left anterior insula for the perception of disgust and the orbitofrontal cortex for the perception of anger, was reduced for robot stimuli, suggesting lesser resonance with the mechanical agent. Finally, instructions to explicitly attend to the emotion significantly increased the response to robot, but not human, facial expressions in the anterior part of the left inferior frontal gyrus, a neural marker of motor resonance.
Conclusions
Motor resonance towards a humanoid robot's, but not a human's, display of facial emotion is increased when attention is directed towards judging emotions.
Significance
Artificial agents can be used to assess how factors like anthropomorphism affect neural response to the perception of human actions.