Similar Articles (20 results)
1.

Background

Visually determining what is reachable in peripersonal space requires information about the egocentric location of objects but also information about the possibilities of action with the body, which are context dependent. The aim of the present study was to test the role of motor representations in the visual perception of peripersonal space.

Methodology

Seven healthy participants underwent a TMS study while performing a right-left decision (control) task or perceptually judging whether a visual target was reachable or not with their right hand. An actual grasping movement task was also included. Single-pulse TMS was delivered on 80% of the trials over the left motor and premotor cortex and over a control site (the temporo-occipital area), at 90% of the resting motor threshold and at different SOAs (50 ms, 100 ms, 200 ms or 300 ms).

Principal Findings

Results showed a facilitation effect of TMS on reaction times in all tasks, whichever site was stimulated and for up to 200 ms after stimulus presentation. However, the facilitation effect was on average 34 ms smaller when the motor cortex was stimulated in the perceptual judgement task, especially for stimuli located at the boundary of peripersonal space.

Conclusion

This study provides the first evidence that brain motor areas participate in the visual determination of what is reachable. We discuss how motor representations may feed the perceptual system with information about possible interactions with nearby objects and may thus contribute to the perception of the boundary of peripersonal space.

2.

Background

The timing at which sensory input reaches the level of conscious perception is an intriguing question still awaiting an answer. It is often assumed that both visual and auditory percepts have a modality specific processing delay and their difference determines perceptual temporal offset.
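That assumption can be written in one line (a standard formulation of the point of subjective simultaneity, not an equation given by the authors):

\[ \mathrm{PSS} = \tau_{A} - \tau_{V} \]

where τ_A and τ_V are fixed, modality-specific processing latencies for audition and vision, so the point of subjective simultaneity (PSS) would be a constant offset. The findings below argue against such a fixed offset.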

Methodology/Principal Findings

Here, we show that the perception of audiovisual simultaneity can change flexibly and fluctuates over a short period of time while subjects observe a constant stimulus. We investigated the mechanisms underlying the spontaneous alternations in this audiovisual illusion and found that attention plays a crucial role. When attention was distracted from the stimulus, the perceptual transitions disappeared. When attention was directed to a visual event, the perceived timing of an auditory event was attracted towards that event.

Conclusions/Significance

This multistable display illustrates how flexible perceived timing can be, and at the same time offers a paradigm to dissociate perceptual from stimulus-driven factors in crossmodal feature binding. Our findings suggest that the perception of crossmodal synchrony depends on perceptual binding of audiovisual stimuli as a common event.

3.
Ganel T, Chajut E, Algom D. Current Biology: CB 2008, 18(14): R599-R601
According to Weber's law, a basic perceptual principle of psychological science, sensitivity to changes along a given physical dimension decreases when stimulus intensity increases [1]. In other words, the ‘just noticeable difference’ (JND) for weaker stimuli is smaller — hence resolution power is greater — than that for stronger stimuli on the same sensory continuum. Although Weber's law characterizes human perception for virtually all sensory dimensions, including visual length [2,3], there have been no attempts to test its validity for visually guided action. For this purpose, we asked participants to either grasp or make perceptual size estimations for real objects varying in length. A striking dissociation was found between grasping and perceptual estimations: in the perceptual conditions, JND increased with physical size in accord with Weber's law; but in the grasping condition, JND was unaffected by the same variation in size of the referent objects. Therefore, Weber's law was violated for visually guided action, but not for perceptual estimations. These findings document a fundamental difference in the way that object size is computed for action and for perception and suggest that the visual coding for action is based on absolute metrics even at a very basic level of processing.
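For reference, Weber's law can be stated compactly (textbook form, not taken from the paper itself):

\[ \frac{\Delta I}{I} = k \quad\Longleftrightarrow\quad \mathrm{JND}(I) = k\,I \]

where I is stimulus intensity (here, object length), ΔI is the just noticeable difference and k is the Weber fraction. The perceptual estimates above follow this proportionality, whereas the grasping data behave as if the JND were roughly constant, i.e. independent of I.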

4.

Background

Visual neglect is an attentional deficit typically resulting from parietal cortex lesion and sometimes frontal lesion. Patients fail to attend to objects and events in the visual hemifield contralateral to their lesion during visual search.

Methodology/Principal Findings

The aim of this work was to examine the effects of parietal and frontal lesion in an existing computational model of visual attention and search and simulate visual search behaviour under lesion conditions. We find that unilateral parietal lesion in this model leads to symptoms of visual neglect in simulated search scan paths, including an inhibition of return (IOR) deficit, while frontal lesion leads to milder neglect and to more severe deficits in IOR and perseveration in the scan path. During simulations of search under unilateral parietal lesion, the model's extrastriate ventral stream area exhibits lower activity for stimuli in the neglected hemifield compared to that for stimuli in the normally perceived hemifield. This could represent a computational correlate of differences observed in neuroimaging for unconscious versus conscious perception following parietal lesion.
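As a toy illustration of this kind of simulation (not the authors' model; the map size, attenuation factor and decay rate below are arbitrary assumptions), a winner-take-all search over a saliency map with inhibition of return (IOR) can be 'lesioned' by attenuating one hemifield:

```python
import numpy as np

def simulate_search(saliency, lesion_gain=0.3, ior_decay=0.9, n_fixations=10):
    """Toy winner-take-all search with inhibition of return (IOR).

    lesion_gain < 1 attenuates the left half of the saliency map, mimicking a
    right-hemisphere parietal lesion that weakens contralesional representations.
    """
    sal = saliency.copy()
    sal[:, : sal.shape[1] // 2] *= lesion_gain   # weaken the neglected hemifield
    ior = np.zeros_like(sal)                     # accumulated inhibition of return
    scan_path = []
    for _ in range(n_fixations):
        idx = np.unravel_index(np.argmax(sal - ior), sal.shape)
        scan_path.append(idx)
        ior[idx] += sal[idx]                     # suppress the just-visited location
        ior *= ior_decay                         # IOR fades, allowing revisits
    return scan_path

# Example: on a random scene, most fixations land in the intact (right) hemifield.
rng = np.random.default_rng(0)
print(simulate_search(rng.random((16, 16))))
```

In such a sketch, weaker or faster-decaying IOR yields revisits of already-inspected locations, loosely analogous to the IOR deficits and perseveration described above.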

Conclusions/Significance

Our results lead to the prediction, supported by effective connectivity evidence, that connections between the dorsal and ventral visual streams may be an important factor in the explanation of perceptual deficits in parietal lesion patients and of conscious perception in general.

5.

Background

Most of us are poor at faking actions. Kinematic studies have shown that when pretending to pick up imagined objects (pantomimed actions), we move and shape our hands quite differently from when grasping real ones. These differences between real and pantomimed actions have been linked to separate brain pathways specialized for different kinds of visuomotor guidance. Yet professional magicians regularly use pantomimed actions to deceive audiences.

Methodology and Principal Findings

In this study, we tested whether, despite their skill, magicians might still show kinematic differences between grasping actions made toward real versus imagined objects. We found that their pantomimed actions in fact closely resembled real grasps when the object was visible (but displaced) (Experiment 1), but failed to do so when the object was absent (Experiment 2).

Conclusions and Significance

We suggest that although the occipito-parietal visuomotor system in the dorsal stream is designed to guide goal-directed actions, prolonged practice may enable it to calibrate actions based on visual inputs displaced from the action.

6.
Saygin AP, Cook J, Blakemore SJ. PLoS ONE 2010, 5(10): e13491

Background

Perception of biological motion is linked to the action perception system in the human brain, abnormalities within which have been suggested to underlie impairments in social domains observed in autism spectrum conditions (ASC). However, the literature on biological motion perception in ASC is heterogeneous and it is unclear whether deficits are specific to biological motion, or might generalize to form-from-motion perception.

Methodology and Principal Findings

We compared psychophysical thresholds for both biological and non-biological form-from-motion perception in adults with ASC and controls. Participants viewed point-light displays depicting a walking person (Biological Motion), a translating rectangle (Structured Object) or a translating unfamiliar shape (Unstructured Object). The figures were embedded in noise dots that moved similarly and the task was to determine direction of movement. The number of noise dots varied on each trial and perceptual thresholds were estimated adaptively. We found no evidence for an impairment in biological or non-biological object motion perception in individuals with ASC. Perceptual thresholds in the three conditions were almost identical between the ASC and control groups.
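The adaptive procedure is not specified here; as a generic sketch (assuming a 1-up/2-down staircase on the number of noise dots, which is one common choice but not necessarily the one used), threshold estimation could look like this:

```python
def staircase(respond, start=60, step=4, n_trials=80):
    """Generic 1-up/2-down staircase over the number of noise dots.

    `respond(noise_dots)` should return True for a correct direction judgment.
    More noise dots = harder; the rule converges near 70.7% correct.
    """
    noise, streak, last_dir, reversals = start, 0, None, []
    for _ in range(n_trials):
        if respond(noise):
            streak += 1
            if streak < 2:
                continue
            streak, direction = 0, +1          # two correct in a row -> harder
            noise += step
        else:
            streak, direction = 0, -1          # one error -> easier
            noise = max(0, noise - step)
        if last_dir is not None and direction != last_dir:
            reversals.append(noise)            # record staircase reversals
        last_dir = direction
    tail = reversals[-6:] or [noise]
    return sum(tail) / len(tail)               # threshold: mean of last reversals
```

A threshold is then read off as the average noise level at the final reversals.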

Discussion and Conclusions

Impairments in biological motion and non-biological form-from-motion perception are not universal in ASC and are found only for some stimuli and tasks. We discuss our results in relation to other findings in the literature, whose heterogeneity likely relates to the different tasks performed. It appears that individuals with ASC are unimpaired in the perceptual processing of form-from-motion, but may exhibit impairments in higher-order judgments such as emotion processing. It is important to identify more specifically which processes of motion perception are affected in ASC before a link can be made between perceptual deficits and the higher-level features of the disorder.

7.

Background

Our expectations of an object's heaviness drive not only our fingertip forces but also our perception of heaviness. This effect is highlighted by the classic size-weight illusion (SWI), where different-sized objects of identical mass feel different weights. Here, we examined whether these expectations are sufficient to induce the SWI in a single wooden cube when lifted without visual feedback, by varying the size of the object seen prior to the lift.

Methodology/Principal Findings

Participants, who believed that they were lifting the same object that they had just seen, reported that the weight of the single, standard-sized cube that they lifted on every trial varied as a function of the size of the object they had just seen. Seeing the small object before the lift made the cube feel heavier than it did after seeing the large object. These expectations also affected the fingertip forces that were used to lift the object when vision was not permitted. The expectation-driven errors made in early trials were not corrected with repeated lifting, and participants failed to adapt their grip and load forces from the expected weight to the object's actual mass in the way that they could when lifting with vision.

Conclusions/Significance

Vision appears to be crucial for the detection, and subsequent correction, of the ostensibly non-visual grip and load force errors that are a common feature of this type of object interaction. Expectations of heaviness are not only powerful enough to alter the perception of a single object's weight, but also continually drive the forces we use to lift the object when vision is unavailable.

8.

Background

It has been reported that participants judge an object to be closer after a stick has been used to touch it than after touching it with the hand. In this study we try to find out why this is so.

Methodology

We showed six participants a cylindrical object on a table. On separate trials (randomly intermixed), participants either estimated verbally how far the object was from their body or touched a remembered location. Touching was done either with the hand or with a stick (in separate blocks). In three different sessions, participants touched either the object location or the location halfway to the object location. Verbal judgments were given either in centimeters or in terms of whether the object would be reachable with the hand. No differences in verbal distance judgments or touching responses were found between the blocks in which the stick or the hand was used.

Conclusion

Instead of finding out why the judged distance changes when using a tool, we found that using a stick does not necessarily alter judged distances or judgments about the reachability of objects.

9.

Background

A substantial literature has demonstrated that how the hand approaches an object depends on the manipulative action that will follow object contact. Little is known about how the placement of individual fingers on objects is affected by the end-goal of the action.

Methodology/Principal Findings

Hand movement kinematics were measured during reaching for and grasping movements towards two objects (stimuli): a bottle with an ordinary cylindrical shape and a bottle with a concave constriction. The effects of the stimuli's weight (half full or completely full of water) and of the end-goal of the action (pouring, moving) were also assessed. Analysis of key kinematic landmarks measured during reaching movements indicates that object affordance facilitates the end-goal of the action regardless of accuracy constraints. Furthermore, the placement of individual digits at contact is modulated by the shape of the object and the end-goal of the action.

Conclusions/Significance

These findings offer a substantial contribution to the current debate about the role played by affordances and end-goals in determining the structure of reach-to-grasp movements.

10.

Background

Grasping at birth is well known as a reflex in response to stimulation of the palm of the hand. Recent studies have revealed that this grasping is not a pure reflex, because human newborns are able to detect and remember differences in shape features. The manual perception of shapes has not, however, been investigated in preterm human infants; the aim of the present study was to do so.

Methodology/Principal Findings

We used a habituation/reaction-to-novelty procedure with twenty-four human preterm infants of 33 to 34+6 weeks post-conceptional age. After habituation to an object (prism or cylinder) in one hand (left or right), babies were given either the same object or the other (novel) object in the same hand in a test phase. After successive presentations of the same object, holding time decreased for each preterm infant; moreover, presentation of the novel object produced a significant increase in holding time. Finally, a comparison between the performance of these preterm infants and that of full-term newborns showed only that preterm babies habituated more quickly to a shape by touch.

Conclusion/Significance

For the first time, the results reveal that preterm infants of 33 to 34+6 gestational weeks (GW) can detect the specific features that differentiate prism and cylinder shapes by touch, and can remember them. The results suggest that there is only a quantitative, not a qualitative, difference between the perceptual abilities of preterm and full-term babies in perceiving shape manually.

11.

Background

The ability to estimate the passage of time is of fundamental importance for perceptual and cognitive processes. One experience of time is the perception of duration, which is not isomorphic to physical duration and can be distorted by a number of factors. Yet, the critical features generating these perceptual shifts in subjective duration are not understood.

Methodology/Findings

We used prospective duration judgments within and across sensory modalities to examine the effect of stimulus predictability and feature change on the perception of duration. First, we found robust distortions of perceived duration in auditory, visual and auditory-visual presentations despite the predictability of the feature changes in the stimuli. For example, a looming disc embedded in a series of steady discs led to time dilation, whereas a steady disc embedded in a series of looming discs led to time compression. Second, we addressed whether visual (auditory) inputs could alter the perceived duration of auditory (visual) inputs. When participants were presented with incongruent audio-visual stimuli, the perceived duration of auditory events could be shortened or lengthened by the presence of conflicting visual information; however, the perceived duration of visual events was seldom distorted by the presence of auditory information, and visual events were never perceived as shorter than their actual durations.

Conclusions/Significance

These results support the existence of multisensory interactions in the perception of duration and, importantly, suggest that vision can modify auditory temporal perception in a pure timing task. Insofar as distortions in subjective duration cannot be accounted for by the unpredictability of an auditory, visual or auditory-visual event, we propose that it is the intrinsic features of the stimulus that critically affect subjective time distortions.

12.

Background

Can hearing a word change what one sees? Although visual sensitivity is known to be enhanced by attending to the location of the target, perceptual enhancements following cues to the identity of an object have been difficult to find. Here, we show that perceptual sensitivity is enhanced by verbal, but not visual, cues.

Methodology/Principal Findings

Participants completed an object detection task in which they made an object-presence or -absence decision about briefly presented letters. Hearing the letter name prior to the detection task increased perceptual sensitivity (d′). A visual cue in the form of a preview of the to-be-detected letter did not. Follow-up experiments found that the auditory cuing effect was specific to validly cued stimuli. The magnitude of the cuing effect correlated positively with an individual measure of vividness of mental imagery; introducing uncertainty into the position of the stimulus did not reduce the magnitude of the cuing effect, but eliminated the correlation with mental imagery.
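Sensitivity here is the standard signal-detection index (textbook definition, not a formula stated in the abstract):

\[ d' = z(\text{hit rate}) - z(\text{false-alarm rate}) \]

where z(·) is the inverse of the standard normal cumulative distribution function; a larger d′ means letter-present and letter-absent trials are discriminated better, independently of response bias. For example, a hit rate of 0.75 with a false-alarm rate of 0.25 gives d′ ≈ 0.67 − (−0.67) ≈ 1.35.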

Conclusions/Significance

Hearing a word made otherwise invisible objects visible. Interestingly, seeing a preview of the target stimulus did not similarly enhance detection of the target. These results are compatible with an account in which auditory verbal labels modulate lower-level visual processing. The findings show that a verbal cue in the form of hearing a word can influence even the most elementary visual processing and inform our understanding of how language affects perception.

13.

Background

During locomotion, vision is used to perceive environmental obstacles that could potentially threaten stability; locomotor action is then modified to avoid these obstacles. Various factors such as lighting and texture can make obstacles appear larger or smaller than their actual size. It is unclear whether gait is adapted based on the actual or the perceived height of such obstacles. The purposes of this study were to determine whether visually guided action is scaled to visual perception, and whether task experience influences how action is scaled to perception.

Methodology/Principal Findings

Participants judged the height of two obstacles before and after stepping over each of them 50 times. An illusion made obstacle one appear larger than obstacle two, even though they were identical in size. The influence of task experience was examined by comparing the perception-action relationship during the first five obstacle crossings (1–5) with the last five obstacle crossings (46–50). In the first set of trials, obstacle one was perceived to be 2.0 cm larger than obstacle two and subjects stepped 2.7 cm higher over obstacle one. After walking over the obstacle 50 times, the toe elevation was not different between obstacles, but obstacle one was still perceived as 2.4 cm larger.

Conclusions/Significance

There was evidence of locomotor adaptation, but no evidence of perceptual adaptation with experience. These findings add to research demonstrating that while the motor system can be influenced by perception, it can also operate independently of perception.

14.
Gao Z, Li J, Yin J, Shen M. PLoS ONE 2010, 5(12): e14273

Background

The processing mechanisms of visual working memory (VWM) have been extensively explored in the past decade. However, how perceptual information is extracted into VWM remains largely unclear. The current study investigated this issue by testing whether perceptual information is extracted into VWM in an integrated-object manner, so that all irrelevant information is extracted (object hypothesis); in a feature-based manner, so that only target-relevant information is extracted (feature hypothesis); or in a manner analogous to processing in visual perception (analogy hypothesis).

Methodology/Principal Findings

High-discriminable information, which is processed at the parallel stage of visual perception, and fine-grained information, which is processed via focal attention, were selected as representatives of perceptual information. The analogy hypothesis predicted that whereas high-discriminable information is extracted into VWM automatically, fine-grained information will be extracted only if it is task-relevant. By manipulating the information type of the irrelevant dimension in a change-detection task, we found that performance was affected and the ERP component N270 was enhanced if a change between the probe and the memorized stimulus consisted of irrelevant high-discriminable information, but not if it consisted of irrelevant fine-grained information.

Conclusions/Significance

We conclude that dissociated extraction mechanisms exist in VWM for information resolved via dissociated processes in visual perception (at least for the information tested in the current study), supporting the analogy hypothesis.

15.

Background

Human vision is vital in determining our interaction with the outside world. In this study we characterize our ability to judge changes in the direction of motion of objects, a common task which allows us either to intercept moving objects or to avoid them if they pose a threat.

Methodology/Principal Findings

Observers were presented with objects which moved across a computer monitor on a linear path until the midline, at which point they changed their direction of motion, and observers were required to judge the direction of change. In keeping with the variety of objects we encounter in the real world, we varied characteristics of the moving stimuli such as velocity, extent of motion path and the object size. Furthermore, we compared performance for moving objects with the ability of observers to detect a deviation in a line which formed the static trace of the motion path, since it has been suggested that a form of static memory trace may form the basis for these types of judgment. The static line judgments were well described by a ‘scale invariant’ model in which any two stimuli which possess the same two-dimensional geometry (length/width) result in the same level of performance. Performance for the moving objects was entirely different. Irrespective of the path length, object size or velocity of motion, path deviation thresholds depended simply upon the duration of the motion path in seconds.
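In compact form, and as a paraphrase of the reported pattern rather than the authors' fitted model, the two regimes can be written as

\[ \theta_{\text{static}} = f\!\left(\frac{L}{W}\right), \qquad \theta_{\text{motion}} = g(T), \quad T = \frac{L}{v}, \]

where θ is the deviation threshold, L the path length, W the width (or object size), v the speed and T the duration of the motion in seconds: static judgments depend only on the shape ratio, whereas judgments for moving objects depend only on how long the motion lasts.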

Conclusions/Significance

Human vision has long been known to integrate information across space in order to solve spatial tasks such as judgment of orientation or position. Here we demonstrate an intriguing mechanism which integrates direction information across time in order to optimize the judgment of path deviation for moving objects.

16.

Background

In the human visual system, different attributes of an object, such as shape, color, and motion, are processed separately in different areas of the brain. This raises the fundamental question of how these attributes are integrated to produce a unified perception and a specific response. This “binding problem” is computationally difficult because all attributes are assumed to be bound together to form a single object representation. However, there is no firm evidence to confirm that such representations exist for general objects.

Methodology/Principal Findings

Here we propose a paired-attribute model in which cognitive processes are based on multiple representations of paired attributes. In line with the model's prediction, we found that multiattribute stimuli can produce an illusory perception of a multiattribute object arising from erroneous integration of attribute pairs, implying that object recognition is based on parallel perception of paired attributes. Moreover, in a change-detection task, a feature change in a single attribute frequently caused an illusory perception of change in another attribute, suggesting that multiple pairs of attributes are stored in memory.

Conclusions/Significance

The paired-attribute model can account for some novel illusions and controversial findings on binocular rivalry and short-term memory. Our results suggest that many cognitive processes are performed at the level of paired attributes rather than integrated objects, which greatly eases the binding problem and allows simpler solutions for it.

17.
Ho C, Cheung SH. PLoS ONE 2011, 6(12): e28814

Background

Human object recognition degrades sharply as the target object moves from central vision into peripheral vision. In particular, one's ability to recognize a peripheral target is severely impaired by the presence of flanking objects, a phenomenon known as visual crowding. Recent studies on how visual awareness of flanker existence influences crowding have shown mixed results. More importantly, it is not known whether conscious awareness of the existence of both the target and the flankers is necessary for crowding to occur.

Methodology/Principal Findings

Here we show that crowding persists even when people are completely unaware of the flankers, which are rendered invisible through the continuous flash suppression technique. Contrast threshold for identifying the orientation of a grating pattern was elevated in the flanked condition, even when the subjects reported that they were unaware of the perceptually suppressed flankers. Moreover, we find that orientation-specific adaptation is attenuated by flankers even when both the target and flankers are invisible.

Conclusions

These findings complement the suggested correlation between crowding and visual awareness. What's more, our results demonstrate that conscious awareness and attention are not prerequisites for crowding.

18.

Background

The mechanisms of drug-induced visions are poorly understood. Very few serotonergic hallucinogens have been studied in humans in decades, despite widespread use of these drugs and potential relevance of their mechanisms to hallucinations occurring in psychiatric and neurological disorders.

Methodology/Principal Findings

We investigated the mechanisms of hallucinogen-induced visions by measuring the visual and perceptual effects of the hallucinogenic serotonin 5-HT2A receptor agonist and monoamine releaser 3,4-methylenedioxyamphetamine (MDA) in a double-blind placebo-controlled study. We found that MDA increased self-report measures of mystical-type experience and other hallucinogen-like effects, including reported visual alterations. MDA produced a significant increase in closed-eye visions (CEVs), with considerable individual variation. The magnitude of CEVs after MDA was associated with lower performance on measures of contour integration and object recognition.

Conclusions/Significance

Drug-induced visions may have greater intensity in people with poor sensory or perceptual processing, suggesting common mechanisms with other hallucinatory syndromes. MDA is a potential tool to investigate mystical experiences and visual perception.

Trial Registration

Clinicaltrials.gov NCT00823407

19.

Background

Social dominance and physical size are closely linked. Nonverbal dominance displays in many non-human species are known to increase the displayer's apparent size. Humans also employ a variety of nonverbal cues that increase apparent status, but it is not yet known whether these cues function via a similar mechanism: by increasing the displayer's apparent size.

Methodology/Principal Findings

We generated stimuli in which actors displayed high-status, neutral, or low-status cues that were drawn from the findings of a recent meta-analysis. We then conducted four studies indicating that nonverbal cues that increase apparent status do so by increasing the perceived size of the displayer. Experiment 1 demonstrated that nonverbal status cues affect perceivers' judgments of physical size. The results of Experiment 2 showed that altering simple perceptual cues can affect judgments of both size and perceived status. Experiment 3 used objective measurements to demonstrate that status cues change targets' apparent size in the two-dimensional plane visible to a perceiver, and Experiment 4 showed that changes in perceived size mediate changes in perceived status, and that the cue most associated with this phenomenon is postural openness.

Conclusions/Significance

We conclude that nonverbal cues associated with social dominance also affect the perceived size of the displayer. This suggests that certain nonverbal dominance cues in humans may function as they do in other species: by creating the appearance of changes in physical size.

20.

Background

Vision provides the most salient information with regard to stimulus motion, but audition can also provide important cues that affect visual motion perception. Here, we show that sounds containing no motion or positional cues can induce illusory visual motion perception for static visual objects.

Methodology/Principal Findings

Two circles placed side by side were presented in alternation, producing apparent motion perception, and each onset was accompanied by a tone burst of a specific and unique frequency. After exposure to this visual apparent motion with tones for a few minutes, the tones became drivers of illusory motion perception. When the flash onset was synchronized to tones of alternating frequencies, a circle blinking at a fixed location was perceived as moving laterally in the same direction as the previously exposed apparent motion. Furthermore, the effect lasted for at least a few days. The effect was observed best at the retinal position that had previously been exposed to apparent motion with tone bursts.

Conclusions/Significance

The present results indicate that a strong association between a sound sequence and visual motion is easily formed within a short period and that, once the association is formed, sounds are able to trigger visual motion perception for a static visual object.
