Similar Articles
20 similar articles found (search time: 15 ms)
1.
Current accounts of spatial cognition and human-object interaction suggest that the representation of peripersonal space depends on an action-specific system that remaps its representation according to action requirements. Here we demonstrate that this mechanism is sensitive to knowledge about the properties of objects. In two experiments we explored the interaction between physical distance and object attributes (functionality, desirability, graspability, etc.) through a reaching estimation task in which participants indicated whether objects were near enough to be reached. Using both a real and a cutting-edge digital scenario, we demonstrate that perceived reaching distance is influenced by ease of grasp and by the affective valence of an object. Objects with a positive affective valence tend to be perceived as reachable at locations at which neutral or negative objects are perceived as non-reachable. In addition, reaction times to distant (non-reachable) positive objects suggest a bias to perceive positive objects as closer than negative and neutral objects (Experiment 2). These results highlight the importance of the affective valence of objects in the action-specific mapping of the peripersonal/extrapersonal space system.

2.
The notion of body-based scaling suggests that our body and its action capabilities are used to scale the spatial layout of the environment. Here we present four studies supporting this perspective by showing that the hand acts as a metric that individuals use to scale the apparent sizes of objects in the environment. However, to test this, one must be able to manipulate the size and/or dimensions of the perceiver's hand, which is difficult in the real world because hand dimensions cannot readily be altered. To overcome this limitation, we used virtual reality to manipulate the dimensions of participants' fully tracked virtual hands and investigated the influence of these manipulations on the perceived size and shape of virtual objects. In a series of experiments, using several measures, we show that individuals' estimates of the sizes of virtual objects differ depending on the size of their virtual hand, in the direction consistent with the body-based scaling hypothesis. Additionally, we found that these effects were specific to participants' virtual hands rather than another avatar's hands or a salient familiar-sized object. While these studies provide support for a body-based approach to the scaling of spatial layout, they also demonstrate the influence of virtual bodies on the perception of virtual environments.

3.
When watching an actor manipulate objects, observers, like the actor, naturally direct their gaze to each object as the hand approaches and typically maintain gaze on the object until the hand departs. Here, we probed the function of observers' eye movements, focusing on two possibilities: (i) that observers' gaze behaviour arises from processes involved in predicting the target object of the actor's reaching movement, and (ii) that this gaze behaviour supports the evaluation of mechanical events arising from interactions between the actor's hand and objects. Observers watched an actor reach for and lift one of two presented objects. The observers' task was either to predict the target object or to judge its weight. Proactive gaze behaviour, similar to that seen in self-guided action, was seen in the weight judgement task, which requires evaluating mechanical events associated with lifting, but not in the target prediction task. We submit that an important function of gaze behaviour in action observation is the evaluation of mechanical events associated with interactions between the hand and the object. By comparing predicted and actual mechanical events, observers, like actors, can gain knowledge about the world, including information about objects they may subsequently act upon.

4.
Human beings have a strong tendency to imitate. Evidence from motor priming paradigms suggests that people automatically tend to imitate observed actions such as hand gestures by performing mirror-congruent movements (e.g., lifting one's right finger upon observing a left finger movement, from a mirror perspective). Many observed actions, however, do not require mirror-congruent responses but instead afford complementary (fitting) responses (e.g., handing over a cup; shaking hands). Crucially, whereas mirror-congruent responses do not require physical interaction with another person, complementary actions often do. Given that most experiments studying motor priming have used stimuli devoid of contextual information, this space- or interaction-dependency of complementary responses has not yet been assessed. To address this issue, we had participants perform a task in which they had to mirror or complement a hand gesture (fist or open hand) performed by an actor depicted either within or outside of reach. In three studies, we observed faster reaction times and fewer response errors for complementary relative to mirrored hand movements in response to open-hand gestures (i.e., 'hand shaking'), irrespective of the perceived interpersonal distance of the actor. This complementary effect could not be accounted for by a low-level spatial cueing effect. These results demonstrate that humans have a strong and automatic tendency to respond by performing complementary actions. In addition, our findings underline the limitations of manipulations of space in modulating effects of motor priming and the perception of affordances.

5.
A classical question in philosophy and psychology is whether the sense of one's body influences how one visually perceives the world. Several theoreticians have suggested that our own body serves as a fundamental reference in the visual perception of sizes and distances, although compelling experimental evidence for this hypothesis is lacking. In contrast, modern textbooks typically explain the perception of object size and distance by the combination of information from different visual cues. Here, we describe full-body illusions in which subjects experience ownership of a doll's body (80 cm or 30 cm) or a giant's body (400 cm), and we use these illusions as tools to demonstrate that the size of one's sensed own body directly influences the perception of object size and distance. These effects were quantified in ten separate experiments with complementary verbal, questionnaire, manual, walking, and physiological measures. When participants experienced the tiny body as their own, they perceived objects to be larger and farther away, and when they experienced the large-body illusion, they perceived objects to be smaller and nearer. Importantly, despite identical retinal input, this "body size effect" was greater when participants experienced a sense of ownership of the artificial bodies than in a control condition in which ownership was disrupted. These findings are fundamentally important as they suggest a causal relationship between the representations of body space and external space. Thus, our own body size affects how we perceive the world.
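The "identical retinal input" point can be made explicit with the textbook size–distance invariance relation (our formulation, not the authors'): for a fixed retinal angle, perceived size is tied to perceived distance.

```latex
% Size--distance invariance for a small retinal angle \theta:
\[
  S_{\mathrm{perceived}} \;\approx\; D_{\mathrm{perceived}} \cdot \tan\theta
  \;\approx\; D_{\mathrm{perceived}}\,\theta .
\]
% If owning a tiny body rescales the internal metric so that
% D_perceived increases while \theta is unchanged, the object must
% appear both farther away and larger -- the pattern reported for
% the doll-body illusion (and the reverse for the giant's body).
```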

6.
One of the major functions of vision is to allow for efficient and active interaction with the environment. In this study, we investigate the capacity of human observers to extract visual information from observation of their own actions, and those of others, from different viewpoints. Subjects discriminated the size of objects by observing a point-light movie of a hand reaching for an invisible object. We recorded real reach-and-grasp actions in three-dimensional space towards objects of different shape and size to produce two-dimensional 'point-light display' movies, which were used to measure size discrimination for reach-and-grasp motion sequences, release-and-withdraw sequences, and still frames, all in egocentric and allocentric perspectives. Visual size discrimination from action was significantly better in egocentric than in allocentric view, but only for reach-and-grasp motion sequences: release-and-withdraw sequences and still frames derived no advantage from egocentric viewing. The results suggest that the system may have access to an internal model of action that helps calibrate the visual sense of size for an accurate grasp.

7.
Enabled by the remarkable dexterity of the human hand, specialized haptic exploration is a hallmark of object perception by touch. Haptic exploration normally takes place in a three-dimensional spatial world; nevertheless, stimuli of reduced spatial dimensionality are also used to display spatial information. This paper examines the consequences of full (three-dimensional) versus reduced (two-dimensional) spatial dimensionality for object processing by touch, particularly in comparison with vision. We begin with perceptual recognition of common human-made artefacts, then extend our discussion of spatial dimensionality in touch and vision to faces, drawing on research on haptic recognition of facial identity and emotional expressions. Faces have often been characterized as constituting a specialized input for human perception. We find that, contrary to vision, haptic processing of common objects is impaired by reduced spatial dimensionality, whereas haptic face processing is not. We interpret these results in terms of fundamental differences in object perception across the modalities, particularly the special role of manual exploration in extracting three-dimensional structure.

8.
The spatial character of our reaching movements is extremely sensitive to potential obstacles in the workspace. We recently found that this sensitivity was retained by most patients with left visual neglect when reaching between two objects, despite the fact that they tended to ignore the leftward object when asked to bisect the space between them. This raises the possibility that obstacle avoidance does not require a conscious awareness of the obstacle avoided. We have now tested this hypothesis in a patient with visual extinction following right temporoparietal damage. Extinction is an attentional disorder in which patients fail to report stimuli on the side of space opposite a brain lesion under conditions of bilateral stimulation. Our patient avoided obstacles during reaching, to exactly the same degree, regardless of whether he was able to report their presence. This implicit processing of object location, which may depend on spared superior parietal-lobe pathways, demonstrates that conscious awareness is not necessary for normal obstacle avoidance.

9.
Effective vision for action and effective management of concurrent spatial relations underlie skillful manipulation of objects, including hand tools, in humans. Children's performance in object insertion tasks (fitting tasks) provides one index of the striking changes in the development of vision for action in early life. Fitting tasks also tap children's ability to work with more than one feature of an object concurrently. We examine young children's performance on fitting tasks in two and three dimensions and compare it with the previously reported performance of adults of two species of nonhuman primates on similar tasks. Two-, three-, and four-year-old children routinely aligned a bar-shaped stick and a cross-shaped stick but had difficulty aligning a tomahawk-shaped stick with a matching cut-out. Two-year-olds were especially challenged by the tomahawk. Three- and four-year-olds occasionally held the stick several inches above the surface, comparing the stick to the surface visually, while trying to align it. The findings suggest asynchronous development in the ability to use vision to achieve alignment and the ability to work with two and three spatial features concurrently. Using vision to align objects precisely with other objects and managing more than one spatial relation between an object and a surface are already more elaborated in two-year-old humans than in other primates. The human advantage in using hand tools derives in part from this fundamental difference between humans and other primates in the relation between vision and action.

10.
MA Plaisier, JB Smeets. PLoS ONE 2012, 7(8): e42518
An object in outer space is weightless due to the absence of gravity, but astronauts can still judge whether one object is heavier than another by accelerating the object. How heavy an object feels depends on the exploration mode: an object is perceived as heavier when held against the pull of gravity than when accelerated. At the same time, perceiving an object's size influences the percept: small objects feel heavier than large objects with the same mass (the size–weight illusion). Does this effect depend on perception of the pull of gravity? To answer this question, objects were suspended from a long wire and participants were asked to push an object and rate its heaviness. In this way the contribution of gravitational forces to the percept was minimised. Our results show that weight perception is not necessary for the illusion: the size–weight illusion occurred without any perception of weight. The magnitude of the illusion was independent of whether inertial or gravitational forces were perceived. We conclude that the size–weight illusion does not depend on prior knowledge about the weights of objects, but instead on more general knowledge about the mass of objects, independent of the contribution of gravity. Consequently, the size–weight illusion should have the same magnitude on Earth as on the Moon, or even under conditions of weightlessness.
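The logic of the suspended-object manipulation reduces to elementary mechanics (our notation, not the authors'): holding probes the gravitational force on the object, pushing probes the inertial force, and both are proportional to its mass.

```latex
\[
  W = m g \quad \text{(holding against gravity)}, \qquad
  F = m a \quad \text{(accelerating the suspended object)} .
\]
% Either force yields a mass estimate, m = W/g or m = F/a.
% An illusion driven by expectations about mass can therefore
% persist when the gravitational term W is removed from the
% percept, which is what the suspended-object results show.
```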

11.
Binocular vision is obviously useful for depth perception, but it might also enhance other components of visual processing, such as image segmentation. We used naturalistic images to determine whether giving an object a stereoscopic offset of 15-120 arcmin of crossed disparity relative to its background would make the object easier to recognize in briefly presented (33-133 ms), temporally masked displays. Disparity had a beneficial effect across a wide range of disparities and display durations. Most of this benefit occurred whether or not the stereoscopic contour agreed with the object’s luminance contour. We attribute this benefit to an orienting of spatial attention that selected the object and its local background for enhanced 2D pattern processing. At longer display durations, contour agreement provided an additional benefit, and a separate experiment using random-dot stimuli confirmed that stereoscopic contours plausibly contributed to recognition at the longer display durations in our experiment. We conclude that in real-world situations binocular vision confers an advantage not only for depth perception, but also for recognizing objects from their luminance patterns and bounding contours.
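To get a feel for the depth offsets these disparities correspond to, the small-angle stereo relation δ ≈ IΔ/D² can be inverted. The 6.5 cm interocular distance and 50 cm viewing distance below are illustrative assumptions, not values taken from the study:

```python
import math

def depth_from_disparity(disparity_arcmin, viewing_distance_m=0.5,
                         interocular_m=0.065):
    """Approximate depth offset (m) in front of fixation for a crossed
    disparity, via the small-angle relation delta ~ I * d / D**2."""
    delta_rad = math.radians(disparity_arcmin / 60.0)  # arcmin -> radians
    return delta_rad * viewing_distance_m ** 2 / interocular_m

for arcmin in (15, 120):
    print(f"{arcmin:4d} arcmin -> {100 * depth_from_disparity(arcmin):.1f} cm")
```

Under these assumptions, the 15-120 arcmin range corresponds to depth offsets of roughly 1.7 cm to 13.4 cm in front of the background.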

12.
In order to identify basic aspects in the process of tactile perception, we trained rats and humans in similar object localization tasks and compared the strategies used by the two species. We found that rats integrated temporally related sensory inputs ('temporal inputs') from early whisk cycles with spatially related inputs ('spatial inputs') to align their whiskers with the objects; their perceptual reports appeared to be based primarily on this spatial alignment. In a similar manner, human subjects also integrated temporal and spatial inputs, but relied mainly on temporal inputs for object localization. These results suggest that during tactile object localization, an iterative motor-sensory process gradually converges on a stable percept of object location in both species.

13.
The introduction of non-target objects into a workspace leads to temporal and spatial adjustments of reaching trajectories towards a target. If the non-target obstructs the path of the hand towards the target, the reach is adjusted so that collision with the non-target, or obstacle, is avoided. Little is known about the influence on avoidance movements of features that are irrelevant to the execution of the movement, such as color similarity between target and non-target objects. Eye movement studies have shown that the similarity of non-targets influences oculomotor competition. Because of the tight neural and behavioral coupling between the gaze and reaching systems, our aim was to determine the contribution of similarity between target and non-target to avoidance movements. We performed two experiments in which participants had to reach to grasp a target object while a non-target was present in the workspace. These non-targets could be either similar or dissimilar in color to the target. The results indicate that a non-spatial feature, similarity, can further modify the avoidance response and therefore the spatial path of the reach. Indeed, we find that dissimilar pairs have a stronger effect on reaching-to-grasp movements than similar pairs. This effect was most pronounced when the non-target was on the outside of the reaching hand, where it served as more of an obstacle to the trailing arm. We propose that the increased capture of attention by the dissimilar obstacle is responsible for the more robust avoidance response.

14.
Flexible representations of dynamics are used in object manipulation
To manipulate an object skillfully, the brain must learn its dynamics, specifying the mapping between applied force and motion. A fundamental issue in sensorimotor control is whether such dynamics are represented in an extrinsic frame of reference tied to the object or an intrinsic frame of reference linked to the arm. Although previous studies have suggested that objects are represented in arm-centered coordinates [1-6], all of these studies have used objects with unusual and complex dynamics. Thus, it is not known how objects with natural dynamics are represented. Here we show that objects with simple (or familiar) dynamics and those with complex (or unfamiliar) dynamics are represented in object- and arm-centered coordinates, respectively. We also show that objects with simple dynamics are represented with an intermediate coordinate frame when vision of the object is removed. These results indicate that object dynamics can be flexibly represented in different coordinate frames by the brain. We suggest that with experience, the representation of the dynamics of a manipulated object may shift from a coordinate frame tied to the arm toward one that is linked to the object. The additional complexity required to represent dynamics in object-centered coordinates would be economical for familiar objects because such a representation allows object use regardless of the orientation of the object in hand.

15.
Space and time are intimately coupled dimensions in the human brain. Several lines of evidence suggest that space and time are processed by a shared analogue magnitude system, and it has been proposed that actions are instrumental in establishing this shared system. Here we provide evidence in support of this hypothesis by showing that the interaction between space and time is enhanced when magnitude information is acquired through action. Participants observed increases or decreases in the height of a visual bar (spatial magnitude) while judging whether a simultaneously presented sequence of acoustic tones had accelerated or decelerated (temporal magnitude). In one condition (Action), participants directly controlled the changes in bar height with a hand-grip device, whereas in the other (No Action), changes in bar height were externally controlled but matched the spatial/temporal profile of the Action condition. The sign of the changes in bar height biased the perceived rate of the tone sequences: increases in bar height produced apparent increases in tone rate. This effect was amplified when the visual bar was actively controlled in the Action condition, and the strength of the interaction scaled with the magnitude of the action. Subsequent experiments ruled out the possibility that this was simply explained by attentional factors, and additionally showed that a monotonic mapping between grip force and bar height is required to bias the perception of the tones. These data support an instrumental role of action in interfacing spatial and temporal quantities in the brain.

16.
Behavioural studies on normal and brain-damaged individuals provide convincing evidence that the perception of objects results in the generation of both visual and motor signals in the brain, irrespective of whether or not there is an intention to act upon the object. In this paper we sought to determine the basis of the motor signals generated by visual objects. By examining how the properties of an object affect an observer's reaction time for judging its orientation, we provide evidence to indicate that directed visual attention is responsible for the automatic generation of motor signals associated with the spatial characteristics of perceived objects.

17.
Schema design and implementation of the grasp-related mirror neuron system
Mirror neurons within a monkey's premotor area F5 fire not only when the monkey performs a certain class of actions but also when the monkey observes another monkey (or the experimenter) perform a similar action. It has thus been argued that these neurons are crucial for understanding the actions of others. We offer the hand-state hypothesis as a new explanation of the evolution of this capability: the basic functionality of the F5 mirror system is to elaborate the appropriate feedback – what we call the hand state – for opposition-space-based control of manual grasping of an object. Given this functionality, the social role of the F5 mirror system in understanding the actions of others may be seen as an exaptation gained by generalizing from one's own hand to another's hand. In other words, mirror neurons first evolved to augment the "canonical" F5 neurons (active during self-movement based on observation of an object) by providing visual feedback on the "hand state", relating the shape of the hand to the shape of the object. We then introduce the MNS1 (mirror neuron system 1) model of F5 and related brain regions. The existing Fagg–Arbib–Rizzolatti–Sakata model represents circuitry for visually guided grasping of objects, linking the anterior intraparietal area (AIP) with F5 canonical neurons. The MNS1 model extends the AIP visual pathway by also modeling pathways, directed toward F5 mirror neurons, which match arm–hand trajectories to the affordances and location of a potential target object. We present the basic schemas of the MNS1 model, then aggregate them into three "grand schemas" – visual analysis of hand state, reach and grasp, and the core mirror circuit – for each of which we present a useful implementation (a non-neural visual processing system, a multijoint 3-D kinematics simulator, and a learning neural network, respectively).
With this implementation we show how the mirror system may learn to recognize actions already in the repertoire of the F5 canonical neurons. We show that the connectivity pattern of mirror neuron circuitry can be established through training, and that the resultant network can exhibit a range of novel, physiologically interesting behaviors during the process of action recognition. We train the system on the basis of the final grasp but then observe the whole time course of mirror neuron activity, yielding predictions for neurophysiological experiments under conditions of spatial perturbation, altered kinematics, and ambiguous grasp execution, which highlight the importance of the timing of mirror neuron activity. Received: 6 August 2001 / Accepted in revised form: 5 February 2002
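The flavour of the learning component can be sketched in code. The toy below is emphatically not the MNS1 model (a multi-schema neural architecture): it is a minimal illustration of learning to label an observed reach from a "hand state" trajectory, in which the trajectory generator, the features, the grasp classes, and the perceptron learner are all invented for the example.

```python
import random

random.seed(1)

def synthetic_trajectory(grasp):
    """Hand aperture (cm) over 10 time steps; power grasps open wider."""
    peak = random.uniform(10, 12) if grasp == "power" else random.uniform(2, 4)
    return [peak * t / 9 for t in range(10)]  # simple ramp up to the peak

def features(traj):
    # Peak aperture and net aperture change, scaled to O(1), plus a bias term.
    return [max(traj) / 10, (traj[-1] - traj[0]) / 10, 1.0]

def train(n=200, epochs=30, lr=0.1):
    labels = ["power" if i % 2 == 0 else "precision" for i in range(n)]
    data = [(lab, features(synthetic_trajectory(lab))) for lab in labels]
    w = [0.0, 0.0, 0.0]
    for _ in range(epochs):
        for label, x in data:
            target = 1 if label == "power" else -1
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) > 0 else -1
            if pred != target:  # perceptron: update on mistakes only
                w = [wi + lr * target * xi for wi, xi in zip(w, x)]
    return w

def classify(w, traj):
    score = sum(wi * xi for wi, xi in zip(w, features(traj)))
    return "power" if score > 0 else "precision"

w = train()
print(classify(w, [11.0 * t / 9 for t in range(10)]))  # a wide-aperture reach
```

With the fixed seed, the trained classifier labels the wide-aperture reach "power". The MNS1 point that the final code only gestures at is richer: the trained network is probed with the whole time course of a trajectory, not just its summary features.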

18.

Background

The observation of conspecifics influences our bodily perceptions and actions: contagious yawning, contagious itching, and empathy for pain are all examples of mechanisms based on resonance between our own body and those of others. While there is evidence for the involvement of the mirror neuron system in the processing of motor, auditory, and tactile information, it has not yet been associated with the perception of self-motion.

Methodology/Principal Findings

We investigated whether viewing our own body, the body of another, or an object in motion influences self-motion perception. We found a visual-vestibular congruency effect for self-motion perception when observing self and object motion, and a reduction in this effect when observing someone else's body in motion. The congruency effect was correlated with empathy scores, revealing the importance of empathy in mirroring mechanisms.

Conclusions/Significance

The data show that vestibular perception is modulated by agent-specific mirroring mechanisms. The observation of conspecifics in motion is an essential component of social life, and self-motion perception is crucial for the distinction between the self and the other. Finally, our results hint at the presence of a “vestibular mirror neuron system”.

19.
Damage to the human parietal cortex leads to disturbances of spatial perception and of motor behaviour. Within the parietal lobe, lesions of the superior and of the inferior lobule induce quite different, characteristic deficits. Patients with inferior (predominantly right) parietal lobe lesions fail to explore the contralesional part of space by eye or limb movements (spatial neglect). In contrast, superior parietal lobe lesions lead to specific impairments of goal-directed movements (optic ataxia). The observations reported in this paper support the view of dissociated functions represented in the inferior and the superior lobule of the human parietal cortex. They suggest that a spatial reference frame for exploratory behaviour is disturbed in patients with neglect. Data from these patients' visual search argue that their failure to explore the contralesional side is due to a disturbed input transformation leading to a deviation of egocentric space representation to the ipsilesional side. The data further show that this deviation follows a rotation around the earth-vertical body axis to the ipsilesional side rather than a translation towards that side. The results are in clear contrast to explanations that assume a lateral gradient ranging from a minimum of exploration in the extreme contralesional hemispace to a maximum in the extreme ipsilesional hemispace. Moreover, the failure to orient towards and to explore the contralesional part of space appears to be distinct from those deficits observed once an object of interest has been located and reaching is triggered. Although patients with neglect exhibit a severe bias of exploratory movements, their hand trajectories to targets in peripersonal space may follow a straight path. This result suggests that (i) exploratory and (ii) goal-directed behaviour in space do not share the same neural control mechanisms.
Neural representation of space in the inferior parietal lobule seems to serve as a matrix for spatial exploration and for orienting in space, but not for the visuomotor processes involved in reaching for objects. Disturbances of such processes appear instead to be prominent in patients with more superior parietal lobe lesions and optic ataxia.

20.
In this article we review the current literature on cross-modal recognition and present new findings from our studies on object and scene recognition. Specifically, we address two questions: what is the nature of the representation underlying each sensory system that facilitates convergence across the senses, and how is perception modified by the interaction of the senses? In the first set of experiments, the recognition of unfamiliar objects within and across the visual and haptic modalities was investigated under conditions of changes in orientation (0° or 180°). An orientation change increased recognition errors within each modality, but this effect was reduced across modalities. Our results suggest that cross-modal representations of objects are mediated by surface-dependent representations. In a second series of experiments, we investigated how spatial information is integrated across modalities and viewpoint, using scenes of familiar 3D objects as stimuli. We found that scene recognition performance was less efficient when there was either a change in modality or a change in orientation between learning and test. Furthermore, haptic learning was selectively disrupted by a verbal interpolation task. Our findings are discussed with reference to the separate spatial encoding of visual and haptic scenes. We conclude by discussing a number of constraints under which cross-modal integration is optimal for object recognition. These constraints include the nature of the task and the amount of spatial and temporal congruency of information across the modalities.

