Similar Documents
20 similar documents found.
1.
Noninformative vision improves haptic spatial perception
Previous studies have attempted to map somatosensory space via haptic matching tasks and have shown that individuals make large and systematic matching errors, the magnitude and angular direction of which vary systematically through the workspace. Based upon such demonstrations, it has been suggested that haptic space is non-Euclidean. This conclusion assumes that spatial perception is modality specific, and it largely ignores the fact that tactile matching tasks involve active, exploratory arm movements. Here we demonstrate that, when individuals match two bar stimuli (i.e., make them parallel) in circumstances favoring extrinsic (visual) coordinates, providing noninformative visual information significantly increases the accuracy of haptic perception. In contrast, when individuals match the same bar stimuli in circumstances favoring the coding of movements in intrinsic (limb-based) coordinates, providing identical noninformative visual information either has no effect or leads to decreased accuracy of haptic perception. These results are consistent with optimal integration models of sensory integration, in which the weighting given to visual and somatosensory signals depends upon the precision of the visual and somatosensory information, and they provide important evidence for the task-dependent integration of visual and somatosensory signals during the construction of a representation of peripersonal space.
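For reference, the optimal integration models invoked here typically take the maximum-likelihood form below, with generic symbols (unimodal estimates and noise standard deviations not drawn from this study): the combined estimate weights each unimodal estimate by its relative precision,

\[
\hat{s}_{VH} = w_V\,\hat{s}_V + w_H\,\hat{s}_H,\qquad
w_V = \frac{1/\sigma_V^{2}}{1/\sigma_V^{2} + 1/\sigma_H^{2}},\qquad
w_H = 1 - w_V ,
\]

so the less noisy signal receives the larger weight.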

2.
Multisensory integration is a common feature of the mammalian brain that allows it to deal more efficiently with the ambiguity of sensory input by combining complementary signals from several sensory sources. Growing evidence suggests that multisensory interactions can occur as early as primary sensory cortices. Here we present incompatible visual signals (orthogonal gratings) to each eye to create visual competition between monocular inputs in primary visual cortex, where binocular combination would normally take place. The incompatibility prevents binocular fusion and triggers an ambiguous perceptual response in which the two images are perceived one at a time in an irregular alternation. One key function of multisensory integration is to minimize perceptual ambiguity by exploiting cross-sensory congruence. We show that a haptic signal matching one of the visual alternatives helps disambiguate visual perception during binocular rivalry, both by prolonging the dominance period of the congruent visual stimulus and by shortening its suppression period. Importantly, this interaction is strictly tuned for orientation, with a mismatch as small as 7.5° between visual and haptic orientations sufficient to annul the interaction. These results support two important conclusions: first, that vision and touch interact at early levels of visual processing where interocular conflicts are first detected and orientation tunings are narrow, and second, that haptic input can influence visual signals outside of visual awareness, bringing a stimulus made invisible by binocular rivalry suppression back to awareness sooner than would occur without congruent haptic input.

3.
Meng X, Zaidi Q. PLoS ONE 2011, 6(5): e19877
Vision generally provides reliable predictions for touch and motor control, but some classes of stimuli evoke visual illusions. Using haptic feedback on virtual 3-D surfaces, we tested the function of touch in such cases. Our experiments show that in the perception of 3-D shapes from texture cues, haptic information can dominate vision in some cases, changing percepts qualitatively from convex to concave and from concave to slant. The effects take time to develop, do not outlive the cessation of the feedback, are attenuated by distance, and are drastically reduced by gaps in the surface. These dynamic shifts in qualitatively perceived shape could be invaluable in neural investigations that test whether haptic feedback modifies selective activation of neurons or changes the shape-tuning of neurons responsible for percepts of 3-D shapes.

4.
Schaefer M, Heinze HJ, Rotte M. PLoS ONE 2012, 7(8): e42308

Background

An increasing body of evidence has demonstrated that, in contrast to the classic understanding, the primary somatosensory cortex (SI) responds to merely seen touch (in the absence of any real touch on one's own body). Based on these results, it has been proposed that SI may play a role in understanding touch seen on other bodies. In order to further examine this understanding of observed touch, the current study aimed to test whether mirror-like responses in SI are affected by the perspective of the seen touch. Thus, we presented touch on a hand and close to the hand, either from a first-person perspective or from a third-person perspective.

Principal Findings

Results of functional magnetic resonance imaging (fMRI) revealed stronger vicarious brain responses in SI/BA2 for touch seen from a first-person perspective. Surprisingly, the third-person viewpoint revealed activation in SI both when subjects viewed a hand being stimulated and when the space close to the hand was being touched.

Conclusions/Significance

Based on these results, we conclude that vicarious somatosensory responses in SI/BA2 are affected by the viewpoint of the seen hand. Furthermore, we argue that mirror-like responses in SI reflect not only seen touch but also the peripersonal space surrounding the seen body (in third-person perspective). We discuss these findings in relation to recent studies on mirror responses for action observation in peripersonal space.

5.
Young children do not integrate visual and haptic form information
Several studies have shown that adults integrate visual and haptic information (and information from other modalities) in a statistically optimal fashion, weighting each sense according to its reliability [1, 2]. When does this capacity for crossmodal integration develop? Here, we show that prior to 8 years of age, integration of visual and haptic spatial information is far from optimal, with either vision or touch dominating totally, even in conditions in which the dominant sense is far less precise than the other (assessed by discrimination thresholds). For size discrimination, haptic information dominates in determining both perceived size and discrimination thresholds, whereas for orientation discrimination, vision dominates. By 8-10 years, the integration becomes statistically optimal, as in adults. We suggest that during development, perceptual systems require constant recalibration, for which cross-sensory comparison is important. Using one sense to calibrate the other precludes useful combination of the two sources.
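As a concrete reference for the optimality criterion used in studies like this one, here is a minimal Python sketch of the standard maximum-likelihood prediction: unimodal discrimination thresholds determine both the cue weights and the best achievable bimodal threshold. The threshold values below are invented for illustration and are not data from the study.

```python
import numpy as np

def optimal_integration(sigma_v, sigma_h):
    """Predict reliability weights and the bimodal threshold expected
    from statistically optimal (maximum-likelihood) integration."""
    w_v = sigma_h**2 / (sigma_v**2 + sigma_h**2)   # visual weight
    w_h = 1.0 - w_v                                # haptic weight
    sigma_vh = np.sqrt((sigma_v**2 * sigma_h**2) / (sigma_v**2 + sigma_h**2))
    return w_v, w_h, sigma_vh

# Hypothetical unimodal size-discrimination thresholds (arbitrary units):
w_v, w_h, sigma_vh = optimal_integration(sigma_v=4.0, sigma_h=2.0)
print(f"visual weight = {w_v:.2f}, haptic weight = {w_h:.2f}, "
      f"predicted bimodal threshold = {sigma_vh:.2f}")
# Optimal integration predicts a bimodal threshold at least as good as the
# better single cue; total dominance by one sense, as reported for young
# children, violates this prediction.
```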

6.
Enabled by the remarkable dexterity of the human hand, specialized haptic exploration is a hallmark of object perception by touch. Haptic exploration normally takes place in a spatial world that is three-dimensional; nevertheless, stimuli of reduced spatial dimensionality are also used to display spatial information. This paper examines the consequences of full (three-dimensional) versus reduced (two-dimensional) spatial dimensionality for object processing by touch, particularly in comparison with vision. We begin with perceptual recognition of common human-made artefacts, then extend our discussion of spatial dimensionality in touch and vision to include faces, drawing from research on haptic recognition of facial identity and emotional expressions. Faces have often been characterized as constituting a specialized input for human perception. We find that contrary to vision, haptic processing of common objects is impaired by reduced spatial dimensionality, whereas haptic face processing is not. We interpret these results in terms of fundamental differences in object perception across the modalities, particularly the special role of manual exploration in extracting a three-dimensional structure.

7.

Background

Visually determining what is reachable in peripersonal space requires not only information about the egocentric location of objects but also information about the possibilities of action with the body, which are context dependent. The aim of the present study was to test the role of motor representations in the visual perception of peripersonal space.

Methodology

Seven healthy participants underwent a TMS study while performing a right-left decision (control) task or perceptually judging whether a visual target was reachable or not with their right hand. An actual grasping movement task was also included. Single-pulse TMS was delivered on 80% of the trials over the left motor and premotor cortex and over a control site (the temporo-occipital area), at 90% of the resting motor threshold and at different SOAs (50 ms, 100 ms, 200 ms, or 300 ms).

Principal Findings

Results showed a facilitation effect of TMS on reaction times in all tasks, whatever the site stimulated, up to 200 ms after stimulus presentation. However, the facilitation effect was on average 34 ms smaller when the motor cortex was stimulated in the perceptual judgement task, especially for stimuli located at the boundary of peripersonal space.

Conclusion

This study provides the first evidence that brain motor areas participate in the visual determination of what is reachable. We discuss how motor representations may feed the perceptual system with information about possible interactions with nearby objects and thus may contribute to the perception of the boundary of peripersonal space.

8.
Stimuli from different sensory modalities are thought to be processed initially in distinct unisensory brain areas prior to convergence in multisensory areas. However, signals in one modality can influence the processing of signals from other modalities, and recent studies suggest this cross-modal influence may occur early on, even in ‘unisensory’ areas. Some recent psychophysical studies have shown specific cross-modal effects between touch and vision during binocular rivalry, but these cannot completely rule out a response bias. To test for genuine cross-modal integration of haptic and visual signals, we investigated whether congruent haptic input could influence visual contrast sensitivity compared to incongruent haptic input, in three psychophysical experiments using a two-interval, two-alternative forced-choice method to eliminate response bias. The initial experiment demonstrated that contrast thresholds for a visual grating were lower when exploring a haptic grating that shared the same orientation compared to an orthogonal orientation. Two subsequent experiments mapped the orientation and spatial frequency tunings for the congruent haptic facilitation of vision, finding a clear orientation tuning effect but no spatial frequency tuning. In addition to an increased contrast sensitivity for iso-oriented visual-haptic gratings, we found a significant loss of sensitivity for orthogonally oriented visual-haptic gratings. We conclude that the tactile influence on vision is a result of a tactile input to orientation-tuned visual areas.

9.
People learn modality-independent, conceptual representations from modality-specific sensory signals. Here, we hypothesize that any system that accomplishes this feat will include three components: a representational language for characterizing modality-independent representations, a set of sensory-specific forward models for mapping from modality-independent representations to sensory signals, and an inference algorithm for inverting forward models—that is, an algorithm for using sensory signals to infer modality-independent representations. To evaluate this hypothesis, we instantiate it in the form of a computational model that learns object shape representations from visual and/or haptic signals. The model uses a probabilistic grammar to characterize modality-independent representations of object shape, uses a computer graphics toolkit and a human hand simulator to map from object representations to visual and haptic features, respectively, and uses a Bayesian inference algorithm to infer modality-independent object representations from visual and/or haptic signals. Simulation results show that the model infers identical object representations when an object is viewed, grasped, or both. That is, the model’s percepts are modality invariant. We also report the results of an experiment in which different subjects rated the similarity of pairs of objects in different sensory conditions, and show that the model provides a very accurate account of subjects’ ratings. Conceptually, this research significantly contributes to our understanding of modality invariance, an important type of perceptual constancy, by demonstrating how modality-independent representations can be acquired and used. Methodologically, it provides an important contribution to cognitive modeling, particularly an emerging probabilistic language-of-thought approach, by showing how symbolic and statistical approaches can be combined in order to understand aspects of human perception.
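To make the described architecture concrete, here is a deliberately toy Python sketch of the analysis-by-synthesis idea: a small, discrete set of shape hypotheses is scored by how well hand-written "forward" feature predictions match observed visual and/or haptic features, then normalized with Bayes' rule. The hypotheses, feature vectors and noise level are all invented; the actual model uses a probabilistic shape grammar, a graphics renderer and a hand simulator rather than lookup tables.

```python
import numpy as np

# Candidate modality-independent shape hypotheses and a flat prior.
hypotheses = ["shape_A", "shape_B", "shape_C"]
prior = np.array([1/3, 1/3, 1/3])

# Invented forward predictions: expected visual / haptic feature vectors.
forward_visual = {"shape_A": [1.0, 0.2], "shape_B": [0.4, 0.9], "shape_C": [0.7, 0.7]}
forward_haptic = {"shape_A": [0.8], "shape_B": [0.1], "shape_C": [0.4]}

def gaussian_likelihood(pred, obs, sigma=0.2):
    """Likelihood of the observation given a forward prediction (iid Gaussian noise)."""
    pred, obs = np.asarray(pred, float), np.asarray(obs, float)
    return np.exp(-np.sum((pred - obs) ** 2) / (2 * sigma**2))

def posterior(visual_obs=None, haptic_obs=None):
    """Invert the forward models: posterior over hypotheses given any subset of signals."""
    like = np.ones(len(hypotheses))
    for i, h in enumerate(hypotheses):
        if visual_obs is not None:
            like[i] *= gaussian_likelihood(forward_visual[h], visual_obs)
        if haptic_obs is not None:
            like[i] *= gaussian_likelihood(forward_haptic[h], haptic_obs)
    post = prior * like
    return post / post.sum()

# The same (modality-independent) hypothesis should win whether the object
# is seen, grasped, or both:
print(posterior(visual_obs=[0.95, 0.25]))
print(posterior(haptic_obs=[0.75]))
print(posterior(visual_obs=[0.95, 0.25], haptic_obs=[0.75]))
```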

10.
Brain areas exist that appear to be specialized for the coding of visual space surrounding the body (peripersonal space). In marked contrast to neurons in earlier visual areas, cells have been reported in parietal and frontal lobes that effectively respond only when visual stimuli are located in spatial proximity to a particular body part (for example, face, arm or hand) [1-4]. Despite several single-cell studies, the representation of near visual space has scarcely been investigated in humans. Here we focus on the neuropsychological phenomenon of visual extinction following unilateral brain damage. Patients with this disorder may respond well to a single stimulus in either visual field; however, when two stimuli are presented concurrently, the contralesional stimulus is disregarded or poorly identified. Extinction is commonly thought to reflect a pathological bias in selective vision favoring the ipsilesional side under competitive conditions, as a result of the unilateral brain lesion [5-7]. We examined a parietally damaged patient (D.P.) to determine whether visual extinction is modulated by the position of the hands in peripersonal space. We measured the severity of visual extinction in a task which held constant visual and spatial information about stimuli, while varying the distance between hands and stimuli. We found that selection in the affected visual field was remarkably more efficient when visual events were presented in the space near the contralesional finger than far from it. However, the amelioration of extinction dissolved when hands were covered from view, implying that the effect of hand position was not mediated purely through proprioception. These findings illustrate the importance of the spatial relationship between hand position and object location for the internal construction of visual peripersonal space in humans.

11.
Categorization and categorical perception have been extensively studied, mainly in vision and audition. In the haptic domain, our ability to categorize objects has also been demonstrated in earlier studies. Here we show for the first time that categorical perception also occurs in haptic shape perception. We generated a continuum of complex shapes by morphing between two volumetric objects. Using similarity ratings and multidimensional scaling we ensured that participants could haptically discriminate all objects equally. Next, we performed classification and discrimination tasks. After a short training with the two shape categories, both tasks revealed categorical perception effects. Training leads to between-category expansion resulting in higher discriminability of physical differences between pairs of stimuli straddling the category boundary. Thus, even brief training can alter haptic representations of shape. This suggests that the weights attached to various haptic shape features can be changed dynamically in response to top-down information about class membership.

12.
Biophysical Journal 2022, 121(23): 4740-4747
Touch allows us to gather abundant information about the world around us. However, how sensory cells embedded in the fingers convey texture information in their firing patterns is still poorly understood. Here, we develop an electromechanical model of roughness perception by incorporating key ingredients such as voltage-gated ion channels, active ion pumps, mechanosensitive channels, and cell deformation. The model reveals that sensory cells can convey texture wavelengths in the period of their firing patterns as the finger slides across object surfaces, but that they can only convey a limited range of texture wavelengths. We also show that an increase in sliding speed broadens the decoding wavelength range at the cost of a reduction of lower perception limits. Thus, a smaller sliding speed and a bigger contact force may be needed to successfully discern a smooth surface, consistent with previous psychophysical observations. Moreover, we show that cells with slowly adapting mechanosensitive channels can still fire action potentials under static loading, indicating that slowly adapting mechanosensitive channels may contribute to the perception of coarse textures under static touch. Our work thus provides a new theoretical framework for studying roughness perception and may have important implications for the design of electronic skin, artificial touch, and haptic interfaces.
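The wavelength-to-period mapping and the speed trade-off described here can be illustrated with a much simpler phase-locking argument than the paper's electromechanical model; the sketch below only assumes one spike (or burst) per texture cycle, and the refractory period and minimum usable cycle rate are invented illustrative numbers, not parameters from the study.

```python
# Schematic relation between texture wavelength, sliding speed and the
# period of a phase-locked firing pattern (one spike per texture cycle).

def firing_period(wavelength_mm, speed_mm_s):
    """Stimulus cycle period in seconds (and the locked firing period): T = wavelength / speed."""
    return wavelength_mm / speed_mm_s

def decodable_range(speed_mm_s, refractory_s=5e-3, f_min_hz=2.0):
    """Wavelengths whose cycle frequency speed/wavelength lies between the cell's
    maximum phase-locked rate (1/refractory) and a minimum usable rate."""
    lambda_min = speed_mm_s * refractory_s   # faster cycles cannot be followed
    lambda_max = speed_mm_s / f_min_hz       # slower cycles fall below the usable rate
    return lambda_min, lambda_max

for v in (10.0, 40.0):   # sliding speeds in mm/s (illustrative values)
    lo, hi = decodable_range(v)
    print(f"speed {v} mm/s: decodable wavelengths ~{lo:.2f}-{hi:.1f} mm, "
          f"firing period at 1 mm = {firing_period(1.0, v) * 1000:.1f} ms")
```

In this toy picture a higher sliding speed widens the decodable wavelength band while also shifting its fine-texture end upward, qualitatively matching the speed trade-off stated in the abstract.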

13.
To react efficiently to potentially threatening stimuli, we have to be able to localize these stimuli in space. In daily life we are constantly moving, so that our limbs may end up positioned in the opposite side of space. Therefore, a somatotopic frame of reference is insufficient to localize nociceptive stimuli. Here we investigated whether nociceptive stimuli are mapped into a spatiotopic frame of reference, and more specifically a peripersonal frame of reference, which takes into account the position of the body limbs in external space, as well as the occurrence of external objects presented near the body. Two temporal order judgment (TOJ) experiments were conducted, during which participants had to decide which of two nociceptive stimuli, one applied to either hand, had been presented first, while their hands were either uncrossed or crossed over the body midline. The occurrence of the nociceptive stimuli was cued by uninformative visual cues. We found that the visual cues prioritized the perception of nociceptive stimuli applied to the hand lying in the cued side of space, irrespective of posture. Moreover, the influence of the cues was smaller when they were presented far in front of the participants’ hands than when they were presented in close proximity. Finally, participants’ temporal sensitivity was reduced by changing posture. These findings are compatible with the existence of a peripersonal frame of reference for the localization of nociceptive stimuli. This allows for the construction of a stable representation of our body and the space closely surrounding our body, enabling a quick and efficient reaction to potential physical threats.
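For readers unfamiliar with TOJ analysis, a conventional way to quantify such cueing effects is to fit a cumulative Gaussian to the proportion of "cued-hand-first" responses as a function of SOA; its mean (the point of subjective simultaneity, PSS) indexes the prioritization of the cued hand and its standard deviation indexes temporal sensitivity. The sketch below uses invented data points and is not the authors' analysis code.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

# SOA convention: positive values mean the stimulus on the cued hand was
# presented first by that many milliseconds. Proportions are invented.
soa_ms = np.array([-90, -60, -30, 0, 30, 60, 90])
p_cued_first = np.array([0.08, 0.18, 0.40, 0.62, 0.81, 0.93, 0.97])

def cum_gauss(soa, pss, sigma):
    """Psychometric function: probability of reporting the cued hand first."""
    return norm.cdf(soa, loc=pss, scale=sigma)

(pss, sigma), _ = curve_fit(cum_gauss, soa_ms, p_cued_first, p0=[0.0, 30.0])
print(f"PSS = {pss:.1f} ms (negative: the uncued stimulus must lead to be "
      f"reported first equally often), temporal sensitivity sigma = {sigma:.1f} ms")
```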

14.
Perception is fundamentally underconstrained because different combinations of object properties can generate the same sensory information. To disambiguate sensory information into estimates of scene properties, our brains incorporate prior knowledge and additional “auxiliary” (i.e., not directly relevant to the desired scene property) sensory information to constrain perceptual interpretations. For example, knowing the distance to an object helps in perceiving its size. The literature contains few demonstrations of the use of prior knowledge and auxiliary information in combined visual and haptic disambiguation and almost no examination of haptic disambiguation of vision beyond “bistable” stimuli. Previous studies have reported that humans integrate multiple unambiguous sensations to perceive single, continuous object properties, like size or position. Here we test whether humans use visual and haptic information, individually and jointly, to disambiguate size from distance. We presented participants with a ball moving in depth with a changing diameter. Because no unambiguous distance information is available under monocular viewing, participants rely on prior assumptions about the ball's distance to disambiguate their size percept. Presenting auxiliary binocular and/or haptic distance information augments participants' prior distance assumptions and improves their size judgment accuracy—though binocular cues were trusted more than haptic cues. Our results suggest that both visual and haptic distance information disambiguate size perception, and we interpret these results in the context of probabilistic perceptual reasoning.
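The geometry behind the size-distance ambiguity described here, written with generic symbols (object size S, viewing distance D, visual angle theta; none taken from the study): an object of size S at distance D subtends

\[
\theta = 2\arctan\!\left(\frac{S}{2D}\right) \approx \frac{S}{D}\quad(\text{for small }\theta),
\]

so any pair (S, D) with the same ratio produces the same retinal image, and an independent (binocular or haptic) estimate of D is required to recover S from the visual angle alone.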

15.
Sensory information about body sway is used to drive corrective muscle action to keep the body's centre of mass located over the base of support provided by the feet. Loss of vision, by closing the eyes, usually results in increased sway as indexed by fluctuations (i.e. standard deviation, s.d.) in the velocity of a marker at C7 on the neck, s.d. dC7. Variability in the rate of change of centre of pressure (s.d. dCoP), which indexes corrective muscle action, also increases during upright standing with eyes closed. Light touch contact by the tip of one finger with an environmental surface can reduce s.d. dC7 and s.d. dCoP as effectively as opening the eyes. We review studies of light touch and balance and then describe a novel paradigm for studying the nature of the somatosensory information contributing to the effects of light touch on balance. We show that 'light tight touch' contact by the index finger held in the thimble of a haptic device results in increased anteroposterior (AP) sway, with entraining by either simple or complex AP sinusoidal oscillations of the haptic device. Moreover, sway is also increased when the haptic device plays back the pre-recorded AP sway path of another person. Cross-correlations between hand and C7 motion reveal a 176 ms lead for the hand, and we conclude that light tight touch affords an efficient route for somatosensory feedback support for balance. Furthermore, we suggest that the paradigm has potential to contribute to the understanding of interpersonal postural coordination with light touch in future research.
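The reported 176 ms lead is the kind of number that falls out of a standard cross-correlation analysis between the fingertip and C7 velocity traces. The Python sketch below reproduces that computation on synthetic signals (a noisy, delayed copy of the "hand" trace stands in for C7), so the signals and the recovered lag are illustrative only, not the study's data.

```python
import numpy as np

fs = 100.0                                   # sampling rate (Hz)
t = np.arange(0, 60, 1 / fs)
rng = np.random.default_rng(0)
hand = np.sin(2 * np.pi * 0.3 * t) + 0.1 * rng.standard_normal(t.size)
lead_samples = int(0.176 * fs)               # impose a 176 ms hand lead
c7 = np.roll(hand, lead_samples) + 0.1 * rng.standard_normal(t.size)

# Normalize, cross-correlate, and locate the peak lag.
hand_z = (hand - hand.mean()) / hand.std()
c7_z = (c7 - c7.mean()) / c7.std()
xcorr = np.correlate(hand_z, c7_z, mode="full") / t.size
lags_s = np.arange(-t.size + 1, t.size) / fs
peak_lag = lags_s[np.argmax(xcorr)]
print(f"peak cross-correlation at lag {peak_lag * 1000:.0f} ms "
      f"(negative: the hand signal leads C7 with this argument order)")
```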

16.
Interacting in peripersonal space requires coordinated arm and eye movements to visual targets in depth. In primates, the medial posterior parietal cortex (PPC) represents a crucial node in the process of visual-to-motor signal transformations. The medial PPC area V6A is a key region engaged in the control of these processes because it jointly processes visual information, eye position and arm movement-related signals. However, to date, there is no evidence in the medial PPC of spatial encoding in three dimensions. Here, using single neuron recordings in behaving macaques, we studied the neural signals related to binocular eye position in a task that required the monkeys to perform saccades and fixate targets at different locations in peripersonal and extrapersonal space. A significant proportion of neurons were modulated by both gaze direction and depth, i.e., by the location of the foveated target in 3D space. The population activity of these neurons displayed a strong preference for peripersonal space in a time interval around the saccade that preceded fixation and during fixation as well. This preference for targets within reaching distance during both target capturing and fixation suggests that binocular eye position signals are implemented functionally in V6A to support its role in reaching and grasping.
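As background for why binocular eye position can carry depth information: with generic symbols (interocular separation I, fixation distance D; not values from the study), the vergence angle at fixation is

\[
\gamma = 2\arctan\!\left(\frac{I}{2D}\right) \approx \frac{I}{D}\quad(\text{for }D \gg I),
\]

so near (peripersonal) targets produce systematically larger vergence angles than far (extrapersonal) ones, making binocular eye position a usable depth signal.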

17.
Previously, we measured perceptuo-motor learning rates across the lifespan and found a sudden drop in learning rates between ages 50 and 60, called the “50s cliff.” The task was a unimanual visual rhythmic coordination task in which participants used a joystick to oscillate one dot in a display in coordination with another dot oscillated by a computer. Participants learned to produce a coordination with a 90° relative phase relation between the dots. Learning rates for participants over 60 were half those of younger participants. Given existing evidence for visual motion perception deficits in people over 60 and the role of visual motion perception in the coordination task, it remained unclear whether the 50s cliff reflected the onset of this deficit or a genuine decline in perceptuo-motor learning. The current work addressed this question. Two groups of 12 participants in each of four age ranges (20s, 50s, 60s, 70s) learned to perform a bimanual coordination of 90° relative phase. One group trained with only haptic information and the other group with both haptic and visual information about relative phase. Both groups were tested in both information conditions at baseline and post-test. If the 50s cliff was caused by an age-dependent deficit in visual motion perception, then older participants in the visual group should have exhibited less learning than those in the haptic group, which should not exhibit the 50s cliff, and older participants in both groups should have performed less well when tested with visual information. Neither of these expectations was confirmed by the results, so we concluded that the 50s cliff reflects a genuine decline in perceptuo-motor learning with aging, not the onset of a deficit in visual motion perception.
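For concreteness, relative phase between two oscillating effectors, the quantity whose target value was 90° in these tasks, is commonly computed from the instantaneous (Hilbert) phases of the two position signals. The sketch below does this on synthetic sinusoids and is not the authors' analysis code.

```python
import numpy as np
from scipy.signal import hilbert

fs = 100.0
t = np.arange(0, 20, 1 / fs)
left = np.sin(2 * np.pi * 1.0 * t)               # 1 Hz oscillation
right = np.sin(2 * np.pi * 1.0 * t - np.pi / 2)  # lagging by 90 degrees

# Instantaneous phase of each signal from its analytic (Hilbert) representation.
phase_left = np.angle(hilbert(left - left.mean()))
phase_right = np.angle(hilbert(right - right.mean()))

# Circular mean of the phase difference avoids wrap-around artifacts.
phase_diff = phase_left - phase_right
mean_rel_deg = np.rad2deg(np.angle(np.mean(np.exp(1j * phase_diff))))
print(f"mean relative phase = {mean_rel_deg:.1f} deg")   # ~90 deg for these signals
```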

18.
This study examined the effects of hand movement on visual perception of 3-D movement. I used an apparatus in which the position of a cursor in a simulated 3-D space and the position of a stylus on a haptic device could be made to coincide by means of a mirror. In three experiments, participants touched the center of a rectangle in the visual display with the stylus of the force-feedback device. Then the rectangle's surface stereoscopically either protruded toward the participant or was indented away from the participant. Simultaneously, the stylus either pushed back the participant's hand, pulled away, or remained static. Visual and haptic information were independently manipulated. Participants judged whether the rectangle visually protruded or was indented. Results showed that when the hand was pulled away, subjects were biased to perceive rectangles as indented; however, when the hand was pushed back, no effect of haptic information was observed (Experiment 1). This effect persisted even when the cursor position was spatially separated from the hand position (Experiment 2). However, when participants touched an object different from the visual stimulus, this effect disappeared (Experiment 3). These results suggest that the visual system tried to integrate the dynamic visual and haptic information when they coincided cognitively, and that the effect of haptic information on visually perceived depth was direction-dependent.

19.
van Elk M, Blanke O. PLoS ONE 2011, 6(9): e24641
Previous studies have shown that tool use often modifies one's peripersonal space, i.e. the space directly surrounding our body. Given our profound experience with manipulable objects (e.g. a toothbrush, a comb or a teapot), in the present study we hypothesized that the observation of pictures representing manipulable objects would result in a remapping of peripersonal space as well. Subjects were required to report the location of vibrotactile stimuli delivered to the right hand, while ignoring visual distractors superimposed on pictures representing everyday objects. Pictures could represent objects of high manipulability (e.g. a cell phone), medium manipulability (e.g. a soap dispenser), or low manipulability (e.g. a computer screen). In the first experiment, when subjects attended to the action associated with the objects, a strong cross-modal congruency effect (CCE) was observed for pictures representing medium- and high-manipulability objects, reflected in faster reaction times when the vibrotactile stimulus and the visual distractor were in the same location, whereas no CCE was observed for low-manipulability objects. This finding was replicated in a second experiment in which subjects attended to the visual properties of the objects. These findings suggest that the observation of manipulable objects facilitates cross-modal integration in peripersonal space.

20.
The brain mechanisms of adaptation to visual transposition are of increasing interest, not only for research on sensory-motor coordination, but also for neuropsychological rehabilitation. Sugita [Nature 380 (1996) 523] found that after adaptation to left-right reversed vision for one and a half months, monkey V1 neurons responded to stimuli presented not only in the contralateral visual field, but also in the ipsilateral visual field. To identify the neuronal mechanisms underlying adaptation to visual transposition, we conducted fMRI and behavioral experiments in which four adult human subjects wore left-right reversing goggles for 35/39 days, and investigated: (1) whether ipsilateral V1 activation can be induced in adult human subjects; (2) if so, when the ipsilateral activity starts, and what kind of behavioral/psychological changes accompany the ipsilateral activity; and (3) whether other visual cortices also show an ipsilateral activity change. The results of the behavioral experiments showed that visuomotor coordinative function and the internal representation of peripersonal space rapidly adapted to the left-right reversed vision within the first or second week. Accompanying these behavioral changes, we found that both primary (V1) and extrastriate (MT/MST) visual cortex in human adults responded to visual stimuli presented in the ipsilateral visual field. In addition, the ipsilateral activity started much sooner than the one and a half months expected from the monkey neurophysiological study. The results of the present study serve as physiological evidence of large-scale, cross-hemisphere cerebral plasticity that exists even in the adult human brain.
