Similar Articles
Found 20 similar articles (search time: 15 ms).
1.
Categorization and categorical perception have been extensively studied, mainly in vision and audition. In the haptic domain, our ability to categorize objects has also been demonstrated in earlier studies. Here we show for the first time that categorical perception also occurs in haptic shape perception. We generated a continuum of complex shapes by morphing between two volumetric objects. Using similarity ratings and multidimensional scaling, we ensured that participants could haptically discriminate all objects equally. Next, we performed classification and discrimination tasks. After a short training with the two shape categories, both tasks revealed categorical perception effects. Training leads to between-category expansion, resulting in higher discriminability of physical differences between pairs of stimuli straddling the category boundary. Thus, even brief training can alter haptic representations of shape. This suggests that the weights attached to various haptic shape features can be changed dynamically in response to top-down information about class membership.
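The between-category expansion described above can be quantified by comparing discrimination accuracy for stimulus pairs that straddle the learned category boundary with accuracy for pairs falling within a category. A minimal sketch of that comparison; the morph levels, boundary position, and accuracies are invented purely for illustration:

```python
import numpy as np

# Morph continuum from shape A (0.0) to shape B (1.0); assumed category boundary at 0.5.
boundary = 0.5

# Illustrative discrimination accuracies for neighbouring morph pairs after training.
pair_accuracy = {
    (0.0, 0.2): 0.63,   # within category A
    (0.2, 0.4): 0.64,   # within category A
    (0.4, 0.6): 0.79,   # straddles the boundary
    (0.6, 0.8): 0.62,   # within category B
    (0.8, 1.0): 0.61,   # within category B
}

def straddles(pair, boundary):
    """A pair straddles the boundary if its members lie on opposite sides of it."""
    a, b = pair
    return (a - boundary) * (b - boundary) < 0

between = [acc for pair, acc in pair_accuracy.items() if straddles(pair, boundary)]
within = [acc for pair, acc in pair_accuracy.items() if not straddles(pair, boundary)]

# Categorical perception predicts higher accuracy for pairs that cross the boundary.
print(f"between-category mean accuracy: {np.mean(between):.2f}")
print(f"within-category mean accuracy:  {np.mean(within):.2f}")
```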

2.
Memory characteristics of human finger perception of compliance (softness) by touch   (Cited 2 times in total: 0 self-citations, 2 by others)
Liu J, Song AG. Acta Physiologica Sinica (生理学报), 2007, 59(3): 387-392
Haptic display technology is at the forefront of current research in virtual reality and teleoperated robotics, and compliant (soft) touch is one of its key topics. Designing haptic display interfaces requires a thorough understanding of the tactile perception characteristics of the human hand. This paper studies the memory characteristics of compliance perception by the human finger using a compliant tactile display device. A recall experiment was first used to determine the finger's memory capacity for compliance; within that capacity, a recognition experiment was then conducted to analyze the reaction time of compliance memory. The experimental method is simple and effective, and its conclusions can not only be used to improve the design of haptic display devices but also provide a physiological basis for research on haptic display technology.

3.
Enabled by the remarkable dexterity of the human hand, specialized haptic exploration is a hallmark of object perception by touch. Haptic exploration normally takes place in a spatial world that is three-dimensional; nevertheless, stimuli of reduced spatial dimensionality are also used to display spatial information. This paper examines the consequences of full (three-dimensional) versus reduced (two-dimensional) spatial dimensionality for object processing by touch, particularly in comparison with vision. We begin with perceptual recognition of common human-made artefacts, then extend our discussion of spatial dimensionality in touch and vision to include faces, drawing from research on haptic recognition of facial identity and emotional expressions. Faces have often been characterized as constituting a specialized input for human perception. We find that contrary to vision, haptic processing of common objects is impaired by reduced spatial dimensionality, whereas haptic face processing is not. We interpret these results in terms of fundamental differences in object perception across the modalities, particularly the special role of manual exploration in extracting a three-dimensional structure.

4.
While considerable research has focused on the accuracy of haptic perception of distance, information on the precision of haptic perception of distance is still scarce, particularly regarding distances perceived by making arm movements. In this study, eight conditions were measured to answer four main questions: what is the influence of reference distance, movement axis, perceptual mode (active or passive), and stimulus type on the precision of this kind of distance perception? A discrimination experiment was performed with twelve participants. The participants were presented with two distances, using either a haptic device or a real stimulus. Participants compared the distances by moving their hand from a start to an end position. They were then asked to judge which of the distances was the longer, from which the discrimination threshold was determined for each participant and condition. The precision was influenced by reference distance. No effect of movement axis was found. The precision was higher for active than for passive movements and slightly lower for real stimuli than for rendered stimuli, but it was not affected by adding cutaneous information. Overall, the Weber fraction for the active perception of a distance of 25 or 35 cm was about 11% for all cardinal axes. The recorded position data suggest that participants, in order to be able to judge which distance was the longer, tried to produce similar speed profiles in both movements. This knowledge could be useful in the design of haptic devices.
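The discrimination thresholds reported above are conventionally read off a psychometric function fitted to the proportion of "comparison felt longer" responses, and the Weber fraction is that threshold divided by the reference distance. A minimal sketch of that analysis under standard cumulative-Gaussian assumptions; the response proportions are invented for illustration:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

# Comparison distances (cm) around a 25 cm reference, and illustrative
# proportions of "comparison felt longer" responses.
reference = 25.0
comparison = np.array([20.0, 22.0, 24.0, 26.0, 28.0, 30.0])
p_longer = np.array([0.05, 0.20, 0.40, 0.65, 0.85, 0.95])

def psychometric(x, mu, sigma):
    """Cumulative-Gaussian psychometric function."""
    return norm.cdf(x, loc=mu, scale=sigma)

(mu, sigma), _ = curve_fit(psychometric, comparison, p_longer, p0=[reference, 2.0])

# For a cumulative Gaussian, the 84% point minus the point of subjective
# equality is simply sigma, which serves as the discrimination threshold.
threshold = sigma
weber_fraction = threshold / reference
print(f"PSE = {mu:.1f} cm, threshold = {threshold:.1f} cm, "
      f"Weber fraction = {weber_fraction:.2%}")
```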

5.
Haptic interfaces usually reproduce only selected object attributes, for example conveying an object's material through vibration or its surface shape through a pneumatic nozzle array. Tactile biomechanics investigates the relation between responses to an external load stimulus and tactile perception, and guides the design of haptic interface devices via a tactile mechanism. Focusing on the pneumatic haptic interface, we established a fluid–structure interaction-based biomechanical model of responses to static and dynamic loads and conducted numerical simulations and experiments. This model provides a theoretical basis for designing haptic interfaces and reproducing tactile textures.

6.
Gori M, Sciutti A, Burr D, Sandini G. PLoS ONE, 2011, 6(10): e25599
It has long been suspected that touch plays a fundamental role in the calibration of visual perception, and much recent evidence supports this idea. However, as the haptic exploration workspace is limited by the kinematics of the body, the contribution of haptic information to the calibration process should occur only within the region of the haptic workspace reachable by a limb (peripersonal space). To test this hypothesis we evaluated visual size perception and showed that it is indeed more accurate inside the peripersonal space. We then show that allowing subjects to touch the (unseen) stimulus after observation restores accurate size perception; the accuracy persists for some time, implying that calibration has occurred. Finally, we show that observing an actor grasp the object also produces accurate (and lasting) size perception, suggesting that the calibration can also occur indirectly by observing goal-directed actions, implicating the involvement of the "mirror system".

7.
It is well known that motion facilitates the visual perception of solid object shape, particularly when surface texture or other identifiable features (e.g., corners) are present. Conventional models of structure-from-motion require the presence of texture or identifiable object features in order to recover 3-D structure. Is the facilitation in 3-D shape perception similar in magnitude when surface texture is absent? On any given trial in the current experiments, participants were presented with a single randomly-selected solid object (bell pepper or randomly-shaped “glaven”) for 12 seconds and were required to indicate which of 12 (for bell peppers) or 8 (for glavens) simultaneously visible objects possessed the same shape. The initial single object’s shape was defined either by boundary contours alone (i.e., presented as a silhouette), specular highlights alone, specular highlights combined with boundary contours, or texture. In addition, there was a haptic condition: in this condition, the participants haptically explored with both hands (but could not see) the initial single object for 12 seconds; they then performed the same shape-matching task used in the visual conditions. For both the visual and haptic conditions, motion (rotation in depth or active object manipulation) was present in half of the trials and was not present for the remaining trials. The effect of motion was quantitatively similar for all of the visual and haptic conditions; for example, the participants’ performance in Experiment 1 was 93.5 percent higher in the motion or active haptic manipulation conditions than in the static conditions. The current results demonstrate that deforming specular highlights or boundary contours facilitate 3-D shape perception as much as the motion of objects that possess texture. The current results also indicate that the improvement with motion that occurs for haptics is similar in magnitude to that which occurs for vision.

8.
Noninformative vision improves haptic spatial perception   (Cited 10 times in total: 0 self-citations, 10 by others)
Previous studies have attempted to map somatosensory space via haptic matching tasks and have shown that individuals make large and systematic matching errors, the magnitude and angular direction of which vary systematically through the workspace. Based upon such demonstrations, it has been suggested that haptic space is non-Euclidean. This conclusion assumes that spatial perception is modality specific, and it largely ignores the fact that tactile matching tasks involve active, exploratory arm movements. Here we demonstrate that, when individuals match two bar stimuli (i.e., make them parallel) in circumstances favoring extrinsic (visual) coordinates, providing noninformative visual information significantly increases the accuracy of haptic perception. In contrast, when individuals match the same bar stimuli in circumstances favoring the coding of movements in intrinsic (limb-based) coordinates, providing identical noninformative visual information either has no effect or leads to decreased accuracy of haptic perception. These results are consistent with optimal models of sensory integration in which the weighting given to visual and somatosensory signals depends upon the precision of the visual and somatosensory information, and they provide important evidence for the task-dependent integration of visual and somatosensory signals during the construction of a representation of peripersonal space.

9.
This study examined effects of hand movement on visual perception of 3-D movement. I used an apparatus in which a cursor position in a simulated 3-D space and the position of a stylus on a haptic device could coincide using a mirror. In three experiments, participants touched the center of a rectangle in the visual display with the stylus of the force-feedback device. Then the rectangle's surface stereoscopically either protruded toward the participant or was indented away from the participant. Simultaneously, the stylus either pushed back the participant's hand, pulled away, or remained static. Visual and haptic information were independently manipulated. Participants judged whether the rectangle visually protruded or was indented. Results showed that when the hand was pulled away, subjects were biased to perceive the rectangles as indented; however, when the hand was pushed back, no effect of haptic information was observed (Experiment 1). This effect persisted even when the cursor position was spatially separated from the hand position (Experiment 2). However, when participants touched an object different from the visual stimulus, the effect disappeared (Experiment 3). These results suggest that the visual system tried to integrate the dynamic visual and haptic information when they coincided cognitively, and that the effect of haptic information on visually perceived depth was direction-dependent.

10.
Determining distances to objects is one of the most ubiquitous perceptual tasks in everyday life. Nevertheless, it is challenging because the information from a single image confounds object size and distance. Though our brains frequently judge distances accurately, the underlying computations employed by the brain are not well understood. Our work illuminates these computations by formulating a family of probabilistic models that encompass a variety of distinct hypotheses about distance and size perception. We compare these models' predictions to a set of human distance judgments in an interception experiment and use Bayesian analysis tools to quantitatively select the best hypothesis on the basis of its explanatory power and robustness over experimental data. The central question is whether, and how, human distance perception incorporates size cues to improve accuracy. Our conclusions are: 1) humans incorporate haptic object size sensations for distance perception, 2) the incorporation of haptic sensations is suboptimal given their reliability, 3) humans use environmentally accurate size and distance priors, and 4) distance judgments are produced by perceptual "posterior sampling". In addition, we compared our model's estimated sensory and motor noise parameters with previously reported measurements in the perceptual literature and found good correspondence between them. Taken together, these results represent a major step forward in establishing the computational underpinnings of human distance perception and the role of size information.
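The family of models sketched above can be illustrated with a toy Bayesian observer that combines a noisy retinal angle with a haptic size cue and priors over size and distance, and then reports a distance drawn from the posterior ("posterior sampling") rather than its mean. This is a generic illustration of the idea, not the authors' actual model; the priors, noise levels, and grids are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# True scene: an object of size s (cm) at distance d (cm); under the small-angle
# approximation the retinal angle is theta = s / d.
true_size, true_distance = 6.0, 120.0
theta_obs = true_size / true_distance * (1 + 0.05 * rng.standard_normal())
haptic_size_obs = true_size + 0.5 * rng.standard_normal()

# Grids over candidate sizes and distances.
sizes = np.linspace(1.0, 15.0, 200)
distances = np.linspace(30.0, 300.0, 400)
S, D = np.meshgrid(sizes, distances, indexing="ij")

def gauss(x, mu, sd):
    return np.exp(-0.5 * ((x - mu) / sd) ** 2)

# Priors (illustrative): broad preferences for typical sizes and distances.
prior = gauss(np.log(S), np.log(6.0), 0.8) * gauss(np.log(D), np.log(100.0), 0.8)

# Likelihoods: the retinal angle constrains only the size/distance ratio,
# while the haptic cue constrains size directly.
lik_theta = gauss(S / D, theta_obs, 0.05 * theta_obs)
lik_haptic = gauss(S, haptic_size_obs, 0.8)

posterior = prior * lik_theta * lik_haptic
posterior /= posterior.sum()

# "Posterior sampling": report a distance drawn from the marginal posterior.
p_distance = posterior.sum(axis=0)
reported = rng.choice(distances, p=p_distance / p_distance.sum())
print(f"true distance = {true_distance:.0f} cm, reported = {reported:.0f} cm")
```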

11.
Meng X, Zaidi Q. PLoS ONE, 2011, 6(5): e19877
Vision generally provides reliable predictions for touch and motor-control, but some classes of stimuli evoke visual illusions. Using haptic feedback on virtual 3-D surfaces, we tested the function of touch in such cases. Our experiments show that in the perception of 3-D shapes from texture cues, haptic information can dominate vision in some cases, changing percepts qualitatively from convex to concave and concave to slant. The effects take time to develop, do not outlive the cessation of the feedback, are attenuated by distance, and drastically reduced by gaps in the surface. These dynamic shifts in qualitative perceived shapes could be invaluable in neural investigations that test whether haptic feedback modifies selective activation of neurons or changes the shape-tuning of neurons responsible for percepts of 3-D shapes.

12.
Previously, we measured perceptuo-motor learning rates across the lifespan and found a sudden drop in learning rates between ages 50 and 60, called the “50s cliff.” The task was a unimanual visual rhythmic coordination task in which participants used a joystick to oscillate one dot in a display in coordination with another dot oscillated by a computer. Participants learned to produce a coordination with a 90° relative phase relation between the dots. Learning rates for participants over 60 were half those of younger participants. Given existing evidence for visual motion perception deficits in people over 60 and the role of visual motion perception in the coordination task, it remained unclear whether the 50s cliff reflected onset of this deficit or a genuine decline in perceptuo-motor learning. The current work addressed this question. Two groups of 12 participants in each of four age ranges (20s, 50s, 60s, 70s) learned to perform a bimanual coordination of 90° relative phase. One group trained with only haptic information and the other group with both haptic and visual information about relative phase. Both groups were tested in both information conditions at baseline and post-test. If the 50s cliff was caused by an age-dependent deficit in visual motion perception, then older participants in the visual group should have exhibited less learning than those in the haptic group, which should not exhibit the 50s cliff, and older participants in both groups should have performed less well when tested with visual information. Neither of these expectations was confirmed by the results, so we concluded that the 50s cliff reflects a genuine decline in perceptuo-motor learning with aging, not the onset of a deficit in visual motion perception.
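The performance measure in such rhythmic coordination tasks is the relative phase between the two oscillations (here targeted at 90°), typically computed from the instantaneous phase of each movement signal. A minimal sketch of that computation on synthetic trajectories; the signal parameters are illustrative:

```python
import numpy as np
from scipy.signal import hilbert

fs = 100.0                      # sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)    # 10 s of movement
f = 1.0                         # oscillation frequency (Hz)

# Two synthetic limb (or dot) trajectories, roughly 90 degrees apart, plus noise.
left = np.sin(2 * np.pi * f * t) + 0.05 * np.random.randn(t.size)
right = np.sin(2 * np.pi * f * t - np.pi / 2) + 0.05 * np.random.randn(t.size)

# Instantaneous phase of each signal from its analytic (Hilbert) representation.
phase_left = np.angle(hilbert(left))
phase_right = np.angle(hilbert(right))
rel_phase = np.angle(np.exp(1j * (phase_left - phase_right)))  # wrapped to [-pi, pi]

# Mean relative phase and its circular stability (a common performance measure).
mean_rel = np.angle(np.mean(np.exp(1j * rel_phase)))
stability = np.abs(np.mean(np.exp(1j * rel_phase)))  # 1 = perfectly stable
print(f"mean relative phase = {np.degrees(mean_rel):.1f} deg, stability = {stability:.2f}")
```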

13.
Perception is fundamentally underconstrained because different combinations of object properties can generate the same sensory information. To disambiguate sensory information into estimates of scene properties, our brains incorporate prior knowledge and additional “auxiliary” (i.e., not directly relevant to the desired scene property) sensory information to constrain perceptual interpretations. For example, knowing the distance to an object helps in perceiving its size. The literature contains few demonstrations of the use of prior knowledge and auxiliary information in combined visual and haptic disambiguation, and almost no examination of haptic disambiguation of vision beyond “bistable” stimuli. Previous studies have reported that humans integrate multiple unambiguous sensations to perceive single, continuous object properties, like size or position. Here we test whether humans use visual and haptic information, individually and jointly, to disambiguate size from distance. We presented participants with a ball moving in depth with a changing diameter. Because no unambiguous distance information is available under monocular viewing, participants rely on prior assumptions about the ball's distance to disambiguate their size percept. Presenting auxiliary binocular and/or haptic distance information augments participants' prior distance assumptions and improves their size judgment accuracy, though binocular cues were trusted more than haptic cues. Our results suggest that both visual and haptic distance information can disambiguate size perception, and we interpret these results in the context of probabilistic perceptual reasoning.
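The underlying geometry is that a single retinal angle is consistent with many size/distance pairs, so any auxiliary distance estimate (a prior assumption, binocular disparity, or a haptic cue) pins down the size. A small sketch of reliability-weighted combination of distance cues followed by the size computation, under the usual Gaussian-cue assumptions; all numbers are invented:

```python
import numpy as np

def combine(estimates, sds):
    """Inverse-variance (reliability) weighted combination of Gaussian cues."""
    w = 1.0 / np.asarray(sds) ** 2
    return np.sum(w * np.asarray(estimates)) / np.sum(w), np.sqrt(1.0 / np.sum(w))

theta = np.deg2rad(3.0)                 # retinal angle subtended by the ball
prior_d, prior_sd = 80.0, 30.0          # assumed distance prior (cm), very uncertain
binocular_d, binocular_sd = 60.0, 5.0   # binocular cue: precise, so trusted more
haptic_d, haptic_sd = 63.0, 12.0        # haptic cue: less precise

def size_from(d):
    """Physical size implied by the retinal angle at distance d (cm)."""
    return 2 * d * np.tan(theta / 2)

# Size implied by the prior alone versus by the combined distance estimate.
d_combined, _ = combine([prior_d, binocular_d, haptic_d],
                        [prior_sd, binocular_sd, haptic_sd])
print(f"size from prior alone:    {size_from(prior_d):.1f} cm")
print(f"size with auxiliary cues: {size_from(d_combined):.1f} cm")
```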

14.
Haptic information stabilizes and destabilizes coordination dynamics   (Cited 5 times in total: 0 self-citations, 5 by others)
Goal-directed, coordinated movements in humans emerge from a variety of constraints that range from 'high-level' cognitive strategies based on perception of the task to 'low-level' neuromuscular-skeletal factors such as differential contributions to coordination from flexor and extensor muscles. There has been a tendency in the literature to dichotomize these sources of constraint, favouring one or the other rather than recognizing and understanding their mutual interplay. In this experiment, subjects were required to coordinate rhythmic flexion and extension movements with an auditory metronome, the rate of which was systematically increased. When subjects started in extension on the beat of the metronome, there was a small tendency to switch to flexion at higher rates, but not vice versa. When subjects were asked to contact a physical stop, the location of which was either coincident with or counterphase to the auditory stimulus, two effects occurred. When haptic contact was coincident with sound, coordination was stabilized for both flexion and extension. When haptic contact was counterphase to the metronome, coordination was actually destabilized, with transitions occurring from both extension to flexion on the beat and from flexion to extension on the beat. These results reveal the complementary nature of strategic and neuromuscular factors in sensorimotor coordination. They also suggest the presence of a multimodal neural integration process - which is parametrizable by rate and context - in which intentional movement, touch and sound are bound into a single, coherent unit.

15.
One approach to gauging the complexity of the computational problem underlying haptic perception is to determine the number of dimensions needed to describe it. In vision, the number of dimensions can be estimated to be seven. This observation raises the question of how many dimensions are needed to describe touch. Only with certain simplified representations of mechanical interactions can this number be estimated, because it is in general infinite. Organisms must be sensitive to considerably reduced subsets of all possible measurements. These reductions are discussed by considering the sensing apparatuses of some animals and the underlying mechanisms of two haptic illusions.

16.
In an admittance-controlled haptic device, input forces are used to calculate the movement of the device. Although developers try to minimize delays, there will always be delays between the applied force and the corresponding movement in such systems, which might affect what the user of the device perceives. In this experiment we tested whether these delays in a haptic human-robot interaction influence the perception of mass. An admittance-controlled manipulator was used to simulate various masses. In a staircase design, subjects had to decide which of two virtual masses was heavier after gently pushing them leftward with the right hand in mid-air (no friction, no gravity). The manipulator responded as quickly as possible or with an additional delay (25 or 50 ms) to the forces exerted by the subject on the handle of the haptic device. The perceived mass was ~10% larger for a delay of 25 ms and ~20% larger for a delay of 50 ms. Based on these results, we estimated that the delays present in today's admittance-controlled haptic devices (up to 20 ms) will give an increase in perceived mass that is smaller than the Weber fraction for mass (~10% for inertial mass). Additional analyses showed that the subjects’ decision on mass when the perceptual differences were small did not correlate with intuitive variables such as force, velocity, or a combination of these, nor with any other measured variable, suggesting that subjects did not have a consistent strategy during guessing or used other sources of information, for example the efference copy of their pushes.
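An intuitive way to see why a response delay inflates perceived mass is that, for the same push, a delayed admittance controller has produced less motion by the end of the push, so the force-to-motion ratio feels larger. The sketch below is only an illustrative toy heuristic (perceived mass approximated as impulse divided by end-of-push hand velocity); it is not the controller model or the analysis used in the study, and the push parameters are assumptions:

```python
import numpy as np

def perceived_mass(true_mass, delay, push_duration=0.25, fs=1000.0, push_force=10.0):
    """Toy heuristic: impulse applied during the push divided by the hand
    velocity reached by the end of the push, with the admittance controller
    acting on the force `delay` seconds late."""
    t = np.arange(0.0, push_duration, 1.0 / fs)
    force = np.full(t.size, push_force)            # steady push on the handle
    seen = np.where(t >= delay, push_force, 0.0)   # force the controller has responded to
    velocity_end = np.sum(seen) / fs / true_mass   # v(T) = (1/m) * integral of delayed F
    impulse = np.sum(force) / fs
    return impulse / velocity_end

m = 5.0  # kg, simulated mass
for delay in (0.0, 0.025, 0.050):
    m_hat = perceived_mass(m, delay)
    print(f"delay = {delay * 1000:4.0f} ms -> perceived mass ~= {m_hat:.2f} kg "
          f"({(m_hat / m - 1) * 100:+.0f}%)")
```

With these toy numbers the inflation comes out in the same range as the reported ~10% and ~20%, but the match is qualitative only; the direction of the effect is the point.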

17.
Multisensory integration is a common feature of the mammalian brain that allows it to deal more efficiently with the ambiguity of sensory input by combining complementary signals from several sensory sources. Growing evidence suggests that multisensory interactions can occur as early as primary sensory cortices. Here we present incompatible visual signals (orthogonal gratings) to each eye to create visual competition between monocular inputs in primary visual cortex where binocular combination would normally take place. The incompatibility prevents binocular fusion and triggers an ambiguous perceptual response in which the two images are perceived one at a time in an irregular alternation. One key function of multisensory integration is to minimize perceptual ambiguity by exploiting cross-sensory congruence. We show that a haptic signal matching one of the visual alternatives helps disambiguate visual perception during binocular rivalry by both prolonging the dominance period of the congruent visual stimulus and by shortening its suppression period. Importantly, this interaction is strictly tuned for orientation, with a mismatch as small as 7.5° between visual and haptic orientations sufficient to annul the interaction. These results indicate important conclusions: first, that vision and touch interact at early levels of visual processing where interocular conflicts are first detected and orientation tunings are narrow, and second, that haptic input can influence visual signals outside of visual awareness, bringing a stimulus made invisible by binocular rivalry suppression back to awareness sooner than would occur without congruent haptic input.

18.
In this article we review current literature on cross-modal recognition and present new findings from our studies on object and scene recognition. Specifically, we address two questions: what kind of representation within each sensory system facilitates convergence across the senses, and how is perception modified by the interaction of the senses? In the first set of experiments, the recognition of unfamiliar objects within and across the visual and haptic modalities was investigated under conditions of changes in orientation (0° or 180°). An orientation change increased recognition errors within each modality, but this effect was reduced across modalities. Our results suggest that cross-modal representations of objects are mediated by surface-dependent representations. In a second series of experiments, we investigated how spatial information is integrated across modalities and viewpoint, using scenes of familiar 3D objects as stimuli. We found that scene recognition performance was less efficient when there was either a change in modality, or in orientation, between learning and test. Furthermore, haptic learning was selectively disrupted by a verbal interpolation task. Our findings are discussed with reference to separate spatial encoding of visual and haptic scenes. We conclude by discussing a number of constraints under which cross-modal integration is optimal for object recognition. These constraints include the nature of the task and the amount of spatial and temporal congruency of information across the modalities.

19.
Haptic interaction with virtual objects   (Cited 3 times in total: 0 self-citations, 3 by others)
This paper considers interaction of the human arm with “virtual” objects simulated mechanically by a planar robot. Haptic perception of spatial properties of objects is distorted. It is reasonable to expect that it may be distorted in a geometrically consistent way. Three experiments were performed to quantify perceptual distortion of length, angle and orientation. We found that spatial perception is geometrically inconsistent across these perceptual tasks. Given that spatial perception is distorted, it is plausible that motor behavior may be distorted in a way consistent with perceptual distortion. In a fourth experiment, subjects were asked to draw circles. The results were geometrically inconsistent with those of the length perception experiment. Interestingly, although the results were inconsistent (statistically different), this difference was not strong (the relative distortion between the observed distributions was small). Some computational implications of this research for haptic perception and motor planning are discussed. Received: 12 February 1996 / Accepted in revised form: 6 May 1999

20.
The haptic sense of geometric properties such as the curvature of a contour is derived from somatosensory cues about the motions and forces experienced during exploratory actions. This study addressed the question of whether compliance, the relationship between force and displacement, influences haptic perception of curvature. Subjects traced a curved 30 cm long compliant contour by grasping the handle of a manipulandum and reported whether the contour curved towards or away from them. The contour at which there was equal probability of responding either way was taken to represent one that was sensed as being straight. The compliance of the contour was varied, being constant, greatest in the middle or greatest at the ends. Subjects exhibited a bias in what they sensed to be a straight edge. However, the actual handpath that was judged to be straight did not vary across the three compliance profiles. Our results rule out a hypothetical strategy in which an intended motion is planned and the actual trajectory is then inferred by sensing force feedback. Another strategy in which the force against the contour is controlled and the handpath is inferred from proprioceptive feedback is more consistent with the observations.
