Similar Articles
 20 similar articles found
1.
Stimuli from different sensory modalities are thought to be processed initially in distinct unisensory brain areas prior to convergence in multisensory areas. However, signals in one modality can influence the processing of signals from other modalities, and recent studies suggest this cross-modal influence may occur early on, even in ‘unisensory’ areas. Some recent psychophysical studies have shown specific cross-modal effects between touch and vision during binocular rivalry, but these cannot completely rule out a response bias. To test for genuine cross-modal integration of haptic and visual signals, we investigated whether congruent haptic input could influence visual contrast sensitivity compared to incongruent haptic input in three psychophysical experiments using a two-interval, two-alternative forced-choice method to eliminate response bias. The initial experiment demonstrated that contrast thresholds for a visual grating were lower when exploring a haptic grating that shared the same orientation compared to an orthogonal orientation. Two subsequent experiments mapped the orientation and spatial frequency tunings for the congruent haptic facilitation of vision, finding a clear orientation tuning effect but not a spatial frequency tuning. In addition to an increased contrast sensitivity for iso-oriented visual-haptic gratings, we found a significant loss of sensitivity for orthogonally oriented visual-haptic gratings. We conclude that the tactile influence on vision is a result of a tactile input to orientation-tuned visual areas.

2.
Perception is fundamentally underconstrained because different combinations of object properties can generate the same sensory information. To disambiguate sensory information into estimates of scene properties, our brains incorporate prior knowledge and additional “auxiliary” (i.e., not directly relevant to the desired scene property) sensory information to constrain perceptual interpretations. For example, knowing the distance to an object helps in perceiving its size. The literature contains few demonstrations of the use of prior knowledge and auxiliary information in combined visual and haptic disambiguation and almost no examination of haptic disambiguation of vision beyond “bistable” stimuli. Previous studies have reported that humans integrate multiple unambiguous sensations to perceive single, continuous object properties, like size or position. Here we test whether humans use visual and haptic information, individually and jointly, to disambiguate size from distance. We presented participants with a ball moving in depth with a changing diameter. Because no unambiguous distance information is available under monocular viewing, participants rely on prior assumptions about the ball's distance to disambiguate their size percept. Presenting auxiliary binocular and/or haptic distance information augments participants' prior distance assumptions and improves their size judgment accuracy, though binocular cues were trusted more than haptic ones. Our results suggest both visual and haptic distance information disambiguate size perception, and we interpret these results in the context of probabilistic perceptual reasoning.
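The size-distance ambiguity at the heart of this design can be illustrated with basic projective geometry; this is a hypothetical sketch, and the ball sizes and distances below are invented for illustration, not the study's stimulus code.

```python
import math

# A ball's retinal (angular) size is consistent with infinitely many
# physical size/distance pairs, so an auxiliary distance cue (or a
# prior distance assumption) is needed to disambiguate perceived size.

def angular_size(diameter_m, distance_m):
    """Visual angle (radians) subtended by a ball of a given diameter."""
    return 2.0 * math.atan(diameter_m / (2.0 * distance_m))

def perceived_size(angle_rad, assumed_distance_m):
    """Invert the relation: the size percept implied by an assumed distance."""
    return 2.0 * assumed_distance_m * math.tan(angle_rad / 2.0)

# A 0.1 m ball at 1 m and a 0.2 m ball at 2 m subtend the same angle...
theta = angular_size(0.1, 1.0)
assert abs(theta - angular_size(0.2, 2.0)) < 1e-12

# ...so, with only monocular input, the size percept is fully
# determined by the assumed distance.
print(perceived_size(theta, 1.0))  # 0.1 m if the ball is assumed near
print(perceived_size(theta, 2.0))  # 0.2 m if it is assumed far
```

This is why adding binocular or haptic distance information can shift the size percept: it replaces the prior distance assumption with measured distance.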

3.
A variety of similarities between visual and haptic object recognition suggests that the two modalities may share common representations. However, it is unclear whether such common representations preserve low-level perceptual features or whether transfer between vision and haptics is mediated by high-level, abstract representations. Two experiments used a sequential shape-matching task to examine the effects of size changes on unimodal and crossmodal visual and haptic object recognition. Participants felt or saw 3D plastic models of familiar objects. The two objects presented on a trial were either the same size or different sizes and were the same shape or different but similar shapes. Participants were told to ignore size changes and to match on shape alone. In Experiment 1, size changes on same-shape trials impaired performance similarly for both visual-to-visual and haptic-to-haptic shape matching. In Experiment 2, size changes impaired performance on both visual-to-haptic and haptic-to-visual shape matching and there was no interaction between the cost of size changes and direction of transfer. Together the unimodal and crossmodal matching results suggest that the same, size-specific perceptual representations underlie both visual and haptic object recognition, and indicate that crossmodal memory for objects must be at least partly based on common perceptual representations.

4.
Young children do not integrate visual and haptic form information
Several studies have shown that adults integrate visual and haptic information (and information from other modalities) in a statistically optimal fashion, weighting each sense according to its reliability [1, 2]. When does this capacity for crossmodal integration develop? Here, we show that prior to 8 years of age, integration of visual and haptic spatial information is far from optimal, with either vision or touch dominating totally, even in conditions in which the dominant sense is far less precise than the other (as assessed by discrimination thresholds). For size discrimination, haptic information dominates in determining both perceived size and discrimination thresholds, whereas for orientation discrimination, vision dominates. By 8-10 years, integration becomes statistically optimal, as in adults. We suggest that during development, perceptual systems require constant recalibration, for which cross-sensory comparison is important. Using one sense to calibrate the other precludes useful combination of the two sources.
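The statistically optimal integration referred to above is usually formalized as a reliability-weighted average, where reliability is inverse variance. The sketch below illustrates that rule; the estimates and noise levels are assumptions invented for illustration, not data from the study.

```python
# Minimal sketch of statistically optimal (reliability-weighted)
# visual-haptic cue combination. The sigma values are illustrative
# assumptions, not measurements from the study.

def optimal_combination(est_v, sigma_v, est_h, sigma_h):
    """Combine a visual and a haptic estimate, weighting each cue
    by its reliability (inverse variance)."""
    r_v = 1.0 / sigma_v**2   # visual reliability
    r_h = 1.0 / sigma_h**2   # haptic reliability
    w_v = r_v / (r_v + r_h)  # weight given to vision
    combined = w_v * est_v + (1.0 - w_v) * est_h
    # The combined variance is lower than either cue's alone,
    # which is what makes integration worthwhile for adults.
    combined_var = (sigma_v**2 * sigma_h**2) / (sigma_v**2 + sigma_h**2)
    return combined, combined_var

# Hypothetical size estimates in mm: precise vision, noisier haptics.
size, var = optimal_combination(est_v=55.0, sigma_v=2.0,
                                est_h=52.0, sigma_h=4.0)
print(size, var)  # 54.4 3.2 -- pulled toward the more reliable cue
```

Total dominance by one sense, as reported here for young children, corresponds to setting that cue's weight to 1 regardless of the relative noise levels, forfeiting the variance reduction.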

5.
Memory characteristics of softness haptic perception by the human finger
Liu J  Song AG 《生理学报》2007,59(3):387-392
Haptic rendering technology is at the frontier of current research in virtual reality and teleoperated robotics, and softness haptics is an important part of it. Designing a haptic rendering interface requires a thorough study of the haptic perception characteristics of the human hand. In this paper, the memory characteristics of softness haptic perception by human fingers were studied on a softness haptic device. A recall experiment first determined the capacity of the finger's softness haptic memory; within that capacity, a recognition experiment was then conducted to analyze the reaction time of softness haptic memory. The experimental method is simple and effective, and the conclusions can not only be used to improve the design of haptic rendering devices but also provide a physiological basis for research on haptic rendering technology.

6.
Jakesch M  Carbon CC 《PloS one》2012,7(2):e31215

Background

Zajonc showed that the attitude towards stimuli that one had been previously exposed to is more positive than towards novel stimuli. This mere exposure effect (MEE) has been tested extensively using various visual stimuli. Research on the MEE is sparse, however, for other sensory modalities.

Methodology/Principal Findings

We used objects of two material categories (stone and wood) and two complexity levels (simple and complex) to test the influence of exposure frequency (F0 = novel stimuli, F2 = stimuli exposed twice, F10 = stimuli exposed ten times) under two sensory modalities (haptics only and haptics & vision). Effects of exposure frequency were found for highly complex stimuli, with liking increasing significantly from F0 to F2 and F10, but only for the stone category. Analysis of “Need for Touch” data showed the MEE in participants with high need for touch, which suggests different sensitivity or saturation levels of the MEE.

Conclusions/Significance

These different sensitivity or saturation levels might also reflect effects of expertise on the haptic evaluation of objects. It seems that haptic and cross-modal MEEs are influenced by factors similar to those in the visual domain, indicating a common cognitive basis.

7.
It is well known that motion facilitates the visual perception of solid object shape, particularly when surface texture or other identifiable features (e.g., corners) are present. Conventional models of structure-from-motion require the presence of texture or identifiable object features in order to recover 3-D structure. Is the facilitation in 3-D shape perception similar in magnitude when surface texture is absent? On any given trial in the current experiments, participants were presented with a single randomly-selected solid object (bell pepper or randomly-shaped “glaven”) for 12 seconds and were required to indicate which of 12 (for bell peppers) or 8 (for glavens) simultaneously visible objects possessed the same shape. The initial single object’s shape was defined either by boundary contours alone (i.e., presented as a silhouette), specular highlights alone, specular highlights combined with boundary contours, or texture. In addition, there was a haptic condition: in this condition, the participants haptically explored with both hands (but could not see) the initial single object for 12 seconds; they then performed the same shape-matching task used in the visual conditions. For both the visual and haptic conditions, motion (rotation in depth or active object manipulation) was present in half of the trials and was not present for the remaining trials. The effect of motion was quantitatively similar for all of the visual and haptic conditions–e.g., the participants’ performance in Experiment 1 was 93.5 percent higher in the motion or active haptic manipulation conditions (when compared to the static conditions). The current results demonstrate that deforming specular highlights or boundary contours facilitate 3-D shape perception as much as the motion of objects that possess texture. The current results also indicate that the improvement with motion that occurs for haptics is similar in magnitude to that which occurs for vision.

8.
Multisensory integration is a common feature of the mammalian brain that allows it to deal more efficiently with the ambiguity of sensory input by combining complementary signals from several sensory sources. Growing evidence suggests that multisensory interactions can occur as early as primary sensory cortices. Here we present incompatible visual signals (orthogonal gratings) to each eye to create visual competition between monocular inputs in primary visual cortex, where binocular combination would normally take place. The incompatibility prevents binocular fusion and triggers an ambiguous perceptual response in which the two images are perceived one at a time in an irregular alternation. One key function of multisensory integration is to minimize perceptual ambiguity by exploiting cross-sensory congruence. We show that a haptic signal matching one of the visual alternatives helps disambiguate visual perception during binocular rivalry, both by prolonging the dominance period of the congruent visual stimulus and by shortening its suppression period. Importantly, this interaction is strictly tuned for orientation, with a mismatch as small as 7.5° between visual and haptic orientations sufficient to annul the interaction. These results support two important conclusions: first, that vision and touch interact at early levels of visual processing, where interocular conflicts are first detected and orientation tunings are narrow, and second, that haptic input can influence visual signals outside of visual awareness, bringing a stimulus made invisible by binocular rivalry suppression back to awareness sooner than would occur without congruent haptic input.

9.
A characteristic of early visual processing is a reduction in the effective number of filter mechanisms acting in parallel over the visual field. In the detection of a line target differing in orientation from a background of lines, performance with brief displays appears to be determined by just two classes of orientation-sensitive filter, with preferred orientations close to the vertical and horizontal. An orientation signal represented as a linear combination of responses from such filters is shown to provide a quantitative prediction of the probability density function for identifying the perceived orientation of a target line. This prediction was confirmed in an orientation-matching experiment, which showed that the precision of orientation estimates was worst near the vertical and horizontal and best at about 30 degrees on each side of the vertical, a result that contrasts with the classical oblique effect in vision, when scrutiny of the image is allowed. A comparison of predicted and observed frequency distributions showed that the hypothesized orientation signal was formed as an opponent combination of horizontal and vertical filter responses.

10.

Background

The selection of appropriate frames of reference (FOR) is a key factor in the elaboration of spatial perception and the production of robust interaction with our environment. The extent to which we perceive the head axis orientation (subjective head orientation, SHO) with both accuracy and precision likely contributes to the efficiency of these spatial interactions. A first goal of this study was to investigate the relative contribution of both the visual and egocentric FOR (centre-of-mass) to SHO processing. A second goal was to investigate humans' ability to process SHO in various sensory response modalities (visual, haptic and visuo-haptic), and the way these modalities modify reliance on either the visual or egocentric FOR. A third goal was to question whether subjects combined visual and haptic cues optimally to increase SHO certainty and to decrease the FOR disruption effect.

Methodology/Principal Findings

Thirteen subjects were asked to indicate their SHO while the visual and/or egocentric FORs were deviated. Four results emerged from our study. First, visual rod settings to SHO were altered by the tilted visual frame but not by the egocentric FOR alteration, whereas haptic settings were altered neither by the egocentric FOR alteration nor by the tilted visual frame. These results are modulated by individual analysis. Second, visual and egocentric FOR dependency appear to be negatively correlated. Third, enriching the response modality appears to improve SHO. Fourth, several combination rules for the visuo-haptic cues, such as the Maximum Likelihood Estimation (MLE), Winner-Take-All (WTA) or Unweighted Mean (UWM) rule, seem to account for SHO improvements. However, the UWM rule seems to best account for the improvement of visuo-haptic estimates, especially in situations with high FOR incongruence. Finally, the data also indicated that FOR reliance resulted from the application of the UWM rule; this was observed particularly in visually dependent subjects.

Conclusions

Taken together, these findings emphasize the importance of identifying individual spatial FOR preferences to assess the efficiency of our interaction with the environment whilst performing spatial tasks.
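The three combination rules named in this abstract can be sketched roughly as follows; the function names and the example estimates and variances are illustrative assumptions, not the study's actual model code.

```python
# Rough sketch of the three visuo-haptic combination rules named in
# the abstract: Maximum Likelihood Estimation (MLE), Winner-Take-All
# (WTA), and Unweighted Mean (UWM). Estimates are in degrees of head
# orientation; all numbers are invented for illustration.

def mle(est_v, var_v, est_h, var_h):
    """MLE: average weighted by reliability (inverse variance),
    so the less noisy cue counts for more."""
    w_v = (1.0 / var_v) / (1.0 / var_v + 1.0 / var_h)
    return w_v * est_v + (1.0 - w_v) * est_h

def wta(est_v, var_v, est_h, var_h):
    """WTA: keep only the more reliable cue, discard the other."""
    return est_v if var_v < var_h else est_h

def uwm(est_v, est_h):
    """UWM: simple average, ignoring reliability."""
    return (est_v + est_h) / 2.0

# With a precise visual estimate (10 deg, var 1) and a noisy haptic
# one (20 deg, var 4), the three rules disagree:
print(mle(10.0, 1.0, 20.0, 4.0))  # 12.0 (pulled toward vision)
print(wta(10.0, 1.0, 20.0, 4.0))  # 10.0 (vision wins outright)
print(uwm(10.0, 20.0))            # 15.0 (midpoint)
```

Under high cue incongruence like this, the abstract reports that settings were best captured by the UWM rule, i.e., closer to the simple midpoint than to the reliability-weighted MLE prediction.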

11.
Categorization and categorical perception have been extensively studied, mainly in vision and audition. In the haptic domain, our ability to categorize objects has also been demonstrated in earlier studies. Here we show for the first time that categorical perception also occurs in haptic shape perception. We generated a continuum of complex shapes by morphing between two volumetric objects. Using similarity ratings and multidimensional scaling we ensured that participants could haptically discriminate all objects equally. Next, we performed classification and discrimination tasks. After a short training with the two shape categories, both tasks revealed categorical perception effects. Training leads to between-category expansion resulting in higher discriminability of physical differences between pairs of stimuli straddling the category boundary. Thus, even brief training can alter haptic representations of shape. This suggests that the weights attached to various haptic shape features can be changed dynamically in response to top-down information about class membership.

12.
Previous psychophysical studies have reported conflicting results concerning the effects of short-term visual deprivation upon tactile acuity. Some studies have found that 45 to 90 minutes of total light deprivation produce significant improvements in participants' tactile acuity as measured with a grating orientation discrimination task. In contrast, a single 2011 study found no such improvement while attempting to replicate these earlier findings. A primary goal of the current experiment was to resolve this discrepancy in the literature by evaluating the effects of a 90-minute period of total light deprivation upon tactile grating orientation discrimination. We also evaluated the potential effect of short-term deprivation upon haptic 3-D shape discrimination using a set of naturally-shaped solid objects. According to previous research, short-term deprivation enhances performance in a tactile 2-D shape discrimination task; perhaps a similar improvement also occurs for haptic 3-D shape discrimination. The results of the current investigation demonstrate that short-term visual deprivation not only fails to enhance tactile acuity but also has no effect upon haptic 3-D shape discrimination. While visual deprivation had no effect in our study, there was a significant effect of experience and learning for the grating orientation task: the participants' tactile acuity improved over time, independent of whether they had, or had not, experienced visual deprivation.

13.
Martens S  Kandula M  Duncan J 《PloS one》2010,5(12):e15280

Background

Most people show a remarkable deficit in reporting the second of two targets presented in close temporal succession, reflecting an attentional blink (AB). An aspect of the AB that is often ignored is that there are large individual differences in the magnitude of the effect. Here we exploit these individual differences to address a long-standing question: does attention to a visual target come at a cost for attention to an auditory target (and vice versa)? More specifically, the goal of the current study was to investigate (a) whether individuals with a large within-modality AB also show a large cross-modal AB, and (b) whether individual differences in AB magnitude within different modalities correlate or are completely separate.

Methodology/Principal Findings

While minimizing differential task difficulty and chances for a task-switch to occur, a significant AB was observed when targets were both presented within the auditory or visual modality, and a positive correlation was found between individual within-modality AB magnitudes. However, neither a cross-modal AB nor a correlation between cross-modal and within-modality AB magnitudes was found.

Conclusion/Significance

The results provide strong evidence that a major source of attentional restriction must lie in modality-specific sensory systems rather than a central amodal system, effectively settling a long-standing debate. Individuals with a large within-modality AB may be especially committed or focused in their processing of the first target, and to some extent that tendency to focus could cross modalities, reflected in the within-modality correlation. However, what they are focusing (resource allocation, blocking of processing) is strictly within-modality as it only affects the second target on within-modality trials. The findings show that individual differences in AB magnitude can provide important information about the modular structure of human cognition.

14.
The perception biases associated with manta tow estimates of the abundance of benthic organisms were investigated using artificial targets, the density, distribution and availability of which could be controlled. The proportion of targets counted by a manta-towed observer (sightability) and the precision of the resultant estimates of their abundance decreased as the targets were distributed more widely over a reef slope. The sightability of targets was enhanced when they were arranged at a high density or in relatively large groups of 9–11, or when they were located directly under the manta-towed observer rather than at the edges of his visual field. Limiting the search width of a manta-towed observer to about 9 m should improve manta tow estimates of target organisms. However, in practice this would be difficult for reef organisms such as Acanthaster planci because of the extreme three-dimensionality of the reef surface relative to the depth of the water.

15.
Humans can not only perform some visual tasks with great precision, but can also judge how good they are at these tasks. However, it remains unclear how observers produce such metacognitive evaluations, and how these evaluations might be dissociated from performance in the visual task. Here, we hypothesized that some stimulus variables could affect confidence judgments above and beyond their impact on performance. In a motion categorization task on moving dots, we manipulated the mean and the variance of the motion directions to obtain a low-mean low-variance condition and a high-mean high-variance condition with matched performances. Critically, in terms of confidence, observers were not indifferent between these two conditions. Observers exhibited marked preferences, which were heterogeneous across individuals but stable within each observer when assessed one week later. Thus, confidence and performance are dissociable, and observers’ confidence judgments put different weights on the stimulus variables that limit performance.

16.
Noninformative vision improves haptic spatial perception
Previous studies have attempted to map somatosensory space via haptic matching tasks and have shown that individuals make large and systematic matching errors, the magnitude and angular direction of which vary systematically through the workspace. Based upon such demonstrations, it has been suggested that haptic space is non-Euclidean. This conclusion assumes that spatial perception is modality specific, and it largely ignores the fact that tactile matching tasks involve active, exploratory arm movements. Here we demonstrate that, when individuals match two bar stimuli (i.e., make them parallel) in circumstances favoring extrinsic (visual) coordinates, providing noninformative visual information significantly increases the accuracy of haptic perception. In contrast, when individuals match the same bar stimuli in circumstances favoring the coding of movements in intrinsic (limb-based) coordinates, providing identical noninformative visual information either has no effect or leads to decreased accuracy of haptic perception. These results are consistent with optimal integration models of sensory integration, in which the weighting given to visual and somatosensory signals depends upon the precision of the visual and somatosensory information, and they provide important evidence for the task-dependent integration of visual and somatosensory signals during the construction of a representation of peripersonal space.

17.
We tested whether the intervening time between multiple glances influences the independence of the resulting visual percepts. Observers estimated how many dots were present in brief displays that repeated one, two, three, four, or a random number of trials later. Estimates made farther apart in time were more independent, and thus carried more information about the stimulus when combined. In addition, estimates from different visual field locations were more independent than estimates from the same location. Our results reveal a retinotopic serial dependence in visual numerosity estimates, which may be a mechanism for maintaining the continuity of visual perception in a noisy environment.

18.
This study examined effects of hand movement on visual perception of 3-D movement. I used an apparatus in which a cursor position in a simulated 3-D space and the position of a stylus on a haptic device could coincide using a mirror. In three experiments, participants touched the center of a rectangle in the visual display with the stylus of the force-feedback device. Then the rectangle's surface stereoscopically either protruded toward the participant or indented away from the participant. Simultaneously, the stylus either pushed back the participant's hand, pulled away, or remained static. Visual and haptic information were independently manipulated. Participants judged whether the rectangle visually protruded or was indented. Results showed that when the hand was pulled away, participants were biased to perceive rectangles as indented; however, when the hand was pushed back, no effect of haptic information was observed (Experiment 1). This effect persisted even when the cursor position was spatially separated from the hand position (Experiment 2). However, when participants touched an object different from the visual stimulus, the effect disappeared (Experiment 3). These results suggest that the visual system tries to integrate dynamic visual and haptic information when they coincide cognitively, and that the effect of haptic information on visually perceived depth is direction-dependent.

19.
Question: What precision and accuracy of visual cover estimations can be achieved after repeated calibration with images of vegetation in which the true cover is known, and what factors influence the results? Methods: Digital images were created, in which the true cover of vegetation was digitally calculated. Fifteen observers made repeated estimates with immediate feedback on the true cover. The effects on precision and accuracy through time were evaluated with repeated proficiency tests. In a field trial, cover estimates, before and after calibration, were compared with point frequency data. Results: Even a short time of calibration greatly improves precision and accuracy of the estimates, and can also reduce the influence of different backgrounds, aggregation patterns and experience. Experienced observers had a stronger tendency to underestimate the cover of narrow-leaved grasses before calibration. The field trial showed positive effects of computer-based calibration on precision, in that it led to considerably less between-observer variation for one of the two species groups. Conclusions: Computer-aided calibration of vegetation cover estimation is simple, self-explanatory and time-efficient, and might possibly reduce biases and drifts in estimate levels over time. Such calibration can also reduce between-observer variation in field estimates, at least for some species. However, the effects of calibration on estimations in the field must be further evaluated, especially for multilayered vegetation.

20.
Sixty subjects were tested to assign orientation to ten dot patterns differing in their overall form and the number of dots in the pattern. The patterns were presented in four different positions in the visual field and their orientation was estimated in two ways. It was demonstrated that the assignment of orientation depended neither on the position of the pattern in the visual field nor on the method of estimation used. A quantitative measure for the elongation of a dot pattern is proposed which correlates with the degree of ambiguity in orientation estimation: the greater the elongation, the smaller the standard deviation of the estimates given. The distributions of the estimates for the ten patterns were analyzed. It was shown that they can be presented as superpositions of two or more groups of normally distributed estimates determined by some salient characteristics of the stimuli. The data are discussed from the point of view that assignment of orientation to dot patterns reveals the existence of optimization mechanisms in the human brain that extract perceptual invariants from external stimulation.
