Similar Articles
1.
The elevated moon usually appears smaller than the horizon moon of equal angular size. This is the moon illusion. Distance cues may enable the perceptual system to place the horizon moon at an effectively greater distance than the elevated moon, thus making it appear larger. This explanation is related to the size-distance invariance hypothesis (SDIH). However, the larger horizon moon is usually judged as closer than the smaller zenith moon. A bias to expect an apparently large object to be closer than a smaller one may account for this conflict. We designed experiments to determine whether unbiased sensitivity to illusory differences in the size and distance of the moon (as measured by d') is consistent with the SDIH. A moon above a 'terrain' was compared in both distance and size to an infinitely distant moon in empty space (the reduction moon). At a short distance the terrain moon was judged as both closer and smaller than the reduction moon. These differences could not be detected at somewhat greater distances. At still greater distances the terrain moon was perceived as both more distant and larger than the reduction moon. The distances at which these transitions occurred were essentially the same for the distance and size discrimination tasks, thus supporting the SDIH.
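As a rough illustration of the two quantities this study rests on, the sketch below combines the usual textbook form of the SDIH (for a fixed visual angle, perceived linear size grows with perceived distance) with the standard equal-variance Gaussian formula for d'. The function names, distances and response rates are illustrative assumptions, not values or code from the study.
```python
from math import tan, radians
from statistics import NormalDist

# SDIH: for a fixed visual angle, perceived linear size is proportional to
# perceived distance, so a horizon moon "placed" farther away by the terrain
# should look larger than an elevated moon of the same angular size (~0.5 deg).
def sdih_perceived_size(visual_angle_deg, perceived_distance):
    return 2 * perceived_distance * tan(radians(visual_angle_deg) / 2)

print(sdih_perceived_size(0.5, 4000))  # horizon moon at an apparent 4 km
print(sdih_perceived_size(0.5, 2000))  # elevated moon at an apparent 2 km

# Unbiased sensitivity d' for a discrimination task, equal-variance Gaussian model.
def d_prime(hit_rate, false_alarm_rate):
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(false_alarm_rate)

print(d_prime(0.80, 0.30))  # ~1.37 for these illustrative rates
```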

2.
Perception and encoding of object size is an important feature of sensory systems. In the visual system object size is encoded by the visual angle (visual aperture) on the retina, but the aperture depends on the distance of the object. As object distance is not unambiguously encoded in the visual system, higher computational mechanisms are needed. This phenomenon is termed “size constancy”. It is assumed to reflect an automatic re-scaling of visual aperture with perceived object distance. Recently, it was found that in echolocating bats, the ‘sonar aperture’, i.e., the range of angles from which sound is reflected from an object back to the bat, is unambiguously perceived and neurally encoded. Moreover, it is well known that object distance is accurately perceived and explicitly encoded in bat sonar. Here, we addressed size constancy in bat biosonar, recruiting virtual-object techniques. Bats of the species Phyllostomus discolor learned to discriminate two simple virtual objects that only differed in sonar aperture. Upon successful discrimination, test trials were randomly interspersed using virtual objects that differed in both aperture and distance. It was tested whether the bats spontaneously assigned absolute width information to these objects by combining distance and aperture. The results showed that while the isolated perceptual cues encoding object width, aperture, and distance were all perceptually well resolved by the bats, the animals did not assign absolute width information to the test objects. This lack of sonar size constancy may result from the bats relying on different modalities to extract size information at different distances. Alternatively, it is conceivable that familiarity with a behaviorally relevant, conspicuous object is required for sonar size constancy, as it has been argued for visual size constancy. Based on the current data, it appears that size constancy is not necessarily an essential feature of sonar perception in bats.  相似文献   
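A minimal sketch of the geometry that sonar size constancy would require, assuming the simple relation that an object of width w at distance d subtends an aperture a with w = 2·d·tan(a/2); the function and the numeric values below are hypothetical, not taken from the experiments.
```python
from math import tan, radians

# If an object of width w at distance d returns echoes over a "sonar aperture"
# (angular extent) a, then to a first approximation w = 2 * d * tan(a / 2).
# Size constancy would mean assigning the same width to objects whose aperture
# and distance covary in this way; the bats did not do so spontaneously.
def width_from_aperture(aperture_deg, distance_m):
    return 2 * distance_m * tan(radians(aperture_deg) / 2)

# The same physical width seen from two distances (illustrative numbers):
print(width_from_aperture(20.0, 0.5))   # ~0.176 m at 0.5 m
print(width_from_aperture(10.1, 1.0))   # ~0.177 m at 1.0 m
```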

3.
Depth constancy is the ability to perceive a fixed depth interval in the world as constant despite changes in viewing distance and the spatial scale of depth variation. It is well known that the spatial frequency of depth variation has a large effect on threshold. In the first experiment, we determined that the visual system compensates for this differential sensitivity when the change in disparity is suprathreshold, thereby attaining constancy similar to contrast constancy in the luminance domain. In a second experiment, we examined the ability to perceive constant depth when the spatial frequency and viewing distance both changed. To attain constancy in this situation, the visual system has to estimate distance. We investigated this ability when vergence, accommodation and vertical disparity were all presented accurately and therefore provided veridical information about viewing distance. We found that constancy is nearly complete across changes in viewing distance. Depth constancy is most complete when the scale of the depth relief is constant in the world rather than when it is constant in angular units at the retina. These results bear on the efficacy of algorithms for creating stereo content. This article is part of the themed issue ‘Vision in our three-dimensional world’.
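For readers unfamiliar with why distance estimation is needed here, the sketch below uses the standard small-angle approximation relating binocular disparity, viewing distance and interocular separation; the approximation and the numbers are textbook assumptions, not the paper's model.
```python
# Standard small-angle approximation relating a horizontal disparity delta (rad),
# viewing distance D, and interocular separation I to the depth interval it signals:
#     depth ~= delta * D**2 / I
# Depth constancy therefore requires an estimate of D: the same disparity
# specifies four times the depth when the viewing distance doubles.
def depth_from_disparity(disparity_rad, viewing_distance_m, interocular_m=0.065):
    return disparity_rad * viewing_distance_m**2 / interocular_m

print(depth_from_disparity(0.001, 0.5))   # ~3.8 mm at 50 cm
print(depth_from_disparity(0.001, 1.0))   # ~15.4 mm at 1 m: same disparity, 4x the depth
```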

4.
We examined Emmert's law by measuring the perceived size of an afterimage and the perceived distance of the surface on which the afterimage was projected in actual and virtual environments. The actual environment consisted of a corridor with ample cues as to distance and depth. The virtual environment was made from the CAVE of a virtual reality system. The afterimage, disc-shaped and one degree in diameter, was produced by flashing with an electric photoflash. The observers were asked to estimate the perceived distance to surfaces located at various physical distances (1 to 24 m) by the magnitude estimation method and to estimate the perceived size of the afterimage projected on the surfaces by a matching method. The results show that the perceived size of the afterimage was directly proportional to the perceived distance in both environments; thus, Emmert's law holds in virtual as well as actual environments. We suggest that Emmert's law is a specific case of a functional principle of distance scaling by the visual system.
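A small illustration of what "directly proportional" means operationally: fitting a single constant of proportionality through the origin to (perceived distance, matched size) pairs. The data points below are made up for illustration (roughly consistent with a one-degree afterimage, sizes in cm); they are not the study's measurements.
```python
# Emmert's law predicts that perceived afterimage size is directly proportional
# to perceived surface distance: a single slope through the origin should fit.
distances = [1, 2, 4, 8, 16, 24]           # perceived distances in m (illustrative)
sizes = [1.7, 3.5, 6.9, 14.1, 27.7, 42.0]  # matched afterimage sizes in cm (illustrative)

# Least-squares slope for y = k * x through the origin.
k = sum(d * s for d, s in zip(distances, sizes)) / sum(d * d for d in distances)
print(k)  # ~1.75 cm per metre; one constant fits all points if the law holds
print([round(s - k * d, 2) for d, s in zip(distances, sizes)])  # small residuals
```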

5.
The amount of depth perceived from a fixed pattern of horizontal disparities varies with viewing distance. We investigated whether thresholds for discriminating stereoscopic corrugations at a range of spatial frequencies were also affected by viewing distance or whether they were determined solely by the angular disparity in the stimulus prior to scaling. Although thresholds were found to be determined primarily by disparity over a broad range of viewing distances, they were on average a factor of two higher at the shortest viewing distance (28.5 cm) than at larger viewing distances (57 to 450 cm). We found the same pattern of results when subjects' accommodation was arranged to be the same at all viewing distances. The change in thresholds at close distances is in the direction expected if subjects' performance is limited by a minimum perceived depth.

6.
Stereo or ‘3D’ vision is an important but costly process seen in several evolutionarily distinct lineages including primates, birds and insects. Many selective advantages could have led to the evolution of stereo vision, including range finding, camouflage breaking and estimation of object size. In this paper, we investigate the possibility that stereo vision enables praying mantises to estimate the size of prey by using a combination of disparity cues and angular size cues. We used a recently developed insect 3D cinema paradigm to present mantises with virtual prey having differing disparity and angular size cues. We predicted that if they were able to use these cues to gauge the absolute size of objects, we should see evidence for size constancy, whereby they would strike preferentially at prey of a particular physical size across a range of simulated distances. We found that mantises struck most often when disparity cues implied a prey distance of 2.5 cm; increasing the implied distance caused a significant reduction in the number of strikes. However, we found no evidence for size constancy. There was a significant interaction effect of simulated distance and angular size on the number of strikes made by the mantis, but this was not in the direction predicted by size constancy. This indicates that mantises do not use their stereo vision to estimate object size. We conclude that other selective advantages, not size constancy, have driven the evolution of stereo vision in the praying mantis. This article is part of the themed issue ‘Vision in our three-dimensional world’.

7.
Colour constancy is generally assumed to arise from a combination of perceptual constancy mechanisms operating to partially discount illumination changes and relational mechanisms involved in judging the colour relationships between object surfaces. Here we examined the characteristics of these mechanisms using a 'yes/no' task. Subjects judged whether a target colour patch embedded in an array of coloured patches (a) stayed the same across a simulated temporal illuminant change (local colour judgement), or (b) changed in a manner consistent with the illuminant change (relational colour judgement). The colour of the target patch remained constant in one-third of the trials, changed in accord with the illuminant shift in another third, and shifted partially with the illuminant change in the remaining third. We found that perceptual constancy was relatively weak and relational constancy strong, as assessed using a modified colour constancy index. Randomising the spatial positions of coloured patches across the illuminant change did not affect subjects' constancy indices. Application of signal detection analysis revealed some otherwise hidden effects. In the case of relational judgements, subjects adopted more conservative criteria (fewer true and false positives) with randomisation, maintaining a constant level of discrimination performance (d'). For local judgements, randomisation led to small increases in performance but no changes in criteria. We conclude that signal detection theory provides a useful tool to supplement conventional approaches to understanding colour constancy.
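A brief sketch of the signal-detection quantities mentioned in the abstract, assuming the standard equal-variance Gaussian model for a yes/no task; the hit and false-alarm rates below are invented purely to illustrate how a criterion can become more conservative while d' stays roughly constant.
```python
from statistics import NormalDist

z = NormalDist().inv_cdf

# Equal-variance Gaussian signal-detection model for a yes/no task:
# sensitivity d' and response criterion c from hit and false-alarm rates.
def d_prime(hits, false_alarms):
    return z(hits) - z(false_alarms)

def criterion(hits, false_alarms):
    return -0.5 * (z(hits) + z(false_alarms))

# A more conservative criterion (fewer "yes" responses of both kinds) can leave
# d' essentially unchanged, the pattern reported for relational judgements
# under spatial randomisation. Rates are illustrative, not the study's data.
print(d_prime(0.80, 0.30), criterion(0.80, 0.30))   # ~1.37, ~-0.16
print(d_prime(0.69, 0.20), criterion(0.69, 0.20))   # ~1.34, ~0.17 (more conservative)
```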

8.
The perceptual difference between stimuli can be regarded as a distance within the perceptual space of the bee. The author used this assumption to determine the specific distance function by which the differences in the individual perceptual parameters combine into the perceptual difference between the complex stimulus and the reference stimulus. Perceptual differences can be deduced only indirectly from the choice frequency, so it was necessary to establish a “calibration curve” to deduce the perceptual difference quantitatively from the choice frequency. The resulting hyperbolic curves for the parameters “brightness” and “size” were almost identical (Fig. 2). The perceptual difference between the complex stimulus and the reference stimulus is greater than one would expect in a Euclidean space. Rather, it is the sum of the distances along the perceptual parameters that compose the complex stimulus (Fig. 3). Thus, the bee determines the perceived difference of composite stimuli affecting the perceptual parameters “brightness” and “size” according to the city-block metric.
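The two candidate distance functions can be stated compactly; the sketch below contrasts them for a two-dimensional perceptual space spanned by "brightness" and "size". The values are arbitrary illustrative units, not measurements from the bee experiments.
```python
# Perceptual difference between a complex stimulus and a reference, treated as a
# distance in a 2-D perceptual space spanned by "brightness" and "size".
def euclidean(d_brightness, d_size):
    return (d_brightness**2 + d_size**2) ** 0.5

def city_block(d_brightness, d_size):
    return abs(d_brightness) + abs(d_size)

# The bees' choices matched the city-block combination (sum of per-dimension
# differences) rather than the Euclidean one; values in illustrative units.
print(euclidean(1.0, 1.0))   # ~1.41
print(city_block(1.0, 1.0))  # 2.0
```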

9.
Training experiments were performed to investigate the ability of goldfish to discriminate objects differing in spatial depth. Tests on size constancy should give insight into the mechanisms of distance estimation. Goldfish were successfully trained to discriminate between two black disk stimuli of equal size but different distance from the tank wall. Each stimulus was presented in a white tube so that the fish could see only one stimulus at a time. For each of eight training stimulus distances, the just noticeable difference in distance was determined at a threshold criterion of 70% choice frequency. The ratio of the retinal image sizes between training stimulus and comparison stimulus at threshold was about constant. However, in contrast to Douglas et al. (Behav Brain Res 30:37–42, 1988), goldfish did not show size constancy in tests with stimuli of the same visual angle. This indicates that they did not estimate distance, but simply compared the retinal images under our experimental conditions. We did not find any indication for the use of accommodation as a depth cue. A patterned background at the rear end of the tubes did not have any effect, which, however, does not exclude the possibility that motion parallax is used as a depth cue under natural conditions.

10.
Perception relies on the response of populations of neurons in sensory cortex. How the response profile of a neuronal population gives rise to perception and perceptual discrimination has been conceptualized in various ways. Here we suggest that neuronal population responses represent information about our environment explicitly as Fisher information (FI), a local measure of how precisely the sensory input can be estimated. We show how this sensory information can be read out and combined to infer, from the available information profile, which stimulus value is perceived during a fine discrimination task. In particular, we propose that the perceived stimulus corresponds to the stimulus value that yields the same information for each of the alternative directions, and we compare the model prediction to standard models considered in the literature (population vector, maximum likelihood, maximum-a-posteriori Bayesian inference). The models are applied to human performance in a motion discrimination task in which task-irrelevant motion in the spatial surround of the target stimulus induces perceptual misjudgements of the target direction of motion (motion repulsion). Using the neurophysiological insight that surround motion suppresses neuronal responses to the target motion in the center, all models predicted the pattern of perceptual misjudgements. The variation of discrimination thresholds (error on the perceived value) was also explained by the changes in total FI content with varying surround motion directions. The proposed FI decoding scheme incorporates recent neurophysiological evidence from macaque visual cortex showing that perceptual decisions rely not on the most active neurons but on the most informative neuronal responses. We statistically compare the prediction capability of the FI decoding approach and the standard decoding models. Notably, all models reproduced the variation of the perceived stimulus values for different surrounds, but with different neuronal tuning characteristics underlying perception. Compared to the FI approach, the prediction power of the standard models was based on neurons with far wider tuning widths and stronger surround suppression. Our study demonstrates that perceptual misjudgements can be based on neuronal populations explicitly encoding the available sensory information, and provides testable neurophysiological predictions on the neuronal tuning characteristics underlying human perceptual decisions.
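As a generic illustration of the quantity being decoded, the sketch below computes the Fisher information of a population of independent Poisson neurons with Gaussian tuning for motion direction; the tuning parameters and preferred directions are textbook placeholders, not the model fitted in the paper.
```python
import numpy as np

# Fisher information of a population of independent Poisson neurons with
# Gaussian tuning curves: FI(s) = sum_i f_i'(s)**2 / f_i(s).
# Generic textbook construction; not the paper's fitted model.
def tuning(s, preferred, gain=20.0, width=30.0):
    return gain * np.exp(-0.5 * ((s - preferred) / width) ** 2)

def tuning_deriv(s, preferred, gain=20.0, width=30.0):
    return -(s - preferred) / width**2 * tuning(s, preferred, gain, width)

def fisher_information(s, preferred_directions):
    f = tuning(s, preferred_directions)
    fp = tuning_deriv(s, preferred_directions)
    return np.sum(fp**2 / f)

preferred = np.arange(-180.0, 180.0, 10.0)  # preferred motion directions (deg)
fi = fisher_information(0.0, preferred)     # information about a 0-deg target
print(fi)
# Cramer-Rao: FI bounds the best achievable discrimination precision.
print(1.0 / np.sqrt(fi))                    # lower bound on the sd of an unbiased estimate (deg)
```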

11.
The constancy of perception of linear sizes of 3–40 cm at distances of 0.7 to 5.6 m was studied. The results of investigations carried out on 66 subjects are presented. Sizes less than 10 cm and sizes greater than 10 cm are perceived according to different laws. Larger sizes, as they are moved away, are assessed in one of two modes of perception: aconstantly (the perceived size diminishes) or constantly (the apparent size is preserved). Sizes not exceeding 10 cm are increasingly overestimated as distance grows, with the overestimation reaching 40%. Evidently only one mode of perception, hyperconstancy, exists for sizes less than 10 cm.

12.
For audiovisual sensory events, sound arrives with a delay relative to light that increases with event distance. It is unknown, however, whether humans can use these ubiquitous sound delays as an information source for distance computation. Here, we tested the hypothesis that audiovisual delays can both bias and improve human perceptual distance discrimination, such that visual stimuli paired with auditory delays are perceived as more distant, making sound delay an ordinal distance cue. In two experiments, participants judged the relative distance of two repetitively displayed three-dimensional dot clusters, both presented with sounds of varying delays. In the first experiment, dot clusters presented with a sound delay were judged to be more distant than dot clusters paired with equivalent sound leads. In the second experiment, we confirmed that the presence of a sound delay was sufficient to cause stimuli to appear more distant. Additionally, we found that ecologically congruent pairing of more distant events with a sound delay increased the precision of distance judgments. A control experiment determined that the sound delay durations influencing these distance judgments were not themselves detectable, thereby ruling out a decision-level influence. In sum, we present evidence that audiovisual delays can be an ordinal cue to visual distance.
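The physical basis of the cue is simple to state: sound in air lags light by roughly 2.9 ms per metre of event distance. The sketch below makes that arithmetic explicit; the speed-of-sound constant is a standard value and the distances are illustrative.
```python
# Sound travels ~343 m/s in air, so the sound of an audiovisual event arrives
# roughly 2.9 ms later per metre of distance; light's travel time is negligible.
# A sound delay relative to the light is therefore an ordinal distance cue.
SPEED_OF_SOUND = 343.0  # m/s, dry air at ~20 C

def av_delay_ms(distance_m):
    return 1000.0 * distance_m / SPEED_OF_SOUND

for d in (1, 10, 34.3, 100):
    print(d, "m ->", round(av_delay_ms(d), 1), "ms")
# 1 m -> 2.9 ms, 10 m -> 29.2 ms, 34.3 m -> 100.0 ms, 100 m -> 291.5 ms
```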

13.
Weakly electric fish use active electrolocation for orientation at night. They emit electric signals (electric organ discharges) which generate an electrical field around their body. By sensing field distortions, fish can detect objects and analyze their properties. It is unclear, however, how accurately they can determine the distance of unknown objects. Four Gnathonemus petersii were trained in two-alternative forced-choice procedures to discriminate between two objects differing in their distances to a gate. The fish learned to pass through the gate behind which the corresponding object was farther away. Distance discrimination thresholds for different types of objects were determined. Locomotor and electromotor activity during distance measurement were monitored. Our results revealed that all individuals quickly learned to measure object distance irrespective of size, shape or electrical conductivity of the object material. However, the distances of hollow, water-filled cubes and spheres were consistently misjudged in comparison with solid or more angular objects, being perceived as farther away than they really were. As training continued, fish learned to compensate for these 'electrosensory illusions' and erroneous choices disappeared with time. Distance discrimination thresholds depended on object size and overall object distance. During distance measurement, the fish produced a fast regular rhythm of EOD discharges. A mechanism for distance determination during active electrolocation is proposed.

14.
Perceptual decision making has been widely studied using tasks in which subjects are asked to discriminate a visual stimulus and instructed to report their decision with a movement. In these studies, performance is measured by assessing the accuracy of the participants’ choices as a function of the ambiguity of the visual stimulus. Typically, the reporting movement is considered a mere means of reporting the decision, with no influence on the decision-making process. However, recent studies have shown that even subtle differences in biomechanical cost between movements may influence how we select between them. Here we investigated whether this purely motor cost could also influence decisions in a perceptual discrimination task, to the detriment of accuracy. In other words, are perceptual decisions dependent only on the visual stimulus and entirely orthogonal to motor costs? We report a psychophysical experiment in which human subjects were presented with a random-dot motion discrimination task and asked to report the perceived motion direction using movements of different biomechanical cost. We found that the pattern of decisions exhibited a significant bias towards the movement of lower cost, even when this bias reduced accuracy. This strongly suggests that motor costs influence decision making in visual discrimination tasks even when their contribution is neither instructed nor beneficial.

15.
The identity of an object is a fixed property, independent of where it appears, and an effective visual system should capture this invariance [1-3]. However, we now report that the perceived gender of a face is strongly biased toward male or female at different locations in the visual field. The spatial pattern of these biases was distinctive and stable for each individual. Identical neutral faces looked different when they were presented simultaneously at locations maximally biased to opposite genders. A similar effect was observed for perceived age of faces. We measured the magnitude of this perceptual heterogeneity for four other visual judgments: perceived aspect ratio, orientation discrimination, spatial-frequency discrimination, and color discrimination. The effect was sizeable for the aspect ratio task but substantially smaller for the other three tasks. We also evaluated perceptual heterogeneity for facial gender and orientation tasks at different spatial scales. Strong heterogeneity was observed even for the orientation task when tested at small scales. We suggest that perceptual heterogeneity is a general property of visual perception and results from undersampling of the visual signal at spatial scales that are small relative to the size of the receptive fields associated with each visual attribute.

16.
Young children do not integrate visual and haptic form information
Several studies have shown that adults integrate visual and haptic information (and information from other modalities) in a statistically optimal fashion, weighting each sense according to its reliability [1, 2]. When does this capacity for crossmodal integration develop? Here, we show that prior to 8 years of age, integration of visual and haptic spatial information is far from optimal, with either vision or touch dominating totally, even in conditions in which the dominant sense is far less precise than the other (as assessed by discrimination thresholds). For size discrimination, haptic information dominates in determining both perceived size and discrimination thresholds, whereas for orientation discrimination, vision dominates. By 8-10 years, integration becomes statistically optimal, as in adults. We suggest that during development, perceptual systems require constant recalibration, for which cross-sensory comparison is important. Using one sense to calibrate the other precludes useful combination of the two sources.
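A compact sketch of what "statistically optimal" integration means here, assuming the standard maximum-likelihood rule of weighting each cue by its inverse variance; the estimates and standard deviations below are illustrative, not data from the children or adults tested.
```python
# Statistically optimal (maximum-likelihood) integration of a visual and a haptic
# size estimate: weight each cue by its reliability (inverse variance).
def integrate(visual_est, visual_sd, haptic_est, haptic_sd):
    w_v = 1.0 / visual_sd**2
    w_h = 1.0 / haptic_sd**2
    combined = (w_v * visual_est + w_h * haptic_est) / (w_v + w_h)
    combined_sd = (1.0 / (w_v + w_h)) ** 0.5
    return combined, combined_sd

# Adult-like behaviour: the combined estimate sits nearer the more reliable cue
# and is more precise than either cue alone. Numbers are illustrative.
print(integrate(visual_est=5.0, visual_sd=0.5, haptic_est=5.6, haptic_sd=1.0))
# -> (~5.12, ~0.45): closer to vision, with a smaller sd than vision alone
```
A young child behaving non-optimally would instead follow one cue outright, equivalent to setting the other weight to zero regardless of its precision.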

17.
Perception is fundamentally underconstrained because different combinations of object properties can generate the same sensory information. To disambiguate sensory information into estimates of scene properties, our brains incorporate prior knowledge and additional “auxiliary” (i.e., not directly relevant to the desired scene property) sensory information to constrain perceptual interpretations. For example, knowing the distance to an object helps in perceiving its size. The literature contains few demonstrations of the use of prior knowledge and auxiliary information in combined visual and haptic disambiguation, and almost no examination of haptic disambiguation of vision beyond “bistable” stimuli. Previous studies have reported that humans integrate multiple unambiguous sensations to perceive single, continuous object properties, like size or position. Here we test whether humans use visual and haptic information, individually and jointly, to disambiguate size from distance. We presented participants with a ball moving in depth with a changing diameter. Because no unambiguous distance information is available under monocular viewing, participants rely on prior assumptions about the ball's distance to disambiguate their size percept. Presenting auxiliary binocular and/or haptic distance information augments participants' prior distance assumptions and improves their size judgment accuracy, though binocular cues were trusted more than haptic cues. Our results suggest that both visual and haptic distance information disambiguate size perception, and we interpret these results in the context of probabilistic perceptual reasoning.

18.
During mate choice, receivers often assess the magnitude (duration, size, etc.) of signals that vary along a continuum and reflect variation in signaller quality. It is generally assumed that receivers assess this variation linearly, meaning each difference in signalling trait between signallers results in a commensurate change in receiver response. However, increasing evidence shows receivers can respond to signals non-linearly, for example through Weber's Law of proportional processing, where discrimination between stimuli is based on proportional, rather than absolute, differences in magnitude. We quantified mate preferences of female green swordtail fish, Xiphophorus hellerii, for pairs of males differing in body size. Preferences for larger males were better predicted by the proportional difference between males (proportional processing) than the absolute difference (linear processing). This demonstration of proportional processing of a visual signal implies that receiver perception may be an important mechanism selecting against the evolution of ever-larger signalling traits.
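To make the contrast concrete, the sketch below compares a linear (absolute-difference) and a proportional (Weber-like) reading of the same size difference between two males; the body sizes are hypothetical, not the swordtail measurements.
```python
# Linear vs proportional (Weber-like) processing of a size difference between
# two males. Under proportional processing, preference tracks the ratio of
# body sizes rather than their absolute difference.
def absolute_difference(size_a, size_b):
    return abs(size_a - size_b)

def proportional_difference(size_a, size_b):
    return abs(size_a - size_b) / min(size_a, size_b)

# The same 5 mm absolute difference is proportionally much smaller between
# two large males, so it should be discriminated less readily.
print(absolute_difference(30, 35), proportional_difference(30, 35))   # 5, ~0.17
print(absolute_difference(60, 65), proportional_difference(60, 65))   # 5, ~0.08
```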

19.
Auditory streaming and visual plaids have been used extensively to study perceptual organization in each modality. Both stimuli can produce bistable alternations between grouped (one object) and split (two objects) interpretations. They also share two peculiar features: (i) at the onset of stimulus presentation, organization starts with a systematic bias towards the grouped interpretation; (ii) this first percept has 'inertia'; it lasts longer than the subsequent ones. As a result, the probability of forming different objects builds up over time, a hallmark of both behavioural and neurophysiological data on auditory streaming. Here we show that the first-percept bias and inertia are independent. In plaid perception, inertia is due to a depth-ordering ambiguity in the transparent (split) interpretation that makes plaid perception tristable rather than bistable: experimental manipulations removing the depth ambiguity suppressed inertia. However, the first-percept bias persisted. We attempted a similar manipulation for auditory streaming by introducing level differences between streams, to bias which stream would appear in the perceptual foreground. Here both inertia and the first-percept bias persisted. We thus argue that the critical common feature of the onset of perceptual organization is the grouping bias, which may be related to the transition from temporally/spatially local to temporally/spatially global computation.

