Related Articles
20 similar records found
1.
Determining distances to objects is one of the most ubiquitous perceptual tasks in everyday life. Nevertheless, it is challenging because the information from a single image confounds object size and distance. Though our brains frequently judge distances accurately, the underlying computations employed by the brain are not well understood. Our work illuminates these computations by formulating a family of probabilistic models that encompass a variety of distinct hypotheses about distance and size perception. We compare these models' predictions to a set of human distance judgments in an interception experiment and use Bayesian analysis tools to quantitatively select the best hypothesis on the basis of its explanatory power and robustness over experimental data. The central question is whether, and how, human distance perception incorporates size cues to improve accuracy. Our conclusions are: 1) humans incorporate haptic object size sensations for distance perception, 2) the incorporation of haptic sensations is suboptimal given their reliability, 3) humans use environmentally accurate size and distance priors, and 4) distance judgments are produced by perceptual "posterior sampling". In addition, we compared our model's estimated sensory and motor noise parameters with previously reported measurements in the perceptual literature and found good correspondence between them. Taken together, these results represent a major step forward in establishing the computational underpinnings of human distance perception and the role of size information.
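As a rough illustration of the modeling approach described above (not the authors' actual model), the sketch below draws a distance judgment by posterior sampling on a grid, combining a noisy retinal angular-size measurement with a noisy haptic size cue under log-normal size and distance priors. All parameter values, priors and the function name are illustrative assumptions.

```python
import numpy as np

def posterior_sample_distance(theta_obs, size_haptic, rng,
                              sigma_theta=0.02,    # retinal-angle noise (rad), illustrative
                              sigma_size=0.01,     # haptic size noise (m), illustrative
                              prior_mu=np.log(1.0), prior_sigma=0.6):
    """Draw one distance judgment (m) by posterior sampling on a grid.

    Hypothetical generative model:
      distance d    ~ LogNormal(prior_mu, prior_sigma)
      object size s ~ LogNormal(log(0.05), 0.5)
      retinal angle ~ Normal(s / d, sigma_theta)   (small-angle approximation)
      haptic size   ~ Normal(s, sigma_size)
    """
    d_grid = np.linspace(0.2, 5.0, 400)      # candidate distances (m)
    s_grid = np.linspace(0.01, 0.20, 200)    # candidate object sizes (m)
    D, S = np.meshgrid(d_grid, s_grid, indexing="ij")

    log_post = (
        -((np.log(D) - prior_mu) ** 2) / (2 * prior_sigma ** 2) - np.log(D)   # distance prior
        - ((np.log(S) - np.log(0.05)) ** 2) / (2 * 0.5 ** 2) - np.log(S)      # size prior
        - ((theta_obs - S / D) ** 2) / (2 * sigma_theta ** 2)                 # retinal likelihood
        - ((size_haptic - S) ** 2) / (2 * sigma_size ** 2)                    # haptic likelihood
    )
    post_d = np.exp(log_post - log_post.max()).sum(axis=1)   # marginalize over size
    post_d /= post_d.sum()
    return rng.choice(d_grid, p=post_d)      # a sample, not the posterior mean or MAP

rng = np.random.default_rng(0)
print(posterior_sample_distance(theta_obs=0.05, size_haptic=0.06, rng=rng))
```

Returning a sample from the posterior, rather than its mean, is what "posterior sampling" refers to: repeated judgments scatter in proportion to posterior uncertainty.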

2.
Territorial passerines presumably benefit from their ability to use auditory cues to judge the distance to singing conspecifics, by increasing the efficiency of their territorial defence. Here, we report data on the approach of male territorial chaffinches, Fringilla coelebs, to a loudspeaker broadcasting conspecific song that simulated a rival at various distances by means of different amounts of song degradation. Songs were degraded digitally in a computer-simulated forest emulating distances of 0, 20, 40, 80 and 120 m. The approach distance of chaffinches towards the loudspeaker increased with increasing amounts of degradation, indicating a perceptual representation of differences in the distance of a sound source. We discuss the interindividual variation in male responses with respect to constraints resulting from random variation in the ranging cues provided by environmental song degradation, perceptual accuracy, and decision rules. Copyright 2000 The Association for the Study of Animal Behaviour.

3.
We examined Emmert's law by measuring the perceived size of an afterimage and the perceived distance of the surface on which the afterimage was projected in actual and virtual environments. The actual environment consisted of a corridor with ample distance and depth cues. The virtual environment was created with the CAVE of a virtual reality system. The afterimage, disc-shaped and one degree in diameter, was produced with an electric photoflash. The observers were asked to estimate the perceived distance to surfaces located at various physical distances (1 to 24 m) by the magnitude estimation method and to estimate the perceived size of the afterimage projected on those surfaces by a matching method. The results show that the perceived size of the afterimage was directly proportional to the perceived distance in both environments; thus, Emmert's law holds in virtual as well as actual environments. We suggest that Emmert's law is a specific case of a functional principle of distance scaling by the visual system.

4.
Multiple cues contribute to the visual perception of an object's distance from the observer. The manner in which the nervous system combines these various cues is of considerable interest. Although it is accepted that image cues play a significant role in distance perception, controversy exists regarding the use of kinaesthetic information about the eyes' state of convergence. We used a perturbation technique to explore the contribution of vergence to visually based distance estimates as a function of both fixation distance and the availability of retinal information. Our results show that the nervous system increases the weighting given to vergence as (i) fixation distance becomes closer; and (ii) the available retinal image cues decrease. We also identified the presence of a strong contraction bias when distance cues were studied in isolation, but we argue that such biases do not suggest that vergence provides an ineffectual signal for near-space perception.

5.
Sensory reweighting is a characteristic of postural control functioning that accommodates environmental changes. Monocular versus binocular viewing reduces or increases, respectively, the influence of a moving room on postural sway, suggesting visual reweighting according to the quality of the available sensory cues. Because in our previous study visual conditions were set before each trial, participants could adjust the weight of the different sensory systems in an anticipatory manner based upon the reduction in quality of the visual information. Nevertheless, in daily situations this adjustment is a dynamic process and occurs during ongoing movement. The purpose of this study was to examine the effect of visual transitions on the coupling between visual information and body sway at two different distances from the front wall of a moving room. Eleven young adults stood upright inside a moving room at two distances (75 and 150 cm) wearing liquid-crystal goggles whose lenses could be switched individually between opaque and transparent. Participants stood still for five minutes in each trial, and the lens state changed every minute (no vision to binocular vision, no vision to monocular vision, binocular vision to monocular vision, and vice versa). Results showed that the farther distance and monocular vision reduced the effect of visual manipulation on postural sway. The effect of visual transition was condition dependent, with a stronger effect when transitions involved binocular vision than monocular vision. Based upon these results, we conclude that the increased distance from the front wall of the room reduced the effect of visual manipulation on postural sway and that sensory reweighting is stimulus-quality dependent, with binocular vision producing a much stronger down/up-weighting than monocular vision.

6.
Insects can estimate the distance or time-to-contact of surrounding objects from locomotion-induced changes in their retinal position and/or size. Freely walking fruit flies (Drosophila melanogaster) use the received mixture of different distance cues to select the nearest objects for subsequent visits. Conventional methods of behavioral analysis fail to elucidate the underlying data extraction. Here we demonstrate the first comprehensive solution to this problem by substituting virtual for real objects; a tracker-controlled 360-degree panorama converts a fruit fly's changing coordinates into object illusions that require the perception of specific cues to appear at preselected distances up to infinity. An application reveals the following: (1) en-route sampling of retinal-image changes accounts for distance discrimination within a surprising range of at least 8-80 body lengths (20-200 mm). Stereopsis and peering are not involved. (2) Distance from image translation in the expected direction (motion parallax) outweighs distance from image expansion, which accounts for impact-avoiding flight reactions to looming objects. (3) The ability to discriminate distances is robust to artificially delayed updating of image translation. Fruit flies appear to interrelate self-motion and its visual feedback within a surprisingly long time window of about 2 s. The comparative distance inspection practiced by the small fruit fly deserves utilization in self-moving robots.

7.
We argue that objective fidelity evaluation of virtual environments, such as flight simulation, should be human-performance-centred and task-specific rather than measure the match between simulation and physical reality. We show how principled experimental paradigms and behavioural models for quantifying human performance in simulated environments, which have emerged from research in multisensory perception, provide a framework for objectively evaluating the contribution of individual cues to human performance measures of fidelity. We present three examples in a flight simulation environment as a case study. Experiment 1: Detection and categorisation of auditory and kinematic motion cues; Experiment 2: Performance evaluation in a target-tracking task; Experiment 3: Transferrable learning of auditory motion cues. We show how the contribution of individual cues to human performance can be robustly evaluated for each task and that the contribution is highly task dependent. The same auditory cues that can be discriminated and are optimally integrated in Experiment 1 do not contribute to target-tracking performance in an in-flight refuelling simulation without training (Experiment 2). In Experiment 3, however, we demonstrate that the auditory cue leads to significant, transferrable performance improvements with training. We conclude that objective fidelity evaluation requires a task-specific analysis of the contribution of individual cues.

8.
Understanding adaptive behavior requires precisely controlled presentation of multisensory stimuli combined with simultaneous measurement of multiple behavioral modalities. Hence, we developed a virtual reality apparatus that allows for simultaneous measurement of reward checking, a commonly used measure in associative learning paradigms, and navigational behavior, along with precisely controlled presentation of visual, auditory and reward stimuli. Rats performed a virtual spatial navigation task analogous to the Morris maze in which only distal visual or auditory cues provided spatial information. Spatial navigation and reward checking maps showed experience-dependent learning and were in register for distal visual cues. However, they showed a dissociation, whereby distal auditory cues failed to support spatial navigation but did support spatially localized reward checking. These findings indicate that rats can navigate in virtual space with only distal visual cues, without significant vestibular or other sensory inputs. Furthermore, they reveal a simultaneous dissociation between two reward-driven behaviors.

9.
The object of this study is to mathematically specify important characteristics of the visual flow during translation of the eye for the perception of depth and self-motion. We address various strategies by which the central nervous system may estimate self-motion and depth from motion parallax, using equations for the visual velocity field generated by translation of the eye through space. Our results focus on information provided by the movement and deformation of three-dimensional objects and on local flow behavior around a fixated point. All of these issues are addressed mathematically in terms of definite equations for the optic flow. This formal characterization of the visual information presented to the observer is then considered in parallel with other sensory cues to self-motion in order to see how these contribute to the effective use of visual motion parallax, and how parallactic flow can, conversely, contribute to the sense of self-motion. This article focuses on a central case for understanding motion parallax in spacious real-world environments: monocular visual cues observable during pure horizontal translation of the eye through a stationary environment. We suggest that the global optokinetic stimulus associated with visual motion parallax must converge in significant fashion with vestibular and proprioceptive pathways that carry signals related to self-motion. Suggestions of experiments to test some of the predictions of this study are made.
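For concreteness, one standard flow-field relation of the kind this abstract refers to (a textbook form, not necessarily the authors' exact notation): for an eye translating with speed $v$ through a stationary scene, a point at distance $d$ and angular eccentricity $\theta$ from the direction of translation sweeps across the retina with angular velocity

\[
\dot{\theta} \;=\; \frac{v\,\sin\theta}{d} .
\]

So, for a given self-motion speed, image velocity falls off inversely with distance, which is what makes motion parallax a depth cue; given an extra-retinal estimate of $v$ (e.g. from vestibular or proprioceptive signals), depth can in principle be recovered as $d = v\sin\theta/\dot{\theta}$, and conversely a known depth constrains the estimate of self-motion.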

10.
Our bodies are the most intimately familiar objects we encounter in our perceptual environment. Virtual reality provides a unique method to allow us to experience having a very different body from our own, thereby providing a valuable method to explore the plasticity of body representation. In this paper, we show that women can experience ownership over a whole virtual body that is considerably smaller or larger than their physical body. In order to gain a better understanding of the mechanisms underlying body ownership, we use an embodiment questionnaire and introduce two new behavioral response measures: an affordance estimation task (an indirect measure of body size) and a body size estimation task (a direct measure of body size). Interestingly, after viewing the virtual body from a first-person perspective, both the affordance and the body size estimation tasks indicate a change in the perception of the size of the participant's experienced body. The change is biased by the size of the virtual body (overweight or underweight). Another novel aspect of our study is that we distinguish between the physical, experienced and virtual bodies by asking participants to provide affordance and body size estimations for each of the three bodies separately. This methodological point is important for virtual reality experiments investigating body ownership of a virtual body, because it offers a better understanding of which cues (e.g. visual, proprioceptive, memory, or a combination thereof) influence body perception, and whether the impact of these cues can vary between different setups.

11.
Externally generated visual motion signals can cause the illusion of self-motion in space (vection) and corresponding visually evoked postural responses (VEPRs). These VEPRs are not simple responses to optokinetic stimulation, but are modulated by the configuration of the environment. The aim of this paper is to explore what factors modulate VEPRs in a high-quality virtual reality (VR) environment where real and virtual foreground objects served as static visual, auditory and haptic reference points. Data from four experiments on visually evoked postural responses show that: 1) visually evoked postural sway in the lateral direction is modulated by the presence of static anchor points, which can be haptic, visual or auditory reference signals; 2) real objects and their matching virtual reality representations have different effects on postural sway when used as visual anchors; 3) visual motion in the anterior-posterior plane induces robust postural responses that are not modulated by the presence of reference signals or by the reality of objects that can serve as visual anchors in the scene. We conclude that automatic postural responses to laterally moving visual stimuli are strongly influenced by the configuration and interpretation of the environment and draw on multisensory representations. Different postural responses were observed for real and virtual visual reference objects. On the basis that automatic visually evoked postural responses in high-fidelity virtual environments should mimic those seen in real situations, we propose to use the observed effect as a robust objective test for presence and fidelity in VR.

12.
Detection distance is an important and common auxiliary variable measured during avian point count surveys. Distance data are used to determine the area sampled and to model the detection process using distance sampling theory. In densely forested habitats, visual detections of birds are rare, and most estimates of detection distance are based on auditory cues. Distance sampling theory assumes detection distances are measured accurately, but empirical validation of this assumption for auditory detections is lacking. We used a song playback system to simulate avian point counts with known distances in a forested habitat to determine the error structure of distance estimates based on auditory detections. We conducted field evaluations with 6 experienced observers both before and after distance estimation training. We conducted additional studies to determine the effect of height and speaker orientation (toward or away from observers) on distance estimation error. Distance estimation errors for all evaluations were substantial, although training reduced errors and bias in distance estimates by approximately 15%. Measurement errors showed a nonlinear relationship to distance. Our results suggest observers were not able to differentiate distances beyond 65 m. The height from which we played songs had no effect on distance estimation errors in this habitat. The orientation of the song source did have a large effect on distance estimation errors; observers generally doubled their distance estimates for songs played away from them compared with distance estimates for songs played directly toward them. These findings, which are based on realistic field conditions, suggest that measures of uncertainty in distance estimates to auditory detections are substantially higher than assumed by most researchers. This means aural point count estimates of avian abundance based on distance methods deserve careful scrutiny because they are likely biased.
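To make the distance-sampling link explicit, here is a minimal sketch (not from the paper) of a half-normal detection function and of how a systematic inflation of auditorily estimated distances would bias a point-count density estimate; the sigma value and the 30% inflation factor are purely illustrative.

```python
import numpy as np

def half_normal_detection(r, sigma=40.0):
    """Probability of detecting a singing bird at radial distance r (m); sigma is illustrative."""
    return np.exp(-r ** 2 / (2 * sigma ** 2))

def effective_detection_radius(sigma):
    """EDR of an unbounded half-normal point-count detection function: pi * EDR^2
    equals the integral of the detection function over the plane, giving EDR = sigma * sqrt(2)."""
    return sigma * np.sqrt(2.0)

print(f"P(detect at 65 m) = {half_normal_detection(65.0):.2f}")

# If auditory distance estimates were systematically inflated by 30%, the fitted
# sigma (and hence the EDR) would inflate by the same factor, and the density
# estimate n / (pi * EDR^2) would be biased low by 1 / 1.3^2, i.e. ~0.59 of truth.
sigma_true, inflation = 40.0, 1.3
fraction_recovered = (effective_detection_radius(sigma_true) /
                      effective_detection_radius(sigma_true * inflation)) ** 2
print(f"density recovered as a fraction of truth: {fraction_recovered:.2f}")
```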

13.
We examine depth perception in images of real scenes with naturalistic variation in pictorial depth cues, simulated dioptric blur and binocular disparity. Light field photographs of natural scenes were taken with a Lytro plenoptic camera that simultaneously captures images at up to 12 focal planes. When accommodation at any given plane was simulated, the corresponding defocus blur at other depth planes was extracted from the stack of focal plane images. Depth information from pictorial cues, relative blur and stereoscopic disparity was separately introduced into the images. In 2AFC tasks, observers were required to indicate which of two patches extracted from these images was farther. Depth discrimination sensitivity was highest when geometric and stereoscopic disparity cues were both present. Blur cues impaired sensitivity by reducing the contrast of geometric information at high spatial frequencies. While simulated generic blur may not assist depth perception, it remains possible that dioptric blur from the optics of an observer’s own eyes may be used to recover depth information on an individual basis. The implications of our findings for virtual reality rendering technology are discussed.

14.
Vision is important for postural control, as shown by the Romberg quotient (RQ): with eyes closed, postural instability increases relative to eyes open (RQ = 2). Yet while fixating at far distance, postural stability is similar with eyes open and eyes closed (RQ = 1). Postural stability can be better with both eyes viewing than with one, but this effect is not consistent among healthy subjects. The first goal of the study was to test the RQ as a function of distance for children with convergent versus divergent strabismus. The second goal was to test whether vision from two eyes provides better postural stability than vision from one eye. Thirteen children with divergent strabismus and eleven with convergent strabismus participated in this study. Posturography was done with the Techno concept device. Experiment 1, four conditions: fixation at 40 cm and at 200 cm, both with eyes open and eyes covered (evaluation of RQ). Experiment 2, six conditions: fixation at 40 cm and at 200 cm, with both eyes viewing or under monocular vision (dominant and non-dominant eye). For convergent strabismus, the group's mean RQ value was 1.3 at near and 0.94 at far distance; for divergent strabismus, it was 1.06 at near and 1.68 at far. For all children, the surface of body sway was significantly smaller under binocular viewing than under monocular viewing (either eye). The increased RQ value at near for convergent and at far for divergent strabismus is attributed to the influence of the default strabismus angle and to better use of ocular motor signals. Vision with the two eyes improves postural control at both viewing distances and for both types of strabismus. This benefit may be due to complementary mechanisms: a larger visual field, better quality of fixation, and a vergence angle based on visual inputs from both eyes.
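For reference, the quotient used throughout this abstract is simply the ratio of a sway measure recorded with eyes closed to the same measure with eyes open; the tiny, hypothetical helper below makes the two benchmark values concrete.

```python
def romberg_quotient(sway_eyes_closed, sway_eyes_open):
    """Ratio of a postural sway measure (e.g. CP surface area in mm^2), eyes closed / eyes open."""
    return sway_eyes_closed / sway_eyes_open

print(romberg_quotient(180.0, 90.0))    # 2.0: vision clearly stabilizes posture (near fixation)
print(romberg_quotient(95.0, 100.0))    # ~0.95: no visual benefit, as reported at far fixation
```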

15.
Auditory cues can create the illusion of self-motion (vection) in the absence of visual or physical stimulation. The present study aimed to determine whether auditory cues alone can also elicit motion sickness, and how auditory cues contribute to motion sickness when added to visual motion stimuli. Twenty participants were seated in front of a curved projection display and were exposed to a virtual scene that constantly rotated around the participant's vertical axis. The virtual scene contained either visual-only, auditory-only, or a combination of corresponding visual and auditory cues. All participants performed all three conditions in a counterbalanced order. Participants tilted their heads alternately towards the right or left shoulder in all conditions during stimulus exposure in order to create pseudo-Coriolis effects and to maximize the likelihood of motion sickness. Measurements of motion sickness (onset, severity), vection (latency, strength, duration), and postural steadiness (center of pressure) were recorded. Results showed that adding auditory cues to the visual stimuli did not, on average, affect motion sickness and postural steadiness, but it did reduce vection onset times and increase vection strength compared to pure visual or pure auditory stimulation. Eighteen of the 20 participants reported at least slight motion sickness in the two conditions including visual stimuli. More interestingly, six participants also reported slight motion sickness during pure auditory stimulation, and two of these six participants stopped the pure auditory test session due to motion sickness. The present study is the first to demonstrate that motion sickness may be caused by pure auditory stimulation, which we refer to as “auditorily induced motion sickness”.

16.
Desert ants of the genus Cataglyphis perform large-scale foraging excursions from which they return to their nest by path integration. They do so by integrating the courses steered and the distances travelled into a continually updated home vector. While it is known that their angular orientation is based on skylight cues, it remains largely enigmatic how the ants measure the distances travelled. We extended the ants' task into the third dimension by training them to walk within an array of uphill and downhill channels, and later testing them on flat terrain, or vice versa. In these tests the ants indicated homing distances that did not correspond to the distances actually travelled, but to the ground distances; that is, to the sum of the horizontal projections of the uphill and downhill segments of the ants' paths. These results suggest a much more sophisticated mechanism of distance estimation than hitherto thought. The ants must be able to measure the slopes of the undulating terrain and to integrate this information into their "odometer" for the distance estimation process.
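A minimal sketch of the ground-distance computation the tests point to: each path segment contributes its horizontal projection, i.e. its length times the cosine of its slope. The segment values below are made up for illustration.

```python
import math

def ground_distance(segments):
    """Sum of horizontal projections of (length_m, slope_deg) path segments."""
    return sum(length * math.cos(math.radians(slope)) for length, slope in segments)

# An undulating outbound path: 3 m uphill at 30 deg, 2 m flat, 3 m downhill at 30 deg.
path = [(3.0, 30.0), (2.0, 0.0), (3.0, -30.0)]
print(f"distance walked: {sum(length for length, _ in path):.2f} m")   # 8.00 m
print(f"ground distance: {ground_distance(path):.2f} m")               # ~7.20 m, what the ants report
```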

17.
Many blind people rely on echoes from self-produced sounds to assess their environment. It has been shown that human subjects can use echolocation for directional localization and orientation in a room, but echo-acoustic distance perception - e.g. to determine one's position in a room - has received little scientific attention, and systematic studies on the influence of additional early reflections and exploratory head movements are lacking. This study investigates echo-acoustic distance discrimination in a virtual echo-acoustic space, using the impulse responses of a real corridor. Six blindfolded sighted subjects and a blind echolocation expert had to discriminate between two positions in the virtual corridor, which differed in their distance to the front wall, but not to the lateral walls. To solve this task, participants evaluated echoes that were generated in real time from self-produced vocalizations. Across experimental conditions, we systematically varied the restrictions on head rotations, the subjects' orientation in virtual space and the reference position. Three key results were observed. First, all participants successfully solved the task with discrimination thresholds below 1 m for all reference distances (0.75–4 m). Performance was best for the smallest reference distance of 0.75 m, with thresholds around 20 cm. Second, distance discrimination performance was relatively robust against additional early reflections, compared to other echolocation tasks like directional localization. Third, free head rotations during echolocation can improve distance discrimination performance in complex environmental settings. However, head movements do not necessarily provide a benefit over static echolocation from an optimal single orientation. These results show that accurate distance discrimination through echolocation is possible over a wide range of reference distances and environmental conditions. This is an important functional benefit of human echolocation, which may also play a major role in the calibration of auditory space representations.
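At its core, the virtual echo-acoustic space described here amounts to convolving the subject's vocalization with the impulse response measured at the simulated listening position. A minimal offline sketch, assuming hypothetical mono files `corridor_ir.wav` (impulse response) and `click.wav` (vocalization) with matching sampling rates; the real setup performed this in real time with head tracking.

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import fftconvolve

fs_ir, ir = wavfile.read("corridor_ir.wav")      # impulse response of the corridor (hypothetical file)
fs_v, vocalization = wavfile.read("click.wav")   # self-produced vocalization (hypothetical file)
assert fs_ir == fs_v, "resample so the sampling rates match"  # assumes mono recordings

# What the subject hears at this position: direct sound plus the corridor's echoes.
echoic = fftconvolve(vocalization.astype(float), ir.astype(float))
echoic /= np.max(np.abs(echoic))                 # normalize to avoid clipping

wavfile.write("echoic_playback.wav", fs_ir, (echoic * 32767).astype(np.int16))
```

Changing the simulated distance to the front wall corresponds to swapping in the impulse response measured at a different position in the corridor.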

18.
To stabilize our position in space we use visual information as well as non-visual physical motion cues. However, visual cues can be ambiguous: visually perceived motion may be caused by self-movement, movement of the environment, or both. The nervous system must combine the ambiguous visual cues with noisy physical motion cues to resolve this ambiguity and control our body posture. Here we have developed a Bayesian model that formalizes how the nervous system could solve this problem. In this model, the nervous system combines the sensory cues to estimate the movement of the body. We analytically demonstrate that, as long as visual stimulation is fast in comparison to the uncertainty in our perception of body movement, the optimal strategy is to weight visually perceived movement velocities proportional to a power law. We find that this model accounts for the nonlinear influence of experimentally induced visual motion on human postural behavior both in our data and in previously published results.
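To illustrate the flavour of the result with made-up numbers (not the paper's fitted parameters): weighting visually perceived velocity by a power law with an exponent below one means that slow visual motion, most likely to reflect self-motion, is weighted relatively strongly, while fast visual motion, more likely to reflect motion of the environment, is progressively discounted.

```python
import numpy as np

def visual_drive(v_visual, gain=1.0, exponent=0.4):
    """Postural drive attributed to vision under a power-law weighting (illustrative values)."""
    return gain * np.sign(v_visual) * np.abs(v_visual) ** exponent

for v in (0.01, 0.1, 1.0):            # visual scene velocity, m/s
    print(f"{v:5.2f} m/s -> drive {visual_drive(v):.3f}, effective gain {visual_drive(v) / v:.1f}")
```

The effective gain (drive divided by velocity) falls as velocity grows, which is the nonlinear influence of visual motion the abstract describes.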

19.
Proprioceptive cues are recognized as playing a major role in postural control. However, little is known about their possibly increased contribution to postural control following repetitive muscular activation. To test this, the short-term effects induced by a 1-legged exercise on 2-legged postural control with the eyes closed were assessed in healthy subjects. The center-of-pressure (CP) displacements obtained using a force platform were split into 2 elementary movements: the vertical projection of the center of gravity (CGv) and the difference (CP - CGv). These movements assess the net postural performance and the level of neuromuscular activity, respectively, and were processed (a) through variances, mean velocity, and the average surface covered by the trajectories and (b) through fractional Brownian motion (fBm) modeling. The latter provides further information about how much the subject controls the movements and about the spatiotemporal relation between the successive control mechanisms. No difference was found using the classical parameters. In contrast, the fBm parameters showed statistically significant changes in postural control after the 1-legged exercise: the spatial and temporal coordinates of the transition points for the CG movements along the anteroposterior axis were decreased. Because this body movement control does not rely on visual or vestibular cues, the ability to trigger the corrective process of the CG movements more quickly, and after a shorter distance has been covered, in the post-exercise condition emphasizes how prior muscular activation improves the detection of body movement. As a general rule, these data show that the motor system controls body motions better after repetitive stimulation of the sensory cues. These insights should be of interest for physical activities based on precise control of muscle length.
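The fBm analysis mentioned here is typically carried out as a stabilogram-diffusion analysis: compute the mean square displacement of the trajectory as a function of time lag and locate the transition point between the short-term and long-term regimes. The sketch below is a generic version of that procedure on synthetic data, not the authors' exact implementation; all signal parameters are illustrative.

```python
import numpy as np

def mean_square_displacement(x, fs, max_lag_s=5.0):
    """Mean square displacement of a 1-D trajectory x (sampled at fs Hz) for each time lag."""
    lags = np.arange(1, int(max_lag_s * fs))
    msd = np.array([np.mean((x[lag:] - x[:-lag]) ** 2) for lag in lags])
    return lags / fs, msd

# Synthetic anteroposterior CG trace standing in for force-platform data:
# a drift-plus-correction process whose parameters are purely illustrative.
rng = np.random.default_rng(2)
fs, n = 40.0, int(60 * 40)
cg_ap = np.zeros(n)
for i in range(1, n):
    cg_ap[i] = 0.98 * cg_ap[i - 1] + rng.normal(0.0, 0.3)   # mm

t, msd = mean_square_displacement(cg_ap, fs)
# Crude stand-in for the usual two-line intersection fit: take the lag at which the
# local slope of log(MSD) vs log(lag) first falls below 0.5, i.e. well into the
# antipersistent (closed-loop) regime.
slope = np.gradient(np.log(msd), np.log(t))
k = int(np.argmax(slope < 0.5))
print(f"transition at ~{t[k]:.2f} s, MSD ~{msd[k]:.1f} mm^2")
```

A shift of this transition point toward smaller lags and smaller MSD after exercise is what the abstract reports as earlier, shorter-range corrective control.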

20.
While considerable research has focused on the accuracy of haptic perception of distance, information on the precision of haptic perception of distance is still scarce, particularly regarding distances perceived by making arm movements. In this study, eight conditions were measured to answer four main questions: what is the influence of reference distance, movement axis, perceptual mode (active or passive), and stimulus type on the precision of this kind of distance perception? A discrimination experiment was performed with twelve participants. The participants were presented with two distances, using either a haptic device or a real stimulus. Participants compared the distances by moving their hand from a start to an end position. They were then asked to judge which of the distances was the longer, from which the discrimination threshold was determined for each participant and condition. The precision was influenced by reference distance. No effect of movement axis was found. The precision was higher for active than for passive movements, and it was somewhat lower for real stimuli than for rendered stimuli, but it was not affected by adding cutaneous information. Overall, the Weber fraction for the active perception of a distance of 25 or 35 cm was about 11% for all cardinal axes. The recorded position data suggest that participants, in order to be able to judge which distance was the longer, tried to produce similar speed profiles in both movements. This knowledge could be useful in the design of haptic devices.
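As a quick worked example of what an 11% Weber fraction means in absolute terms at the two reference distances used (the helper name is hypothetical):

```python
def discrimination_threshold(reference_cm, weber_fraction=0.11):
    """Just-noticeable difference implied by a Weber fraction (illustrative helper)."""
    return weber_fraction * reference_cm

for ref in (25.0, 35.0):
    print(f"reference {ref:.0f} cm -> threshold ~ {discrimination_threshold(ref):.1f} cm")
# reference 25 cm -> threshold ~ 2.8 cm; reference 35 cm -> threshold ~ 3.9 cm
```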
