Similar Articles
20 similar articles found (search time: 109 ms)
1.
A translating eye receives a radial pattern of motion that is centered on the direction of heading. If the eye is rotating as well as translating, visual and extraretinal signals help to cancel the rotation and to perceive heading correctly. This involves (1) an interaction between visual and eye-movement signals and (2) a motion-template stage that analyzes the pattern of visual motion. Early interaction leads to motion templates that integrate head-centered motion signals in the visual field; integration of retinal motion signals leads to late interaction. Here, we show that retinal flow limits the precision of heading judgments. This result argues against an early, vector-subtraction type of interaction, but is consistent with a late, gain-field type of interaction with eye-velocity signals and with neurophysiological findings in area MST of the monkey.
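A schematic way to write the two interaction schemes (notation ours, not the authors'): the retinal flow during combined translation and rotation is the sum of a translational field and a rotational field that depends only on eye velocity. An early scheme subtracts an extraretinal rotation estimate from the flow before template matching; a late scheme lets templates operate on the raw retinal flow and modulates their outputs by eye velocity through gain fields:

$$\mathbf{v}_{\mathrm{ret}}(\mathbf{x}) = \mathbf{v}_{T}(\mathbf{x}) + \mathbf{v}_{R}(\mathbf{x};\boldsymbol{\omega}), \qquad \text{early: } \tilde{\mathbf{v}} = \mathbf{v}_{\mathrm{ret}} - \hat{\mathbf{v}}_{R}(\boldsymbol{\omega}_{\mathrm{eye}}), \qquad \text{late: } r_{i} = f_{i}(\mathbf{v}_{\mathrm{ret}})\, g_{i}(\boldsymbol{\omega}_{\mathrm{eye}}).$$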

2.
A theory is developed for determining the motion of an observer given the motion field over a full 360-degree image sphere. The method is based on the fact that, for an observer translating without rotation, the projected circular motion field about any equator can be divided into disjoint semicircles of clockwise and counterclockwise flow, and on the observation that the effects of rotation decouple around the three equators defining the three principal axes of rotation. Since the effect of rotation is geometrical, the three rotational parameters can be determined independently by searching, in each case, for a rotational value for which the derotated equatorial motion field can be partitioned into 180-degree arcs of clockwise and counterclockwise flow. The direction of translation is also obtained from this analysis. This search is two-dimensional in the motion parameters and can be performed relatively efficiently. Because information is correlated over large distances, the method can be considered a pattern-recognition algorithm rather than a numerical one. The algorithm is shown to be robust and relatively insensitive to noise and to missing data. Both theoretical and empirical studies of the error sensitivity are presented. The theoretical analysis shows that for white noise of bounded magnitude M, the expected error is at worst linearly proportional to M. Empirical tests demonstrate negligible error for perturbations of up to 20% in the input, and errors of less than 20% for perturbations of up to 200%.
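A toy sketch of the search described above (illustrative names, simplified one-parameter case): around one equator, rotation about the perpendicular axis adds a constant to the tangential flow, so "derotation" amounts to subtracting a candidate value and checking that the sign pattern splits the circle into two 180-degree arcs:

```python
import numpy as np

def partition_error(theta, tangential_flow, omega):
    """Deviation from a clean split of the derotated equatorial flow into
    two 180-degree arcs of opposite sign."""
    signs = np.sign(tangential_flow - omega)
    changes = np.nonzero(signs != np.roll(signs, 1))[0]
    if len(changes) != 2:
        return np.inf                       # not two disjoint arcs
    arc = (theta[changes[1]] - theta[changes[0]]) % (2 * np.pi)
    return abs(arc - np.pi)                 # deviation from semicircles

def estimate_rotation(theta, tangential_flow, candidates):
    errors = [partition_error(theta, tangential_flow, w) for w in candidates]
    return candidates[int(np.argmin(errors))]

# Toy example: translation produces a sinusoidal tangential component
# around the equator; rotation adds a constant offset of 0.3.
theta = np.linspace(0, 2 * np.pi, 360, endpoint=False)
flow = np.sin(theta - 1.0) + 0.3
omega_hat = estimate_rotation(theta, flow, np.linspace(-1.0, 1.0, 2001))
print(omega_hat)                            # close to the true value 0.3
```

The full method runs this test over the three principal equators to recover the three rotational parameters independently.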

3.
The visual ambiguity of a moving plane   (cited: 1 total, 0 self, 1 other)
It is shown that the optic flow field arising from motion relative to a visually textured plane may be characterized by eight parameters that depend on the observer's linear and angular velocity and the coordinate vector of the plane. These three vectors are not, however, uniquely determined by the values of the eight parameters. First, the optic flow field does not supply independent values for the observer's speed and distance from the plane; it only gives the ratio of these two quantities. But more unexpectedly, the equations relating the observer's linear velocity and the plane's coordinate vector to the eight parameters are still satisfied if the two vectors are interchanged or reversed in direction, or both. So in addition to the veridical interpretation of the optic flow field there exist three spurious interpretations to be considered and if possible excluded. This purpose is served by the condition that an interpretation can be seriously entertained only if it attributes every image element to a light source in the observer's field of view. This condition immediately eliminates one of the spurious interpretations, and exhibits the other two as mutually inconsistent: one of them is tenable only if all the visible sources lie on the forward half of the plane (relative to the observer's linear velocity); the other only if they all lie on the backward half-plane. If the sources are distributed over both halves of the plane, only the veridical interpretation survives. Its computation involves solving a 3 × 3 eigenvalue problem derived from the flow field. If the upper two eigenvalues coincide, the observer must be moving directly towards the plane; if the lower two eigenvalues coincide, his motion must be directly away from it; in both cases the spurious interpretation merges with the veridical one. If all three eigenvalues are equal, it may be inferred that either the observer's linear velocity vanishes or the plane is infinitely distant.
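For reference, one common way to write the eight-parameter model (generic notation, not necessarily the paper's): the image velocity field of a rigidly moving textured plane is quadratic in the image coordinates,

$$u(x,y) = a_{1} + a_{2}x + a_{3}y + a_{7}x^{2} + a_{8}xy, \qquad v(x,y) = a_{4} + a_{5}x + a_{6}y + a_{7}xy + a_{8}y^{2},$$

where the coefficients $a_{1},\dots,a_{8}$ are determined by the observer's linear and angular velocity and the plane's coordinate vector — but, as the abstract notes, not conversely: the speed/distance scaling and the two surviving spurious interpretations leave the three vectors underdetermined.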

4.
Tracking facilitates 3-D motion estimation   (cited: 1 total, 0 self, 1 other)
The recently emerging paradigm of Active Vision advocates studying visual problems in the form of modules that are directly related to a visual task for observers that are active. Along these lines, we argue that in many cases when an object is moving in an unrestricted manner (translation and rotation) in the 3D world, we are interested only in the motion's translational components. For a monocular observer, using only the normal flow — the spatio-temporal derivatives of the image intensity function — we solve the problem of computing the direction of translation and the time to collision. We do not use optical flow, since its computation is an ill-posed problem, and in the general case it is not the same as the motion field — the projection of 3D motion onto the image plane. The basic idea of our motion-parameter estimation strategy lies in the employment of fixation and tracking. Fixation simplifies much of the computation by placing the object at the center of the visual field, and the main advantage of tracking is the accumulation of information over time. We show how tracking is accomplished using normal flow measurements and use it for two different tasks in the solution process. First, it serves as a tool to compensate for the lack of an optical flow field and thus to estimate the translation parallel to the image plane; second, it gathers information about the motion component perpendicular to the image plane.
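A standard relation behind the time-to-collision computation (a textbook identity, not the paper's full normal-flow derivation): for pure translation toward a fixated object at distance $Z$, a feature at retinal eccentricity $r$ expands radially at rate $\dot r = r/\tau$, so

$$\tau \;=\; \frac{Z}{-\dot Z} \;=\; \frac{r}{\dot r},$$

which is why the radial components of flow around the fixated target suffice to estimate the time to collision without recovering full optical flow.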

5.
A computational approach to motion perception   (cited: 10 total, 0 self, 10 other)
In this paper it is shown that the computation of the optical flow from a sequence of time-varying images is not, in general, an underconstrained problem. A local algorithm for the computation of the optical flow, which uses second-order derivatives of the image brightness pattern and avoids the aperture problem, is presented. The resulting optical flow is very similar to the true motion field — the vector field associated with moving features on the image plane — and can be used to recover 3D motion information. Experimental results on sequences of real images, together with estimates of relevant motion parameters such as time-to-crash for translation and angular velocity for rotation, are presented and discussed. Owing to the remarkable accuracy that can be achieved in estimating motion parameters, the proposed method is likely to be very useful in a number of computer vision applications.
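A minimal sketch of the second-order constraint (standard form; the paper's exact formulation may differ in details): differentiating the brightness-constancy equation $I_{x}u + I_{y}v + I_{t} = 0$ with respect to $x$ and $y$, assuming locally constant flow, yields two equations per pixel,

$$\begin{pmatrix} I_{xx} & I_{xy} \\ I_{xy} & I_{yy} \end{pmatrix} \begin{pmatrix} u \\ v \end{pmatrix} = -\begin{pmatrix} I_{xt} \\ I_{yt} \end{pmatrix},$$

which determine $(u, v)$ wherever the brightness Hessian is nonsingular — this is how a purely local scheme can sidestep the aperture problem.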

6.
Schieborr U, Rüterjans H. Proteins 2001, 45(3): 207-218
Collective internal motions are known to be important for the function of biological macromolecules. It has been discussed in the past whether the application of superposition algorithms to remove the overall motion from a structural ensemble introduces artificial correlations between distant atoms. Here we present a new method to eliminate residual rotation and translation from Cartesian modes derived from a normal mode analysis or from a principal component analysis. Bias-free separation is based on the idea that the addition of modes of pure rotation/translation can compensate for the residual overall motion; removal of overall motion must reduce the "total amount of motion" (TAM) in the mode. Our algorithm allows revised covariance matrices to be back-calculated. The approach was applied to two model systems that show residual overall motion when analyzed using all atoms as the reference for the superposition algorithm. In both cases, our algorithm was capable of eliminating residual covariances caused by the overall motion, while retaining internal covariances even for very distant atoms. A structural ensemble obtained from a 13-ns molecular dynamics simulation of the protein Ribonuclease T1 showed a covariance matrix of the corrected modes with significantly sharper contours after applying the bias-free separation.
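A minimal sketch of the core operation, under the simplifying assumption that compensating the residual overall motion reduces to a least-squares projection onto the six rigid-body modes (this is not the authors' exact TAM criterion; all names are illustrative):

```python
import numpy as np

def rigid_basis(coords):
    """Six rigid-body modes (3 translations, 3 infinitesimal rotations
    about the geometric centre) for N atoms; returns a (3N, 6) matrix."""
    n = len(coords)
    centered = coords - coords.mean(axis=0)
    modes = [np.tile(axis, (n, 1)).ravel() for axis in np.eye(3)]
    modes += [np.cross(axis, centered).ravel() for axis in np.eye(3)]
    return np.column_stack(modes)

def remove_overall_motion(mode, coords):
    """Subtract the best-fitting rigid-body component from a (3N,) mode."""
    basis = rigid_basis(coords)
    coeffs, *_ = np.linalg.lstsq(basis, mode, rcond=None)
    return mode - basis @ coeffs

# Toy usage: a pure rotation mode should be removed almost entirely.
rng = np.random.default_rng(0)
coords = rng.normal(size=(50, 3))
pure_rotation = np.cross([0.0, 0.0, 1.0], coords - coords.mean(axis=0)).ravel()
print(np.linalg.norm(remove_overall_motion(pure_rotation, coords)))  # ~0
```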

7.
When small flying insects go off their intended course, they use the resulting pattern of motion on their eye, or optic flow, to guide corrective steering. A change in heading generates a unique, rotational motion pattern and a change in position generates a translational motion pattern, and each produces corrective responses in the wingbeats. Any image in the flow field can signal rotation, but owing to parallax, only the images of nearby objects can signal translation. Insects that fly near the ground might therefore respond more strongly to translational optic flow that occurs beneath them, as the nearby ground will produce strong optic flow. In these experiments, rigidly tethered fruitflies steered in response to computer-generated flow fields. When correcting for unintended rotations, flies weight the motion in their upper and lower visual fields equally. However, when correcting for unintended translations, flies weight the motion in the lower visual field more strongly. These results are consistent with the interpretation that fruitflies stabilize by attending to visual areas likely to contain the strongest signals under natural flight conditions.

8.
We generated panoramic imagery by simulating a fly-like robot carrying an imaging sensor, moving in free flight through a virtual arena bounded by walls and containing obstructions. Flight was conducted under closed-loop control by a bio-inspired algorithm for visual guidance, with feedback signals corresponding to the true optic flow that would be induced on an imager (computed from the known kinematics and position of the robot relative to the environment). The robot had dynamics representative of a housefly-sized organism, although simplified to two-degree-of-freedom flight to generate uniaxial (azimuthal) optic flow on the retina in the plane of travel. Surfaces in the environment contained images of natural and man-made scenes that were captured by the moving sensor. Two bio-inspired motion detection algorithms and two computational optic flow estimation algorithms were applied to sequences of image data, and their performance as optic flow estimators was evaluated by estimating the mutual information between their outputs and the true optic flow in an equatorial section of the visual field. Mutual information for individual estimators at particular locations within the visual field was surprisingly low (less than 1 bit in all cases) and considerably poorer for the bio-inspired algorithms than for the man-made computational algorithms. However, mutual information between weighted sums of these signals and comparable sums of the true optic flow showed significant increases for the bio-inspired algorithms, whereas such improvement did not occur for the computational algorithms. Such summation is representative of the spatial integration performed by wide-field motion-sensitive neurons in the third optic ganglion of flies.
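For readers unfamiliar with the evaluation metric, a minimal histogram-based mutual-information estimate between an estimator's output and the true flow might look as follows (binning and names are illustrative assumptions, not the authors' setup):

```python
import numpy as np

def mutual_information(x, y, bins=32):
    """Mutual information (bits) between two signals, from a joint histogram."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0                              # avoid log(0)
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

# Toy usage: a noisy estimator carries limited information about the flow.
rng = np.random.default_rng(1)
true_flow = rng.normal(size=100_000)
estimate = true_flow + rng.normal(scale=2.0, size=true_flow.size)
print(mutual_information(true_flow, estimate))   # well under 1 bit
```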

9.
Simultaneous object motion and self-motion give rise to complex patterns of retinal image motion. In order to estimate object motion accurately, the brain must parse this complex retinal motion into self-motion and object motion components. Although this computational problem can be solved, in principle, through purely visual mechanisms, extra-retinal information that arises from the vestibular system during self-motion may also play an important role. Here we investigate whether combining vestibular and visual self-motion information improves the precision of object motion estimates. Subjects were asked to discriminate the direction of object motion in the presence of simultaneous self-motion, depicted either by visual cues alone (i.e. optic flow) or by combined visual/vestibular stimuli. We report a small but significant improvement in object motion discrimination thresholds with the addition of vestibular cues. This improvement was greatest for eccentric heading directions and negligible for forward movement, a finding that could reflect increased relative reliability of vestibular versus visual cues for eccentric heading directions. Overall, these results are consistent with the hypothesis that vestibular inputs can help parse retinal image motion into self-motion and object motion components.

10.
Li BW, Xu Y, Li B, Diao YC. Progress in Physiological Sciences (生理科学进展) 2002, 33(4): 317-321
During self-motion, the retinal images of objects distributed across the visual field rotate, expand/contract, and shear, forming optic flow stimuli. The detection of optic flow information is crucial for humans and animals to determine the direction and speed of self-motion, and has become a major focus of research on motion information processing. This paper summarizes the main recent advances in both psychophysical and physiological studies of optic flow processing and discusses the neural mechanisms of optic flow analysis.

11.
We have developed an algorithm for the estimation of cardiac motion from medical images. The algorithm exploits monogenic signal theory, recently introduced as an N-dimensional generalization of the analytic signal. The displacement is computed locally by assuming conservation of the monogenic phase over time. A local affine displacement model replaces the standard translation model to account for more complex motions such as contraction/expansion and shear. A coarse-to-fine B-spline scheme allows a robust and effective computation of the model's parameters, and a pyramidal refinement scheme helps handle large motions. Robustness against noise is increased by replacing the standard pointwise computation of the monogenic orientation with a more robust least-squares orientation estimate. This paper reviews the results obtained on simulated cardiac images from different modalities, namely 2D and 3D cardiac ultrasound and tagged magnetic resonance. We also show how the proposed algorithm represents a valuable alternative to state-of-the-art algorithms in the respective fields.
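A minimal sketch of how a monogenic signal can be computed, using an FFT-based Riesz transform and a simple difference-of-Gaussians bandpass (the filter choice and all names are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def monogenic(image, sigma_lo=1.0, sigma_hi=4.0):
    """Amplitude, phase, and orientation of the monogenic signal of a
    2D image via the frequency-domain Riesz transform."""
    rows, cols = image.shape
    fy = np.fft.fftfreq(rows)[:, None]
    fx = np.fft.fftfreq(cols)[None, :]
    radius = np.hypot(fx, fy)
    radius[0, 0] = 1.0                        # avoid division by zero at DC
    lowpass = lambda s: np.exp(-0.5 * (2 * np.pi * s * radius) ** 2)
    F = np.fft.fft2(image) * (lowpass(sigma_lo) - lowpass(sigma_hi))
    f = np.real(np.fft.ifft2(F))                          # even part
    r1 = np.real(np.fft.ifft2(F * (-1j) * fx / radius))   # odd part, x
    r2 = np.real(np.fft.ifft2(F * (-1j) * fy / radius))   # odd part, y
    amplitude = np.sqrt(f ** 2 + r1 ** 2 + r2 ** 2)       # local energy
    phase = np.arctan2(np.hypot(r1, r2), f)               # monogenic phase
    orientation = np.arctan2(r2, r1)                      # local orientation
    return amplitude, phase, orientation
```

In a phase-based scheme like the one described above, displacement is then estimated from the temporal change of `phase` along `orientation`, which is the phase-conservation assumption the abstract refers to.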

12.
Optic flow, the pattern of apparent motion elicited on the retina during movement, has been demonstrated to be widely used by animals living in aerial habitats, whereas underwater optic flow has not been intensively studied so far. However, optic flow would also provide aquatic animals with valuable information about their own movement relative to the environment, even under conditions in which vision is generally thought to be drastically impaired, e.g. in turbid waters. Here, we tested underwater optic flow perception for the first time in a semi-aquatic mammal, the harbor seal, by simulating a forward movement on a straight path through a cloud of dots on an underwater projection. The translatory motion pattern expanded radially out of a singular point along the direction of heading, the focus of expansion. We assessed the seal's accuracy in determining the simulated heading in a task in which the seal had to judge whether a cross superimposed on the flow field deviated from or was congruent with the actual focus of expansion. The seal perceived optic flow and detected deviations from the simulated heading with a threshold of 0.6 deg of visual angle. Optic flow is thus a source of information that seals, fish, and most likely aquatic species in general may rely on, e.g. for controlling locomotion and orientation under water. This leads to the notion that optic flow is a tool universally used by any moving organism possessing eyes.

13.
When navigating through the environment, our brain needs to infer how far we move and in which direction we are heading. In this estimation process, the brain may rely on multiple sensory modalities, including the visual and vestibular systems. Previous research has mainly focused on heading estimation, showing that sensory cues are combined by weighting them in proportion to their reliability, consistent with statistically optimal integration. But while heading estimation could improve with the ongoing motion, due to the constant flow of information, the estimate of how far we move requires the integration of sensory information across the whole displacement. In this study, we investigate whether the brain optimally combines visual and vestibular information during a displacement estimation task, even if their reliability varies from trial to trial. Participants were seated on a linear sled, immersed in a stereoscopic virtual reality environment. They were subjected to a passive linear motion involving visual and vestibular cues with different levels of visual coherence to change relative cue reliability and with cue discrepancies to test relative cue weighting. Participants performed a two-interval two-alternative forced-choice task, indicating which of two sequentially perceived displacements was larger. Our results show that humans adapt their weighting of visual and vestibular information from trial to trial in proportion to their reliability. These results provide evidence that humans optimally integrate visual and vestibular information in order to estimate their body displacement.
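The benchmark against which such adaptation is usually tested is maximum-likelihood integration of independent Gaussian cues (the standard model, not specific to this study): each cue is weighted by its relative reliability, and the combined estimate is never less reliable than the better single cue,

$$\hat s = w_{\mathrm{vis}}\,\hat s_{\mathrm{vis}} + w_{\mathrm{vest}}\,\hat s_{\mathrm{vest}}, \qquad w_{\mathrm{vis}} = \frac{1/\sigma_{\mathrm{vis}}^{2}}{1/\sigma_{\mathrm{vis}}^{2} + 1/\sigma_{\mathrm{vest}}^{2}}, \qquad \sigma_{\mathrm{comb}}^{2} = \frac{\sigma_{\mathrm{vis}}^{2}\,\sigma_{\mathrm{vest}}^{2}}{\sigma_{\mathrm{vis}}^{2} + \sigma_{\mathrm{vest}}^{2}}.$$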

14.
Self-motion, steering, and obstacle avoidance during navigation in the real world require humans to travel along curved paths. Many perceptual models have been proposed that focus on heading, which specifies the direction of travel along straight paths, but not on path curvature, which humans accurately perceive and which is critical to everyday locomotion. In primates, including humans, the dorsal medial superior temporal area (MSTd) has been implicated in heading perception. However, the majority of MSTd neurons respond optimally to spiral patterns, rather than to the radial expansion patterns associated with heading. No existing theory of curved path perception explains the neural mechanisms by which humans accurately assess their path, and no functional role for spiral-tuned cells has yet been proposed. Here we present a computational model that demonstrates how the continuum of observed cells (radial to circular) in MSTd can simultaneously code curvature and heading across the neural population. Curvature is encoded through the spirality of the most active cell, and heading is encoded through the visuotopic location of the center of the most active cell's receptive field. Model curvature and heading errors fit those made by humans. Our model challenges the view that the function of MSTd is heading estimation; based on our analysis, we claim that it is primarily concerned with trajectory estimation and the simultaneous representation of both curvature and heading. In our model, temporal dynamics afford time-history in the neural representation of optic flow, which may modulate its structure. This has far-reaching implications for the interpretation of studies that assume that optic flow is, and should be, represented as an instantaneous vector field. Our results suggest that spiral motion patterns that emerge in spatio-temporal optic flow are essential for guiding self-motion along complex trajectories, and that cells in MSTd are specifically tuned to extract complex trajectory estimates from flow.
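A toy sketch of the radial-to-circular template continuum (illustrative, not the authors' neural model): a template with spirality alpha prefers a mix of radial and tangential motion about its center, and its response peaks when the stimulus spirality matches:

```python
import numpy as np

def spiral_template(points, center, alpha):
    """Unit preferred-motion vectors of a template with spirality `alpha`
    (0 = radial expansion, pi/2 = circular motion) centred at `center`."""
    d = points - center
    r = np.linalg.norm(d, axis=1, keepdims=True)
    radial = d / np.where(r > 0, r, 1.0)
    tangential = np.column_stack([-radial[:, 1], radial[:, 0]])
    return np.cos(alpha) * radial + np.sin(alpha) * tangential

def template_response(flow, points, center, alpha):
    """Mean projection of the observed flow onto the template's
    preferred directions (a simple population-readout stand-in)."""
    prefs = spiral_template(points, center, alpha)
    return float(np.mean(np.sum(flow * prefs, axis=1)))

# Toy usage: a spiral flow field activates a matching spiral-tuned
# template more strongly than a purely radial (expansion) template.
rng = np.random.default_rng(2)
pts = rng.uniform(-1.0, 1.0, size=(500, 2))
flow = spiral_template(pts, np.zeros(2), alpha=0.6)
print(template_response(flow, pts, np.zeros(2), 0.6))   # ~1.0: best match
print(template_response(flow, pts, np.zeros(2), 0.0))   # ~cos(0.6): weaker
```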

15.
A moving visual field can induce the feeling of self-motion, or vection. Illusory motion from static repeated asymmetric patterns creates a compelling visual motion stimulus, but it is unclear whether such illusory motion can induce a feeling of self-motion or alter self-motion perception. In these experiments, human subjects reported the perceived direction of self-motion for sway translation and yaw rotation at the end of a period of viewing fixed visual stimuli coordinated with varying inertial stimuli. This tested the hypothesis that illusory visual motion would influence self-motion perception in the horizontal plane. Trials were arranged into five blocks based on stimulus type: moving star field with yaw rotation, moving star field with sway translation, illusory motion with yaw, illusory motion with sway, and static arrows with sway. Static arrows were used to evaluate the effect of cognitive suggestion on self-motion perception. Each trial had a control condition; the illusory-motion controls were altered versions of the experimental image that removed the illusory motion effect. For the moving visual stimulus, controls were carried out in a dark room. With the arrow visual stimulus, controls were a gray screen. In blocks containing a visual stimulus there was an 8-s viewing interval, with the inertial stimulus occurring over the final 1 s. This allowed measurement of the perception of the visual illusion using objective methods. When no visual stimulus was present, only the 1-s motion stimulus was presented. Eight women and five men (mean age 37) participated. To assess for a shift in self-motion perception, the effect of each visual stimulus on the self-motion stimulus (cm/s) at which subjects were equally likely to report motion in either direction was measured. Significant effects were seen for moving star fields for both translation (p = 0.001) and rotation (p < 0.001), and for arrows (p = 0.02). For the visual motion stimuli, inertial motion perception was shifted in the direction consistent with the visual stimulus. Arrows had a small effect on self-motion perception, driven by a minority of subjects. There was no significant effect of illusory motion on self-motion perception for either translation or rotation (p > 0.1 for both). Thus, although a truly moving visual field can induce self-motion, the results of this study show that illusory motion does not.

16.
The brain is able to maintain a stable perception although the visual stimuli vary substantially on the retina, owing to geometric transformations and lighting variations in the environment. This paper presents a theory for achieving basic invariance properties already at the level of receptive fields. Specifically, the presented framework comprises (i) local scaling transformations caused by objects of different size and at different distances to the observer, (ii) locally linearized image deformations caused by variations in the viewing direction in relation to the object, (iii) locally linearized relative motions between the object and the observer, and (iv) local multiplicative intensity transformations caused by illumination variations. The receptive field model can be derived by necessity from symmetry properties of the environment and leads to predictions about receptive field profiles in good agreement with those measured by cell recordings in mammalian vision. Indeed, the receptive field profiles in the retina, LGN and V1 are close to the ideal profiles motivated by the idealized requirements. By complementing receptive field measurements with selection mechanisms over the parameters in the receptive field families, it is shown how true invariance of receptive field responses can be obtained under scaling transformations, affine transformations and Galilean transformations. Thereby, the framework provides a mathematically well-founded and biologically plausible model for how basic invariance properties can be achieved already at the level of receptive fields, supporting invariant recognition of objects and events under variations in viewpoint, retinal size, object motion and illumination. The theory can explain the different shapes of receptive field profiles found in biological vision, which are tuned to different sizes and orientations in the image domain as well as to different image velocities in space-time, from the requirement that the visual system should be invariant to the natural types of image transformations that occur in its environment.
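As a concrete instance of property (i) (a standard scale-space identity, illustrating the idea rather than the paper's full receptive-field family): for Gaussian receptive fields $L(\cdot\,; s) = g(\cdot\,; s) * f$ with variance parameter $s$, a spatially rescaled image $f'(\mathbf{x}) = f(\mathbf{x}/c)$ satisfies

$$L'(\mathbf{x};\, c^{2} s) = L(\mathbf{x}/c;\, s),$$

so responses to the same object at different sizes are related by a pure shift along the scale parameter, and invariance follows by selecting (e.g. maximizing) over $s$.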

17.
Médecine Nucléaire 2007, 31(4): 153-159
Respiratory motion reduces overall qualitative and quantitative accuracy in emission tomography imaging. The impact of respiratory motion has been further highlighted by the use of multi-modality imaging devices, where differences in respiratory conditions between the acquisition of anatomical and functional datasets can lead to significant artefacts. The current state of the art in accounting for such effects is the use of respiratory-gated acquisitions. Although such acquisitions may reduce respiratory motion effects, the improvement is limited because only part of the available data is used to reconstruct each individual gated frame. Approaches that correct for differences in respiratory motion between the individual gated frames, in order to allow their combination, can be divided into two categories: image based and raw-data based. The image-based approaches use registration algorithms to realign the gated images and subsequently sum them together, while the raw-data approaches, based on the incorporation of transformations, account for differences in respiratory motion between individual frames either prior to or during the reconstruction of all of the acquired data. Previous research in this field has demonstrated that a non-rigid, locally based model accounts for respiratory motion between gated frames better than an affine model. In addition, superior image contrast can be obtained by incorporating the necessary transformation into the reconstruction process, in comparison to an image-based approach.

18.
Within biologically constrained models of heading and complex motion processing, localization of the center-of-motion (COM) is typically an implicit property arising from the precise computation of the radial motion direction associated with an observer's forward self-motion. In the work presented here we report psychophysical data from a motion-impaired stroke patient, GZ, whose pattern of visual motion deficits is inconsistent with this view. We show that while GZ is able to discriminate direction in circular motions, she is unable to discriminate direction in radial motion patterns. GZ's inability to discriminate radial motion is in stark contrast with her ability to localize the COM in such stimuli, and suggests that recovery of the COM does not necessarily require an explicit representation of radial motion direction. We propose that this dichotomy can be explained by a circular template mechanism that minimizes a global motion error relative to the visual motion input, and we demonstrate that a sparse population of such templates is computationally sufficient to account for human psychophysical performance in general and, in particular, explains GZ's performance. Recent re-analysis of the predicted receptive field structures in several existing heading models provides additional support for this type of circular template mechanism and suggests the human visual system may have circular motion mechanisms available for heading estimation.
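One schematic way to write such a global-error template mechanism (illustrative notation only; the paper's exact functional may differ): with circular-motion templates $\mathbf{t}_{\mathrm{circ}}$ centred at a candidate point $\mathbf{c}$, the COM is the centre minimizing a summed pointwise mismatch $\rho$ against the observed motion input $\hat{\mathbf{v}}$,

$$E(\mathbf{c}) = \sum_{i} \rho\!\left(\hat{\mathbf{v}}(\mathbf{x}_{i}),\ \mathbf{t}_{\mathrm{circ}}(\mathbf{x}_{i}-\mathbf{c})\right), \qquad \hat{\mathbf{c}} = \arg\min_{\mathbf{c}} E(\mathbf{c}),$$

so the COM of a radial (or circular) pattern can be localized without ever representing radial motion direction explicitly — consistent with GZ's preserved COM localization despite her radial-direction deficit.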

19.
Ground-nesting wasps (Odynerus spinipes, Eumenidae) perform characteristic zig-zag flight manoeuvres when they encounter a novel object in the vicinity of their nests. We analysed flight parameters and flight control mechanisms and reconstructed the optical flow fields which the wasps generate by these flight manoeuvres. During zig-zag flights, the wasps move sideways and turn to keep the object in their frontal visual field. Their turning speed is controlled by the relative motion between object and background. We find that the wasps adjust their rotational and translational speed in such a way as to produce a specific vortex field of image motion that is centred on the novel object. As a result, differential image motion and changes in the direction of motion vectors are maximal in the vicinity and at the edges of the object. Zig-zag flights thus seem to be a 'depth from motion' procedure for the extraction of object-related depth information. Accepted: 31 August 1997

20.
The control of self-motion is a basic, but complex task for both technical and biological systems. Various algorithms have been proposed that allow the estimation of self-motion from the optic flow on the eyes. We show that two apparently very different approaches to solve this task, one technically and one biologically inspired, can be transformed into each other under certain conditions. One estimator of self-motion is based on a matched filter approach; it has been developed to describe the function of motion sensitive cells in the fly brain. The other estimator, the Koenderink and van Doorn (KvD) algorithm, was derived analytically with a technical background. If the distances to the objects in the environment can be assumed to be known, the two estimators are linear and equivalent, but are expressed in different mathematical forms. However, for most situations it is unrealistic to assume that the distances are known. Therefore, the depth structure of the environment needs to be determined in parallel to the self-motion parameters and leads to a non-linear problem. It is shown that the standard least mean square approach that is used by the KvD algorithm leads to a biased estimator. We derive a modification of this algorithm in order to remove the bias and demonstrate its improved performance by means of numerical simulations. For self-motion estimation it is beneficial to have a spherical visual field, similar to many flying insects. We show that in this case the representation of the depth structure of the environment derived from the optic flow can be simplified. Based on this result, we develop an adaptive matched filter approach for systems with a nearly spherical visual field. Then only eight parameters about the environment have to be memorized and updated during self-motion.
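Both estimators build on the standard model of instantaneous optic flow on a spherical eye (a textbook form, not specific to either paper): for a unit viewing direction $\mathbf{d}$, translation $\mathbf{t}$, rotation $\boldsymbol{\omega}$, and distance $D(\mathbf{d})$ to the surface seen along $\mathbf{d}$,

$$\dot{\mathbf{d}} \;=\; -\frac{1}{D(\mathbf{d})}\left(\mathbf{t} - (\mathbf{t}\cdot\mathbf{d})\,\mathbf{d}\right) \;-\; \boldsymbol{\omega}\times\mathbf{d}.$$

With $D(\mathbf{d})$ known, the flow is linear in $(\mathbf{t}, \boldsymbol{\omega})$ — the regime in which the matched-filter and KvD estimators become equivalent; with unknown depths, the product of $1/D$ and $\mathbf{t}$ makes the problem non-linear, which is where the bias of the least-squares KvD approach arises.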
