Similar articles
20 similar articles found (search time: 15 ms)
1.
Efficient algorithms for image motion computation are important for computer vision applications and the modelling of biological vision systems. Intensity-based image motion computation proceeds in two stages: the convolution of linear spatiotemporal filter kernels with the image sequence, followed by the non-linear combination of the filter outputs. If the spatiotemporal extent of the filter kernels is large, the convolution stage can be computationally very intensive. One effective means of reducing the storage and computation required to implement the temporal convolutions is the introduction of recursive filtering. Non-recursive methods require the number of frames of the image sequence stored at any given time to equal the temporal extent of the slowest temporal filter. In contrast, recursive methods encode recent stimulus history implicitly in the values of a small number of variables updated through a series of feedback equations. Recursive filtering reduces the number of values stored in memory during convolution and the number of mathematical operations involved in computing the filters' outputs. This paper extends previous recursive implementations of gradient- and correlation-based motion analysis algorithms [Fleet DJ, Langley K (1995) IEEE PAMI 17: 61–67; Clifford CWG, Ibbotson MR, Langley K (1997) Vis Neurosci 14: 741–749], describing a recursive implementation of causal band-pass temporal filters suitable for use in energy- and phase-based algorithms for image motion computation. It is shown that the filters' temporal frequency tuning curves fit psychophysical estimates of the temporal properties of human visual filters [Hess RF, Snowden RJ (1992) Vision Res 32: 47–60]. Received: 20 April 1999 / Accepted in revised form: 8 November 1999
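To make the storage saving concrete, here is a minimal Python sketch (not the authors' exact filters) of a causal band-pass temporal filter built as the difference of two first-order recursive low-pass filters; the time constants and the generator-style interface are assumptions for illustration:

```python
import numpy as np

def recursive_bandpass(frames, tau_fast=2.0, tau_slow=6.0):
    """Causal band-pass temporal filter built as the difference of two
    first-order recursive low-pass filters. tau_* are time constants
    in frame units (assumed values, not the paper's fitted ones).
    Only two state arrays persist between frames, however long the
    filters' effective temporal extent -- the memory saving that
    motivates recursive filtering.
    """
    a_fast, a_slow = 1.0 / tau_fast, 1.0 / tau_slow
    y_fast = y_slow = None
    for frame in frames:
        x = np.asarray(frame, dtype=float)
        if y_fast is None:                 # initialise state on first frame
            y_fast, y_slow = x.copy(), x.copy()
        y_fast += a_fast * (x - y_fast)    # feedback update: y += a(x - y)
        y_slow += a_slow * (x - y_slow)
        yield y_fast - y_slow              # band-pass output per frame

# drive with a synthetic sequence of 64x64 frames
for out in recursive_bandpass(np.random.rand(100, 64, 64)):
    pass                                   # 'out' is the current response
```

Whatever the filters' effective temporal extent, only two state arrays persist between frames, in contrast to the non-recursive scheme, which must buffer a full temporal window of the image sequence.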

2.
A scheme is presented that uses the self-motion of a robot, equipped with a single visual sensor, to navigate in a safe manner. The motion strategy used is modelled on the motion of insects that effectively have a single eye and must move in order to determine range. The essence of the strategy is to employ a zigzag motion in order to (a) estimate the range to objects and (b) know the safe distance of travel in the present direction. An example is presented of a laboratory robot moving in a cluttered environment. The results show that this motion strategy can be successfully employed in an autonomous robot to avoid collisions. Received: 17 August 1993 / Accepted: 2 May 1994
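The geometry behind step (a) reduces to triangulation from a known sideways displacement. A hedged sketch, assuming a small, known lateral step roughly perpendicular to the viewing direction and a static feature (simplifications of the paper's full strategy):

```python
import math

def range_from_parallax(baseline_m, bearing_before_rad, bearing_after_rad):
    """Estimate range to a feature from the bearing change produced by a
    known sideways step (one leg of the zigzag). Assumes the step is
    roughly perpendicular to the viewing direction and the feature is
    static.
    """
    parallax = abs(bearing_after_rad - bearing_before_rad)
    if parallax < 1e-6:
        return float('inf')          # no measurable parallax: far away
    return baseline_m / math.tan(parallax)

# e.g. a 0.10 m sideways step shifting a feature's bearing by 2 degrees
print(range_from_parallax(0.10, 0.0, math.radians(2.0)))   # ~2.9 m
```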

3.
The extraction of accurate self-motion information from the visual world is a difficult problem that has been solved very efficiently by biological organisms utilizing non-linear processing. Previous bio-inspired models for motion detection based on a correlation mechanism have been dogged by issues that arise from their sensitivity to undesired properties of the image, such as contrast, which vary widely between images. Here we present a model with multiple levels of non-linear dynamic adaptive components based directly on the known or suspected responses of neurons within the visual motion pathway of the fly brain. By testing the model under realistic high-dynamic-range conditions, we show that the addition of these elements makes the motion detection model robust across a large variety of images, velocities and accelerations. Furthermore, the performance of the entire system exceeds the sum of the incremental improvements offered by the individual components, indicating beneficial non-linear interactions between processing stages. The algorithms underlying the model can be implemented in either digital or analog hardware, including neuromorphic analog VLSI, but defy an analytical solution due to their dynamic non-linear operation. This algorithm has applications in the development of miniature autonomous systems in defense and civilian roles, including robotics, miniature unmanned aerial vehicles and collision-avoidance sensors.
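The correlation mechanism the model builds on is the classic Hassenstein–Reichardt detector: delay one photoreceptor signal, multiply it with its neighbour, and subtract the mirror-image product. The sketch below shows only this textbook core; the paper's contribution, the adaptive non-linear stages wrapped around it, is not reproduced here.

```python
import numpy as np

def reichardt_emd(left, right, tau=5.0):
    """Textbook Hassenstein-Reichardt correlator for two adjacent
    photoreceptor signals (1-D arrays over time). A first-order
    recursive low-pass serves as the delay element; the opponent
    subtraction of the two multiplied half-detectors yields a
    direction-selective output.
    """
    left = np.asarray(left, dtype=float)
    right = np.asarray(right, dtype=float)
    a = 1.0 / tau
    d_left = np.zeros_like(left)      # delayed (low-passed) left signal
    d_right = np.zeros_like(right)    # delayed (low-passed) right signal
    for t in range(1, len(left)):
        d_left[t] = d_left[t - 1] + a * (left[t] - d_left[t - 1])
        d_right[t] = d_right[t - 1] + a * (right[t] - d_right[t - 1])
    return d_left * right - d_right * left
```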

4.
A novel technique is presented for the computation of the parameters of egomotion of a mobile device, such as a robot or a mechanical arm, equipped with two visual sensors. Each sensor captures a panoramic view of the environment. We show that the parameters of egomotion can be computed by interpolating the position of the image captured by one of the sensors at the robot's present location with respect to the images captured by the two sensors at the robot's previous location. The algorithm delivers the distance travelled and angle rotated, without the explicit measurement or integration of velocity fields. The result is obtained in a single step, without any iteration or successive approximation. Tests of the algorithm on real and synthetic images reveal an accuracy to within 5% of the actual motion. Implementation of the algorithm on a mobile robot reveals that stepwise rotation and translation can be measured to within 10% accuracy in a three-dimensional world of unknown structure. The position and orientation of the robot at the end of a 30-step trajectory can be estimated with accuracies of 5% and 5°, respectively.

5.
The sensory weighting model is a general model of sensory integration that consists of three processing layers. First, each sensor provides the central nervous system (CNS) with information regarding a specific physical variable. Due to sensor dynamics, this measure is only reliable for the frequency range over which the sensor is accurate. Therefore, we hypothesize that the CNS improves on the reliability of the individual sensor outside this frequency range by using information from other sensors, a process referred to as “frequency completion.” Frequency completion uses internal models of sensory dynamics. This “improved” sensory signal is designated as the “sensory estimate” of the physical variable. Second, before being combined, information with different physical meanings is first transformed into a common representation; sensory estimates are converted to intermediate estimates. This conversion uses internal models of body dynamics and physical relationships. Third, several sensory systems may provide information about the same physical variable (e.g., semicircular canals and vision both measure self-rotation). Therefore, we hypothesize that the “central estimate” of a physical variable is computed as a weighted sum of all available intermediate estimates of this physical variable, a process referred to as “multicue weighted averaging.” The resulting central estimate is fed back to the first two layers. The sensory weighting model is applied to three-dimensional (3D) visual–vestibular interactions and their associated eye movements and perceptual responses. The model inputs are 3D angular and translational stimuli. The sensory inputs are the 3D sensory signals coming from the semicircular canals, otolith organs, and the visual system. The angular and translational components of visual movement are assumed to be available as separate stimuli measured by the visual system using retinal slip and image deformation. In addition, both tonic (“regular”) and phasic (“irregular”) otolithic afferents are implemented. Whereas neither tonic nor phasic otolithic afferents distinguish gravity from linear acceleration, the model uses tonic afferents to estimate gravity and phasic afferents to estimate linear acceleration. The model outputs are the internal estimates of physical motion variables and 3D slow-phase eye movements. The model also includes a smooth pursuit module. The model matches eye responses and perceptual effects measured during various motion paradigms in darkness (e.g., centered and eccentric yaw rotation about an earth-vertical axis, yaw rotation about an earth-horizontal axis) and with visual cues (e.g., stabilized visual stimulation or optokinetic stimulation). Received: 20 September 2000 / Accepted in revised form: 28 September 2001
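The third layer's multicue weighted averaging is, at its simplest, a normalised weighted sum. A minimal sketch follows, with the caveat that the model's actual weights depend on frequency range and motion paradigm rather than being fixed constants as assumed here:

```python
def weighted_average(estimates, weights):
    """Multicue weighted averaging: combine several intermediate
    estimates of one physical variable into a central estimate.
    The fixed, normalised weights are an illustrative assumption.
    """
    total = sum(weights)
    return sum(w * e for w, e in zip(weights, estimates)) / total

# e.g. self-rotation estimates from canals and vision, trusting the
# canals more at this (hypothetical) stimulus frequency
central_yaw_rate = weighted_average([12.0, 9.5], [0.7, 0.3])
```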

6.
Insects use highly distributed nervous systems to process exteroception from head sensors, compare that information with state-based goals, and direct posture or locomotion toward those goals. To study how descending commands from brain centers produce coordinated, goal-directed motion in distributed nervous systems, we have constructed a conductance-based neural system for our robot MantisBot, a 29 degree-of-freedom, 13.3:1 scale praying mantis robot. Using the literature on mantis prey tracking and insect locomotion, we designed a hierarchical, distributed neural controller that establishes the goal, coordinates different joints, and executes prey-tracking motion. In our controller, brain networks perceive the location of prey and predict its future location, store this location in memory, and formulate descending commands for ballistic saccades like those seen in the animal. The descending commands are simple, indicating only 1) whether the robot should walk or stand still, and 2) the intended direction of motion. Each joint's controller uses the descending commands differently to alter sensory-motor interactions, changing the sensory pathways that coordinate the joints' central pattern generators into one cohesive motion. Experiments with one leg of MantisBot show that visual input produces simple descending commands that alter walking kinematics, change the walking direction in a predictable manner, enact reflex reversals when necessary, and can control both static posture and locomotion with the same network.  相似文献   

7.
A technique for measuring the motion of a rigid, textured plane in the frontoparallel plane is developed and tested on synthetic and real image sequences. The parameters of motion – translation in two dimensions, and rotation about a previously unspecified axis perpendicular to the plane – are computed by a single-stage, non-iterative process which interpolates the position of the moving image with respect to a set of reference images. The method can be extended to measure additional parameters of motion, such as expansion or shear. Advantages of the technique are that it does not require tracking of features, measurement of local image velocities or computation of high-order spatial or temporal derivatives of the image. The technique is robust to noise, and it offers a simple, novel way of tackling the ‘aperture’ problem. An application to the computation of robot egomotion is also described. Received: 3 September 1993 / Accepted in revised form: 16 April 1994
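A one-dimensional version of the interpolation idea can be written in closed form: express the current image as a linear interpolation between shifted copies of a reference image and solve for the shift by least squares, in a single non-iterative step. This sketch ignores rotation and boundary effects, which the full method handles:

```python
import numpy as np

def interpolate_shift(f_ref, g, delta=1):
    """Estimate the 1-D shift of image g relative to reference f_ref by
    interpolating between copies of f_ref shifted by +/- delta pixels.
    Single-step least-squares solution; no feature tracking, velocity
    measurement, or iteration.
    """
    f = np.asarray(f_ref, dtype=float)
    g = np.asarray(g, dtype=float)
    f_minus = np.roll(f, delta)    # reference shifted right by delta px
    f_plus = np.roll(f, -delta)    # reference shifted left by delta px
    r = f_minus - f_plus
    # least-squares solution of g - f = (s / (2*delta)) * r for s;
    # positive s means the pattern moved toward higher indices
    return 2 * delta * np.sum((g - f) * r) / np.sum(r * r)

# demo: a smooth 1-D texture shifted by 0.5 px is recovered closely
x = np.linspace(0, 2 * np.pi, 256, endpoint=False)
f = np.sin(x) + 0.5 * np.sin(3 * x)
shift = 0.5 * (x[1] - x[0])
g_demo = np.sin(x - shift) + 0.5 * np.sin(3 * (x - shift))
print(interpolate_shift(f, g_demo))   # ~0.5
```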

8.
We use neural networks with pointer map architectures to provide simple attentional processing in a robotic task. A pointer map comprises a map of neurons that encode a stimulus. Besides global feedback inhibition, the map receives feedback excitation via a small group of pointer neurons that encode the location of a salient stimulus on the map as a vectorial representation. The pointer neurons are able to apply selective processing to a particular region of the network. The robot uses these properties to manoeuvre in relation to an attended object. We implemented a controller composed of two pointer maps and a motor map. The first pointer map reports the direction of a salient obstacle in a one-dimensional map of distance derived from infrared sensors. The second pointer map reports the direction to potential obstacles in a two-dimensional edge-enhanced image derived from a forward-looking CCD camera. These outputs are applied to a motor map, where they bias the motor control signals issued to the robot's wheels according to navigational intentions.
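As a rough illustration of the vectorial readout, the pointer neurons' estimate of the salient location can be sketched as a population vector over the map; the circular preferred-direction layout and the absence of the feedback-excitation loop are simplifying assumptions, not the paper's full network:

```python
import numpy as np

def pointer_readout(map_activity):
    """Population-vector readout: each map neuron votes with a unit
    vector along its preferred direction, weighted by its activity.
    Returns the attended direction (rad) and the vector magnitude,
    which can serve as a salience/confidence measure.
    """
    act = np.asarray(map_activity, dtype=float)
    angles = np.linspace(-np.pi, np.pi, len(act), endpoint=False)
    x, y = np.dot(act, np.cos(angles)), np.dot(act, np.sin(angles))
    return np.arctan2(y, x), np.hypot(x, y)
```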

9.
Engineers have a lot to gain from studying biology. The study of biological neural systems alone provides numerous examples of computational systems that are far more complex than any man-made system and perform real-time sensory and motor tasks in a manner that humbles the most advanced artificial systems. Despite the evolutionary genesis of these systems and the vast apparent differences between species, there are common design strategies employed by biological systems that span taxa, and engineers would do well to emulate these strategies. However, biologically inspired computational architectures, which are continuous-time and parallel in nature, do not map well onto conventional processors, which are discrete-time and serial in operation. Rather, an implementation technology that is capable of directly realizing the layered parallel structure and nonlinear elements employed by neurobiology is required for power- and space-efficient implementation. Custom neuromorphic hardware meets these criteria and yields low-power dedicated sensory systems that are small, light, and ideal for autonomous robot applications. As examples of how this technology is applied, this article describes both a low-level neuromorphic hardware emulation of an elementary visual motion detector, and a large-scale, system-level spatial motion integration system.

10.
The visual system of the fly performs various computations on photoreceptor outputs. The detection and measurement of movement is based on simple nonlinear multiplication-like interactions between adjacent pairs and groups of photoreceptors. The position of a small contrasted object against a uniform background is measured, at least in part, by (formally) 1-input nonlinear flicker detectors. A fly can also detect and discriminate a figure that moves relative to a ground texture. This computation of relative movement relies on a more complex algorithm, one which detects discontinuities in the movement field. The experiments described in this paper indicate that the outputs of neighbouring movement detectors interact in a multiplication-like fashion and then in turn locally inhibit the flicker detectors. The following main characteristic properties (partly a direct consequence of the algorithm's structure) have been established experimentally: a) Coherent motion of figure and ground inhibits the position detectors, whereas incoherent motion fails to produce inhibition near the edges of the moving figure (provided the textures of figure and ground are similar). b) The movement detectors underlying this particular computation are direction-insensitive at input frequencies (at the photoreceptor level) above 2.3 Hz. They become increasingly direction-sensitive for lower input frequencies. c) At higher input frequencies the fly cannot discriminate an object against a texture oscillating at the same frequency and amplitude at 0° and 180° phase, whereas a 90° or 270° phase shift between figure and ground oscillations yields maximum discrimination. d) Under conditions of coherent movement, strong spatial incoherence is detected by the same mechanism. The algorithm underlying the relative movement computation is further discussed as an example of a coherence-measuring process, operating on the outputs of an array of movement detectors. Possible neural correlates are also mentioned.

11.
This study presents a computational framework that capitalizes on known human neuromechanical characteristics during limb movements in order to predict human–machine interactions. A parallel–distributed approach, the mixture of nonlinear models, fits the relationship between the measured kinematics and kinetics at the handle of a robot. Each element of the mixture represents the arm and its controller as a feedforward nonlinear model of inverse dynamics plus a linear approximation of musculotendinous impedance. We evaluated this approach with data from experiments where subjects held the handle of a planar manipulandum robot and attempted to make point-to-point reaching movements. We compared the performance to the more conventional approach of a constrained, nonlinear optimization of the parameters. The mixture of nonlinear models accounted for 79 ± 11% (mean ± SD) of the variance in measured force, and force errors were 0.73 ± 0.20% of the maximum exerted force. Solutions were acquired in half the time with a significantly better fit. However, both approaches suffered equally from the simplifying assumptions, namely that the human neuromechanical system consists of a feedforward controller coupled with linear impedances and a moving state equilibrium. Hence, predictability was best limited to the first half of the movement. The mixture of nonlinear models may be useful in human–machine tasks such as telerobotics, fly-by-wire vehicles, robotic training, and rehabilitation. Received: 20 October 2000 / Accepted in revised form: 8 May 2001
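The structure of each mixture element can be sketched as feedforward inverse dynamics plus a linear impedance pulling toward a moving equilibrium. The stiffness and damping matrices below are placeholders, not fitted values from the study:

```python
import numpy as np

def predicted_handle_force(f_ff, K, B, x_eq, x, v_eq, v):
    """One mixture element: feedforward inverse-dynamics force plus a
    linear impedance (stiffness K, damping B) pulling the hand toward
    a moving equilibrium (x_eq, v_eq). All values are illustrative.
    """
    return f_ff + K @ (x_eq - x) + B @ (v_eq - v)

K = np.array([[300.0, 30.0], [30.0, 250.0]])   # stiffness, N/m (assumed)
B = np.array([[12.0, 1.0], [1.0, 10.0]])       # damping, N*s/m (assumed)
f = predicted_handle_force(
    np.array([1.5, -0.8]),                              # feedforward force, N
    K, B,
    np.array([0.10, 0.05]), np.array([0.09, 0.05]),     # x_eq, x (m)
    np.array([0.20, 0.00]), np.array([0.18, 0.02]))     # v_eq, v (m/s)
```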

12.
The motion energy sensor has been shown to account for a wide range of physiological and psychophysical results in motion detection and discrimination studies. It has become established as the standard computational model for retinal movement sensing in the human visual system. Adaptation effects have been extensively studied in the psychophysical literature on motion perception, and play a crucial role in theoretical debates, but the current implementation of the energy sensor does not provide directly for modelling adaptation-induced changes in output. We describe an extension of the model to incorporate changes in output due to adaptation. The extended model first computes a space-time representation of the output to a given stimulus, and then an RC gain-control circuit (“leaky integrator”) is applied to the time-dependent output. The output of the extended model shows effects which mirror those observed in psychophysical studies of motion adaptation: a decline in sensor output during stimulation, and changes in the relative outputs of different sensors following this adaptation.
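A minimal sketch of the gain-control stage: leaky integration of the sensor's own output drives a gain reduction. The abstract specifies an RC circuit applied to the time-dependent output; the divisive form and the constants below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def adapt_output(energy, tau=50.0, gain=1.0):
    """Leaky-integrator gain control applied to a motion energy
    sensor's time-varying output: an adaptation state tracks a leaky
    average of recent output and divisively reduces the response.
    """
    a = 1.0 / tau
    state = 0.0
    adapted = np.empty_like(energy, dtype=float)
    for t, e in enumerate(energy):
        state += a * (e - state)               # leaky integration of output
        adapted[t] = e / (1.0 + gain * state)  # divisive gain reduction
    return adapted

# constant stimulation: output declines over time, as in adaptation
print(adapt_output(np.ones(200))[[0, 50, 199]])
```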

13.
Visual figures may be distinguished based on elementary motion or higher-order non-Fourier features, and flies track both. The canonical elementary motion detector, a compact computation for Fourier motion direction and amplitude, can also encode higher-order signals given elaborate preprocessing. However, the way in which a fly tracks a moving figure containing both elementary and higher-order signals has not been investigated. Using a novel white noise approach, we demonstrate that (1) the composite response to an object containing both elementary motion (EM) and uncorrelated higher-order figure motion (FM) reflects the linear superposition of each component; (2) the EM-driven component is velocity-dependent, whereas the FM component is driven by retinal position; (3) retinotopic variations in the EM and FM responses differ from one another; (4) the FM subsystem superimposes saccadic turns upon smooth pursuit; and (5) the two systems in combination are necessary and sufficient to predict the full range of figure tracking behaviors, including those that generate no EM cues at all. This analysis requires extending the model in which fly motion vision is based solely on simple elementary motion detectors, and it provides a novel method to characterize the subsystems responsible for the pursuit of visual figures.

14.
The capability of grasping and lifting an object in a suitable, stable and controlled way is an outstanding feature for a robot and, thus far, one of the major problems to be solved in robotics. No robotic tools able to perform advanced control of the grasp as, for instance, the human hand does have been demonstrated to date. Due to its fundamental importance in science and in many applications, from biomedicine to manufacturing, the issue has been the subject of deep scientific investigation in both neurophysiology and robotics. While the former is contributing a profound understanding of the dynamics of real-time control of slippage and grasp force in the human hand, the latter increasingly tries to reproduce, or take inspiration from, nature's approach by means of hardware and software technology. In this regard, one of the major constraints robotics has to overcome is the real-time processing of the large amounts of data generated by the tactile sensors while grasping, which poses serious problems for the available computational power. In this paper a bio-inspired approach to tactile data processing has been followed in order to design and test a hardware–software robotic architecture that works on the parallel processing of a large amount of tactile sensing signals. The working principle of the architecture is based on the cellular nonlinear/neural network (CNN) paradigm, using both hand shape and spatio-temporal features obtained from an array of microfabricated force sensors to control the sensory–motor coordination of the robotic system. Prototypical grasping tasks were selected to measure the system's performance applied to a computer-interfaced robotic hand. Successful grasps of several objects completely unknown to the robot, e.g. soft and deformable objects like plastic bottles, soft balls, and Japanese tofu, have been demonstrated.
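The CNN paradigm itself is well defined: each cell's state evolves under 3×3 feedback and input templates with a piecewise-linear output. A generic Euler-step sketch follows; the templates and bias are placeholders, not the tuned values used in the architecture:

```python
import numpy as np

def cnn_step(x, u, A, B, z=0.0, dt=0.1):
    """One Euler step of a standard cellular nonlinear network (CNN)
    layer over a tactile image u: each cell's state x is coupled to
    its 3x3 neighbourhood through feedback template A (on outputs)
    and input template B (on inputs), plus a bias z.
    """
    y = 0.5 * (np.abs(x + 1) - np.abs(x - 1))   # piecewise-linear output
    pad_y, pad_u = np.pad(y, 1), np.pad(u, 1)
    fb = np.zeros_like(x)
    ff = np.zeros_like(x)
    h, w = x.shape
    for i in range(3):
        for j in range(3):
            fb += A[i, j] * pad_y[i:i + h, j:j + w]
            ff += B[i, j] * pad_u[i:i + h, j:j + w]
    return x + dt * (-x + fb + ff + z)

# placeholder templates, not the architecture's tuned values
A = np.array([[0, 0, 0], [0, 2.0, 0], [0, 0, 0]])
B = np.array([[0, 0, 0], [0, 1.0, 0], [0, 0, 0]])
x = np.zeros((8, 8))
u = np.random.rand(8, 8)        # stand-in for one tactile sensor frame
for _ in range(50):
    x = cnn_step(x, u, A, B)
```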

15.
Underwater walking
Lobsters are generalist decapods that evolved in a broad variety of niches in the Northwestern Atlantic. Due to their inherent buoyancy, they have acquired adaptations to reduced traction and surge. We have developed a biomimetic robot based on the lobster that features artificial muscle actuators and sensors employing labeled-line codes. The central controller for this robot is based on the command neuron, coordinating neuron, central pattern generator model. A library of commands is released by sensor feedback to mediate adaptive sequences and goal-achieving behavior. Rheotaxic behaviors can mediate adaptations to achieve some of the advantages of the biological models.

16.
Miniature sensors that could measure forces applied by the fingers and hand without interfering with manual dexterity or range of motion would have considerable practical value in ergonomics and rehabilitation. In this study, techniques have been developed to use inexpensive pressure-sensing resistors (FSRs) to accurately measure compression force. The FSRs are converted from pressure-sensing to force-sensing devices. The effects of nonlinear response properties and dependence on loading history are compensated by signal conditioning and calibration. A fourth-order polynomial relating the applied force to the current voltage output and a linearly weighted sum of prior outputs corrects for sensor hysteresis and drift. It was found that prolonged (>20 h) shear force loading caused sensor gain to change by approximately 100%. Shear loading also had the effect of eliminating shear force effects on sensor output, albeit only in the direction of shear loading. By applying prolonged shear loading in two orthogonal directions, the sensors were converted into pure compression sensors. Such preloading of the sensor is, therefore, required prior to calibration. The error in compression force after prolonged shear loading and calibration was consistently <5% from 0 to 30 N and <10% from 30 to 40 N. This novel method of calibrating FSRs for measuring compression force provides an inexpensive tool for biomedical and industrial design applications where measurements of finger and hand force are needed.
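The calibration model described, a fourth-order polynomial of the current reading plus a linearly weighted sum of prior readings, can be sketched directly; all coefficients below are hypothetical stand-ins for a real calibration fit:

```python
import numpy as np

def fsr_force(voltages, poly_coeffs, history_weights):
    """Compression force from an FSR voltage trace: a fourth-order
    polynomial of the current reading plus a linearly weighted sum of
    prior readings to correct hysteresis and drift, following the
    abstract's description.
    """
    v = np.asarray(voltages, dtype=float)
    n = len(history_weights)
    forces = np.empty_like(v)
    for t in range(len(v)):
        f = np.polyval(poly_coeffs, v[t])     # 4th-order polynomial term
        past = v[max(0, t - n):t][::-1]       # prior readings, newest first
        f += np.dot(history_weights[:len(past)], past)
        forces[t] = f
    return forces

coeffs = [0.02, -0.1, 0.9, 2.0, 0.0]   # hypothetical calibration fit
weights = [0.05, 0.02, 0.01]           # hypothetical history weights
print(fsr_force(np.linspace(0.0, 2.0, 10), coeffs, weights))
```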

17.
Hymenopteran insects perform systematic learning flights on departure from their nest, during which they acquire a visual representation of the nest environment. They back away from and pivot around the nest in a series of arcs while turning to view it in their fronto-lateral visual field. During the initial stages of the flights, turning rate and arc velocity relative to the nest are roughly constant at 100–200° s−1 and are independent of distance, since the insects increase their flight speed as they back away from the pivoting centre. In this paper I analyse how solitary wasps control their flight by having them perform learning flights inside a rotating striped drum. The wasps' turning velocity is under visual control. When the insects fly inside a drum that rotates around the nest as a centre, their average turning rate is faster than normal when they fly an arc in the direction of drum rotation and slower when they fly in the opposite direction. The average slip speed they experience lies within 100–200° s−1. The wasps also adjust their flight speed depending on the rotation of the drum. They modulate their distance from the pivoting centre accordingly and presumably also their height above ground, so that maximal ground slip is on average 200° s−1. The insects move along arcs by short pulses of translation, followed by rapid body turns to correct for the change in retinal position of the nest entrance. Saccadic body turns follow pulses of translation with a delay of 80–120 ms. The optomotor response is active during these turns. The control of pivoting flight most likely involves three position servos, to control the retinal position of both the azimuth and the altitude of the nest and the direction of flight relative to it, and two velocity servos, one constituting the optomotor reflex and the other serving to clamp ground slip at about 200° s−1. The control of ground slip is the prime source of the dynamic constancy of learning flights, which may help wasps to scale the pivoting parallax field they produce during these flights. Constant pivoting rate may in addition be important for the acquisition of a regular sequence of snapshots and in scanning for compass cues. Accepted: 31 July 1996
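The ground-slip clamp can be caricatured as a proportional velocity servo on the slip error; only the slip = speed/distance geometry comes from the setup, while the control law and gain below are assumptions:

```python
import math

def slip_servo(distance_m, speed_mps, target_slip_degps=200.0, gain=0.3):
    """Velocity-servo sketch: hold ground slip (angular velocity of the
    pivoting centre's image) near 200 deg/s by adjusting tangential
    flight speed in proportion to the slip error.
    """
    slip_degps = math.degrees(speed_mps / distance_m)   # current slip
    error_radps = math.radians(target_slip_degps - slip_degps)
    # convert the slip correction back to a speed correction at this range
    return speed_mps + gain * error_radps * distance_m

# a wasp 0.2 m from the nest flying at 0.5 m/s (~143 deg/s slip) speeds up
print(slip_servo(0.2, 0.5))   # ~0.56 m/s
```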

18.
The visual homing abilities of insects can be explained by the snapshot hypothesis. It asserts that an animal is guided to a previously visited location by comparing the current view with a snapshot taken at that location. The average landmark vector (ALV) model is a parsimonious navigation model based on the snapshot hypothesis. According to this model, the target location is unambiguously characterized by a signature vector extracted from the snapshot image. This article provides threefold support for the ALV model by synthetic modeling. First, it was shown that a mobile robot using the ALV model returns to the target location with only small position errors. Second, the behavior of the robot resembled the behavior of bees in some experiments. And third, the ALV model was implemented on the robot in analog hardware. This adds validity to the ALV model, since analog electronic circuits share a number of information-processing principles with biological nervous systems; the analog implementation therefore provides suggestions for how visual homing abilities might be implemented in the insect's brain. Received: 15 June 1999 / Accepted in revised form: 20 March 2000
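The ALV computation is compact enough to state directly: average the unit vectors toward the visible landmarks, and compare the result against the vector stored at the target. A minimal sketch, assuming both vectors are expressed in a common compass-aligned frame as the ALV model requires:

```python
import numpy as np

def average_landmark_vector(bearings_rad):
    """ALV: the mean of the unit vectors pointing at each visible
    landmark, in a compass-aligned frame."""
    b = np.asarray(bearings_rad, dtype=float)
    return np.array([np.cos(b).mean(), np.sin(b).mean()])

def home_vector(bearings_now, bearings_stored):
    """Difference between the current ALV and the ALV stored at the
    target; in the ALV model this difference gives (up to scale) the
    direction to travel to return to the target.
    """
    return (average_landmark_vector(bearings_now)
            - average_landmark_vector(bearings_stored))
```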

19.
An important role of visual systems is to detect nearby predators, prey, and potential mates, which may be distinguished in part by their motion. When an animal is at rest, an object moving in any direction may easily be detected by motion-sensitive visual circuits. During locomotion, however, this strategy is compromised because the observer must detect a moving object within the pattern of optic flow created by its own motion through the stationary background. However, objects that move creating back-to-front (regressive) motion may be unambiguously distinguished from stationary objects because forward locomotion creates only front-to-back (progressive) optic flow. Thus, moving animals should exhibit an enhanced sensitivity to regressively moving objects. We explicitly tested this hypothesis by constructing a simple fly-sized robot that was programmed to interact with a real fly. Our measurements indicate that whereas walking female flies freeze in response to a regressively moving object, they ignore a progressively moving one. Regressive motion salience also explains observations of behaviors exhibited by pairs of walking flies. Because the assumptions underlying the regressive motion salience hypothesis are general, we suspect that the behavior we have observed in Drosophila may be widespread among eyed, motile organisms.
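The regressive/progressive test reduces to comparing an object's image motion against the sign of the background flow on that side of the visual field. A sketch under an assumed sign convention (positive bearings to the left of heading, angular velocity measured as the rate of change of bearing):

```python
import math

def is_regressive(bearing_rad, angular_velocity):
    """During forward translation, stationary objects produce
    progressive (front-to-back) image motion: bearings drift rearward
    on both sides of the head. Image motion opposing that progressive
    direction flags a self-propelled object.
    """
    progressive_sign = 1.0 if bearing_rad > 0 else -1.0
    # progressive flow drives the image further into the periphery;
    # motion with the opposite sign is regressive (back-to-front)
    return angular_velocity * progressive_sign < 0

# an object at -45 deg (right side) drifting toward the front of the eye
print(is_regressive(math.radians(-45), +0.3))   # True -> fly freezes
```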

20.
Workability and productivity of robotic plug transplanting workcell
Summary: Transplanting is a necessary operation in transplant production systems. The transplanting operation is labor-intensive, and automation can reduce labor costs. Plugs are actively growing young transplants with two well-defined morphologic parts: the stem-leaf portion and the root-growth medium portion. They may be grown in regularly situated cells on tray-like containers. This regularity makes plugs suitable for automated transplanting operations. It is, therefore, beneficial for in vitro plant propagation systems to include plugs as intermediate products before they are delivered to the greenhouses. Flexible automation and robotics technologies have been applied to develop a robotic workcell for transplanting plugs from plug trays to growing flats. The main components of the workcell include a robot, an end-effector, and two conveyor belts for transporting trays and flats. The end-effector for extracting, holding, and planting plugs is a “sliding-needles-with-sensor” gripper. The sensor signals the robot to complete a transplanting cycle only when a plug is properly held by the gripper. Systems analysis and computer simulation were conducted to study factors affecting the workability and productivity of various workcell designs. These factors included: dimensions and kinematics of the robot and its peripheral equipment, layout and materials flow, fullness of plug trays, and successful extraction rate of plugs. The analysis also indicated that machine vision systems could add valuable capabilities to the workcell, such as robot guidance and plug quality evaluation. Engineering economic analysis was performed to investigate the interaction of workcell technical feasibility and economic viability. Presented in the Session-in-Depth Robotics in Tissue Culture at the 1991 World Congress on Cell and Tissue Culture, Anaheim, California, June 16–20, 1991.

