Similar Articles
Found 20 similar articles (search time: 15 ms)
1.
In this paper we present a biologically inspired two-layered neural network for trajectory formation and obstacle avoidance. The two topographically ordered neural maps consist of analog neurons having continuous dynamics. The first layer, the sensory map, receives sensory information and builds up an activity pattern which contains the optimal solution (i.e. shortest path without collisions) for any given set of current position, target positions and obstacle positions. Targets and obstacles are allowed to move, in which case the activity pattern in the sensory map will change accordingly. The time evolution of the neural activity in the second layer, the motor map, results in a moving cluster of activity, which can be interpreted as a population vector. Through the feedforward connections between the two layers, input of the sensory map directs the movement of the cluster along the optimal path from the current position of the cluster to the target position. The smooth trajectory is the result of the intrinsic dynamics of the network only. No supervisor is required. The output of the motor map can be used for direct control of an autonomous system in a cluttered environment or for control of the actuators of a biological limb or robot manipulator. The system is able to reach a target even in the presence of an external perturbation. Computer simulations of a point robot and a multi-joint manipulator illustrate the theory.

2.
In recent years, several phenomenological dynamical models have been formulated that describe how perceptual variables are incorporated in the control of motor variables. We call these short-route models as they do not address how perception-action patterns might be constrained by the dynamical properties of the sensory, neural and musculoskeletal subsystems of the human action system. As an alternative, we advocate a long-route modelling approach in which the dynamics of these subsystems are explicitly addressed and integrated to reproduce interceptive actions. The approach is exemplified through a discussion of a recently developed model for interceptive actions consisting of a neural network architecture for the online generation of motor outflow commands, based on time-to-contact information and information about the relative positions and velocities of hand and ball. This network is shown to be consistent with both behavioural and neurophysiological data. Finally, some problems are discussed with regard to the question of how the motor outflow commands (i.e. the intended movement) might be modulated in view of the musculoskeletal dynamics.

3.
Comparison of human and humanoid robot control of upright stance
There is considerable recent interest in developing humanoid robots. An important substrate for many motor actions in both humans and biped robots is the ability to maintain a statically or dynamically stable posture. Given the success of the human design, one would expect there are lessons to be learned in formulating a postural control mechanism for robots. In this study we limit ourselves to considering the problem of maintaining upright stance. Human stance control is compared to a suggested method for robot stance control called zero moment point (ZMP) compensation. Results from experimental and modeling studies suggest there are two important subsystems that account for the low- and mid-frequency (DC to 1 Hz) dynamic characteristics of human stance control. These subsystems are (1) a “sensory integration” mechanism whereby orientation information from multiple sensory systems encoding body kinematics (i.e. position, velocity) is flexibly combined to provide an overall estimate of body orientation while allowing adjustments (sensory re-weighting) that compensate for changing environmental conditions and (2) an “effort control” mechanism that uses kinetic-related (i.e., force-related) sensory information to reduce the mean deviation of body orientation from upright. Functionally, ZMP compensation is directly analogous to how humans appear to use kinetic feedback to modify the main sensory integration feedback loop controlling body orientation. However, a flexible sensory integration mechanism is missing from robot control, leaving the robot vulnerable to instability in conditions where humans are able to maintain stance. We suggest the addition of a simple form of sensory integration to improve robot stance control. We also investigate how the biological constraint of feedback time delay influences the human stance control design. The human system may serve as a guide for improved robot control, but should not be directly copied because the constraints on robot and human control are different.
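The ZMP idea in the abstract above can be made concrete with a minimal sketch: the zero moment point (for quasi-static stance, the centre of pressure) is estimated as the pressure-weighted centroid of discrete foot-pressure sensors. The sensor layout and readings below are illustrative assumptions, not from the study.

```python
def zmp(positions, pressures):
    """Zero moment point as the pressure-weighted centroid of sensor positions.

    positions: list of (x, y) sensor coordinates in metres
    pressures: list of non-negative sensor readings (N)
    """
    total = sum(pressures)
    if total == 0:
        raise ValueError("no ground contact")
    x = sum(p * pos[0] for pos, p in zip(positions, pressures)) / total
    y = sum(p * pos[1] for pos, p in zip(positions, pressures)) / total
    return x, y

# Four corner sensors of a hypothetical 0.2 m x 0.1 m foot sole,
# with weight shifted toward the toes (larger x):
sensors = [(0.0, 0.0), (0.2, 0.0), (0.0, 0.1), (0.2, 0.1)]
readings = [10.0, 30.0, 10.0, 30.0]
cop = zmp(sensors, readings)
```

A balance controller of the kind discussed above would servo this point to stay inside the support polygon.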

4.
The ability to learn the location of places in the world and to revisit them repeatedly is crucial for all aspects of animal life on earth. It underpins animal foraging, predator avoidance, territoriality, mating, nest construction and parental care. Much theoretical and experimental progress has recently been made in identifying the sensory cues and the computational mechanisms that allow insects (and robots) to find their way back to places, while the neurobiological mechanisms underlying navigational abilities are beginning to be unravelled in vertebrate and invertebrate models. Studying visual homing in insects is interesting, because they allow experimentation and view-reconstruction under natural conditions, because they are likely to have evolved parsimonious, yet robust solutions to the homing problem and because they force us to consider the viewpoint of navigating animals, including their sensory and computational capacities.

5.
Like most sensory modalities, the visual system needs to deal with very fast changes in the environment. Instead of processing all sensory stimuli, the brain constructs a perceptual experience by combining selected sensory input with ongoing internal activity. Thus, the study of visual perception needs to be approached by examining not only the physical properties of stimuli, but also the brain's ongoing dynamical states onto which these perturbations are imposed. At least three different models account for this internal dynamics. One model is based on cardinal cells, where the activity of a few cells by itself constitutes the neuronal correlate of perception, while a second model is based on population coding, which states that the neuronal correlate of perception requires distributed activity throughout many areas of the brain. A third proposition, known as the temporal correlation hypothesis, states that the distributed neuronal populations that correlate with perception are also defined by synchronization of their activity on a millisecond time scale. This would serve to encode contextual information by defining relations between the features of visual objects. If temporal properties of neural activity are important for establishing the neural mechanisms of perception, then the study of appropriate dynamical stimuli should be instrumental in determining how these systems operate. The use of natural stimuli and natural behaviors such as free viewing, which features fast changes of internal brain states as seen by motor markers, is proposed as a new experimental paradigm for studying visual perception.

6.
The visual homing abilities of insects can be explained by the snapshot hypothesis. It asserts that an animal is guided to a previously visited location by comparing the current view with a snapshot taken at that location. The average landmark vector (ALV) model is a parsimonious navigation model based on the snapshot hypothesis. According to this model, the target location is unambiguously characterized by a signature vector extracted from the snapshot image. This article provides threefold support for the ALV model by synthetic modeling. First, it was shown that a mobile robot using the ALV model returns to the target location with only small position errors. Second, the behavior of the robot resembled the behavior of bees in some experiments. And third, the ALV model was implemented on the robot in analog hardware. This adds validity to the ALV model, since analog electronic circuits share a number of information-processing principles with biological nervous systems; the analog implementation therefore provides suggestions for how visual homing abilities might be implemented in the insect's brain. Received: 15 June 1999 / Accepted in revised form: 20 March 2000
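A minimal sketch of the ALV idea described above: the signature vector is the average of the unit vectors pointing from the agent to each landmark, and the home vector is the difference between the ALV at the current location and the ALV stored at the goal. The landmark positions, step size, and sign convention below are illustrative assumptions, not taken from the article.

```python
import math

def alv(position, landmarks):
    """Average of the unit vectors pointing from `position` to each landmark.

    Assumes `position` never coincides with a landmark (distance > 0).
    """
    vx = vy = 0.0
    for lx, ly in landmarks:
        d = math.hypot(lx - position[0], ly - position[1])
        vx += (lx - position[0]) / d
        vy += (ly - position[1]) / d
    n = len(landmarks)
    return vx / n, vy / n

def home_vector(current, stored_alv, landmarks):
    """Homing direction: difference between the ALV here and the stored ALV."""
    cx, cy = alv(current, landmarks)
    return cx - stored_alv[0], cy - stored_alv[1]

landmarks = [(0.0, 5.0), (5.0, 0.0), (-4.0, -3.0)]
stored = alv((0.0, 0.0), landmarks)    # "snapshot" signature taken at the goal
pos = (3.0, 3.0)                       # start away from the goal
for _ in range(500):                   # repeatedly follow the home vector
    hx, hy = home_vector(pos, stored, landmarks)
    pos = (pos[0] + 0.5 * hx, pos[1] + 0.5 * hy)
```

Iterating the home vector drives the agent back toward the goal, which is the behaviour the robot experiments test.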

7.
In this work, based on behavioural and dynamical evidence, we study simulated agents that can change the feedback from their bodies to accomplish a one-legged walking task, in order to understand the emergence of coupled dynamics for robust behaviour. Agents evolve with evolutionarily defined biases that modify incoming body signals (sensory offsets). We analyse whether these agents become more dependent on their environmentally coupled dynamics than agents without feedback control. The ability to sustain behaviours is tested in lifetime experiments with mutational and sensory perturbations after evolution. Using dynamical systems analysis, this work identifies conditions for the emergence of dynamical mechanisms that remain functional despite sensory perturbations. Results indicate that evolved agents with evolvable sensory offsets depend not only on where in neural space the state of the neural system operates, but also on the transients to which the inner system is driven by sensory signals arising from its interactions with the environment, controller, and agent body. This experimental evidence grounds a discussion of a dynamical systems perspective on behavioural robustness that goes beyond attractors of the controller phase space.

8.
Human subjects standing in a sinusoidally moving visual environment display postural sway with characteristic dynamical properties. We analyzed the spatiotemporal properties of this sway in an experiment in which the frequency of the visual motion was varied. We found a constant gain near 1, which implies that the sway motion matches the spatial parameters of the visual motion for a large range of frequencies. A linear dynamical model with constant parameters was compared quantitatively with the data. Its failure to describe correctly the spatiotemporal properties of the system led us to consider adaptive and nonlinear models. To differentiate between possible alternative structures we directly fitted nonlinear differential equations to the sway and visual motion trajectories on a trial-by-trial basis. We found that the eigenfrequency of the fitted model adapts strongly to the visual motion frequency. The damping coefficient decreases with increasing frequency. This indicates that the system destabilizes its postural state in the inertial frame. This leads to a faster internal dynamics which is capable of synchronizing posture with fast-moving visual environments. Using an algorithm which allows the identification of essentially nonlinear terms of the dynamics we found small nonlinear contributions. These nonlinearities are not consistent with a limit-cycle dynamics, accounting for the robustness of the amplitude of postural sway against frequency variations. We interpret our results in terms of active generation of postural sway specified by sensory information. We also derive a number of conclusions for a behavior-oriented analysis of the postural system.
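The trial-by-trial fitting strategy described above can be sketched as follows: generate sway driven by a visual motion signal from a damped second-order model, then recover the eigenfrequency and damping by regressing the acceleration on velocity, position and drive. The model form and parameter values here are illustrative assumptions, not the authors' exact equations.

```python
import numpy as np

dt, n = 0.01, 5000
omega, zeta, c = 2.0, 0.7, 3.0          # assumed "true" parameters
t = np.arange(n) * dt
# Two drive frequencies keep the regression well-conditioned:
drive = np.sin(1.5 * t) + 0.8 * np.sin(0.7 * t)

# Integrate the sway model  x'' = -2*zeta*omega*x' - omega**2*x + c*drive
x = np.zeros(n)
v = np.zeros(n)
for k in range(n - 1):
    a = -2 * zeta * omega * v[k] - omega ** 2 * x[k] + c * drive[k]
    v[k + 1] = v[k] + a * dt
    x[k + 1] = x[k] + v[k + 1] * dt

# Fit: regress numerical acceleration on [velocity, position, drive]
vel = np.gradient(x, dt)
acc = np.gradient(vel, dt)
A = np.column_stack([vel, x, drive])
coef, *_ = np.linalg.lstsq(A, acc, rcond=None)
omega_hat = float(np.sqrt(-coef[1]))          # recovered eigenfrequency
zeta_hat = float(-coef[0] / (2 * omega_hat))  # recovered damping
```

On real trials the recovered eigenfrequency and damping would then be compared across visual motion frequencies, as the study does.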

9.
The four rhopalia of cubomedusae are integrated parts of the central nervous system, carrying their many eyes and thought to be the centres of visual information processing. Rhopalial pacemakers control locomotion through a complex neural signal transmitted to the ring nerve, and the signal frequency is modulated by the visual input. Since electrical synapses have never been found in the cubozoan nervous system, all signals are thought to be transmitted across chemical synapses, and so far information about the neurotransmitters involved is based on immunocytochemical or behavioural data. Here we present the first direct physiological evidence for the types of neurotransmitters involved in sensory information processing in the rhopalial nervous system. FMRFamide, serotonin and dopamine are shown to have an inhibitory effect on the pacemaker frequency. There are some indications that the fast-acting acetylcholine and glycine have an initial effect and then rapidly desensitise. Other tested neuroactive compounds (GABA, glutamate, and taurine) could not be shown to have a significant effect.

10.
To stabilize our position in space we use visual information as well as non-visual physical motion cues. However, visual cues can be ambiguous: visually perceived motion may be caused by self-movement, movement of the environment, or both. The nervous system must combine the ambiguous visual cues with noisy physical motion cues to resolve this ambiguity and control our body posture. Here we have developed a Bayesian model that formalizes how the nervous system could solve this problem. In this model, the nervous system combines the sensory cues to estimate the movement of the body. We analytically demonstrate that, as long as visual stimulation is fast in comparison to the uncertainty in our perception of body movement, the optimal strategy is to weight visually perceived movement velocities proportional to a power law. We find that this model accounts for the nonlinear influence of experimentally induced visual motion on human postural behavior both in our data and in previously published results.
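A hedged sketch of the qualitative behaviour described above, not the paper's exact model: visually perceived velocity is passed through a power-law weighting, so slow visual motion is taken nearly at face value while fast motion is progressively discounted. The function name, exponent, and scale below are assumptions for illustration only.

```python
def weighted_visual_velocity(v_visual, gain=1.0, v0=1.0, alpha=0.6):
    """Power-law weighting of visually perceived velocity (illustrative).

    v0 sets the velocity scale; alpha < 1 makes the response sublinear,
    so large (likely environment-caused) visual velocities are discounted.
    """
    sign = 1.0 if v_visual >= 0 else -1.0
    return sign * gain * v0 * (abs(v_visual) / v0) ** alpha

slow = weighted_visual_velocity(0.5)   # near face value
fast = weighted_visual_velocity(2.0)   # discounted relative to its size
```

Doubling the stimulus velocity less than doubles the weighted response, which is the nonlinear influence the model is meant to capture.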

11.
Most conventional robots rely on controlling the location of the center of pressure to maintain balance, relying mainly on foot pressure sensors for information. By contrast, humans rely on sensory data from multiple sources, including proprioceptive, visual, and vestibular sources. Several models have been developed to explain how humans reconcile information from disparate sources to form a stable sense of balance. These models may be useful for developing robots that are able to maintain dynamic balance more readily using multiple sensory sources. Since these information sources may conflict, reliance by the nervous system on any one channel can lead to ambiguity in the system state. In humans, experiments that create conflicts between different sensory channels by moving the visual field or the support surface indicate that sensory information is adaptively reweighted. Unreliable information is rapidly down-weighted, then gradually up-weighted when it becomes valid again. Human balance can also be studied by building robots that model features of human bodies and testing them under similar experimental conditions. We implement a sensory reweighting model based on an adaptive Kalman filter in a bipedal robot, and subject it to sensory tests similar to those used on human subjects. Unlike other implementations of sensory reweighting in robots, our implementation includes vision, by using optic flow to calculate forward rotation using a camera (visual modality), as well as a three-axis gyro to represent the vestibular system (non-visual modality), and foot pressure sensors (proprioceptive modality). Our model estimates measurement noise in real time, which is then used to recompute the Kalman gain on each iteration, improving the ability of the robot to dynamically balance. We observe that we can duplicate many important features of postural sway in humans, including automatic sensory reweighting, constant phase with respect to amplitude, and a temporal asymmetry in the reweighting gains.
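A minimal single-channel sketch of sensory reweighting via an adaptive Kalman filter in the spirit described above: the measurement-noise estimate is updated online from the innovation sequence and the gain is recomputed every step, so an unreliable sensor is down-weighted and later rehabilitated. All names and constants are illustrative assumptions, not the robot's actual parameters.

```python
import random

random.seed(1)
x_hat, p = 0.0, 1.0      # state estimate (body tilt, rad) and its variance
q = 1e-4                 # assumed process noise
r_hat = 1.0              # running estimate of measurement noise variance
forget = 0.98            # forgetting factor for the noise estimate

true_tilt = 0.1
for step in range(2000):
    # The sensor degrades for the first half, then recovers:
    noise_std = 2.0 if step < 1000 else 0.1
    z = true_tilt + random.gauss(0.0, noise_std)

    p += q                                  # predict
    innovation = z - x_hat
    # Adapt: track innovation power, attribute the excess over the
    # predicted variance to measurement noise.
    r_hat = forget * r_hat + (1 - forget) * max(innovation ** 2 - p, 1e-6)
    k = p / (p + r_hat)                     # gain recomputed with adapted noise
    x_hat += k * innovation                 # update
    p *= (1 - k)
```

While the sensor is noisy the gain stays small (down-weighting); once it recovers, `r_hat` decays and the filter trusts the sensor again, mirroring the up-/down-weighting asymmetry described above.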

12.
The dynamics of evoked potentials (EPs) to a light signal stimulus during elaboration of an avoidance reaction indicates that, as adaptive activity forms, a functional balance is established between the sensory and integrative-triggering parts of the brain, i.e. between "motor" and "sensory" regimes of integration. Each of the studied subcortical structures functions simultaneously but specifically in both the motor and sensory regimes; this conclusion is based on the differing dynamics of their EP parameters: some changes correspond to the EP dynamics in the visual cortical area, others to those in the motor area. Under chronic haloperidol administration, reorganizations of intercentral relations are observed 10-12 days after the start of drug administration. They may be considered a succession of disturbances of the functional balance between the "sensory" and "motor" regimes of integration: at first the sensory regime dominates, chiefly and uniformly involving the subcortical structures (ncd, pall and n. acc.), while the "motor" regime is weakened; then, as a result, the "motor" regime of integration becomes distorted, and bradykinesia develops.

13.
We explore the use of continuous-time analog very-large-scale-integrated (aVLSI) neuromorphic visual preprocessors together with a robotic platform in generating bio-inspired behaviors. Both the aVLSI motion sensors and the robot behaviors described in this work are inspired by the motion computation in the fly visual system and two different fly behaviors. In most robotic systems, the visual information comes from serially scanned imagers. This restricts the form of computation of the visual image and slows down the input rate to the controller system of the robot, hence increasing the reaction time of the robot. These aVLSI neuromorphic sensors reduce the computational load and power consumption of the robot, thus making it possible to explore continuous-time visuomotor control systems that react in real-time to the environment. The motion sensor provides two outputs: one for the preferred direction and the other for the null direction. These motion outputs are created from the aggregation of six elementary motion detectors that implement a variant of Reichardt's correlation algorithm. The four analog continuous-time outputs from the motion chips go to the control system on the robot which generates a mixture of two behaviors – course stabilization and fixation – from the outputs of these sensors. Since there are only four outputs, the amount of information transmitted to the controller is reduced (as compared to using a CCD sensor), and the reaction time of the robot is greatly decreased. In this work, the robot samples the motion sensors every 3.3 ms during the behavioral experiments. Received: 4 October 1999 / Accepted in revised form: 26 April 2001
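A single Reichardt-type elementary motion detector of the kind aggregated above can be sketched as delay-and-correlate with opponent subtraction: each input is correlated with a delayed copy of its neighbour, and the difference of the two correlations signs the direction of motion. The delay, spatial phase, and stimulus below are illustrative assumptions, not the chip's parameters.

```python
import math

def emd_response(speed, delay=5, n=400):
    """Mean opponent output of one EMD for a grating drifting at `speed`.

    Two receptors see the same sinusoid with a fixed spatial phase offset;
    positive mean output indicates the preferred direction.
    """
    a = [math.sin(0.1 * t * speed) for t in range(n)]           # receptor A
    b = [math.sin(0.1 * t * speed - 0.5) for t in range(n)]     # receptor B, offset
    out = 0.0
    for t in range(delay, n):
        # delay-and-correlate, opponent subtraction:
        out += a[t - delay] * b[t] - b[t - delay] * a[t]
    return out / (n - delay)

pos = emd_response(+1.0)   # preferred direction: positive mean output
neg = emd_response(-1.0)   # null direction: negative mean output
```

Aggregating several such units and splitting the signed output into preferred- and null-direction channels gives the two sensor outputs described above.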

14.
Starting from the viewpoint that complex/chaotic dynamics plays an important role in biological systems, including brains, chaotic dynamics introduced into a recurrent neural network was applied to control. The results of computer experiments were successfully implemented in a novel autonomous roving robot, which can capture only rough, uncertain target information with a few sensors. The robot was employed to solve practical two-dimensional mazes using adaptive neural dynamics generated by the recurrent neural network, in which four prototype simple motions are embedded. Adaptive switching of a system parameter in the neural network yields stationary or chaotic motion depending on the dynamical situation. Hardware implementation and practical experiments show that, in the given two-dimensional mazes, the robot can successfully avoid obstacles and reach the target. We therefore believe that chaotic dynamics has a novel potential capability in control and could be put to practical engineering applications.

15.
We investigate the role of obstacle avoidance in visually guided reaching and grasping movements. We report on a human study in which subjects performed prehensile motion with obstacle avoidance, where the position of the obstacle was systematically varied across trials. These experiments suggest that reaching with obstacle avoidance is organized in a sequential manner, where the obstacle acts as an intermediary target. Furthermore, we demonstrate that the notion of workspace travelled by the hand is embedded explicitly in a forward planning scheme, which is actively involved in detecting obstacles on the way when performing reaching. We find that the gaze proactively coordinates the pattern of eye–arm motion during obstacle avoidance. This study also provides a quantitative assessment of the coupling between eye, arm and hand motion. We show that the coupling follows regular phase dependencies and is unaltered during obstacle avoidance. These observations provide a basis for the design of a computational model. Our controller extends the coupled dynamical systems framework and provides fast and synchronous control of the eyes, the arm and the hand within a single and compact framework, mimicking the corresponding control systems found in humans. We validate our model for visuomotor control of a humanoid robot.

16.
Arthropods exhibit highly efficient solutions to sensorimotor navigation problems. They thus provide a source of inspiration and ideas to robotics researchers. At the same time, attempting to re-engineer these mechanisms in robot hardware and software provides useful insights into how the natural systems might work. This paper reviews three examples of arthropod sensorimotor control systems that have been implemented and tested on robots. First we discuss visual control mechanisms of flies, such as the optomotor reflex and collision avoidance, that have been replicated in analog VLSI (very large scale integration) hardware and used to produce corrective behavior in robot vehicles. Then, we present a robot model of auditory localization in the cricket; and discuss integration of this behavior with the optomotor behavior previously described. Finally we present a model of olfactory search in the moth, which makes use of several sensory cues, and has also been tested using robot hardware. We discuss some of the similarities and differences of the solutions obtained.

17.
Homing is the process by which an autonomous system guides itself to a particular location on the basis of sensory input. In this paper, a method of visual homing using an associative memory based on a simple pattern classifier is described. Homing is accomplished without the use of an explicit world model by utilizing direct associations between learned visual patterns and system motor commands. The method is analyzed in terms of a pattern space and conditions obtained that allow the system performance to be predicted on the basis of statistical measurements on the environment. Results of experiments utilizing the method to guide a robot-mounted camera in a three-dimensional environment are presented.
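The model-free association described above can be sketched as a nearest-neighbour lookup from stored visual patterns to motor commands: no world model, just learned pattern-command pairs. The patterns and command names below are toy values, not from the paper.

```python
def associate(pattern, memory):
    """Return the motor command paired with the closest stored pattern.

    memory: list of (stored_pattern, motor_command) pairs.
    Uses squared Euclidean distance in pattern space.
    """
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    best_pattern, best_command = min(memory, key=lambda e: dist(e[0], pattern))
    return best_command

# Learned associations between coarse visual patterns and commands:
memory = [
    ((0.9, 0.1, 0.1), "turn_left"),
    ((0.1, 0.9, 0.1), "forward"),
    ((0.1, 0.1, 0.9), "turn_right"),
]
command = associate((0.2, 0.8, 0.0), memory)   # a noisy view of pattern 2
```

The "pattern space" analysis in the paper then amounts to asking how far such noisy views can drift before the wrong association wins.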

18.
Experimental studies have shown that responses of ventral intraparietal area (VIP) neurons specialize in head movements and the environment near the head. VIP neurons respond to visual, auditory, and tactile stimuli, smooth pursuit eye movements, and passive and active movements of the head. This study demonstrates mathematical structure on a higher organizational level created within VIP by the integration of a complete set of variables covering face-infringement. Rather than positing dynamics in an a priori defined coordinate system such as those of physical space, we assemble neuronal receptive fields to find out what space of variables VIP neurons together cover. Section 1 presents a view of neurons as multidimensional mathematical objects. Each VIP neuron occupies or is responsive to a region in a sensorimotor phase space, thus unifying variables relevant to the disparate sensory modalities and movements. Convergence on one neuron joins variables functionally, as space and time are joined in relativistic physics to form a unified spacetime. The space of position and motion together forms a neuronal phase space, bridging neurophysiology and the physics of face-infringement. After a brief review of the experimental literature, the neuronal phase space natural to VIP is sequentially characterized, based on experimental data. Responses of neurons indicate variables that may serve as axes of neural reference frames, and neuronal responses have been so used in this study. The space of sensory and movement variables covered by VIP receptive fields joins visual and auditory space to body-bound sensory modalities: somatosensation and the inertial senses. This joining of allocentric and egocentric modalities is in keeping with the known relationship of the parietal lobe to the sense of self in space and to hemineglect, in both humans and monkeys. Following this inductive step, variables are formalized in terms of the mathematics of graph theory to deduce which combinations are complete as a multidimensional neural structure that provides the organism with a complete set of options regarding objects impacting the face, such as acceptance, pursuit, and avoidance. We consider four basic variable types: position and motion of the face and of an external object. Formalizing the four types of variables allows us to generalize to any sensory system and to determine the necessary and sufficient conditions for a neural center (for example, a cortical region) to provide a face-infringement space. We demonstrate that VIP includes at least one such face-infringement space.

19.
Several studies have shown that humans track a moving visual target with their eyes better if the movement of this target is directly controlled by the observer's hand. The improvement in performance has been attributed to coordination control between the arm motor system and the smooth pursuit (SP) system. In such a task, the SP system shows characteristics that differ from those observed during eye-alone tracking: latency (between the target-arm and the eye motion onsets) is shorter, maximum SP velocity is higher and the maximum target motion frequency at which the SP can function effectively is also higher. The aim of this article is to qualitatively evaluate the behavior of a dynamical model simulating the oculomotor system and the arm motor system when both are involved in tracking visual targets. The evaluation is essentially based on a comparison of the behavior of the model with the behavior of human subjects tracking visual targets under different conditions. The model has been introduced and quantitatively evaluated in a companion paper. The model is based on an exchange of internal information between the two sensorimotor systems, mediated by sensory signals (vision, arm muscle proprioception) and motor signals (arm motor command copy). The exchange is achieved by a specialized structure of the central nervous system, previously identified as a part of the cerebellum. Computer simulation of the model yielded results that fit the behavior of human subjects observed during previously reported experiments, both qualitatively and quantitatively. The parallelism between physiology and human behavior on the one hand, and structure and simulation of the model on the other hand, is discussed. Received: 6 March 1997 / Accepted in revised form: 15 July 1997

20.
This article describes an expanded version of a previously proposed motor control scheme, based on rules for combining sensory and motor signals within the central nervous system. Classical control elements of the previous cybernetic circuit were replaced by artificial neural network modules having an architecture based on the connectivity of the cerebellar cortex, and whose functioning is regulated by reinforcement learning. The resulting model was then applied to the motion control of a mechanical, single-joint robot arm actuated by two McKibben artificial muscles. Various biologically plausible learning schemes were studied using both simulations and experiments. After learning, the model was able to accurately pilot the movements of the robot arm, both in velocity and position. Received: 4 September 2000 / Accepted in revised form: 7 November 2001


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号