Similar Documents
1.
Our laboratory investigates how animals acquire sensory data to understand the neural computations that permit complex sensorimotor behaviors. We use the rat whisker system as a model to study active tactile sensing; our aim is to quantitatively describe the spatiotemporal structure of incoming sensory information to place constraints on subsequent neural encoding and processing. In the first part of this paper we describe the steps in the development of a hardware model (a 'sensobot') of the rat whisker array that can perform object feature extraction. We show how this model provides insights into the neurophysiology and behavior of the real animal. In the second part of this paper, we suggest that sensory data acquisition across the whisker array can be quantified using the complete derivative. We use the example of wall-following behavior to illustrate that computing the appropriate spatial gradients across a sensor array would enable an animal or mobile robot to predict the sensory data that will be acquired at the next time step.
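The gradient-based prediction described above can be illustrated in a few lines. This is a hypothetical sketch, not the authors' implementation: in a static environment, the change a moving sensor sees is the advective term of the complete derivative, v·∇s, so estimating the spatial gradient across the array is enough to predict the next samples. All names and the linear-fit setup are illustrative assumptions.

```python
import numpy as np

def predict_next_readings(readings, positions, velocity, dt):
    """Predict each sensor's next sample from the spatial gradient.

    The gradient is estimated by fitting a local linear model
    s(x) ~ a + g . x across the array; the prediction then uses the
    advective term of the complete derivative, v . grad(s)
    (illustrative sketch, not the paper's code).
    """
    positions = np.asarray(positions, dtype=float)
    # Design matrix [1, x, y] for the least-squares plane fit.
    A = np.hstack([np.ones((len(positions), 1)), positions])
    coef, *_ = np.linalg.lstsq(A, np.asarray(readings, dtype=float), rcond=None)
    grad = coef[1:]  # estimated spatial gradient g
    # Each sensor's reading advances by v . g per unit time.
    return np.asarray(readings, dtype=float) + dt * (np.asarray(velocity) @ grad)
```

For a linear field s(x, y) = 2x + 3y and motion along x at unit speed, every sensor's reading is predicted to increase by 2·dt.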

2.
The use of mobile robots is an effective method of validating sensory–motor models of animals in a real environment. The well-characterized sensory–motor systems of insects have been major targets for such modeling. Furthermore, mobile robots implementing such insect models attract engineers who aim to exploit the advantages of biological systems. However, directly comparing the robots with real insects remains difficult, even when the biological systems are successfully modeled, because of the physical differences between them. We developed a hybrid robot to bridge this gap. The hybrid robot is an insect-controlled robot, in which a tethered male silkmoth (Bombyx mori) drives the robot in order to localize an odor source. This robot has three advantages: 1) from a biomimetic perspective, it enables us to evaluate the potential performance of future insect-mimetic robots; 2) from a biological perspective, it enables us to manipulate the closed loop of the onboard insect for further understanding of its sensory–motor system; and 3) it enables comparison with insect models as a reference biological system. In this paper, we review recent work on insect-controlled robots and discuss its significance for both engineering and biology.

3.
A robot navigating in an unstructured environment needs to avoid obstacles in its way and determine free spaces through which it can safely pass. We present here a set of optical-flow-based behaviors that allow a robot moving on a ground plane to perform these tasks. The behaviors operate on a purposive representation of the environment called the “virtual corridor” which is computed as follows: the images captured by a forward-facing camera rigidly attached to the robot are first remapped using a space-variant transformation. Then, optical flow is computed from the remapped image stream. Finally, the virtual corridor is extracted from the optical flow by applying simple but robust statistics. The introduction of a space-variant image preprocessing stage is inspired by biological sensory processing, where the projection and remapping of a sensory input field onto higher-level cortical areas represents a central processing mechanism. Such transformations lead to a significant data reduction, making real-time execution possible. Additionally, they serve to “re-present” the sensory data in terms of ecologically relevant features, thereby simplifying the interpretation by subsequent processing stages. In accordance with these biological principles we have designed a space-variant image transformation, called the polar sector map, which is ideally suited to the navigational task. We have validated our design with simulations in synthetic environments and in experiments with real robots. Received: 1 July 1999 / Accepted in revised form: 20 March 2000
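A space-variant remap of the general kind described can be sketched as pooling pixels into radial rings × angular sectors around the image centre. This is only a guess at the flavor of such a transformation; the paper's actual polar sector map differs in its details, and the function below is an illustrative assumption, not their code.

```python
import numpy as np

def polar_sector_pool(img, n_rings=4, n_sectors=8):
    """Toy space-variant remap: average pixels into (ring, sector)
    bins around the image centre, reducing an HxW image to an
    n_rings x n_sectors representation (a large data reduction,
    as the abstract describes for such transforms)."""
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    r = np.hypot(ys - cy, xs - cx)
    theta = np.arctan2(ys - cy, xs - cx)        # angle in (-pi, pi]
    ring = np.minimum((r / (r.max() + 1e-9) * n_rings).astype(int), n_rings - 1)
    sector = ((theta + np.pi) / (2 * np.pi) * n_sectors).astype(int) % n_sectors
    sums = np.zeros((n_rings, n_sectors))
    counts = np.zeros((n_rings, n_sectors))
    np.add.at(sums, (ring, sector), img)        # accumulate pixel values per bin
    np.add.at(counts, (ring, sector), 1)        # count pixels per bin
    return sums / np.maximum(counts, 1)         # mean per bin (0 for empty bins)
```

A 32×32 image is reduced to 32 numbers here, which is the kind of reduction that makes real-time processing feasible.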

4.
Human walking is a dynamic, partly self-stabilizing process relying on the interaction of the biomechanical design with its neuronal control. The coordination of this process is a very difficult problem, and it has been suggested that it involves a hierarchy of levels, where the lower ones, e.g., interactions between muscles and the spinal cord, are largely autonomous, and where higher-level control (e.g., cortical) arises only pointwise, as needed. This requires an architecture of several nested sensorimotor loops in which the walking process provides feedback signals to the walker's sensory systems, which can be used to coordinate its movements. To complicate the situation, at a maximal walking speed of more than four leg-lengths per second, the cycle period available to coordinate all these loops is rather short. In this study we present a planar biped robot which uses the design principle of nested loops to combine the self-stabilizing properties of its biomechanical design with several levels of neuronal control. Specifically, we show how to adapt control by including online learning mechanisms based on simulated synaptic plasticity. This robot can walk at high speed (>3.0 leg-lengths/s), self-adapting to minor disturbances and reacting robustly to abruptly induced gait changes. At the same time, it can learn to walk on different terrains, requiring only a few learning experiences. This study shows that the tight coupling of physical with neuronal control, guided by sensory feedback from the walking pattern itself and combined with synaptic learning, may be a way forward to better understand and solve coordination problems in other complex motor tasks.
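Online learning based on simulated synaptic plasticity can be illustrated with a toy correlation-based rule. This is an assumption for illustration only, not the robot's actual learning rule: a predictive sensory input acquires synaptic weight in proportion to the temporal change of a later reflex signal, so the learned pathway gradually acts before the reflex has to fire.

```python
def correlation_update(w, x_pred, reflex_now, reflex_prev, mu=0.01):
    """One toy plasticity step (illustrative, not the paper's rule):
    the weight of the predictive input x_pred grows when that input
    coincides with a rising reflex signal."""
    return w + mu * x_pred * (reflex_now - reflex_prev)

# Toy episode: the predictive input is active while the reflex rises,
# so the weight increases across repeated experiences.
w = 0.0
for _ in range(10):
    w = correlation_update(w, x_pred=1.0, reflex_now=1.0, reflex_prev=0.0)
```

After ten such coincidences the weight has grown to 10·mu, shifting control toward the learned pathway.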

5.
Arthropods exhibit highly efficient solutions to sensorimotor navigation problems. They thus provide a source of inspiration and ideas to robotics researchers. At the same time, attempting to re-engineer these mechanisms in robot hardware and software provides useful insights into how the natural systems might work. This paper reviews three examples of arthropod sensorimotor control systems that have been implemented and tested on robots. First we discuss visual control mechanisms of flies, such as the optomotor reflex and collision avoidance, that have been replicated in analog VLSI (very large scale integration) hardware and used to produce corrective behavior in robot vehicles. Then, we present a robot model of auditory localization in the cricket; and discuss integration of this behavior with the optomotor behavior previously described. Finally we present a model of olfactory search in the moth, which makes use of several sensory cues, and has also been tested using robot hardware. We discuss some of the similarities and differences of the solutions obtained.

6.
Comparison of human and humanoid robot control of upright stance
There is considerable recent interest in developing humanoid robots. An important substrate for many motor actions in both humans and biped robots is the ability to maintain a statically or dynamically stable posture. Given the success of the human design, one would expect there are lessons to be learned in formulating a postural control mechanism for robots. In this study we limit ourselves to considering the problem of maintaining upright stance. Human stance control is compared to a suggested method for robot stance control called zero moment point (ZMP) compensation. Results from experimental and modeling studies suggest there are two important subsystems that account for the low- and mid-frequency (DC to 1 Hz) dynamic characteristics of human stance control. These subsystems are (1) a “sensory integration” mechanism whereby orientation information from multiple sensory systems encoding body kinematics (i.e. position, velocity) is flexibly combined to provide an overall estimate of body orientation while allowing adjustments (sensory re-weighting) that compensate for changing environmental conditions and (2) an “effort control” mechanism that uses kinetic-related (i.e., force-related) sensory information to reduce the mean deviation of body orientation from upright. Functionally, ZMP compensation is directly analogous to how humans appear to use kinetic feedback to modify the main sensory integration feedback loop controlling body orientation. However, a flexible sensory integration mechanism is missing from robot control leaving the robot vulnerable to instability in conditions where humans are able to maintain stance. We suggest the addition of a simple form of sensory integration to improve robot stance control. We also investigate how the biological constraint of feedback time delay influences the human stance control design. 
The human system may serve as a guide for improved robot control, but should not be directly copied because the constraints on robot and human control are different.
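In its simplest form, the "sensory integration" subsystem described above amounts to a weighted average of orientation estimates whose weights can be re-assigned (re-weighted) as environmental conditions change. The sketch below is illustrative of the idea only, not the authors' fitted model:

```python
import numpy as np

def integrate_orientation(estimates, reliabilities):
    """Combine body-orientation estimates from several sensory
    channels, weighting each by its current reliability.
    Sensory re-weighting = updating `reliabilities` when a channel
    (e.g. vision) becomes untrustworthy."""
    w = np.asarray(reliabilities, dtype=float)
    w = w / w.sum()                       # normalize weights to sum to 1
    return float(np.dot(w, estimates))    # weighted overall orientation estimate
```

With equal reliabilities the estimate is a plain average; down-weighting one channel pulls the combined estimate toward the others, which is how such a scheme keeps stance stable when one sense is degraded.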

7.
In this article, we present a neurologically motivated computational architecture for visual information processing. The architecture emphasizes multiple strategies: hierarchical processing, parallel and concurrent processing, and modularity. It is modular and expandable in both hardware and software, so that it can also cope with multisensory integration – making it an ideal tool for validating and applying computational neuroscience models in real time under real-world conditions. We apply our architecture in real time to validate a long-standing biologically inspired visual object recognition model, HMAX. In this context, the overall aim is to supply a humanoid robot with the ability to perceive and understand its environment, with a focus on the active aspect of real-time spatiotemporal visual processing. We show that our approach is capable of simulating information processing in the visual cortex in real time and that our entropy-adaptive modification of HMAX achieves higher efficiency and classification performance than the standard model (up to ~+6%).

8.
Eye gaze, an important non-verbal social cue, not only conveys rich information about the direction of others' attention but also elicits a distinctive form of social attention. In recent years, using adapted social-attention tasks, researchers have found that gaze cues can further influence our perceptual processing of various categories of objects (tools, symbols, faces, etc.), as well as higher-level cognitive processes such as subjective evaluation and memory. This influence of gaze cues on object processing is modulated by many factors, such as face attributes, the number of faces, and gaze patterns. Notably, this modulatory effect can occur at an unconscious level and shows a degree of specificity. Moreover, research on the mechanisms underlying this modulation suggests that theory of mind and perspective taking may be involved, although this awaits further investigation. Research on how eye gaze affects object processing helps us understand the forms of social interaction and how humans interact with their environment, and thus has important theoretical significance and social application value.

9.
The control of behaviour is usually understood in terms of three distinct components: sensory processing, decision making and movement control. Recently, this view has been questioned on the basis of physiological and behavioural data that blur the distinction between these three stages. This raises the question of the extent to which the motor system itself can contribute to the interpretation of behavioural situations. To investigate this question, we use a neural model of sensorimotor integration applied to a behaving mobile robot performing a navigation task. We show that the population response of the motor system provides a substrate for the categorization of behavioural situations. This categorization allows for the assessment of the complexity of a behavioural situation and regulates whether higher-level decision making is required to resolve behavioural conflicts. Our model lends credence to an emerging reconceptualization of behavioural control in which the motor system can be considered part of a high-level perceptual system.

10.
In this work, based on behavioural and dynamical evidence, we study simulated agents with the capacity to change the feedback from their bodies while accomplishing a one-legged walking task, in order to understand the emergence of coupled dynamics for robust behaviour. Agents evolve with evolutionarily defined biases that modify incoming body signals (sensory offsets). We analyse whether these agents show a stronger dependence on their environmentally coupled dynamics than agents with no feedback control. The ability to sustain behaviour is tested in lifetime experiments with mutational and sensory perturbations applied after evolution. Using dynamical systems analysis, this work identifies conditions for the emergence of dynamical mechanisms that remain functional despite sensory perturbations. Results indicate that evolved agents with evolvable sensory offsets depend not only on where in neural space the state of the neural system operates, but also on the transients to which the inner system is driven by sensory signals arising from its interactions with the environment, controller, and agent body. The experimental evidence leads to a discussion of behavioural robustness from a dynamical-systems perspective that goes beyond attractors of the controller's phase space.

11.
Abnormal visual information processing is common in patients with schizophrenia. These visual-perceptual dysfunctions involve both higher and lower areas of the visual pathways, indicating that in some patients impairments may exist at both early and late stages of visual processing. Clarifying the neural mechanisms of these perceptual-processing abnormalities is of major significance for understanding the neuropathophysiology of schizophrenia. Visual surround suppression is a widespread visual phenomenon in which the surround suppresses responses to a central visual target, at either the neurophysiological or the perceptual level. Surround suppression is altered in schizophrenia, but the reported impairments are not fully consistent, and the specific neural mechanisms remain unclear. Taking surround suppression as its focus, this article briefly reviews domestic and international research progress on visual surround suppression in schizophrenia at two levels: how surround suppression is altered in the disorder, and its neural mechanisms. Future research should systematically and comprehensively characterize surround-suppression deficits in schizophrenia and combine brain-science techniques to identify the specific neural circuits underlying these abnormalities.

12.
Anthropogenic sensory pollution is affecting ecosystems worldwide. Human actions generate acoustic noise, emanate artificial light and emit chemical substances. All of these pollutants are known to affect animals. Most studies on anthropogenic pollution address the impact of pollutants in unimodal sensory domains. High levels of anthropogenic noise, for example, have been shown to interfere with acoustic signals and cues. However, animals rely on multiple senses, and pollutants often co-occur. Thus, a full ecological assessment of the impact of anthropogenic activities requires a multimodal approach. We describe how sensory pollutants can co-occur and how covariance among pollutants may differ from natural situations. We review how animals combine information that arrives at their sensory systems through different modalities and outline how sensory conditions can interfere with multimodal perception. Finally, we describe how sensory pollutants can affect the perception, behaviour and endocrinology of animals within and across sensory modalities. We conclude that sensory pollution can affect animals in complex ways due to interactions among sensory stimuli, neural processing and behavioural and endocrinal feedback. We call for more empirical data on covariance among sensory conditions, for instance, data on correlated levels of noise and light pollution. Furthermore, we encourage researchers to test animal responses to a full-factorial set of sensory pollutants in the presence or the absence of ecologically important signals and cues. We realize that such an approach is often time- and energy-consuming, but we think this is the only way to fully understand the multimodal impact of sensory pollution on animal performance and perception.

13.
To elucidate the dynamic information processing in the brain that underlies adaptive behavior, it is necessary to understand both the behavior and the corresponding neural activities. This requires animals in which the relationships between behavior and neural activity are clear. Insects are precisely such animals, and one of their adaptive behaviors is high-accuracy odor source orientation. The most direct way to establish these relationships is to record neural activity in the brain of freely behaving insects. Another method is to present stimuli mimicking the natural environment to tethered insects, allowing them to walk or fly in place. In addition to these methods, an ‘insect–machine hybrid system’ is proposed: another experimental system that meets the conditions necessary for approaching the dynamic processing in the insect brain that generates adaptive behavior. The insect–machine hybrid system is an experimental system whose body is a mobile robot. The robot is controlled by the insect through its behavior or through neural activities recorded from its brain. Because we can arbitrarily control the motor output of the robot, we can intervene in the relationship between the insect and its environmental conditions.

14.
We rely on rich and complex sensory information to perceive and understand our environment. Our multisensory experience of the world depends on the brain's remarkable ability to combine signals across sensory systems. Behavioural, neurophysiological and neuroimaging experiments have established principles of multisensory integration and candidate neural mechanisms. Here we review how targeted manipulation of neural activity using invasive and non-invasive neuromodulation techniques has advanced our understanding of multisensory processing. Neuromodulation studies have provided detailed characterizations of the brain networks causally involved in multisensory integration. Despite substantial progress, important questions regarding multisensory networks remain unanswered. Critically, experimental approaches will need to be combined with theory in order to understand how distributed activity across multisensory networks collectively supports perception.

15.
The cerebellum is thought to implement internal models for sensory prediction, but details of the underlying circuitry are currently obscure. We therefore investigated a specific example of internal-model based sensory prediction, namely detection of whisker contacts during whisking. Inputs from the vibrissae in rats can be affected by signals generated by whisker movement, a phenomenon also observable in whisking robots. Robot novelty-detection can be improved by adaptive noise-cancellation, in which an adaptive filter learns a forward model of the whisker plant that allows the sensory effects of whisking to be predicted and thus subtracted from the noisy sensory input. However, the forward model only uses information from an efference copy of the whisking commands. Here we show that the addition of sensory information from the whiskers allows the adaptive filter to learn a more complex internal model that performs more robustly than the forward model, particularly when the whisking-induced interference has a periodic structure. We then propose a neural equivalent of the circuitry required for adaptive novelty-detection in the robot, in which the role of the adaptive filter is carried out by the cerebellum, with the comparison of its output (an estimate of the self-induced interference) and the original vibrissal signal occurring in the superior colliculus, a structure noted for its central role in novelty detection. This proposal makes a specific prediction concerning the whisker-related functions of a region in cerebellar cortical zone A(2) that in rats receives climbing fibre input from the superior colliculus (via the inferior olive). This region has not been observed in non-whisking animals such as cats and primates, and its functional role in vibrissal processing has hitherto remained mysterious. Further investigation of this system may throw light on how cerebellar-based internal models could be used in broader sensory, motor and cognitive contexts.
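The adaptive noise-cancellation scheme described above (learn to predict the self-induced signal from an efference copy, then subtract the prediction so that contacts appear as residual "novelty") can be sketched with a standard LMS filter. This is a generic stand-in for the paper's cerebellar-style filter; tap count and learning rate are illustrative assumptions.

```python
import numpy as np

def lms_noise_cancel(reference, noisy_signal, n_taps=8, mu=0.05):
    """Adaptive noise cancellation via LMS: a filter over the recent
    efference-copy `reference` learns to predict the self-induced
    component of `noisy_signal`; the returned residual is the
    novelty estimate (illustrative parameters)."""
    w = np.zeros(n_taps)
    residual = np.zeros(len(noisy_signal))
    for t in range(n_taps, len(noisy_signal)):
        x = reference[t - n_taps:t]     # recent efference-copy samples
        pred = w @ x                    # predicted self-induced input
        e = noisy_signal[t] - pred      # residual = novelty estimate
        w += mu * e * x                 # LMS weight update
        residual[t] = e
    return residual
```

With a periodic (whisking-like) interference, the residual shrinks toward zero as the filter converges, so an added transient contact would stand out clearly against it.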

16.
Little is known about the brain mechanisms involved in word learning during infancy and in second language acquisition and about the way these new words become stable representations that sustain language processing. In several studies we have adopted the human simulation perspective, studying the effects of brain-lesions and combining different neuroimaging techniques such as event-related potentials and functional magnetic resonance imaging in order to examine the language learning (LL) process. In the present article, we review this evidence focusing on how different brain signatures relate to (i) the extraction of words from speech, (ii) the discovery of their embedded grammatical structure, and (iii) how meaning derived from verbal contexts can inform us about the cognitive mechanisms underlying the learning process. We compile these findings and frame them into an integrative neurophysiological model that tries to delineate the major neural networks that might be involved in the initial stages of LL. Finally, we propose that LL simulations can help us to understand natural language processing and how the recovery from language disorders in infants and adults can be accomplished.

17.
The role of embodied mechanisms in processing sentences endowed with a first person perspective is now widely accepted. However, whether embodied sentence processing within a third person perspective would also have motor behavioral significance remains unknown. Here, we developed a novel version of the Action-sentence Compatibility Effect (ACE) in which participants were asked to perform a movement compatible or not with the direction embedded in a sentence having a first person (Experiment 1: You gave a pizza to Louis) or third person perspective (Experiment 2: Lea gave a pizza to Louis). Results indicate that shifting perspective from first to third person was sufficient to prevent motor embodied mechanisms, abolishing the ACE. Critically, ACE was restored in Experiment 3 by adding a virtual "body" that allowed participants to know "where" to put themselves in space when taking the third person perspective, thus demonstrating that motor embodied processes are space-dependent. A fourth, control experiment, by dissociating motor response from the transfer verb's direction, supported the conclusion that perspective-taking may induce significant ACE only when coupled with the adequate sentence-response mapping.

18.

Background

A significant body of literature is devoted to modeling developmental mechanisms that create patterns within groups of initially equivalent embryonic cells. Although it is clear that these mechanisms do not function in isolation, the timing of and interactions between these mechanisms during embryogenesis is not well known. In this work, a computational approach was taken to understand how lateral inhibition, differential adhesion and programmed cell death can interact to create a mosaic pattern of biologically realistic primary and secondary cells, such as that formed by sensory (primary) and supporting (secondary) cells of the developing chick inner ear epithelium.

Results

Four different models that interlaced cellular patterning mechanisms in a variety of ways were examined and their output compared to the mosaic of sensory and supporting cells that develops in the chick inner ear sensory epithelium. The results show that: 1) no single patterning mechanism can create a 2-dimensional mosaic pattern of the regularity seen in the chick inner ear; 2) cell death was essential to generate the most regular mosaics, even though extensive cell death has not been reported for the developing basilar papilla; 3) a model that includes an iterative loop of lateral inhibition, programmed cell death and cell rearrangements driven by differential adhesion created mosaics of primary and secondary cells that are more regular than the basilar papilla; 4) this same model was much more robust to changes in homo- and heterotypic cell-cell adhesive differences than models that considered either fewer patterning mechanisms or single rather than iterative use of each mechanism.

Conclusion

Patterning the embryo requires collaboration between multiple mechanisms that operate iteratively. Interlacing these mechanisms into feedback loops not only refines the output patterns, but also increases the robustness of patterning to varying initial cell states.
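The lateral-inhibition mechanism these models build on can be shown in a deliberately tiny form, a toy on a 1-D ring of cells, far simpler than the 2-dimensional models in the study: each cell is pushed up by its own level and down by its neighbours', so initially near-equal cells resolve into alternating high (primary) and low (secondary) fates.

```python
import numpy as np

def lateral_inhibition(levels, n_iter=100, strength=0.2):
    """Toy lateral inhibition on a ring of cells (illustrative only):
    each cell's level grows relative to the mean of its two
    neighbours, amplifying small initial differences until the
    states saturate at 0 (secondary) or 1 (primary)."""
    x = np.array(levels, dtype=float)
    for _ in range(n_iter):
        nbr_mean = 0.5 * (np.roll(x, 1) + np.roll(x, -1))
        x += strength * (x - nbr_mean)   # winner grows, loser shrinks
        x = np.clip(x, 0.0, 1.0)         # saturate fates at 0 and 1
    return x
```

Started from a nearly uniform state with a tiny bias on alternating cells, the ring settles into a strict alternating mosaic, which is the 1-D analogue of the checkerboard-like patterns discussed above.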

19.

Background

Different sources of sensory information can interact, often shaping what we think we have seen or heard. This can enhance the precision of perceptual decisions relative to those made on the basis of a single source of information. From a computational perspective, there are multiple reasons why this might happen, and each predicts a different degree of enhanced precision. Relatively slight improvements can arise when perceptual decisions are made on the basis of multiple independent sensory estimates, as opposed to just one. These improvements can arise as a consequence of probability summation. Greater improvements can occur if two initially independent estimates are summated to form a single integrated code, especially if the summation is weighted in accordance with the variance associated with each independent estimate. This form of combination is often described as a Bayesian maximum likelihood estimate. Still greater improvements are possible if the two sources of information are encoded via a common physiological process.

Principal Findings

Here we show that the provision of simultaneous audio and visual speech cues can result in substantial sensitivity improvements, relative to single sensory modality based decisions. The magnitude of the improvements is greater than can be predicted on the basis of either a Bayesian maximum likelihood estimate or a probability summation.

Conclusion

Our data suggest that primary estimates of speech content are determined by a physiological process that takes input from both visual and auditory processing, resulting in greater sensitivity than would be possible if initially independent audio and visual estimates were formed and then subsequently combined.
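The Bayesian maximum-likelihood benchmark referred to above weights each independent cue estimate by its inverse variance, and the combined variance is var_a·var_b/(var_a+var_b), which is always smaller than either single-cue variance. The sketch below is the standard textbook form of this rule, not code from the study:

```python
def mle_combine(est_a, var_a, est_b, var_b):
    """Variance-weighted (maximum-likelihood) combination of two
    independent cue estimates: the less variable cue gets the
    larger weight, and the combined variance beats both cues."""
    wa = var_b / (var_a + var_b)             # weight on cue A
    wb = var_a / (var_a + var_b)             # weight on cue B
    combined = wa * est_a + wb * est_b
    combined_var = (var_a * var_b) / (var_a + var_b)
    return combined, combined_var
```

For two equally reliable cues (var 4 each) estimating 10 and 14, the combined estimate is 12 with variance 2; sensitivity improvements beyond this bound are what the study takes as evidence for a common physiological process.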

20.
Attending where others gaze is one of the most fundamental mechanisms of social cognition. The present study is the first to examine the impact of the attribution of mind to others on gaze-guided attentional orienting and its ERP correlates. Using a paradigm in which attention was guided to a location by the gaze of a centrally presented face, we manipulated participants' beliefs about the gazer: gaze behavior was believed to result either from operations of a mind or from a machine. In Experiment 1, beliefs were manipulated by cue identity (human or robot), while in Experiment 2, cue identity (robot) remained identical across conditions and beliefs were manipulated solely via instruction, which was irrelevant to the task. ERP results and behavior showed that participants' attention was guided by gaze only when gaze was believed to be controlled by a human. Specifically, the P1 was more enhanced for validly, relative to invalidly, cued targets only when participants believed the gaze behavior was the result of a mind, rather than of a machine. This shows that sensory gain control can be influenced by higher-order (task-irrelevant) beliefs about the observed scene. We propose a new interdisciplinary model of social attention, which integrates ideas from cognitive and social neuroscience, as well as philosophy in order to provide a framework for understanding a crucial aspect of how humans' beliefs about the observed scene influence sensory processing.
