Similar Documents
Found 20 similar documents (search time: 15 ms)
1.
Williams H, Noble J. Bio Systems, 2007, 87(2-3): 252-259
Continuous-time recurrent neural networks (CTRNNs) are potentially an excellent substrate for the generation of adaptive behaviour in artificial autonomous agents. However, node saturation effects in these networks can leave them insensitive to input and stop signals from propagating. Node saturation is related to the problems of hyper-excitation and quiescence in biological nervous systems, which are thought to be avoided through the existence of homeostatic plastic mechanisms. Analogous mechanisms are here implemented in a variety of CTRNN architectures and are shown to increase node sensitivity and improve signal propagation, with implications for robotics. These results lend support to the view that homeostatic plasticity may prevent quiescence and hyper-excitation in biological nervous systems.
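The core mechanism can be illustrated with a minimal sketch: an Euler-integrated CTRNN whose node biases are nudged by a simple homeostatic rule whenever firing rates leave a target band. The rule, the band, and all constants below are illustrative assumptions, not the plastic mechanisms used in the paper.

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    # Euler-integrated CTRNN step with a toy homeostatic bias rule:
    # quiescent nodes (rate below the band) get their bias pushed up,
    # saturated nodes (rate above the band) get it pulled down, keeping
    # nodes sensitive to input so signals keep propagating.
    def step(y, w, bias, tau, inputs, dt=0.01, eta=0.001, band=(0.2, 0.8)):
        rate = sigmoid(y + bias)                     # node firing rates
        y = y + dt * (-y + w @ rate + inputs) / tau  # standard CTRNN dynamics
        low, high = band
        bias = bias + eta * ((rate < low).astype(float)
                             - (rate > high).astype(float))
        return y, bias

    rng = np.random.default_rng(0)
    n = 5
    y, bias, tau = np.zeros(n), np.zeros(n), np.ones(n)
    w = rng.normal(0.0, 2.0, (n, n))
    for _ in range(1000):
        y, bias = step(y, w, bias, tau, inputs=rng.normal(0.0, 0.5, n))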

2.
This paper presents recent work in computational modelling of diffusing gaseous neuromodulators in biological nervous systems. A variety of interesting and significant properties of such four-dimensional neural signalling systems are demonstrated. It is shown that the morphology of the neuromodulator source plays a highly significant role in the diffusion patterns observed. The paper goes on to describe work in adaptive autonomous systems directly inspired by this: an exploration of the use of virtual diffusing modulators in robot nervous systems built from non-standard artificial neural networks. These virtual chemicals act over space and time, modulating a variety of node and connection properties in the networks. A wide variety of rich dynamics are possible in such systems; in the work described here, evolutionary robotics techniques have been used to harness the dynamics to produce autonomous behaviour in mobile robots. Detailed comparative analyses of evolutionary searches, and search spaces, for robot controllers with and without the virtual gases are introduced. The virtual diffusing modulators are found to provide significant advantages.
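As a toy illustration of the "virtual gas" idea, the sketch below diffuses and decays a modulator concentration on a 2D grid and lets the local concentration scale node gains. The grid-based diffusion rule, decay rate, and gain modulation are assumptions for illustration only; they are not the diffusion model analysed in the paper.

    import numpy as np

    # A virtual modulator diffuses on a 2D grid (discrete Laplacian with
    # periodic boundaries), decays over time, and scales the gain of any
    # node located on the grid.
    def diffuse(conc, emit_at, rate=0.2, decay=0.05, emission=1.0):
        c = conc.copy()
        c[emit_at] += emission                      # the emitting node releases gas
        lap = (np.roll(c, 1, 0) + np.roll(c, -1, 0) +
               np.roll(c, 1, 1) + np.roll(c, -1, 1) - 4.0 * c)
        return (1.0 - decay) * (c + rate * lap)

    grid = np.zeros((20, 20))
    node_pos = [(5, 5), (10, 12), (15, 7)]          # node locations on the grid
    for _ in range(100):
        grid = diffuse(grid, emit_at=(5, 5))
    gains = 1.0 + np.array([grid[p] for p in node_pos])   # modulated node gains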

3.
The acts of learning and memory are thought to emerge from the modifications of synaptic connections between neurons, as guided by sensory feedback during behavior. However, much is unknown about how such synaptic processes can sculpt and are sculpted by neuronal population dynamics and an interaction with the environment. Here, we embodied a simulated network, inspired by dissociated cortical neuronal cultures, with an artificial animal (an animat) through a sensory-motor loop consisting of structured stimuli, detailed activity metrics incorporating spatial information, and an adaptive training algorithm that takes advantage of spike-timing-dependent plasticity. Using our design, we demonstrated that the network was capable of learning associations between multiple sensory inputs and motor outputs, and the animat was able to adapt to a new sensory mapping to restore its goal behavior: move toward and stay within a user-defined area. We further showed that successful learning required proper selection of stimuli to encode sensory inputs and a variety of training stimuli with adaptive selection contingent on the animat's behavior. We also found that an individual network had the flexibility to achieve different multi-task goals, and the same goal behavior could be exhibited with different sets of network synaptic strengths. While lacking the characteristic layered structure of in vivo cortical tissue, the biologically inspired simulated networks could tune their activity in behaviorally relevant ways, demonstrating that leaky integrate-and-fire neural networks have an innate ability to process information. This closed-loop hybrid system is a useful tool for studying the network properties that mediate between synaptic plasticity and behavioral adaptation. The training algorithm provides a stepping stone towards designing future control systems, whether with artificial neural networks or biological animats themselves.
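A hedged sketch of the plasticity rule such a training loop can exploit: a standard pair-based spike-timing-dependent plasticity update, with illustrative time constants that are not taken from the paper.

    import numpy as np

    # Pair-based STDP: potentiate when the presynaptic spike precedes the
    # postsynaptic spike, depress when the order is reversed.
    def stdp_dw(t_pre, t_post, a_plus=0.01, a_minus=0.012, tau=20.0):
        dt = t_post - t_pre                      # spike-time difference in ms
        if dt > 0:
            return a_plus * np.exp(-dt / tau)    # pre before post -> LTP
        return -a_minus * np.exp(dt / tau)       # post before pre -> LTD

    w = 0.5
    for t_pre, t_post in [(10.0, 15.0), (40.0, 38.0), (70.0, 72.0)]:
        w = float(np.clip(w + stdp_dw(t_pre, t_post), 0.0, 1.0))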

4.
In biological systems, instead of actual encoders at different joints, proprioception signals are acquired through distributed receptive fields. In robotics, a single and accurate sensor output per link (encoder) is commonly used to track the position and the velocity. Interfacing conventional robots with bio-inspired control systems built from spiking neural networks that emulate the cerebellum is not a straightforward task. Therefore, it is necessary to adapt this one-dimensional measure (encoder output) into a multidimensional space (inputs for a spiking neural network) to connect, for instance, the spiking cerebellar architecture; i.e. a translation from an analog space into a distributed population coding in terms of spikes. This paper analyzes how evolved receptive fields (optimized towards information transmission) can efficiently generate a sensorimotor representation that facilitates its discrimination from other "sensorimotor states". This can be seen as an abstraction of the Cuneate Nucleus (CN) functionality in a robot-arm scenario. We model the CN as a spiking neuron population coding in time according to the response of mechanoreceptors during a multi-joint movement in a robot joint space. An encoding scheme that takes into account the relative spiking time of the signals propagating from peripheral nerve fibers to second-order somatosensory neurons is proposed. Due to the enormous number of possible encodings, we have applied an evolutionary algorithm to evolve the sensory receptive field representation from a random to an optimized encoding. Following the nature-inspired analogy, evolved configurations have been shown to outperform simple hand-tuned configurations and other homogenized configurations based on the solution provided by the optimization engine (evolutionary algorithm). We have used artificial evolutionary engines as the optimization tool to circumvent nonlinear responses in receptive fields.
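A minimal population-coding sketch in the same spirit: a scalar encoder reading is passed through Gaussian receptive fields, and stronger activation is mapped to an earlier relative spike time. The field centres, width, and latency mapping are illustrative placeholders, not the evolved configurations reported in the paper.

    import numpy as np

    # Translate a one-dimensional joint reading into relative first-spike
    # latencies of a population of Gaussian receptive fields.
    def encode(value, centers, sigma=0.1, t_max=20.0):
        activation = np.exp(-0.5 * ((value - centers) / sigma) ** 2)
        return t_max * (1.0 - activation)        # strong activation -> early spike (ms)

    centers = np.linspace(0.0, 1.0, 8)           # 8 fields covering the joint range
    latencies = encode(0.37, centers)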

5.
Morphology plays an important role in the computational properties of neural systems, affecting both their functionality and the way in which this functionality is developed during life. In computer-based models of neural networks, artificial evolution is often used as a method to explore the space of suitable morphologies. In this paper we critically review the most common methods used to evolve neural morphologies and argue that a more effective, and possibly biologically plausible, method consists of genetically encoding rules of synaptic plasticity along with rules of neural morphogenesis. Some preliminary experiments with autonomous robots are described in order to show the feasibility and advantages of the approach.

6.
The brain of a honeybee contains only 960,000 neurons and its volume is only about 1 mm³. However, it supports impressive behavioral capabilities. Honeybees are equipped with sophisticated sensory systems and have well-developed learning and memory capacities, whose essential mechanisms do not differ drastically from those of vertebrates. Here, I focus on non-elemental forms of learning by honeybees. I show that bees exhibit learning abilities that have been traditionally ascribed to a restricted portion of vertebrates, as they go beyond simple stimulus-stimulus or response-stimulus associations. To relate these abilities to neural structures and functioning in the bee brain, we focus on the antennal lobes and the mushroom bodies. We conclude that there is a fair chance of understanding complex behavior in bees, and of identifying the potential neural substrates underlying such behavior, by adopting a cognitive neuroethological approach. In such an approach, behavioral and neurobiological studies are combined to understand the rules and mechanisms of plastic behavior in a natural context.

7.
Hirst J D, Sternberg M J. Biochemistry, 1992, 31(32): 7211-7218
The applications of artificial neural networks to the prediction of structural and functional features of protein and nucleic acid sequences are reviewed. A brief introduction to neural networks is given, including a discussion of learning algorithms and sequence encoding. The protein applications mostly involve the prediction of secondary and tertiary structure from sequence. The problems in nucleic acid analysis tackled by neural networks are the prediction of translation initiation sites in Escherichia coli, the recognition of splice junctions in human mRNA, and the prediction of promoter sites in E. coli. The performance of the approach is compared with other current statistical methods.
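As a toy illustration of the sequence-encoding step such predictors depend on, the sketch below one-hot encodes a sliding window of residues around each position; the window length and zero-padding are illustrative choices, not those of the reviewed methods.

    import numpy as np

    AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

    # One-hot encode a window of residues centred on each sequence position,
    # padding positions beyond the sequence ends with zeros.
    def window_encode(seq, window=13):
        half = window // 2
        index = {a: i for i, a in enumerate(AMINO_ACIDS)}
        rows = []
        for i in range(len(seq)):
            vec = np.zeros((window, len(AMINO_ACIDS)))
            for j in range(-half, half + 1):
                k = i + j
                if 0 <= k < len(seq) and seq[k] in index:
                    vec[j + half, index[seq[k]]] = 1.0
            rows.append(vec.flatten())
        return np.array(rows)

    X = window_encode("MKTAYIAKQR")              # shape (10, 13 * 20)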

8.
Humans are adept at recognizing patterns and discovering even abstract features that are sometimes embedded in them. Our ability to use the banknotes in circulation for business transactions lies in the effortlessness with which we can recognize the different banknote denominations after seeing them over a period of time. More significantly, we can usually recognize these banknote denominations irrespective of which parts of the banknotes are exposed to us visually. Furthermore, our recognition ability is largely unaffected even when these banknotes are partially occluded. By analogy, the robustness of intelligent systems performing the task of banknote recognition should not collapse under some minimum level of partial occlusion. Artificial neural networks are intelligent systems which from their inception have taken many important cues related to structure and learning rules from the human nervous/cognitive processing system. Likewise, it has been shown that advances in artificial neural network simulations can help us understand the human nervous/cognitive system even further. In this paper, we investigate three hypothetical cognitive frameworks for vision-based recognition of banknote denominations using competitive neural networks. In order to make the task more challenging and stress-test the investigated hypotheses, we also consider the recognition of occluded banknotes. The implemented hypothetical systems are tasked to perform fast recognition of banknotes with up to 75% occlusion. The investigated hypothetical systems are trained on Nigeria's Naira banknotes, and several experiments are performed to demonstrate the findings presented in this work.
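A minimal sketch of the competitive-learning mechanism such systems build on: a winner-take-all update in which the prototype closest to the input moves toward it. The feature dimensionality, learning rate, and random data are illustrative stand-ins, not the paper's banknote features.

    import numpy as np

    # Winner-take-all competitive learning: only the closest prototype is updated.
    def competitive_step(prototypes, x, lr=0.05):
        winner = np.argmin(np.linalg.norm(prototypes - x, axis=1))
        prototypes[winner] += lr * (x - prototypes[winner])
        return winner

    rng = np.random.default_rng(1)
    prototypes = rng.random((4, 64))             # one prototype per denomination
    for x in rng.random((200, 64)):              # stand-in for banknote feature vectors
        competitive_step(prototypes, x)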

9.
Waner S, Wu Y H. Bio Systems, 1988, 21(2): 115-124
We propose an automata-theoretical framework for structured hierarchical control, in terms of rules and meta-rules, for sequences of moves on a graph. This leads to a notion of a "universal" hierarchically structured automaton μ which can move on a given graph in such a way as to emulate any automaton which moves on that graph in response to inputs. This emulation is achieved via a mapping of the inputs in the given automaton to those of μ, and we think of such a mapping as an encoding of the given automaton. We see in several examples that efficient encodings of graph-search algorithms correspond to their natural hierarchical structure (in terms of rules and meta-rules), and this leads one to a precise notion of the "depth" of an automaton which moves on a given graph. By way of application, we discuss a proposed structure of a series of stochastic neural networks which can learn, by example, to encode a given sequence of moves on a graph, so that the encoding obtained is structurally the "natural" one for the given sequence of moves. Thus, such a learning system would perform both structural pattern recognition (in terms of "patterns" of moves) and encoding based on a desired outcome.

10.
Eskov V M, Pyatin V F, Eskov V V, Ilyashenko L K. Biophysics, 2019, 64(2): 293-299
This paper presents two new fundamental principles of the functioning of real neural networks of the brain. These principles have inspired the design of artificial neural networks (a...

11.
Nodes in networks are often of different types, and in this sense networks are differentiated. Here we examine the relationship between network differentiation and network size in networks under economic or natural selective pressure, such as electronic circuits (networks of electronic components), Legos (networks of Lego pieces), businesses (networks of employees), universities (networks of faculty), organisms (networks of cells), ant colonies (networks of ants), and nervous systems (networks of neurons). For each of these we find that (i) differentiation increases with network size, and (ii) the relationship is consistent with a power law. These results are explained by a hypothesis that, because nodes are costly to build and maintain in such "selected networks", network size is optimized, and from this the power-law relationship may be derived. The scaling exponent depends on the particular kind of network, and is determined by the degree to which nodes are used in a combinatorial fashion to carry out network-level functions. We find that networks under natural selection (organisms, ant colonies, and nervous systems) have much higher combinatorial abilities than the networks for which human ingenuity is involved (electronic circuits, Legos, businesses, and universities). A distinct but related optimization hypothesis may be used to explain scaling of differentiation in competitive networks (networks where the nodes themselves, rather than the entire network, are under selective pressure) such as ecosystems (networks of organisms).
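The reported relationship can be checked on any such data set by fitting a power law d = c·n^k in log-log space; the numbers below are made up solely to show the procedure.

    import numpy as np

    n = np.array([10.0, 50.0, 200.0, 1000.0, 5000.0])   # network sizes (illustrative)
    d = np.array([3.0, 7.0, 15.0, 35.0, 80.0])          # node-type counts (illustrative)

    # Linear regression in log-log space gives the scaling exponent k
    # and the prefactor c of d = c * n**k.
    k, log_c = np.polyfit(np.log(n), np.log(d), 1)
    print(f"k ~ {k:.2f}, c ~ {np.exp(log_c):.2f}")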

12.
A fundamental question in the field of artificial neural networks is what set of problems a given class of networks can perform (computability). Such a problem can be made less general, but no less important, by asking what these networks could learn by using a given training procedure (learnability). The basic purpose of this paper is to address the learnability problem. Specifically, it analyses the learnability of sequential RAM-based neural networks. The analytical tools used are those of Automata Theory. In this context, this paper establishes which class of problems and under what conditions such networks, together with their existing learning rules, can learn and generalize. This analysis also yields techniques for both extracting knowledge from and inserting knowledge into the networks. The results presented here, besides helping in a better understanding of the temporal behaviour of sequential RAM-based networks, could also provide useful insights for the integration of the symbolic/connectionist paradigms.

13.
14.
In most animals, natural stimuli are characterized by a high degree of redundancy, limiting the ensemble of ecologically valid stimuli to a significantly reduced subspace of the representation space. Neural encodings can exploit this redundancy and increase sensing efficiency by generating low-dimensional representations that retain all information essential to support behavior. In this study, we investigate whether such an efficient encoding can be found to support a broad range of echolocation tasks in bats. Starting from an ensemble of echo signals collected with a biomimetic sonar system in natural indoor and outdoor environments, we use independent component analysis to derive a low-dimensional encoding of the output of a cochlear model. We show that this compressive encoding retains all essential information. To this end, we simulate a range of psycho-acoustic experiments with bats. In these simulations, we train a set of neural networks to use the encoded echoes as input while performing the experiments. The results show that the neural networks' performance is at least as good as that of the bats. We conclude that our results indicate that efficient encoding of echo information is feasible and, given its many advantages, very likely to be employed by bats. Previous studies have demonstrated that low-dimensional encodings allow for task resolution at a relatively high level. In contrast to previous work in this area, we show that high performance can also be achieved when low-dimensional filters are derived from a data set of realistic echo signals, not tailored to specific experimental conditions.
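A hedged sketch of the encoding step, using FastICA from scikit-learn on random stand-in data: the echo matrix, cochleagram dimensionality, and choice of 20 components are assumptions for illustration, not the paper's data or settings.

    import numpy as np
    from sklearn.decomposition import FastICA

    rng = np.random.default_rng(5)
    echoes = rng.normal(0.0, 1.0, (500, 256))    # stand-in for flattened cochlear-model outputs

    # Derive a low-dimensional set of independent components and project
    # each echo onto them to obtain a compressive encoding.
    ica = FastICA(n_components=20, random_state=0)
    codes = ica.fit_transform(echoes)            # (500, 20) encoded echoes
    reconstructed = ica.inverse_transform(codes) # back-projection for inspection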

15.
Artificial neural networks, taking inspiration from biological neurons, have become an invaluable tool for machine learning applications. Recent studies have developed techniques to effectively tune the connectivity of sparsely-connected artificial neural networks, which have the potential to be more computationally efficient than their fully-connected counterparts and more closely resemble the architectures of biological systems. We here present a normalisation, based on the biophysical behaviour of neuronal dendrites receiving distributed synaptic inputs, that divides the weight of an artificial neuron's afferent contacts by their number. We apply this dendritic normalisation to various sparsely-connected feedforward network architectures, as well as simple recurrent and self-organised networks with spatially extended units. The learning performance is significantly increased, providing an improvement over other widely-used normalisations in sparse networks. The results are two-fold, being both a practical advance in machine learning and an insight into how the structure of neuronal dendritic arbours may contribute to computation.
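The normalisation itself is simple to state in code: divide each unit's incoming weights by its number of afferent (non-zero) contacts before the forward pass. The sketch below applies it to a random sparse layer; the connectivity level and activation function are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(4)

    # Sparse layer with dendritic normalisation: units with many afferent
    # contacts do not dominate simply because they sum more terms.
    mask = (rng.random((32, 100)) < 0.1).astype(float)   # ~10% connectivity
    W = rng.normal(0.0, 1.0, (32, 100)) * mask
    n_afferent = np.maximum(mask.sum(axis=1, keepdims=True), 1.0)

    def forward(x):
        return np.tanh((W / n_afferent) @ x)

    y = forward(rng.normal(0.0, 1.0, 100))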

16.
Many learning rules for neural networks derive from abstract objective functions. The weights in those networks are typically optimized utilizing gradient ascent on the objective function. In those networks each neuron needs to store two variables. One variable, called activity, contains the bottom-up sensory-fugal information involved in the core signal processing. The other variable typically describes the derivative of the objective function with respect to the cell's activity and is exclusively used for learning. This variable allows the objective function's derivative to be calculated with respect to each weight and thus the weight update. Although this approach is widely used, the mapping of such two variables onto physiology is unclear, and these learning algorithms are often considered biologically unrealistic. However, recent research on the properties of cortical pyramidal neurons shows that these cells have at least two sites of synaptic integration, the basal and the apical dendrite, and are thus appropriately described by at least two variables. Here we discuss whether these results could constitute a physiological basis for the described abstract learning rules. As examples we demonstrate an implementation of the backpropagation of error algorithm and a specific self-supervised learning algorithm using these principles. Thus, compared to standard, one-integration-site neurons, it is possible to incorporate interesting properties in neural networks that are inspired by physiology with a modest increase of complexity.
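The two-variable bookkeeping is easiest to see in a textbook backpropagation pass, written below so that each unit explicitly carries an activity and an error term; this is the standard algorithm on a toy XOR task, not the paper's physiological implementation.

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    # One hidden layer; 'a_*' is each unit's activity, 'd_*' the error
    # (objective-derivative) variable it would also need to store.
    def train_step(x, y, W1, W2, lr=0.1):
        a_h = sigmoid(W1 @ x)                        # hidden activity
        a_o = sigmoid(W2 @ a_h)                      # output activity
        d_o = (a_o - y) * a_o * (1.0 - a_o)          # output error
        d_h = (W2.T @ d_o) * a_h * (1.0 - a_h)       # hidden error
        W2 -= lr * np.outer(d_o, a_h)
        W1 -= lr * np.outer(d_h, x)
        return W1, W2

    rng = np.random.default_rng(3)
    W1, W2 = rng.normal(0.0, 0.5, (4, 2)), rng.normal(0.0, 0.5, (1, 4))
    for _ in range(5000):                            # learn XOR as a toy task
        x = rng.integers(0, 2, 2)
        y = np.array([x[0] ^ x[1]], dtype=float)
        W1, W2 = train_step(x.astype(float), y, W1, W2)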

17.
This paper concerns the processing of genomes of artificial (computer-simulated) organisms. Of special interest is the process of translation of genotypes into phenotypes, and the use of the mapping information obtained during such translation. If there exists more than one genetic encoding in a single artificial life model, then the translation may also occur between different encodings. The obtained mapping information allows genes-phenes relationships to be presented visually and interactively to a person, in order to increase understanding of the genotype-to-phenotype translation process and of genetic encoding properties. As the mapping associates parts of the source sequence with the translated destination, it may also be used to trace genes, phenes, and their relationships during simulated evolution. A mapping composition procedure is formally described, and a simple method of visual mapping presentation is established. Finally, advanced visualizations of gene-phene relationships are demonstrated as practical examples of the introduced techniques. These visualizations concern genotypes expressed in various encodings, including an encoding which exhibits polygenic and pleiotropic properties.

18.
The functional significance of alternate forms of plasticity in the brain (such as apoptosis and neurogenesis) is not easily observable with biological methods. Employing Hebbian dynamics for synaptic weight development, a three-layer neural network model of the hippocampus is used to simulate unsupervised (autonomous) learning in the context of apoptosis and neurogenesis. This learning is applied to the characters of a pair of related alphabets, first the Roman and then the Greek, resulting in a set of encodings endogenously developed by the network. The learning performance takes the form of a U-shaped curve, showing that apoptosis and neurogenesis favorably inform memory development. We also discover that networks that converge very quickly on the Roman alphabet take much longer to handle the Greek, while networks which converge over an extended timeframe can then adapt very quickly to the new language. We find that the effect becomes increasingly pronounced as the number of neurons in the dentate gyrus layer decreases, and identify a strong correlation between cases where the Roman alphabet is quickly learned and cases where a few neurons saturate many of their weights almost immediately, minimizing the participation of other neurons. Cases where learning the Roman alphabet requires more time lead to larger numbers of neurons participating, with a larger diversity in synaptic weights. We present an information-theoretic argument about why this implies a better, more flexible learning system and why it leads to faster subsequent correlated Greek alphabet learning, and propose that the reason that apoptosis and neurogenesis work is that they promote this effect.
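A rough sketch of the three ingredients named above: a Hebbian outer-product update, periodic removal of the least active hidden units (apoptosis), and their replacement with freshly initialised ones (neurogenesis). Layer sizes, the replacement schedule, and the weight normalisation are illustrative assumptions, not the model's parameters.

    import numpy as np

    rng = np.random.default_rng(2)

    # Hebbian update with row-normalised weights to keep them bounded.
    def hebbian_step(W, x, lr=0.01):
        h = np.tanh(W @ x)
        W += lr * np.outer(h, x)
        W /= np.maximum(np.linalg.norm(W, axis=1, keepdims=True), 1e-8)
        return h

    W = rng.normal(0.0, 0.1, (30, 26))       # 30 hidden units, 26-letter one-hot input
    activity = np.zeros(30)
    for step in range(2000):
        x = np.eye(26)[rng.integers(26)]     # stand-in for an alphabet character
        activity += np.abs(hebbian_step(W, x))
        if step % 500 == 499:                # apoptosis + neurogenesis every 500 steps
            dead = np.argsort(activity)[:3]              # least active units die
            W[dead] = rng.normal(0.0, 0.1, (3, 26))      # new units take their place
            activity[:] = 0.0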

19.
We propose the architecture of a novel robot system merging biological and artificial intelligence, based on a neural controller connected to an external agent. We initially built a framework that connected a dissociated neural network to a mobile robot system to implement a realistic vehicle. The mobile robot system, comprising a camera and a two-wheeled robot, was designed to execute a target-searching task. We modified the software architecture and developed a home-made stimulation generator to build a bi-directional connection between the biological and the artificial components via simple binomial coding/decoding schemes. In this paper, we utilize a specific hierarchical dissociated neural network for the first time as the neural controller. Based on our work, neural cultures were successfully employed to control an artificial agent, resulting in high performance. Surprisingly, under tetanic stimulation training, the robot performed better and better as the number of training cycles increased, owing to the short-term plasticity of the neural network (a kind of reinforcement learning). Compared to previously reported work, we adopted an effective experimental protocol (i.e. increasing the number of training cycles) to ensure the occurrence of short-term plasticity, and preliminarily demonstrated that the improvement in the robot's performance could be caused independently by the plasticity development of the dissociated neural network. This new framework may provide possible solutions for the learning abilities of intelligent robots through the engineering application of the plasticity of neural networks, and may also offer theoretical inspiration for the next generation of neuro-prostheses based on the bi-directional exchange of information within hierarchical neural networks.

20.
The performance of information processing systems, from artificial neural networks to natural neuronal ensembles, depends heavily on the underlying system architecture. In this study, we compare the performance of parallel and layered network architectures during sequential tasks that require both acquisition and retention of information, thereby identifying tradeoffs between learning and memory processes. During the task of supervised, sequential function approximation, networks produce and adapt representations of external information. Performance is evaluated by statistically analyzing the error in these representations while varying the initial network state, the structure of the external information, and the time given to learn the information. We link performance to complexity in network architecture by characterizing local error landscape curvature. We find that variations in error landscape structure give rise to tradeoffs in performance; these include the ability of the network to maximize accuracy versus minimize inaccuracy and produce specific versus generalizable representations of information. Parallel networks generate smooth error landscapes with deep, narrow minima, enabling them to find highly specific representations given sufficient time. While accurate, however, these representations are difficult to generalize. In contrast, layered networks generate rough error landscapes with a variety of local minima, allowing them to quickly find coarse representations. Although less accurate, these representations are easily adaptable. The presence of measurable performance tradeoffs in both layered and parallel networks has implications for understanding the behavior of a wide variety of natural and artificial learning systems.
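Local error-landscape curvature of the kind analysed here can be probed numerically with finite differences along random parameter directions; the quadratic toy loss below stands in for a trained network's error surface and is purely illustrative.

    import numpy as np

    rng = np.random.default_rng(6)

    # Second-order finite difference of the loss along random unit directions
    # gives a rough picture of local curvature (deep/narrow vs. shallow/rough minima).
    def curvature_samples(loss_fn, theta, n_dirs=20, eps=1e-3):
        curvs = []
        for _ in range(n_dirs):
            v = rng.normal(size=theta.shape)
            v /= np.linalg.norm(v)
            c = (loss_fn(theta + eps * v) - 2.0 * loss_fn(theta)
                 + loss_fn(theta - eps * v)) / eps ** 2
            curvs.append(c)
        return np.array(curvs)

    A = rng.normal(size=(10, 10))
    A = A @ A.T                                  # positive semi-definite toy Hessian
    loss = lambda th: 0.5 * th @ A @ th
    print(curvature_samples(loss, rng.normal(size=10)).mean())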
