Related Articles
20 related articles found.
1.
A major goal of bio-inspired artificial intelligence is to design artificial neural networks with abilities that resemble those of animal nervous systems. It is commonly believed that two keys for evolving nature-like artificial neural networks are (1) the developmental process that links genes to nervous systems, which enables the evolution of large, regular neural networks, and (2) synaptic plasticity, which allows neural networks to change during their lifetime. So far, these two topics have mainly been studied separately. The present paper shows that they are actually deeply connected. Using a simple operant conditioning task and a classic evolutionary algorithm, we compare three ways to encode plastic neural networks: a direct encoding, a developmental encoding inspired by computational neuroscience models, and a developmental encoding inspired by morphogen gradients (similar to HyperNEAT). Our results suggest that using a developmental encoding could improve the learning abilities of evolved, plastic neural networks. Complementary experiments reveal that this result is likely the consequence of the bias of developmental encodings towards regular structures: (1) in our experimental setup, encodings that tend to produce more regular networks yield networks with better general learning abilities; (2) whatever the encoding, the most regular networks are statistically those with the best learning abilities.

2.
Non-linear data structure extraction using simple Hebbian networks (Total citations: 1; self-citations: 0; citations by others: 1)
We present a class of neural network algorithms based on simple Hebbian learning which allow higher-order structure to be found in data. The neural networks use negative feedback of activation to self-organise; such networks have previously been shown to be capable of performing principal component analysis (PCA). In this paper, this is extended to exploratory projection pursuit (EPP), a statistical method for investigating structure in high-dimensional data sets. In contrast to previous proposals for networks that learn using Hebbian learning, no explicit weight normalisation, decay or weight clipping is required. The results are extended to multiple units and related to both the statistical literature on EPP and the neural network literature on non-linear PCA. Received: 30 May 1994 / Accepted in revised form: 18 November 1994
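The negative-feedback Hebbian network in this abstract performs PCA without explicit weight normalisation or decay. As an illustrative stand-in (not the paper's algorithm), Oja's single-unit rule shows how a plain Hebbian update with an implicit decay term converges to the first principal component:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative stand-in: Oja's rule, a simple Hebbian learner whose weight
# vector converges to the leading principal component of the input data
# without any explicit normalisation step.
def oja_first_pc(X, lr=0.01, epochs=100):
    """Return a unit weight vector approximating the first PC of X."""
    w = rng.normal(size=X.shape[1])
    w /= np.linalg.norm(w)
    for _ in range(epochs):
        for x in X:
            y = w @ x                    # unit activation
            w += lr * y * (x - y * w)    # Hebbian term with implicit decay
    return w / np.linalg.norm(w)

# Synthetic data whose variance is dominated by the first axis.
X = rng.normal(size=(500, 3)) * np.array([3.0, 1.0, 0.5])
w = oja_first_pc(X)
```

After training, `w` aligns (up to sign) with the first coordinate axis, the direction of largest variance.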

3.
Neural network based temporal video segmentation (Total citations: 1; self-citations: 0; citations by others: 1)
The organization of video information in video databases requires automatic temporal segmentation with minimal user interaction. As neural networks are capable of learning the characteristics of various video segments and clustering them accordingly, in this paper, a neural network based technique is developed to segment the video sequence into shots automatically and with a minimum number of user-defined parameters. We propose to employ growing neural gas (GNG) networks and integrate multiple frame difference features to efficiently detect shot boundaries in the video. Experimental results are presented to illustrate the good performance of the proposed scheme on real video sequences.
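As a hedged illustration of one ingredient of such a pipeline (a frame-difference feature, not the GNG network itself), a shot boundary can be flagged where the mean absolute intensity change between consecutive frames spikes. The function name and threshold below are illustrative assumptions:

```python
import numpy as np

# Illustrative frame-difference feature for shot-boundary detection:
# a cut is flagged at frame i when the mean absolute pixel change from
# frame i-1 exceeds a threshold.
def shot_boundaries(frames, threshold):
    """Return indices of frames that start a new shot."""
    diffs = [np.abs(b - a).mean() for a, b in zip(frames, frames[1:])]
    return [i + 1 for i, d in enumerate(diffs) if d > threshold]

# Toy sequence: three dark frames followed by three bright frames,
# i.e. a single hard cut at frame index 3.
frames = [np.zeros((4, 4))] * 3 + [np.full((4, 4), 200.0)] * 3
cuts = shot_boundaries(frames, threshold=50)
```

Real detectors (including the paper's) combine several such difference features rather than thresholding one.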

4.
J Yang, P Li. PLoS ONE 2012, 7(8): e42993
Are explicit versus implicit learning mechanisms reflected in the brain as distinct neural structures, as previous research indicates, or are they distinguished by brain networks that involve overlapping systems with differential connectivity? In this functional MRI study we examined the neural correlates of explicit and implicit learning of artificial grammar sequences. Using effective connectivity analyses we found that brain networks of different connectivity underlie the two types of learning: while both processes involve activation in a set of cortical and subcortical structures, explicit learners engage a network that uses the insula as a key mediator whereas implicit learners evoke a direct frontal-striatal network. Individual differences in working memory also differentially impact the two types of sequence learning.

5.
The Volterra series is a well-known method of describing non-linear dynamic systems. A major limitation of this technique is the difficulty involved in the calculation of the kernels. More recently, artificial neural networks have been used to produce black box models of non-linear dynamic systems. In this paper we show how a certain class of artificial neural networks are equivalent to Volterra series and give the equation for the nth order Volterra kernel in terms of the internal parameters of the network. The technique is then illustrated using a specific non-linear system. The kernels obtained by the method described in the paper are compared with those obtained by a Toeplitz matrix inversion technique. Received: 4 June 1993 / Accepted in revised form: 2 March 1994

6.
7.
MOTIVATION: Apoptosis has drawn the attention of researchers because of its importance in treating some diseases through finding a proper way to block or slow down the apoptosis process. Having understood that caspase cleavage is the key to apoptosis, we find that novel methods and algorithms are essential for studying the specificity of caspase cleavage activity, which in turn supports effective drug design. As bio-basis function neural networks have proven to outperform some conventional neural learning algorithms, we are motivated in this study to investigate their application to the prediction of caspase cleavage sites. RESULTS: Thirteen protein sequences with experimentally determined caspase cleavage sites were downloaded from NCBI. Bayesian bio-basis function neural networks are investigated and compared with single-layer perceptrons, multilayer perceptrons, the original bio-basis function neural networks and support vector machines. The impact on prediction accuracy of the sliding window size used to generate sub-sequences for modelling is studied. The results show that the Bayesian bio-basis function neural network with two Gaussian distributions for model parameters (weights) performed the best, with a highest prediction accuracy of 97.15 +/- 1.13%. AVAILABILITY: The Bayesian bio-basis function neural network package can be obtained by request to the author.
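The sliding-window step mentioned above is straightforward to sketch. The helper below is illustrative (not part of the authors' package); it simply enumerates the fixed-length sub-sequences that would be scored when scanning a protein for candidate cleavage sites:

```python
# Hypothetical helper: enumerate fixed-length sub-sequences of a protein,
# one per possible window position, for use as classifier inputs.
def sliding_windows(seq, size):
    """Return all contiguous sub-sequences of the given window size."""
    if size > len(seq):
        return []
    return [seq[i:i + size] for i in range(len(seq) - size + 1)]

# Toy 9-residue sequence scanned with a window of 4 residues.
windows = sliding_windows("MDEADGQRS", 4)
```

Varying `size` is exactly the experiment the abstract describes: each window size yields a different set of sub-sequences, and hence a different prediction accuracy.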

8.
We investigated the roles of feedback and attention in training a vernier discrimination task as an example of perceptual learning. Human learning even of simple stimuli, such as verniers, relies on more complex mechanisms than previously expected – ruling out simple neural network models. These findings are not just an empirical oddity but are evidence that present models fail to reflect some important characteristics of the learning process. We will list some of the problems of neural networks and develop a new model that solves them by incorporating top-down mechanisms. Contrary to neural networks, in our model learning is not driven by the set of stimuli only. Internal estimations of performance and knowledge about the task are also incorporated. Our model implies that under certain conditions the detectability of only some of the stimuli is enhanced while the overall improvement of performance is attributed to a change of decision criteria. An experiment confirms this prediction. Received: 23 May 1996 / Accepted in revised form: 16 October 1997

9.
A fundamental question in the field of artificial neural networks is what set of problems a given class of networks can perform (computability). Such a problem can be made less general, but no less important, by asking what these networks could learn by using a given training procedure (learnability). The basic purpose of this paper is to address the learnability problem. Specifically, it analyses the learnability of sequential RAM-based neural networks. The analytical tools used are those of Automata Theory. In this context, this paper establishes which class of problems and under what conditions such networks, together with their existing learning rules, can learn and generalize. This analysis also yields techniques for both extracting knowledge from and inserting knowledge into the networks. The results presented here, besides helping in a better understanding of the temporal behaviour of sequential RAM-based networks, could also provide useful insights for the integration of the symbolic/connectionist paradigms.

10.
A long-standing goal in artificial intelligence is creating agents that can learn a variety of different skills for different problems. In the artificial intelligence subfield of neural networks, a barrier to that goal is that when agents learn a new skill they typically do so by losing previously acquired skills, a problem called catastrophic forgetting. That occurs because, to learn the new task, neural learning algorithms change connections that encode previously acquired skills. How networks are organized critically affects their learning dynamics. In this paper, we test whether catastrophic forgetting can be reduced by evolving modular neural networks. Modularity intuitively should reduce learning interference between tasks by separating functionality into physically distinct modules in which learning can be selectively turned on or off. Modularity can further improve learning by having a reinforcement learning module separate from sensory processing modules, allowing learning to happen only in response to a positive or negative reward. In this paper, learning takes place via neuromodulation, which allows agents to selectively change the rate of learning for each neural connection based on environmental stimuli (e.g. to alter learning in specific locations based on the task at hand). To produce modularity, we evolve neural networks with a cost for neural connections. We show that this connection cost technique causes modularity, confirming a previous result, and that such sparsely connected, modular networks have higher overall performance because they learn new skills faster while better retaining old skills, and because they have a separate reinforcement learning module.
Our results suggest (1) that encouraging modularity in neural networks may help us overcome the long-standing barrier of networks that cannot learn new skills without forgetting old ones, and (2) that one benefit of the modularity ubiquitous in the brains of natural animals might be to alleviate the problem of catastrophic forgetting.
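The neuromodulation mechanism described above can be sketched in a few lines: the Hebbian update is gated by a modulatory signal, so a connection changes only when learning is "switched on" by reward. The function name, learning rate, and scalar signals below are illustrative assumptions, not the paper's implementation:

```python
# Sketch of neuromodulated Hebbian plasticity: the update to a weight is
# gated by a modulatory signal m (e.g. a reward), so no reward means no
# change, protecting previously learned skills.
def neuromodulated_update(w, pre, post, m, lr=0.1):
    """Return the new weight after one reward-gated Hebbian step."""
    return w + lr * m * pre * post

w = 0.5
w = neuromodulated_update(w, pre=1.0, post=1.0, m=0.0)        # no reward
w_after = neuromodulated_update(w, pre=1.0, post=1.0, m=1.0)  # rewarded step
```

With `m = 0` the weight is untouched; with `m = 1` it takes an ordinary Hebbian step, which is the selective turning on and off of learning the abstract describes.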

11.
Despite the many successful applications of backpropagation for training multi-layer neural networks, it has many drawbacks. For complex problems it may require a long time to train the networks, or they may not train at all. Long training time can be the result of non-optimal parameters, and it is not easy to choose appropriate parameter values for a particular problem. In this paper, by interconnecting fixed structure learning automata (FSLA) with feedforward neural networks, we apply a learning automata (LA) scheme for adjusting these parameters based on observation of the random response of the neural networks. The main motivation for using learning automata as an adaptation algorithm is their capability for global optimization when dealing with multi-modal surfaces. The feasibility of the proposed method is shown through simulations on three learning problems: exclusive-or, the encoding problem, and digit recognition. The simulation results show that adapting these parameters with this method not only increases the convergence rate of learning but also increases the likelihood of escaping from local minima.

12.
Taking a drug from research and development to clinical application takes a long time, and R&D costs can exceed a billion yuan. With the integration of artificial intelligence into pharmaceutical R&D and the rapid development of bioinformatics, drug-activity data have grown sharply, and traditional experimental approaches to activity prediction can no longer meet the needs of drug development. Using algorithms to assist drug R&D and to solve its various problems can greatly accelerate the process. Traditional machine learning methods, especially random forests, support vector machines and artificial neural networks, can achieve high prediction accuracy for drug activity. Deep learning, with its multi-layer neural networks, can accept high-dimensional input variables without hand-specified input features and can fit relatively complex functions; applying it to drug R&D can further improve efficiency at every stage. The deep learning models most widely applied to drug-activity prediction are deep neural networks (DNN), recurrent neural networks (RNN) and autoencoders (AE), while generative adversarial networks (GAN), owing to their ability to generate data, are often combined with other models for data augmentation. Recent research and applications of deep learning in predicting the activity of drug molecules show that deep learning models exceed both traditional experimental methods and traditional machine learning methods in accuracy and efficiency. Deep learning models are therefore expected to become the most important auxiliary computational models in drug development over the next decade.

13.
In this article, the performance of a hybrid artificial neural network (i.e. scale-free and small-world) was analyzed and its learning curve compared to three other topologies: random, scale-free and small-world, as well as to the chemotaxis neural network of the nematode Caenorhabditis elegans. One hundred equivalent networks (same number of vertices and average degree) were generated for each topology, and each was trained for one thousand epochs. After comparing the mean learning curves of each network topology with that of the C. elegans neural network, we found that the networks built with preferential attachment showed the best learning curves.

14.
A neural network architecture for data classification (Total citations: 1; self-citations: 0; citations by others: 1)
This article presents an architecture of neural networks designed for the classification of data distributed among a high number of classes. A significant gain in the global classification rate can be obtained by using our architecture, which is based on a set of several small neural networks, each discriminating only two classes. The specialization of each neural network simplifies its structure and improves the classification. Moreover, the learning step automatically determines the number of hidden neurons. The discussion is illustrated by tests on databases from the UCI machine learning repository. The experimental results show that this architecture achieves faster learning, simpler neural networks and improved classification performance.
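The one-network-per-pair-of-classes idea can be sketched as follows. For brevity, each "little network" is replaced here by a nearest-class-mean discriminator (an assumption, not the authors' design); the pairwise training and majority voting scheme is the part being illustrated:

```python
import numpy as np
from itertools import combinations

# Sketch of a one-vs-one ensemble: one binary discriminator per pair of
# classes, with the final label chosen by majority vote. Each pairwise
# "little network" is stood in for by a nearest-class-mean rule.
class PairwiseEnsemble:
    def fit(self, X, y):
        self.classes = np.unique(y)
        self.means = {c: X[y == c].mean(axis=0) for c in self.classes}
        self.pairs = list(combinations(self.classes, 2))
        return self

    def predict(self, x):
        votes = {c: 0 for c in self.classes}
        for a, b in self.pairs:
            # each pairwise discriminator votes for the closer class mean
            da = np.linalg.norm(x - self.means[a])
            db = np.linalg.norm(x - self.means[b])
            votes[a if da <= db else b] += 1
        return max(votes, key=votes.get)

# Toy 3-class problem: 3 classes give 3 pairwise discriminators.
X = np.array([[0.0, 0], [0.1, 0], [5, 5], [5.1, 5], [0, 9], [0.1, 9]])
y = np.array([0, 0, 1, 1, 2, 2])
model = PairwiseEnsemble().fit(X, y)
pred = model.predict(np.array([0.05, 0.1]))
```

With K classes this builds K(K-1)/2 small two-class models, each of which is simpler than one monolithic K-class network, mirroring the specialization argument in the abstract.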

15.
We develop a new approach to the design of neural networks which utilizes a collaborative framework of knowledge-driven experience. In contrast to the "standard" way of developing neural networks, which explicitly exploits experimental data, this approach incorporates a mechanism of knowledge-driven experience. The essence of the proposed scheme of learning is to take advantage of the parameters (connections) of neural networks built in the past for the same phenomenon (which might also exhibit some variability over time or space) for which we are interested in constructing a network on the basis of currently available data. We establish a conceptual and algorithmic framework to reconcile these two essential sources of information (data and knowledge) in the development of the network. To make the presentation more focused and provide a detailed quantification of the resulting architecture, we concentrate on the experience-based design of radial basis function neural networks (RBFNNs). We introduce several performance indexes to quantify the effect of utilizing the knowledge residing within the connections of past networks and establish an optimal level of their use. Experimental results are presented for low-dimensional synthetic data and selected datasets available at the Machine Learning Repository.

16.
In standard attractor neural network models, specific patterns of activity are stored in the synaptic matrix, so that they become fixed point attractors of the network dynamics. The storage capacity of such networks has been quantified in two ways: the maximal number of patterns that can be stored, and the stored information measured in bits per synapse. In this paper, we compute both quantities in fully connected networks of N binary neurons with binary synapses, storing patterns with coding level f (the fraction of active neurons per pattern), in the large-N and sparse-coding limits (N → ∞, f → 0). We also derive finite-size corrections that accurately reproduce the results of simulations in networks of tens of thousands of neurons. These methods are applied to three different scenarios: (1) the classic Willshaw model, (2) networks with stochastic learning in which patterns are shown only once (one shot learning), (3) networks with stochastic learning in which patterns are shown multiple times. The storage capacities are optimized over network parameters, which allows us to compare the performance of the different models. We show that finite-size effects strongly reduce the capacity, even for networks of realistic sizes. We discuss the implications of these results for memory storage in the hippocampus and cerebral cortex.
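The classic Willshaw scenario (case 1 above) can be sketched directly: binary synapses are set by a clipped (OR-ed) Hebbian rule, and retrieval thresholds each neuron's summed input at the number of active cue units. The network size and coding level below are illustrative toy values, far smaller than the networks simulated in the paper:

```python
import numpy as np

# Minimal sketch of the Willshaw associative memory with binary synapses.
def store(patterns):
    """OR together the outer products of sparse binary patterns."""
    n = patterns.shape[1]
    W = np.zeros((n, n), dtype=int)
    for p in patterns:
        W |= np.outer(p, p)   # clipped Hebbian rule: synapses saturate at 1
    return W

def recall(W, cue):
    """Fire the units whose dendritic sum reaches the cue's activity count."""
    return (W @ cue >= cue.sum()).astype(int)

n, k = 100, 5   # toy network size and number of active units per pattern
rng = np.random.default_rng(1)
patterns = np.zeros((10, n), dtype=int)
for p in patterns:
    p[rng.choice(n, size=k, replace=False)] = 1

W = store(patterns)
out = recall(W, patterns[0])
```

Every unit of a stored pattern is guaranteed to fire on recall; errors in the Willshaw model are false positives from overlapping patterns, which is why capacity depends so strongly on the coding level.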

17.
Rich clubs arise when nodes that are ‘rich’ in connections also form an elite, densely connected ‘club’. In brain networks, rich clubs incur high physical connection costs but also appear to be especially valuable to brain function. However, little is known about the selection pressures that drive their formation. Here, we take two complementary approaches to this question: firstly we show, using generative modelling, that the emergence of rich clubs in large-scale human brain networks can be driven by an economic trade-off between connection costs and a second, competing topological term. Secondly we show, using simulated neural networks, that Hebbian learning rules also drive the emergence of rich clubs at the microscopic level, and that the prominence of these features increases with learning time. These results suggest that Hebbian learning may provide a neuronal mechanism for the selection of complex features such as rich clubs. The neural networks that we investigate are explicitly Hebbian, and we argue that the topological term in our model of large-scale brain connectivity may represent an analogous connection rule. This putative link between learning and rich clubs is also consistent with predictions that integrative aspects of brain network organization are especially important for adaptive behaviour.

18.
This paper discusses the problem of extending the domain of learning sets and introduces HERBIE, a program which achieves this through graphical procedures rather than via neural networks. It is argued that for theoretical reasons HERBIE is well-suited to serving as a benchmark for measuring generalization efficacy, and therefore to serving as a means of testing claims of emergent distributed intelligence in neural nets. The successful results of tests of HERBIE as a pattern recognizer are presented, and HERBIE's behavior is favorably compared to neural nets for several real generalization problems. Finally, applications of HERBIE independent of its serving as a generalization benchmark, particularly in the area of cognitive science, are discussed.

19.
20.

Background  

We present a novel method of protein fold decoy discrimination using machine learning, more specifically using neural networks. Here, decoy discrimination is represented as a machine learning problem, where neural networks are used to learn the native-like features of protein structures using a set of positive and negative training examples. A set of native protein structures provides the positive training examples, while negative training examples are simulated decoy structures obtained by reversing the sequences of native structures. Various features are extracted from the training dataset of positive and negative examples and used as inputs to the neural networks.
