Similar Articles
20 similar articles found (search time: 31 ms)
1.
The aim of the present paper is to study the effects of Hebbian learning in random recurrent neural networks with biological connectivity, i.e. sparse connections and separate populations of excitatory and inhibitory neurons. We furthermore consider that the neuron dynamics may occur on a (shorter) time scale than synaptic plasticity, and we consider the possibility of learning rules with passive forgetting. We show that the application of such Hebbian learning leads to drastic changes in the network dynamics and structure. In particular, the learning rule contracts the norm of the weight matrix and yields a rapid decay of the dynamics' complexity and entropy. In other words, the network is rewired by Hebbian learning into a new synaptic structure that emerges with learning on the basis of the correlations that progressively build up between neurons. We also observe that, within this emerging structure, the strongest synapses organize as a small-world network. The second effect of the decay of the weight-matrix spectral radius is a rapid contraction of the spectral radius of the Jacobian matrix. This drives the system through the "edge of chaos", where sensitivity to the input pattern is maximal. Taken together, this scenario matches remarkably well the predictions of theoretical arguments derived from dynamical systems and graph theory.
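The core mechanism described above, a Hebbian term combined with passive forgetting that contracts the weight-matrix norm, can be sketched in a few lines. This is an illustrative toy, not the paper's actual model: the network size, sparsity, rates, and the tanh rate dynamics are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100
exc = np.where(np.arange(N) < 80, 1.0, -1.0)        # 80% excitatory, 20% inhibitory
mask = rng.random((N, N)) < 0.1                     # sparse random connectivity
W = rng.random((N, N)) * mask * exc / np.sqrt(N)    # columns carry the presynaptic sign

x = rng.random(N)
eps, lam = 1e-3, 0.05                               # learning rate, passive forgetting rate
norms = []
for _ in range(200):
    x = np.tanh(W @ x)                              # fast neuronal dynamics
    # slow Hebbian update with passive forgetting: dW = eps*(post*pre) - lam*W
    W += (eps * np.outer(x, x) - lam * W) * mask
    norms.append(np.linalg.norm(W))

print(norms[-1] < norms[0])   # → True: forgetting contracts the weight-matrix norm
```

The forgetting term `-lam * W` dominates the small Hebbian increments, so the norm (and with it the spectral radius) shrinks over learning, as the abstract describes.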

2.
Learning flexible sensori-motor mappings in a complex network
Given the complex structure of the brain, how can synaptic plasticity explain the learning and forgetting of associations when these are continuously changing? We address this question by studying different reinforcement learning rules in a multilayer network in order to reproduce monkey behavior in a visuomotor association task. Our model can only reproduce the learning performance of the monkey if the synaptic modifications depend on the pre- and postsynaptic activity, and if the intrinsic level of stochasticity is low. This favored learning rule is based on reward modulated Hebbian synaptic plasticity and shows the interesting feature that the learning performance does not substantially degrade when adding layers to the network, even for a complex problem.
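A minimal sketch of reward-modulated Hebbian learning of stimulus-action associations, in the spirit of the favored rule above. The one-layer architecture, sizes, learning rate, and noise level are illustrative assumptions, not the paper's multilayer model.

```python
import numpy as np

rng = np.random.default_rng(8)
n_in, n_out = 20, 4
W = rng.standard_normal((n_out, n_in)) * 0.1
mapping = rng.integers(0, n_out, 8)        # arbitrary stimulus -> action associations
eta, sigma = 0.2, 0.1                      # learning rate, low intrinsic stochasticity

correct = []
for t in range(4000):
    s = rng.integers(8)
    x = np.zeros(n_in)
    x[s] = 1.0                             # one-hot stimulus (first 8 inputs used)
    h = W @ x + sigma * rng.standard_normal(n_out)   # noisy postsynaptic activity
    a = int(np.argmax(h))                  # chosen action
    r = 1.0 if a == mapping[s] else -1.0   # reward signal
    y = np.zeros(n_out)
    y[a] = 1.0
    W += eta * r * np.outer(y, x)          # reward-modulated Hebbian update
    correct.append(r > 0)

print(np.mean(correct[-500:]))   # late-trial accuracy approaches 1
```

The update strengthens the chosen pre/post pairing when rewarded and weakens it otherwise, so each stimulus's correct action eventually dominates; with low noise the associations remain stable.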

3.
Animals with rudimentary innate abilities require substantial learning to transform those abilities into useful skills, where a skill can be considered as a set of sensory–motor associations. Using linear neural network models, it is proved that if skills are stored as distributed representations, then within-lifetime learning of part of a skill can induce automatic learning of the remaining parts of that skill. More importantly, it is shown that this “free-lunch” learning (FLL) is responsible for accelerated evolution of skills, when compared with networks which either 1) cannot benefit from FLL or 2) cannot learn. Specifically, it is shown that FLL accelerates the appearance of adaptive behaviour, both in its innate form and as FLL-induced behaviour, and that FLL can accelerate the rate at which learned behaviours become innate.

4.
Presented here is a neuromimetic model for the learning of associations between activity patterns originating from recoding layers. These layers are described as networks of cellular clusters made up of competitive formal neurons. A rule of synaptic plasticity with improved neurobiological realism is proposed; it allows for fast learning of large sets of associations.

5.
Fusi S. Biological Cybernetics 2002, 87(5-6):459-470
Synaptic plasticity is believed to underlie the formation of appropriate patterns of connectivity that stabilize stimulus-selective reverberations in the cortex. Here we present a general quantitative framework for studying the process of learning and memorizing of patterns of mean spike rates. General considerations based on the limitations of material (biological or electronic) synaptic devices show that most learning networks share the palimpsest property: old stimuli are forgotten to make room for the new ones. In order to prevent too-fast forgetting, one can introduce a stochastic mechanism for selecting only a small fraction of synapses to be changed upon the presentation of a stimulus. Such a mechanism can be easily implemented by exploiting the noisy fluctuations in the pre- and postsynaptic activities to be encoded. The spike-driven synaptic dynamics described here can implement such a selection mechanism to achieve slow learning, which is shown to maximize the performance of the network as an associative memory.
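The stochastic selection mechanism, changing only a small random fraction of synapses per stimulus, can be sketched with binary synapses. The population size, the selection probability `q`, and the binary synapse model are illustrative assumptions, not the paper's spike-driven dynamics.

```python
import numpy as np

rng = np.random.default_rng(1)
n_syn = 10_000
w = rng.integers(0, 2, n_syn)      # binary synapses: 0 = depressed, 1 = potentiated
q = 0.02                           # probability that a candidate synapse is selected

def present(w, pre, post, q, rng):
    """One stimulus presentation with stochastic synaptic selection."""
    potentiate = (pre == 1) & (post == 1)       # Hebbian potentiation candidates
    depress    = (pre == 1) & (post == 0)       # depression candidates
    flip = rng.random(w.size) < q               # only a small random fraction changes
    w = w.copy()
    w[potentiate & flip] = 1
    w[depress & flip] = 0
    return w

pre  = rng.integers(0, 2, n_syn)   # pre- and post-synaptic activity for one stimulus
post = rng.integers(0, 2, n_syn)
w_new = present(w, pre, post, q, rng)
frac_changed = np.mean(w_new != w)
print(frac_changed)   # only a tiny fraction (about q/4 here) changes per stimulus
```

Because each stimulus modifies so few synapses, old memories are overwritten slowly, which is the slow-learning regime the abstract argues maximizes associative-memory performance.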

6.
While learning and development are well characterized in feedforward networks, these features are more difficult to analyze in recurrent networks due to the increased complexity of dual dynamics – the rapid dynamics arising from activation states and the slow dynamics arising from learning or developmental plasticity. We present analytical and numerical results that consider dual dynamics in a recurrent network undergoing Hebbian learning with either constant weight decay or weight normalization. Starting from initially random connections, the recurrent network develops symmetric or near-symmetric connections through Hebbian learning. Reciprocity and modularity arise naturally through correlations in the activation states. Additionally, weight normalization may be better than constant weight decay for the development of multiple attractor states that allow a diverse representation of the inputs. These results suggest a natural mechanism by which synaptic plasticity in recurrent networks such as cortical and brainstem premotor circuits could enhance neural computation and the generation of motor programs. Received: 27 April 1998 / Accepted in revised form: 16 March 1999
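A toy illustration of the dual dynamics above: fast activation updates interleaved with slow Hebbian updates under weight normalization. Because the Hebbian co-activation term is symmetric, initially random connections drift toward symmetry. All parameters and the fixed input pattern are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 50
W = rng.standard_normal((N, N)) * 0.1   # initially random recurrent weights
np.fill_diagonal(W, 0.0)

def asymmetry(W):
    """Relative size of the antisymmetric part of W."""
    return np.linalg.norm(W - W.T) / np.linalg.norm(W)

a0 = asymmetry(W)                 # large for random weights
x = np.zeros(N)
u = rng.standard_normal(N)        # a fixed external input pattern
eps = 0.01
for _ in range(500):
    x = np.tanh(W @ x + u)            # fast activation dynamics
    W += eps * np.outer(x, x)         # slow Hebbian update (a symmetric term)
    np.fill_diagonal(W, 0.0)
    W /= np.linalg.norm(W)            # weight normalization (vs. constant decay)

print(asymmetry(W) < a0)   # → True: Hebbian learning drives W toward symmetry
```

Swapping the normalization line for a decay term `W *= (1 - gamma)` gives the constant-weight-decay variant the abstract compares against.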

7.
A short-term memory neural network model with competitive pointers
Building on our previously proposed neural network model of short-term memory [3], we introduce a synaptic competition mechanism and propose a new short-term memory neural network model. The model still consists of two neural networks: one is a content-representation network shared with long-term memory, and the other is a pointer-neuron loop. Competition among the synaptic weights between representation neurons and pointer neurons allows the model to exhibit interference-induced forgetting in short-term memory. The model was simulated on a computer for two psychological experiments: the serial-position effect in free recall and chunking of Chinese characters. The simulation results agree quantitatively with both experiments, indicating that the present model is better suited as a model of short-term memory.

8.
Neural network models describe semantic priming effects by way of mechanisms of activation of neurons coding for words that rely strongly on synaptic efficacies between pairs of neurons. Biologically inspired Hebbian learning defines efficacy values as a function of the activity of pre- and post-synaptic neurons only. It generates only pair associations between words in the semantic network. However, the statistical analysis of large text databases points to the frequent occurrence not only of pairs of words (e.g., “the way”) but also of patterns of more than two words (e.g., “by the way”). The learning of these frequent patterns of words is not reducible to associations between pairs of words but must take into account the higher level of coding of three-word patterns. The processing and learning of patterns of words challenge classical Hebbian learning algorithms used in biologically inspired models of priming. The aim of the present study was to test the effects of patterns on the semantic processing of words and to investigate how an inter-synaptic learning algorithm succeeds at reproducing the experimental data. The experiment manipulates the frequency of occurrence of patterns of three words in a multiple-paradigm protocol. Results show for the first time that target words show stronger priming when embedded in a pattern with the two primes than when merely associated with each prime in pairs. A biologically inspired inter-synaptic learning algorithm is tested that potentiates synapses as a function of the activation of more than two pre- and post-synaptic neurons. Simulations show that the network can learn patterns of three words and reproduce the experimental results.

9.
Motor learning with unstable neural representations
Rokni U, Richardson AG, Bizzi E, Seung HS. Neuron 2007, 54(4):653-666
It is often assumed that learning takes place by changing an otherwise stable neural representation. To test this assumption, we studied changes in the directional tuning of primate motor cortical neurons during reaching movements performed in familiar and novel environments. During the familiar task, tuning curves exhibited slow random drift. During learning of the novel task, random drift was accompanied by systematic shifts of tuning curves. Our analysis suggests that motor learning is based on a surprisingly unstable neural representation. To explain these results, we propose that motor cortex is a redundant neural network, i.e., any single behavior can be realized by multiple configurations of synaptic strengths. We further hypothesize that synaptic modifications underlying learning contain a random component, which causes wandering among synaptic configurations with equivalent behaviors but different neural representations. We use a simple model to explore the implications of these assumptions.

10.
Although William James and, more explicitly, Donald Hebb's theory of cell assemblies already suggested that activity-dependent rewiring of neuronal networks is the substrate of learning and memory, over the last six decades most theoretical work on memory has focused on plasticity of existing synapses in prewired networks. Research in the last decade has emphasized that structural modification of synaptic connectivity is common in the adult brain and tightly correlated with learning and memory. Here we present a parsimonious computational model for learning by structural plasticity. The basic modeling units are “potential synapses” defined as locations in the network where synapses can potentially grow to connect two neurons. This model generalizes well-known previous models for associative learning based on weight plasticity. Therefore, existing theory can be applied to analyze how many memories and how much information structural plasticity can store in a synapse. Surprisingly, we find that structural plasticity largely outperforms weight plasticity and can achieve a much higher storage capacity per synapse. The effect of structural plasticity on the structure of sparsely connected networks is quite intuitive: Structural plasticity increases the “effectual network connectivity”, that is, the network wiring that specifically supports storage and recall of the memories. Further, this model of structural plasticity produces gradients of effectual connectivity in the course of learning, thereby explaining various cognitive phenomena including graded amnesia, catastrophic forgetting, and the spacing effect.
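A Willshaw-style sketch of associative storage under the "potential synapse" idea: synapses grow only at co-active potential locations, and recall sums input over the grown connections. The sizes, densities, and one-shot growth rule are illustrative assumptions, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(3)
N, M, K, n_pat = 200, 200, 10, 15          # pre/post neurons, pattern size, patterns

potential = rng.random((N, M)) < 0.5       # locations where a synapse may grow
actual = np.zeros((N, M), dtype=bool)      # synapses that have actually grown

pats_in  = [rng.choice(N, K, replace=False) for _ in range(n_pat)]
pats_out = [rng.choice(M, K, replace=False) for _ in range(n_pat)]

# structural learning: grow a synapse at every co-active potential location
for p_in, p_out in zip(pats_in, pats_out):
    grow = np.zeros((N, M), dtype=bool)
    grow[np.ix_(p_in, p_out)] = True
    actual |= grow & potential

cue = pats_in[0][:K // 2]                  # partial cue: half of pattern 0's inputs
drive = actual[cue, :].sum(axis=0)         # grown inputs received by each output
recalled = np.argsort(drive)[-K:]          # the K most strongly driven outputs
overlap = len(set(recalled) & set(pats_out[0])) / K
print(overlap)   # most of the stored output pattern is recalled from the cue
```

The `potential` mask is what distinguishes this from pure weight plasticity: information is carried by which of the admissible locations have grown a connection, i.e. by the effectual connectivity.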

11.
A long-standing goal in artificial intelligence is creating agents that can learn a variety of different skills for different problems. In the artificial intelligence subfield of neural networks, a barrier to that goal is that when agents learn a new skill they typically do so by losing previously acquired skills, a problem called catastrophic forgetting. That occurs because, to learn the new task, neural learning algorithms change connections that encode previously acquired skills. How networks are organized critically affects their learning dynamics. In this paper, we test whether catastrophic forgetting can be reduced by evolving modular neural networks. Intuitively, modularity should reduce learning interference between tasks by separating functionality into physically distinct modules in which learning can be selectively turned on or off. Modularity can further improve learning by having a reinforcement learning module separate from sensory processing modules, allowing learning to happen only in response to a positive or negative reward. In this paper, learning takes place via neuromodulation, which allows agents to selectively change the rate of learning for each neural connection based on environmental stimuli (e.g. to alter learning in specific locations based on the task at hand). To produce modularity, we evolve neural networks with a cost for neural connections. We show that this connection cost technique causes modularity, confirming a previous result, and that such sparsely connected, modular networks have higher overall performance because they learn new skills faster while retaining old skills better and because they have a separate reinforcement learning module. Our results suggest (1) that encouraging modularity in neural networks may help us overcome the long-standing barrier of networks that cannot learn new skills without forgetting old ones, and (2) that one benefit of the modularity ubiquitous in the brains of natural animals might be to alleviate the problem of catastrophic forgetting.

12.
Brain networks store new memories using functional and structural synaptic plasticity. Memory formation is generally attributed to Hebbian plasticity, while homeostatic plasticity is thought to have an ancillary role in stabilizing network dynamics. Here we report that homeostatic plasticity alone can also lead to the formation of stable memories. We analyze this phenomenon using a new theory of network remodeling, combined with numerical simulations of recurrent spiking neural networks that exhibit structural plasticity based on firing rate homeostasis. These networks are able to store repeatedly presented patterns and recall them upon the presentation of incomplete cues. Storage is fast, governed by the homeostatic drift. In contrast, forgetting is slow, driven by a diffusion process. Joint stimulation of neurons induces the growth of associative connections between them, leading to the formation of memory engrams. These memories are stored in a distributed fashion throughout the connectivity matrix, and individual synaptic connections have only a small influence. Although memory-specific connections are increased in number, the total numbers of inputs and outputs of neurons undergo only small changes during stimulation. We find that homeostatic structural plasticity induces a specific type of “silent memories”, different from conventional attractor states.

13.
Kalaska JF, Green A. Neuron 2007, 54(4):500-502
In redundant neural networks, many different combinations of connection weights will produce the same output, thereby providing many possible solutions for a given computation. In this issue of Neuron, Rokni et al. propose that the arm movement representations in the cerebral cortex act like redundant networks that drift randomly between different synaptic configurations with equivalent input-output behavior because of random noise in the adaptive learning mechanism.

14.
Non-linear data structure extraction using simple Hebbian networks
We present a class of neural network algorithms based on simple Hebbian learning which allow the finding of higher-order structure in data. The neural networks use negative feedback of activation to self-organise; such networks have previously been shown to be capable of performing principal component analysis (PCA). In this paper, this is extended to exploratory projection pursuit (EPP), which is a statistical method for investigating structure in high-dimensional data sets. As opposed to previous proposals for networks which learn using Hebbian learning, no explicit weight normalisation, decay or weight clipping is required. The results are extended to multiple units and related to both the statistical literature on EPP and the neural network literature on non-linear PCA. Received: 30 May 1994/Accepted in revised form: 18 November 1994
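With a single linear output unit, a negative-feedback Hebbian rule of this kind reduces to an Oja-type update that finds the leading principal component with no explicit normalisation, decay, or clipping. A minimal single-unit sketch; the data dimensions and learning rate are assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)
d = 5
X = rng.standard_normal((5000, d))
X[:, 0] *= 3.0                           # the first axis carries the dominant variance

w = rng.standard_normal(d) * 0.1         # weights of one output unit
eta = 0.01
for x in X:
    y = w @ x                            # feedforward activation
    e = x - y * w                        # negative feedback of activation (residual)
    w += eta * y * e                     # simple Hebbian update on the residual

# w converges to a unit vector along the leading principal component,
# with no explicit weight normalisation, decay, or clipping
print(abs(w[0]), np.linalg.norm(w))
```

The feedback term is what keeps the weights bounded: potentiation of `w` automatically reduces the residual `e`, so no separate normalisation step is needed.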

15.
Artificial neural networks, taking inspiration from biological neurons, have become an invaluable tool for machine learning applications. Recent studies have developed techniques to effectively tune the connectivity of sparsely-connected artificial neural networks, which have the potential to be more computationally efficient than their fully-connected counterparts and more closely resemble the architectures of biological systems. We here present a normalisation, based on the biophysical behaviour of neuronal dendrites receiving distributed synaptic inputs, that divides the weight of an artificial neuron’s afferent contacts by their number. We apply this dendritic normalisation to various sparsely-connected feedforward network architectures, as well as simple recurrent and self-organised networks with spatially extended units. The learning performance is significantly increased, providing an improvement over other widely-used normalisations in sparse networks. The results are two-fold, being both a practical advance in machine learning and an insight into how the structure of neuronal dendrites may contribute to computation.  
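The normalisation itself is simple: divide each unit's afferent weights by the number of its afferent contacts. A sketch under assumed array shapes (the function name and sizes are illustrative, not the paper's implementation):

```python
import numpy as np

def dendritic_normalise(W, mask):
    """Divide each unit's afferent weights by its number of incoming contacts."""
    n_in = mask.sum(axis=1, keepdims=True)       # afferent contact count per unit
    return np.where(n_in > 0, W * mask / np.maximum(n_in, 1), 0.0)

rng = np.random.default_rng(6)
mask = rng.random((4, 8)) < 0.5      # sparse connectivity: 4 units, 8 inputs
W = rng.standard_normal((4, 8))      # raw weights
Wn = dendritic_normalise(W, mask)

k = mask[0].sum()                    # contacts of the first unit
print(np.allclose(Wn[0], W[0] * mask[0] / max(k, 1)))   # → True
```

The effect is that sparsely and densely innervated units receive comparably scaled total input, mirroring the passive attenuation of distributed synaptic input on a dendrite.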

16.
Speed MP. Animal Behaviour 2001, 61(1):205-216
The evolution of aposematism is difficult to explain because: (1) new aposematic morphs will be relatively rare and thus risk extinction during predator education; and (2) aposematic morphs lack the protection of crypsis, and thus appear to invite attacks. I describe a simple method for evaluating whether rare aposematic morphs may be selectively advantaged by their effects on predator psychologies. Using a simulated virtual predator, I consider the advantages that might accrue to dispersed and aggregated morphs if aposematic prey can cause neophobic avoidance, accelerate avoidance learning and decelerate predator forgetting. Simulations show that aposematism is very hard to explain unless there are particular combinations of ecological and psychological factors. If prey are dispersed throughout a locality then aposematism will be favoured only if (1) there is neophobia, learning effects and forgetting or if (2) there are learning effects and warning signals reduce forgetting rates. However, the best scenario for aposematic advantage involves learning rates, forgetting and neophobia when prey are aggregated. Prey aggregation has two important effects. First, it is a highly effective way to maximize the per capita benefits of the neophobia. Second, after an attack on a single prey the benefits of learnt aversions will be immediately conferred on the surviving members of an aggregation without the diluting effects of forgetting. Aggregation therefore provides good protection against forgetting. The simulations thus provide new insights into the complexities of aposematic protection and suggest some important directions for empirical work. Copyright 2001 The Association for the Study of Animal Behaviour.

17.
Changes in neural connectivity are thought to underlie the most permanent forms of memory in the brain. We consider two models, derived from the clusteron (Mel, Adv Neural Inf Process Syst 4:35-42, 1992), to study this method of learning. The models show a direct relationship between the speed of memory acquisition and the probability of forming appropriate synaptic connections. Moreover, the strength of learned associations grows with the number of fibers that have taken part in the learning process. We provide simple and intuitive explanations of these two results by analyzing the distribution of synaptic activations. The obtained insights are then used to extend the model to perform novel tasks: feature detection, and learning spatio-temporal patterns. We also provide an analytically tractable approximation to the model to put these observations on a firm basis. The behavior of both the numerical and analytical models correlate well with experimental results of learning tasks which are thought to require a reorganization of neuronal networks.

18.
Learning and memory in mimicry: II. Do we understand the mimicry spectrum?
The evolution of mimicry is driven by the behaviour of predators. However, there has been little systematic testing of the sensitivity of evolutionary predictions to variations in assumptions about predator learning and forgetting. To test how robust mimicry theory is to such behavioural modifications, we combined sets of rules describing ways in which learning and forgetting might operate in vertebrate predators into 29 computer predator behaviour systems. These systems were applied in simulations of simplified natural mimicry situations, particularly investigating the nature of density dependence and the benefits and losses conferred by mimicry across a spectrum of palatabilities. The classical Batesian-Muellerian spectrum was generated by only two of our 29 predator behaviour systems. Both of these 'classical predators' had extreme asymptotes of learning and fixed-rate, time-dependent forgetting. All edible mimics were treated by them as Batesian in that they parasitized their model's protection and had positive monotonic effects of density on model-mimic attack rates. All defended mimics were treated as Muellerian (Mullerian) in that their presence benefited their model's protection, and showed negative monotonic density effects on attack rates. With the remaining 27 systems, Batesian or Muellerian relationships extended beyond their conventional edibility boundaries. In some cases, Muellerian mimicry extended into the edible region of the 'palatability spectrum' (we term this quasi-Muellerian mimicry), and in others Batesian mimicry extended into the 'unpalatable', defended half of the spectrum (quasi-Batesian mimicry). Although most of the 29 behaviour systems included at least some regions of true Batesian and Muellerian mimicries, if forgetting was triggered by avoidance events (as suggested by J.E. Huheey) rather than by the passage of time, then the mimicry spectrum excluded Muellerian mimicry altogether and was composed of Batesian and quasi-Batesian mimicries. In addition, the classical prediction of monotonic density-dependent predation was shown not to be robust against variations in the forgetting algorithm. Time-based forgetting which is retarded by observations of prey, or which varies its rate according to the degree of pleasantness or unpleasantness of a prey, generates non-monotonic results: at low mimic densities there is a positive effect on attack rates, and at higher densities a negative effect. Overall, the mode of forgetting has a more significant effect on mimetic relationships than the rate of learning. It seems to matter little whether learning and forgetting are switched or gradual functions. Predictions about mimetic evolution are therefore sensitive to assumptions about predator behaviour, though more so to variations in forgetting than in learning rate. Based on findings from animal psychology and mimetic populations, we are able to rule out a number of predator behaviour systems. We suggest that the most credible of our 29 predators are those which generate results incorporating Batesian, quasi-Batesian and Muellerian mimicries across the 'palatability spectrum'.

19.
Brunel N, Hakim V, Isope P, Nadal JP, Barbour B. Neuron 2004, 43(5):745-757
It is widely believed that synaptic modifications underlie learning and memory. However, few studies have examined what can be deduced about the learning process from the distribution of synaptic weights. We analyze the perceptron, a prototypical feedforward neural network, and obtain the optimal synaptic weight distribution for a perceptron with excitatory synapses. It contains more than 50% silent synapses, and this fraction increases with storage reliability: silent synapses are therefore a necessary byproduct of optimizing learning and reliability. Exploiting the classical analogy between the perceptron and the cerebellar Purkinje cell, we fitted the optimal weight distribution to that measured for granule cell-Purkinje cell synapses. The two distributions agreed well, suggesting that the Purkinje cell can learn up to 5 kilobytes of information, in the form of 40,000 input-output associations.
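A rough sketch of why an excitatory-only constraint produces silent synapses: training a perceptron whose weights are clipped at zero, at a high storage load, pins some weights exactly at the zero bound. The sparse inputs, sizes, load, and threshold are illustrative assumptions, not the paper's fitted cerebellar model.

```python
import numpy as np

rng = np.random.default_rng(7)
n_syn, n_pat = 100, 150
# sparse, nonnegative input rates (granule-cell-like activity; an assumption)
X = rng.random((n_pat, n_syn)) * (rng.random((n_pat, n_syn)) < 0.2)
labels = rng.integers(0, 2, n_pat) * 2 - 1          # desired outputs, +1 or -1

w = np.full(n_syn, 0.5)                             # excitatory weights only
theta = float(X.mean(axis=0).sum()) * 0.5           # fixed firing threshold
eta = 0.1
for _ in range(2000):
    i = rng.integers(n_pat)
    out = 1 if X[i] @ w > theta else -1
    if out != labels[i]:
        # perceptron update, clipped at zero: synapses cannot become inhibitory
        w = np.maximum(w + eta * labels[i] * X[i], 0.0)

silent = np.mean(w == 0.0)
print(silent)   # a fraction of synapses ends up exactly silent (weight 0)
```

Weights that "want" to be negative for the stored associations pile up at the zero bound, which is the intuition behind the silent synapses in the optimal distribution derived by the paper.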

20.
In spike-timing-dependent plasticity (STDP), synapses are potentiated or depressed depending on the temporal order and temporal difference of the pre- and post-synaptic signals. We present a biophysical model of STDP which assumes that not only the timing, but also the shapes of these signals influence the synaptic modifications. The model is based on a Hebbian learning rule which correlates the NMDA synaptic conductance with the post-synaptic signal at the synaptic location as the pre- and post-synaptic quantities. Compared to a previous paper [Saudargiene, A., Porr, B., Worgotter, F., 2004. How the shape of pre- and post-synaptic signals can influence STDP: a biophysical model. Neural Computation], here we show that this rule reproduces the generic STDP weight-change curve using real neuronal input signals and combinations of more than two (pre- and post-synaptic) spikes. We demonstrate that the shape of the STDP curve strongly depends on the shape of the depolarising membrane potentials which induce learning. As these potentials vary at different locations of the dendritic tree, the model predicts that synaptic changes are location dependent. The model is extended to account for patterns of more than two spikes of the pre- and post-synaptic cells. The results show that the STDP weight-change curve is also activity dependent.
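The generic STDP weight-change curve that such biophysical models reproduce is commonly summarised by a pair of exponentials. The amplitudes and time constants below are conventional illustrative values, not the model's fitted parameters.

```python
import numpy as np

def stdp(dt, a_plus=1.0, a_minus=0.5, tau_plus=17.0, tau_minus=34.0):
    """Generic STDP window; dt = t_post - t_pre in milliseconds."""
    dt = np.asarray(dt, dtype=float)
    return np.where(dt >= 0,
                    a_plus * np.exp(-dt / tau_plus),     # pre before post: potentiation
                    -a_minus * np.exp(dt / tau_minus))   # post before pre: depression

print(stdp([-50.0, -10.0, 10.0, 50.0]))   # depression branch, then potentiation branch
```

The paper's point is that this curve is not fixed: its shape shifts with the local depolarising membrane potential, so the same rule yields location- and activity-dependent windows.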
