Similar Literature
20 similar documents found (search time: 31 ms)
1.
Gaussian processes compare favourably with backpropagation neural networks as a tool for regression, and Bayesian neural networks exhibit Gaussian process behaviour when the number of hidden neurons tends to infinity. We describe a simple recurrent neural network with connection weights trained by one-shot Hebbian learning. This network amounts to a dynamical system that relaxes to a stable state in which it generates predictions identical to those of Gaussian process regression. In effect, an infinite number of hidden units in a feed-forward architecture can be replaced by a merely finite number, together with recurrent connections.
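The claimed equivalence is easy to demonstrate in miniature: the GP posterior mean solves a linear system, and a linear dynamical system relaxing to its stable state solves that system without an explicit matrix inverse. Below is a minimal NumPy sketch of this idea; the Richardson-style relaxation and all parameters are illustrative assumptions, not the paper's actual network construction.

```python
import numpy as np

def rbf_kernel(A, B, ell=1.0):
    """Squared-exponential kernel matrix between the rows of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell ** 2)

def gp_mean_by_relaxation(X, y, X_star, noise=0.1, steps=5000):
    """Relax the linear dynamics a <- a + eta * (y - (K + noise*I) @ a) to its
    stable state; the fixed point solves (K + noise*I) a = y, so the readout
    k_*^T a equals the GP posterior mean."""
    K = rbf_kernel(X, X) + noise * np.eye(len(X))
    eta = 1.0 / np.linalg.eigvalsh(K).max()   # step size small enough to converge
    a = np.zeros(len(X))
    for _ in range(steps):
        a += eta * (y - K @ a)                # the 'relaxation' dynamics
    return rbf_kernel(X_star, X) @ a

rng = np.random.default_rng(0)
X = np.linspace(0, 5, 20)[:, None]
y = np.sin(X).ravel() + 0.05 * rng.normal(size=20)
X_star = np.linspace(0, 5, 50)[:, None]
m_relax = gp_mean_by_relaxation(X, y, X_star)
m_exact = rbf_kernel(X_star, X) @ np.linalg.solve(
    rbf_kernel(X, X) + 0.1 * np.eye(20), y)
print(np.allclose(m_relax, m_exact, atol=1e-6))   # True: relaxation = GP regression
```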

2.
The study of experience-dependent plasticity has been dominated by questions of how Hebbian plasticity mechanisms act during learning and development. This is unsurprising as Hebbian plasticity constitutes the most fully developed and influential model of how information is stored in neural circuits and how neural circuitry can develop without extensive genetic instructions. Yet Hebbian plasticity may not be sufficient for understanding either learning or development: the dramatic changes in synapse number and strength that can be produced by this kind of plasticity tend to threaten the stability of neural circuits. Recent work has suggested that, in addition to Hebbian plasticity, homeostatic regulatory mechanisms are active in a variety of preparations. These mechanisms alter both the synaptic connections between neurons and the intrinsic electrical properties of individual neurons, in such a way as to maintain some constancy in neuronal properties despite the changes wrought by Hebbian mechanisms. Here we review the evidence for homeostatic plasticity in the central nervous system, with special emphasis on results from cortical preparations.

3.
Learning flexible sensori-motor mappings in a complex network
Given the complex structure of the brain, how can synaptic plasticity explain the learning and forgetting of associations when these are continuously changing? We address this question by studying different reinforcement learning rules in a multilayer network in order to reproduce monkey behavior in a visuomotor association task. Our model can only reproduce the learning performance of the monkey if the synaptic modifications depend on the pre- and postsynaptic activity, and if the intrinsic level of stochasticity is low. This favored learning rule is based on reward modulated Hebbian synaptic plasticity and shows the interesting feature that the learning performance does not substantially degrade when adding layers to the network, even for a complex problem.
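The favored rule family (a Hebbian weight change gated by a reward signal, with low intrinsic noise) can be sketched in a few lines. The single-layer network, task and parameters below are illustrative stand-ins, not the paper's multilayer model:

```python
import numpy as np

rng = np.random.default_rng(0)
n_stimuli, n_actions = 4, 4
W = rng.normal(0.0, 0.1, (n_actions, n_stimuli))  # single plastic layer for brevity
eta, sigma = 0.1, 0.1                             # learning rate, low intrinsic noise
mapping = [2, 0, 3, 1]                            # arbitrary stimulus -> action rule

for epoch in range(200):
    correct = 0
    for s in range(n_stimuli):
        x = np.eye(n_stimuli)[s]                        # presynaptic one-hot cue
        y = W @ x + sigma * rng.normal(size=n_actions)  # noisy postsynaptic rates
        reward = 1.0 if int(np.argmax(y)) == mapping[s] else -1.0
        W += eta * reward * np.outer(y, x)              # Hebbian term gated by reward
        correct += reward > 0
print("accuracy in final epoch:", correct / n_stimuli)
```

Because the same pre/post correlation is either reinforced or suppressed depending on the reward, the rule needs the exploration noise sigma to discover the correct action, but too much noise would wash out the learned mapping, which mirrors the abstract's point about low stochasticity.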

4.
This article proposes a stochastic method for determining the number of hidden nodes of a multilayer perceptron trained by a backpropagation algorithm. During the learning process, an auxiliary Markovian algorithm controls the sizing of the hidden layers. As usual, the main idea is to promote the addition of nodes the closer the net is to a stall configuration, and to remove those units that are not sufficiently "lively". The combined algorithm produces families of nets which converge quickly towards well-trained nets with a small number of nodes. Numerical experiments are performed both on conventional benchmarks and on realistic learning problems. These experiments show that, for learning tasks of sufficiently high complexity, the additional complexity of our method (relative to conventional fixed-architecture methods) is compensated by faster convergence and a higher success rate in reaching the minimum of the error function. Received: 7 December 1992 / Accepted in revised form: 23 September 1993
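A minimal runnable sketch of the control-loop idea: grow the hidden layer stochastically when the loss curve approaches a stall, and prune units whose activations barely vary. The stall heuristic, liveliness measure and XOR task are assumptions for illustration; the paper's actual auxiliary algorithm is a Markov chain over layer sizes.

```python
import numpy as np

rng = np.random.default_rng(1)

class SizingMLP:
    """One-hidden-layer MLP (tanh hidden units, linear output, bias via an
    appended constant input) whose hidden layer can grow and shrink online."""

    def __init__(self, n_in, n_hidden, n_out, lr=0.5):
        self.W1 = rng.normal(0, 0.5, (n_hidden, n_in))
        self.W2 = rng.normal(0, 0.5, (n_out, n_hidden))
        self.lr = lr

    def train_epoch(self, X, Y):
        H = np.tanh(X @ self.W1.T)                       # hidden activations
        E = H @ self.W2.T - Y                            # output error
        dW2 = E.T @ H / len(X)
        dW1 = ((E @ self.W2) * (1 - H ** 2)).T @ X / len(X)
        self.W2 -= self.lr * dW2
        self.W1 -= self.lr * dW1
        return float((E ** 2).mean()), H

    def add_unit(self):                                  # grow near a stall
        self.W1 = np.vstack([self.W1, rng.normal(0, 0.5, (1, self.W1.shape[1]))])
        self.W2 = np.hstack([self.W2, np.full((self.W2.shape[0], 1), 0.01)])

    def remove_unit(self, j):                            # prune a dead unit
        self.W1 = np.delete(self.W1, j, axis=0)
        self.W2 = np.delete(self.W2, j, axis=1)

X = np.array([[0, 0, 1], [0, 1, 1], [1, 0, 1], [1, 1, 1]], float)  # XOR + bias input
Y = np.array([[0], [1], [1], [0]], float)
net, losses = SizingMLP(3, 1, 1), []                     # start too small on purpose
for epoch in range(3000):
    loss, H = net.train_epoch(X, Y)
    losses.append(loss)
    stalled = epoch > 20 and losses[-20] - loss < 1e-4 and loss > 0.01
    if stalled and rng.random() < 0.05 and net.W1.shape[0] < 10:
        net.add_unit()                                   # stochastic growth
    for j in reversed(np.where(H.std(axis=0) < 1e-3)[0]):
        if net.W1.shape[0] > 1:
            net.remove_unit(j)                           # drop non-"lively" units
print("hidden units:", net.W1.shape[0], " final loss:", round(loss, 4))
```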

5.
Understanding the theoretical foundations of how memories are encoded and retrieved in neural populations is a central challenge in neuroscience. A popular theoretical scenario for modeling memory function is the attractor neural network scenario, whose prototype is the Hopfield model. The model simplicity and the locality of the synaptic update rules come at the cost of a poor storage capacity, compared with the capacity achieved with perceptron learning algorithms. Here, by transforming the perceptron learning rule, we present an online learning rule for a recurrent neural network that achieves near-maximal storage capacity without an explicit supervisory error signal, relying only upon locally accessible information. The fully connected network consists of excitatory binary neurons with plastic recurrent connections and non-plastic inhibitory feedback stabilizing the network dynamics; the memory patterns to be memorized are presented online as strong afferent currents, producing a bimodal distribution for the neuron synaptic inputs. Synapses corresponding to active inputs are modified as a function of the value of the local fields with respect to three thresholds. Above the highest threshold, and below the lowest threshold, no plasticity occurs. In between these two thresholds, potentiation/depression occurs when the local field is above/below an intermediate threshold. We simulated and analyzed a network of binary neurons implementing this rule and measured its storage capacity for different sizes of the basins of attraction. The storage capacity obtained through numerical simulations is shown to be close to the value predicted by analytical calculations. We also measured the dependence of capacity on the strength of external inputs. Finally, we quantified the statistics of the resulting synaptic connectivity matrix, and found that both the fraction of zero weight synapses and the degree of symmetry of the weight matrix increase with the number of stored patterns.
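The three-threshold rule is stated precisely enough in the abstract to transcribe almost directly. A minimal sketch (the threshold values and the toy usage are illustrative):

```python
import numpy as np

def three_threshold_update(w, x, h, th_low, th_mid, th_high, dw=0.05):
    """One step of the three-threshold rule: synapses with active inputs are
    potentiated if the local field h lies in (th_mid, th_high), depressed if it
    lies in (th_low, th_mid]; outside (th_low, th_high) nothing changes."""
    if h >= th_high or h <= th_low:
        return w                        # field in a stable zone: no plasticity
    w = w.copy()
    active = x > 0                      # only synapses with active inputs change
    w[active] += dw if h > th_mid else -dw
    return np.maximum(w, 0.0)           # keep excitatory weights non-negative

rng = np.random.default_rng(0)
n = 100
pattern = (rng.random(n) < 0.5).astype(float)
for h0 in (6.0, 4.0):                   # start on either side of th_mid = 5
    w = np.full(n, h0 / pattern.sum())
    for _ in range(200):
        w = three_threshold_update(w, pattern, w @ pattern,
                                   th_low=2.0, th_mid=5.0, th_high=8.0)
    print(f"initial field {h0} -> final field {w @ pattern:.1f}")
```

As the toy run shows, fields drift away from the intermediate threshold into the two no-plasticity zones, which is what sharpens the bimodal distribution of synaptic inputs described in the abstract.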

6.
Systems memory consolidation involves the transfer of memories across brain regions and the transformation of memory content. For example, declarative memories that transiently depend on the hippocampal formation are transformed into long-term memory traces in neocortical networks, and procedural memories are transformed within cortico-striatal networks. These consolidation processes are thought to rely on replay and repetition of recently acquired memories, but the cellular and network mechanisms that mediate the changes of memories are poorly understood. Here, we suggest that systems memory consolidation could arise from Hebbian plasticity in networks with parallel synaptic pathways, two ubiquitous features of neural circuits in the brain. We explore this hypothesis in the context of hippocampus-dependent memories. Using computational models and mathematical analyses, we illustrate how memories are transferred across circuits and discuss why their representations could change. The analyses suggest that Hebbian plasticity mediates consolidation by transferring a linear approximation of a previously acquired memory into a parallel pathway. Our modelling results are also in quantitative agreement with lesion studies in rodents. Moreover, a hierarchical iteration of the mechanism yields power-law forgetting, as observed in psychophysical studies in humans. The predicted circuit mechanism thus bridges spatial scales from single cells to cortical areas and time scales from milliseconds to years.
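In the simplest linear setting, the transfer mechanism reduces to a Hebbian rule copying an input-output mapping into a parallel pathway during replay. A schematic sketch with rate neurons and white replay activity (the parameters and the white-input assumption are illustrative, not the paper's circuit model):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
w = rng.normal(0.0, 1.0, n)        # pathway that already stores the memory
v = np.zeros(n)                    # parallel pathway, initially empty
eta = decay = 0.002                # Hebbian rate and passive weight decay

for _ in range(10000):             # 'replay' of random activity patterns
    x = rng.normal(0.0, 1.0, n)    # replayed input (white: covariance = identity)
    y = w @ x                      # output still driven by the old pathway
    v += eta * y * x - decay * v   # Hebbian copy into the parallel pathway

cos = v @ w / (np.linalg.norm(v) * np.linalg.norm(w))
print(f"alignment of transferred weights with the original memory: {cos:.2f}")
```

For white replay inputs the Hebbian average is exactly the old weight vector, so v converges to (a noisy copy of) w; for correlated inputs the copy would be filtered through the input covariance, which is one way the representation can change during transfer.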

7.
Theta phase precession in rat hippocampal place cells is hypothesized to contribute to the memory encoding of running experience by providing the ideal timing for synaptic plasticity and enabling asymmetric associative connections under a Hebbian learning rule with an asymmetric time window (Yamaguchi 2003). When the sequence of place fields is considered as the episodic memory of a running experience, a given spatial route should be stored accurately despite the differing overlap among place fields and the varying running velocity. Using a hippocampal network model with phase precession and a Hebbian learning rule with an asymmetric time window, we investigate the memory encoding of place-field sequences in a single traversal experience. Computer experiments show that place fields cannot be stored correctly until an input-dependent feature is introduced into the learning rule. These experiments further indicate that there exist optimal values for the saturation level and the speed of synaptic plasticity in the learning rule, which correlate, respectively, with the overlap of the place-field sequence and the animal's running velocity during traversal. A comparison of these results with biological evidence shows good agreement and suggests that behavior-dependent regulation of the learning rule is necessary for memory encoding.

8.
We derive generalized spin models for the development of feedforward cortical architecture from a Hebbian synaptic learning rule in a two-layer neural network with nonlinear weight constraints. Our model takes into account the effects of lateral interactions in visual cortex combining local excitation and long-range effective inhibition. Our approach allows the principled derivation of developmental rules for low-dimensional feature maps, starting from high-dimensional synaptic learning rules. We incorporate the effects of smooth nonlinear constraints on the net synaptic weight projected from units in the thalamic layer (the fan-out) and on the net synaptic weight received by units in the cortical layer (the fan-in). These constraints naturally couple together multiple feature maps such as orientation preference and retinotopic organization. We give a detailed illustration of the method applied to the development of the orientation preference map as a special case, in addition to deriving a model for joint pattern formation in cortical maps of orientation preference, retinotopic location, and receptive field width. We show that the combination of Hebbian learning and center-surround cortical interaction naturally leads to an orientation map development model that is closely related to the XY magnetic lattice model from statistical physics. The results presented here provide justification for the phenomenological models studied in Cowan and Friedman (Advances in neural information processing systems 3, 1991) and Thomas and Cowan (Phys Rev Lett 92(18):e188101, 2004), and provide a developmental model realizing the synaptic weight constraints previously assumed in Thomas and Cowan (Math Med Biol 23(2):119–138, 2006).

9.
The back and forth of dendritic plasticity
Williams SR, Wozny C, Mitchell SJ. Neuron 2007, 56(6):947-953
Synapses are located throughout the often-elaborate dendritic tree of central neurons. Hebbian models of plasticity require temporal association between synaptic input and neuronal output to produce long-term potentiation of excitatory transmission. Recent studies have highlighted how active dendritic spiking mechanisms control this association. Here, we review new work showing that associative synaptic plasticity can be generated without neuronal output and that the interplay between neuronal architecture and the active electrical properties of the dendritic tree regulates synaptic plasticity.

10.
Synapses may undergo long-term increases or decreases in synaptic strength depending on critical differences in the timing between pre- and postsynaptic activity. Such spike-timing-dependent plasticity (STDP) follows rules that govern how patterns of neural activity induce changes in synaptic strength. Synaptic plasticity in the dorsal cochlear nucleus (DCN) follows Hebbian and anti-Hebbian patterns in a cell-specific manner. Here we show that these opposing responses to synaptic activity result from differential expression of two signaling pathways. Ca2+/calmodulin-dependent protein kinase II (CaMKII) signaling underlies Hebbian postsynaptic LTP in principal cells. By contrast, in interneurons, a temporally precise anti-Hebbian synaptic spike-timing rule results from the combined effects of postsynaptic CaMKII-dependent LTP and endocannabinoid-dependent presynaptic LTD. Cell specificity in the circuit arises from selective targeting of presynaptic CB1 receptors in different axonal terminals. Hence, pre- and postsynaptic sites of expression determine both the sign and timing requirements of long-term plasticity in interneurons.

11.
Learning-induced synchronization of a neural network at various developing stages is studied by computer simulations using a pulse-coupled neural network model in which the neuronal activity is simulated by a one-dimensional map. Two types of Hebbian plasticity rules are investigated and their differences are compared. For both models, our simulations show a logarithmic increase in the synchronous firing frequency of the network with culturing time. This result is consistent with recent experimental observations. To investigate how to control the synchronization behavior of a neural network after learning, we compare the occurrence of synchronization for four networks with different designed patterns under the influence of an external signal. The effect of such a signal on the network activity depends strongly on the number of connections between neurons. We discuss the synaptic plasticity and enhancement effects for a random network after learning at various developing stages.

12.
Spike timing-dependent plasticity of neural circuits
Dan Y, Poo MM. Neuron 2004, 44(1):23-30
Recent findings of spike timing-dependent plasticity (STDP) have stimulated much interest among experimentalists and theorists. Beyond the traditional correlation-based Hebbian plasticity, STDP opens up new avenues for understanding information coding and circuit plasticity that depend on the precise timing of neuronal spikes. Here we summarize experimental characterization of STDP at various synapses, the underlying cellular mechanisms, and the associated changes in neuronal excitability and dendritic integration. We also describe STDP in the context of complex spike patterns and its dependence on the dendritic location of the synapse. Finally, we discuss timing-dependent modification of neuronal receptive fields and human visual perception and the computational significance of STDP as a synaptic learning rule.
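For readers who want the standard quantitative form, the pair-based exponential STDP window commonly fitted to data of this kind can be written compactly; the amplitudes and time constants below are typical illustrative values, not constants from this review:

```python
import numpy as np

def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    """Weight change for one pre/post spike pair.
    dt = t_post - t_pre in ms: pre-before-post (dt > 0) potentiates,
    post-before-pre (dt <= 0) depresses, each falling off exponentially."""
    return np.where(dt > 0,
                    a_plus * np.exp(-dt / tau_plus),
                    -a_minus * np.exp(dt / tau_minus))

dts = np.array([-50.0, -10.0, -1.0, 1.0, 10.0, 50.0])
for dt, dw in zip(dts, stdp_dw(dts)):
    print(f"dt = {dt:+6.1f} ms  ->  dw = {dw:+.4f}")
```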

13.
Precise spatio-temporal patterns of neuronal action potentials underlie, for example, sensory representations and the control of muscle activities. However, it is not known how the synaptic efficacies in the neuronal networks of the brain adapt such that they can reliably generate spikes at specific points in time. Existing activity-dependent plasticity rules like spike-timing-dependent plasticity are agnostic to the goal of learning spike times. On the other hand, existing formal and supervised learning algorithms perform a temporally precise comparison of projected activity with the target, but there is no known biologically plausible implementation of this comparison. Here, we propose a simple and local unsupervised synaptic plasticity mechanism that is derived from the requirement of a balanced membrane potential. Since the relevant signal for synaptic change is the postsynaptic voltage rather than spike times, we call the plasticity rule Membrane Potential Dependent Plasticity (MPDP). Combining our plasticity mechanism with spike after-hyperpolarization makes synaptic change sensitive to pre- and postsynaptic spike times, which can reproduce Hebbian spike-timing-dependent plasticity for inhibitory synapses, as found in experiments. In addition, the sensitivity of MPDP to the time course of the voltage when generating a spike allows MPDP to distinguish between weak (spurious) and strong (teacher) spikes, and therefore provides a neuronal basis for the comparison of actual and target activity. For spatio-temporal input spike patterns, our conceptually simple plasticity rule achieves a surprisingly high storage capacity for spike associations. The sensitivity of MPDP to the subthreshold membrane potential during training allows robust memory retrieval after learning, even in the presence of activity corrupted by noise. We propose that MPDP represents a biophysically plausible mechanism for learning temporal target activity patterns.
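The core of MPDP, as described here, is that each synapse changes in proportion to its own presynaptic activity, in whatever direction pushes the postsynaptic voltage back toward a balanced range. The sketch below is an illustrative reading of that idea with invented constants and a crude voltage model, not the paper's derived equations:

```python
import numpy as np

def mpdp_step(w, psp, v, v_low=-65.0, v_high=-55.0, eta=1e-3, gamma=1.0):
    """One MPDP-style update (illustrative form and constants): every synapse
    changes in proportion to its own recent PSP, in the direction that pushes
    the postsynaptic voltage v back into the 'balanced' band [v_low, v_high]."""
    drive = max(v_low - v, 0.0) - gamma * max(v - v_high, 0.0)
    return w + eta * drive * psp

rng = np.random.default_rng(0)
n = 20
w = rng.random(n) * 0.5
for _ in range(1000):
    psp = rng.random(n)                 # recent presynaptic activity traces
    v = -70.0 + w @ psp                 # crude subthreshold voltage model
    w = mpdp_step(w, psp, v)
print(f"typical voltage after learning: {-70.0 + w @ np.full(n, 0.5):.1f} mV")
```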

14.
A hybrid neural network architecture is investigated for modeling purposes. The proposed hybrid is based on the multilayer perceptron (MLP) network. In addition to the usual hidden layers, the first hidden layer is selected to be an adaptive reference pattern layer. Each unit in this new layer incorporates a reference pattern that is located somewhere in the space spanned by the input variables. The outputs of these units are the component-wise squared differences between the elements of a reference pattern and the inputs. The reference pattern layer has some resemblance to the hidden layer of radial basis function (RBF) networks. Therefore the proposed design can be regarded as a hybrid of MLP and RBF networks. The presented benchmark experiments show that the proposed hybrid can provide significant advantages over standard MLPs and RBFs in terms of fast, efficient learning and a compact network structure.
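The reference-pattern layer is simple to state in code: each unit holds an adaptive pattern in input space and emits the component-wise squared differences from the input. A sketch of the layer alone (initialization and shapes are assumptions; the training procedure is omitted):

```python
import numpy as np

class RefPatternLayer:
    """First hidden layer of the proposed hybrid: each unit holds an adaptive
    reference pattern r_k located in input space, and its output is the
    component-wise squared difference (x - r_k)**2."""

    def __init__(self, n_in, n_patterns, rng):
        self.R = rng.normal(0.0, 1.0, (n_patterns, n_in))  # adaptive references

    def forward(self, X):
        D = X[:, None, :] - self.R[None, :, :]     # (batch, n_patterns, n_in)
        return (D ** 2).reshape(len(X), -1)        # squared diffs, kept per component

rng = np.random.default_rng(0)
layer = RefPatternLayer(n_in=2, n_patterns=3, rng=rng)
X = rng.normal(size=(4, 2))
print(layer.forward(X).shape)      # (4, 6): features for the following MLP layers
```

Unlike an RBF unit, which collapses these squared differences into a single scalar distance before applying a kernel, the component-wise outputs let the subsequent MLP layers weight each input dimension separately.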

15.
Activity-dependent synaptic plasticity should be extremely connection specific, though experiments have shown it is not, and biophysics suggests it cannot be. Extreme specificity (near-zero “crosstalk”) might be essential for unsupervised learning from higher-order correlations, especially when a neuron has many inputs. It is well known that a normalized nonlinear Hebbian rule can learn “unmixing” weights from inputs generated by linearly combining independently fluctuating non-Gaussian sources using an orthogonal mixing matrix. We previously reported that even if the matrix is only approximately orthogonal, a nonlinear-specific Hebbian rule can usually learn almost correct unmixing weights (Cox and Adams, Front Comput Neurosci 3, 2009, doi:10.3389/neuro.10.011.2009). We also reported simulations showing that as crosstalk increases from zero, the learned weight vector first moves slightly away from the crosstalk-free direction and then, at a sharp threshold level of inspecificity, jumps to a completely incorrect direction. Here, we report further numerical experiments showing that above this threshold, residual learning is driven almost entirely by second-order input correlations, as occurs with purely Gaussian sources, or with a linear rule and any amount of crosstalk. Thus, in this “ICA” model, learning from higher-order correlations, which is required for unmixing, demands high specificity. We compare our results with a recent mathematical analysis of the effect of crosstalk for exactly orthogonal mixing, which revealed that a second, even lower, threshold exists below which successful learning is impossible unless the weights happen to start close to the correct direction. Our simulations show that this also holds when the mixing is not exactly orthogonal. These results suggest that if the brain uses simple Hebbian learning, it must operate with extraordinarily accurate synaptic plasticity to ensure powerful high-dimensional learning. Synaptic crowding would preclude this when inputs are numerous, and we propose that the neocortex might be distinguished by special circuitry that promotes extreme specificity for high-dimensional nonlinear learning.
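A toy version of the setup makes the crosstalk effect concrete: a normalized nonlinear Hebbian rule applied to orthogonally mixed super-Gaussian sources recovers a single source when updates are synapse specific, and loses that selectivity as updates are blended across synapses. The error-matrix form, nonlinearity and parameters below are simplifying assumptions, not the paper's exact model:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
S = rng.laplace(size=(n, 20000))                 # independent super-Gaussian sources
S /= S.std(axis=1, keepdims=True)
M, _ = np.linalg.qr(rng.normal(size=(n, n)))     # orthogonal mixing matrix
X = M @ S                                        # observed mixtures

def learn_unmixing(crosstalk, steps=200):
    """Normalized nonlinear Hebbian rule dw ~ E[x (w.x)^3]; crosstalk blends a
    fraction of each synapse's update with the mean update across synapses."""
    E = (1 - crosstalk) * np.eye(n) + (crosstalk / n) * np.ones((n, n))
    w = rng.normal(size=n)
    w /= np.linalg.norm(w)
    for _ in range(steps):
        u = w @ X
        dw = E @ (X @ u ** 3) / X.shape[1]       # crosstalk-corrupted Hebbian step
        w = dw / np.linalg.norm(dw)              # weight normalization
    a = M.T @ w                                  # weight vector in source coordinates
    return np.abs(a).max()                       # 1.0 means one source cleanly found

for c in (0.0, 0.1, 0.4):
    print(f"crosstalk {c:.1f}: source selectivity {learn_unmixing(c):.3f}")
```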

16.
Hebb and homeostasis in neuronal plasticity
The positive-feedback nature of Hebbian plasticity can destabilize the properties of neuronal networks. Recent work has demonstrated that this destabilizing influence is counteracted by a number of homeostatic plasticity mechanisms that stabilize neuronal activity. Such mechanisms include global changes in synaptic strengths, changes in neuronal excitability, and the regulation of synapse number. These recent studies suggest that Hebbian and homeostatic plasticity often target the same molecular substrates, and have opposing effects on synaptic or neuronal properties. These advances significantly broaden our framework for understanding the effects of activity on synaptic function and neuronal excitability.

17.
Neural learning algorithms generally involve a number of identical processing units, which are fully or partially connected, and involve an update function, such as a ramp, a sigmoid or a Gaussian function. Some variations also exist, where units can be heterogeneous, or where an alternative update technique is employed, such as a pulse stream generator. Associated with connections are numerical values that must be adjusted using a learning rule and that are dictated by learning-rule-specific parameters such as momentum, a learning rate or a temperature, among others. Usually, neural learning algorithms involve local updates, and global interaction between units is often discouraged, except in instances where units are fully connected or updated synchronously. In all of these instances, concurrency within a neural algorithm cannot be fully exploited without a suitable implementation strategy. A design scheme is described for translating a neural learning algorithm from inception to implementation on a parallel machine using PVM or MPI libraries, or onto programmable logic such as FPGAs. A designer must first describe the algorithm using a specialised Neural Language, from which a Petri net (PN) model is constructed automatically for verification and for building a performance model. The PN model can be used to study issues such as synchronisation points, resource sharing and concurrency within a learning rule. Specialised constructs are provided to enable a designer to express various aspects of a learning rule, such as the number and connectivity of neural nodes, the interconnection strategies, and the information flows required by the learning algorithm. A scheduling and mapping strategy is then used to translate this PN model onto a multiprocessor template. We demonstrate our technique using Kohonen and backpropagation learning rules, implemented on a loosely coupled workstation cluster and on a dedicated parallel machine, with PVM libraries.

18.
It has been suggested that excitatory and inhibitory inputs to cortical cells are balanced, and that this balance is important for the highly irregular firing observed in the cortex. There are two hypotheses as to the origin of this balance. One assumes that it results from a stable solution of the recurrent neuronal dynamics. This model can account for a balance of steady-state excitation and inhibition without fine tuning of parameters, but not for transient inputs. The second hypothesis suggests that the feedforward excitatory and inhibitory inputs to a postsynaptic cell are already balanced. This latter hypothesis thus does account for the balance of transient inputs. However, it remains unclear what mechanism underlies the fine tuning required for balancing feedforward excitatory and inhibitory inputs. Here we investigated whether inhibitory synaptic plasticity is responsible for the balance of transient feedforward excitation and inhibition. We address this issue in the framework of a model characterizing the stochastic dynamics of temporally anti-symmetric Hebbian spike-timing-dependent plasticity of feedforward excitatory and inhibitory synaptic inputs to a single postsynaptic cell. Our analysis shows that inhibitory Hebbian plasticity generates 'negative feedback' that balances excitation and inhibition, in contrast with the 'positive feedback' of excitatory Hebbian synaptic plasticity. As a result, this balance may increase the sensitivity of the learning dynamics to the correlation structure of the excitatory inputs.

19.
The aim of the present paper is to study the effects of Hebbian learning in random recurrent neural networks with biological connectivity, i.e. sparse connections and separate populations of excitatory and inhibitory neurons. We further allow the neuron dynamics to occur on a (shorter) time scale than synaptic plasticity and consider learning rules with passive forgetting. We show that the application of such Hebbian learning leads to drastic changes in the network dynamics and structure. In particular, the learning rule contracts the norm of the weight matrix and yields a rapid decay of the dynamics' complexity and entropy. In other words, the network is rewired by Hebbian learning into a new synaptic structure that emerges with learning on the basis of the correlations that progressively build up between neurons. We also observe that, within this emerging structure, the strongest synapses organize as a small-world network. The decay of the weight-matrix spectral radius also produces a rapid contraction of the spectral radius of the Jacobian matrix. This drives the system through the "edge of chaos", where sensitivity to the input pattern is maximal. Taken together, this scenario is remarkably well predicted by theoretical arguments derived from dynamical systems and graph theory.
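The headline effect, Hebbian learning with passive forgetting contracting the weight spectrum and pulling the dynamics out of chaos, can be reproduced in a stripped-down rate network. The sketch below drops the biological constraints (sparseness, separate excitatory/inhibitory populations) that the paper retains, and its parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
W = rng.normal(0.0, 1.5 / np.sqrt(n), (n, n))   # gain 1.5: initially chaotic regime
x = np.tanh(rng.normal(size=n))
eta, lam = 0.001, 0.05                          # Hebbian rate, passive forgetting

for t in range(601):
    if t % 200 == 0:
        rho = float(np.abs(np.linalg.eigvals(W)).max())
        print(f"step {t:3d}: spectral radius of W = {rho:.2f}")
    x = np.tanh(W @ x + 0.1 * rng.normal(size=n))       # rate dynamics plus noise
    xc = x - x.mean()
    W = (1.0 - lam) * W + eta * np.outer(xc, xc)        # Hebb + passive forgetting
```

The passive-forgetting term erases the initial random weights while the Hebbian term rebuilds structure from activity correlations, so the printed spectral radius shrinks from its chaotic starting value, the contraction the abstract describes.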

20.
Neuroscientists associate the name of Donald O. Hebb with the Hebbian synapse and the Hebbian learning rule, which underlie connectionist theories and synaptic plasticity, but Hebb's work has also influenced developmental psychology, neuropsychology, perception and the study of emotions, as well as learning and memory. Here, we review the work of Hebb and its lasting influence on neuroscience in honour of the 2004 centenary of his birth.
