Similar Documents
Found 20 similar documents (search time: 15 ms)
1.
A confusingly wide variety of temporally asymmetric learning rules exists, related to reinforcement learning and/or to spike-timing dependent plasticity; many of these look exceedingly similar while displaying strongly different behavior. These rules often find their use in control tasks, for example in robotics, and for this rigorous convergence and numerical stability are required. The goal of this article is to review these rules and compare them, to provide a better overview of their different properties. Two main classes will be discussed: temporal difference (TD) rules and correlation-based (differential Hebbian) rules, together with some transition cases. In general we will focus on neuronal implementations with changeable synaptic weights and a time-continuous representation of activity. In a machine learning (non-neuronal) context, a solid mathematical theory for TD learning has existed for several years. This can partly be transferred to a neuronal framework, too. On the other hand, only now has a more complete theory also emerged for differential Hebbian rules. In general, rules differ by their convergence conditions and their numerical stability, which can lead to very undesirable behavior when applying them. For TD, convergence can be enforced with a certain output condition assuring that the δ-error drops on average to zero (output control). Correlation-based rules, on the other hand, converge when one input drops to zero (input control). Temporally asymmetric learning rules treat situations where incoming stimuli follow each other in time. Thus, it is necessary to remember the first stimulus in order to relate it to the second, later-occurring one. To this end, different types of so-called eligibility traces are used by these two types of rules. This aspect again leads to different properties of TD and differential Hebbian learning, as discussed here.
Thus, this paper, while also presenting several novel mathematical results, is mainly meant to provide a road map through the different neuronally emulated temporally asymmetric learning rules and their behavior, to provide some guidance for possible applications.
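The contrast between the two rule classes reviewed above can be sketched in a few lines. This is a hypothetical minimal illustration, not the paper's implementation; the learning rate, discount factor, and vector shapes are illustrative assumptions:

```python
import numpy as np

def td_update(w, x_t, x_next, r, gamma=0.9, alpha=0.1):
    """One TD(0) step: the weight change is driven by the delta-error,
    which 'output control' forces toward zero on average.
    (alpha and gamma are assumed illustrative values.)"""
    delta = r + gamma * np.dot(w, x_next) - np.dot(w, x_t)
    return w + alpha * delta * x_t, delta

def diff_hebb_update(w, u, v_prev, v, alpha=0.1):
    """Differential Hebbian step: correlates the input u with the temporal
    derivative of the output v; learning stops ('input control') once u
    drops to zero."""
    return w + alpha * u * (v - v_prev)
```

In both cases an eligibility trace (e.g. a low-pass filtered copy of the input) would be needed to bridge the delay between the two stimuli; the exact form of that trace is one of the properties that distinguishes the published variants.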

2.
Fruit flies can learn to associate an odor with an aversive stimulus, such as a shock. New findings indicate that disrupting the expression of N-methyl-D-aspartate (NMDA) receptors in flies impairs olfactory conditioning. The findings provide support for a critical role for NMDA receptors in associative learning.

3.
4.
Many cognitive and sensorimotor functions in the brain involve parallel and modular memory subsystems that are adapted by activity-dependent Hebbian synaptic plasticity. This is in contrast to the multilayer perceptron model of supervised learning where sensory information is presumed to be integrated by a common pool of hidden units through backpropagation learning. Here we show that Hebbian learning in parallel and modular memories is more advantageous than backpropagation learning in lumped memories in two respects: it is computationally much more efficient and structurally much simpler to implement with biological neurons. Accordingly, we propose a more biologically relevant neural network model, called a tree-like perceptron, which is a simple modification of the multilayer perceptron model to account for the general neural architecture, neuronal specificity, and synaptic learning rule in the brain. The model features a parallel and modular architecture in which adaptation of the input-to-hidden connection follows either a Hebbian or anti-Hebbian rule depending on whether the hidden units are excitatory or inhibitory, respectively. The proposed parallel and modular architecture and implicit interplay between the types of synaptic plasticity and neuronal specificity are exhibited by some neocortical and cerebellar systems. Received: 13 October 1996 / Accepted in revised form: 16 October 1997

5.
A system with some degree of biological plausibility is developed to categorise items from a widely used machine learning benchmark. The system uses fatiguing leaky integrate-and-fire neurons, a relatively coarse point model that roughly duplicates biological spiking properties; this allows spontaneous firing based on hypo-fatigue, so that neurons not directly stimulated by the environment may be included in the circuit. A novel compensatory Hebbian learning algorithm is used that considers the total synaptic weight coming into a neuron. The network is unsupervised and entirely self-organising. This is relatively effective as a machine learning algorithm, categorising with just neurons, and the performance is comparable with a Kohonen map. However, the learning algorithm is not stable, and behaviour decays as the length of training increases. Variables including learning rate, inhibition and topology are explored, leading to stable systems driven by the environment. The model is thus a reasonable next step toward a full neural memory model.
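A compensatory rule of the kind described, normalising against the total synaptic weight arriving at a neuron, might look roughly like this. It is a sketch of the general idea only; the rate eta and the weight budget w_total are assumed values, not parameters from the paper:

```python
import numpy as np

def compensatory_hebb(w, pre, post, eta=0.05, w_total=1.0):
    """Plain Hebbian increment followed by a compensatory rescaling that
    holds the summed synaptic weight into the neuron at w_total.
    (eta and w_total are illustrative assumptions.)"""
    w = w + eta * post * pre        # Hebbian co-activity term
    w = w * (w_total / w.sum())     # compensation on total incoming weight
    return w
```

The rescaling makes synapses compete for a fixed budget: strengthening one input implicitly weakens the others, which is one standard way to keep a purely Hebbian rule from growing without bound.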

6.
In Hebbian neural models synaptic reinforcement occurs when the pre- and post-synaptic neurons are simultaneously active. This causes an instability toward unlimited growth of excitatory synapses. The system can be stabilized by recurrent inhibition via modifiable inhibitory synapses. When this process is included, it is possible to dispense with the non-linear normalization or cut-off conditions which were necessary for stability in previous models. The present formulation is response-linear if synaptic changes are slow. It is self-consistent because the stabilizing effects will tend to keep most neural activity in the middle range, where neural response is approximately linear. The linearized equations are tensor invariant under a class of rotations of the state space. Using this, the response to stimulation may be derived as a set of independent modes of activity distributed over the net, which may be identified with cell assemblies. A continuously infinite set of equivalent solutions exists.

7.
Synchronously spiking neurons have been observed in the cerebral cortex and the hippocampus. In computer models, synchronous spike volleys may be propagated across appropriately connected neuron populations. However, it is unclear how the appropriate synaptic connectivity is set up during development and maintained during adult learning. We performed computer simulations to investigate the influence of temporally asymmetric Hebbian synaptic plasticity on the propagation of spike volleys. In addition to feedforward connections, recurrent connections were included between and within neuron populations, and spike transmission delays varied due to axonal, synaptic and dendritic transmission. We found that repeated presentations of input volleys decreased the synaptic conductances of intragroup and feedback connections, while synaptic conductances of feedforward connections with short delays became stronger than those of connections with longer delays. These adaptations led to the synchronization of spike volleys as they propagated across neuron populations. The findings suggest that temporally asymmetric Hebbian learning may enhance synchronized spiking within small populations of neurons in cortical and hippocampal areas, and that familiar stimuli may produce synchronized spike volleys that are rapidly propagated across neural tissue. Received: 28 May 2002 / Accepted: 3 June 2002. Correspondence to: R. E. Suri, Intelligent Optical Systems (IOS), 2520 W 237th St, Torrance, CA 90505-5217, USA (e-mail: rsuri@intopsys.com, Tel.: +1-310-5307130 ext. 108, Fax: +1-210-5307417)

8.
Bender VA, Feldman DE. Neuron 2006, 51(2):153-155
Backpropagating action potentials (bAPs) are an important signal for associative synaptic plasticity in many neurons, but they often fail to fully invade distal dendrites. In this issue of Neuron, Sjöström and Häusser show that distal propagation failure leads to a spatial gradient of Hebbian plasticity in neocortical pyramidal cells. This gradient can be overcome by cooperative distal synaptic input, leading to fundamentally distinct Hebbian learning rules for distal versus proximal synapses.

9.
Activity-dependent synaptic plasticity should be extremely connection specific, though experiments have shown it is not, and biophysics suggests it cannot be. Extreme specificity (near-zero “crosstalk”) might be essential for unsupervised learning from higher-order correlations, especially when a neuron has many inputs. It is well known that a normalized nonlinear Hebbian rule can learn “unmixing” weights from inputs generated by linearly combining independently fluctuating non-Gaussian sources using an orthogonal mixing matrix. We previously reported that even if the matrix is only approximately orthogonal, a nonlinear, specific Hebbian rule can usually learn almost correct unmixing weights (Cox and Adams, Front Comput Neurosci 3, doi:10.3389/neuro.10.011.2009, 2009). We also reported simulations showing that as crosstalk increases from zero, the learned weight vector first moves slightly away from the crosstalk-free direction and then, at a sharp threshold level of inspecificity, jumps to a completely incorrect direction. Here, we report further numerical experiments showing that above this threshold, residual learning is driven instead almost entirely by second-order input correlations, as occurs using purely Gaussian sources or a linear rule, at any amount of crosstalk. Thus, in this “ICA” model, learning from higher-order correlations, as required for unmixing, requires high specificity. We compare our results with a recent mathematical analysis of the effect of crosstalk for exactly orthogonal mixing, which revealed that a second, even lower, threshold exists below which successful learning is impossible unless weights happen to start close to the correct direction. Our simulations show that this also holds when the mixing is not exactly orthogonal. These results suggest that if the brain uses simple Hebbian learning, it must operate with extraordinarily accurate synaptic plasticity to ensure powerful high-dimensional learning.
Synaptic crowding would preclude this when inputs are numerous, and we propose that the neocortex might be distinguished by special circuitry that promotes extreme specificity for high-dimensional nonlinear learning.

10.
11.
Lightwave has attractive characteristics for signal processing, such as spatial parallelism, temporal rapidity, and a vast frequency band. In particular, the vast carrier-frequency bandwidth promises novel information processing. In this paper, we propose a novel optical logic gate that learns multiple functions at frequencies different from one another, and analyze the frequency-domain multiplexing ability of learning based on a complex-valued Hebbian rule. We evaluate the averaged error-function values in the learning process and the error probabilities in the realized logic functions. We investigate optimal learning parameters, as well as performance dependence on the number of learning iterations and the number of parallel paths per neuron. Results show a trade-off among learning parameters such as the learning time constant and learning gain. We also find that when we prepare 10 optical path differences and conduct 200 learning iterations, the error probability decreases completely to zero in a three-function multiplexing case. At the same time, the error probability is tolerant to the number of paths: even if the path number is reduced by half, the error probability is found to be almost zero. The results can be useful for determining neural parameters for future optical neural network systems and devices that utilize the vast frequency bandwidth for frequency-domain multiplexing.

12.
Fusi S. Biological Cybernetics 2002, 87(5-6):459-470
Synaptic plasticity is believed to underlie the formation of appropriate patterns of connectivity that stabilize stimulus-selective reverberations in the cortex. Here we present a general quantitative framework for studying the process of learning and memorizing of patterns of mean spike rates. General considerations based on the limitations of material (biological or electronic) synaptic devices show that most learning networks share the palimpsest property: old stimuli are forgotten to make room for the new ones. In order to prevent too-fast forgetting, one can introduce a stochastic mechanism for selecting only a small fraction of synapses to be changed upon the presentation of a stimulus. Such a mechanism can be easily implemented by exploiting the noisy fluctuations in the pre- and postsynaptic activities to be encoded. The spike-driven synaptic dynamics described here can implement such a selection mechanism to achieve slow learning, which is shown to maximize the performance of the network as an associative memory.
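The stochastic selection mechanism can be caricatured with binary synapses: on each stimulus, only a small random fraction of the eligible synapses actually switches. This is a sketch of the general idea only, not the spike-driven dynamics of the paper; the probability p, the binary weight values, and the eligibility condition are all assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def stochastic_update(w, pre, post, p=0.05):
    """Binary synapses: a synapse whose pre- and postsynaptic sides are
    both active is potentiated only with small probability p, which slows
    overwriting of old memories (the palimpsest effect).
    (p and the 0/1 weight coding are illustrative assumptions.)"""
    eligible = (pre > 0) & (post > 0)
    selected = rng.random(w.shape) < p
    w = w.copy()
    w[eligible & selected] = 1.0
    return w
```

With small p, each new stimulus modifies only a sliver of the synaptic population, so earlier memories decay gradually instead of being erased at once.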

13.
Background: Recent work on long-term potentiation in brain slices shows that Hebb's rule is not completely synapse-specific, probably due to intersynapse diffusion of calcium or other factors. We previously suggested that such errors in Hebbian learning might be analogous to mutations in evolution.
Methods and findings: We examine this proposal quantitatively, extending the classical Oja unsupervised model of learning by a single linear neuron to include Hebbian inspecificity. We introduce an error matrix E, which expresses possible crosstalk between updating at different connections. When there is no inspecificity, this gives the classical result of convergence to the first principal component of the input distribution (PC1). We show the modified algorithm converges to the leading eigenvector of the matrix EC, where C is the input covariance matrix. In the most biologically plausible case, when there are no intrinsically privileged connections, E has diagonal elements Q and off-diagonal elements (1-Q)/(n-1), where the quality Q is expected to decrease with the number of inputs n and with a synaptic parameter b that reflects synapse density, calcium diffusion, etc. We study the dependence of the learning accuracy on b, n and the amount of input activity or correlation (analytically and computationally). We find that learning accuracy decreases (learning becomes gradually less useful) with increases in b, particularly for intermediate (i.e., biologically realistic) correlation strength, although some useful learning always occurs up to the trivial limit Q=1/n.
Conclusions and significance: We discuss the relation of our results to Hebbian unsupervised learning in the brain. When the mechanism lacks specificity, the network fails to learn the expected, and typically most useful, result, especially when the input correlation is weak. Hebbian crosstalk would reflect the very high density of synapses along dendrites, and inevitably degrades learning.
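The error matrix E is fully specified by the abstract (diagonal Q, off-diagonal (1-Q)/(n-1)), so the crosstalk-modified update can be sketched directly. The plain-Oja form of the specific update and the learning rate eta are assumptions; only the structure of E comes from the text:

```python
import numpy as np

def crosstalk_matrix(n, Q):
    """Error matrix E from the abstract: diagonal elements Q,
    off-diagonal elements (1-Q)/(n-1); rows sum to 1."""
    E = np.full((n, n), (1.0 - Q) / (n - 1))
    np.fill_diagonal(E, Q)
    return E

def oja_crosstalk_step(w, x, E, eta=0.01):
    """One Oja step in which the intended per-connection update
    eta*y*(x - y*w) is redistributed across connections by E;
    with E = I this reduces to the classical rule converging to PC1.
    (eta is an illustrative assumption.)"""
    y = w @ x
    return w + E @ (eta * y * (x - y * w))
```

Iterating this step over samples from the input distribution is what, per the abstract, converges to the leading eigenvector of EC rather than of C alone.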

14.
A novel depth-from-motion vision model based on leaky integrate-and-fire (I&F) neurons incorporates the implications of recent neurophysiological findings into an algorithm for object discovery and depth analysis. Pulse-coupled I&F neurons capture the edges in an optical flow field and the associated time of travel of those edges is encoded as the neuron parameters, mainly the time constant of the membrane potential and synaptic weight. Correlations between spikes and their timing thus code depth in the visual field. Neurons have multiple output synapses connecting to neighbouring neurons with an initial Gaussian weight distribution. A temporally asymmetric learning rule is used to adapt the synaptic weights online, during which competitive behaviour emerges between the different input synapses of a neuron. It is shown that the competition mechanism can further improve the model performance. After training, the weights of synapses sourced from a neuron do not display a Gaussian distribution, having adapted to encode features of the scenes to which they have been exposed.
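A temporally asymmetric window of the sort used for such online weight adaptation is commonly written as a pair of exponentials. The amplitudes and the 20 ms time constant below are generic textbook values, not the parameters of this particular model:

```python
import numpy as np

def stdp_window(dt, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Weight change as a function of dt = t_post - t_pre (ms):
    pre-before-post (dt > 0) potentiates, post-before-pre depresses,
    both decaying exponentially with |dt|.
    (a_plus, a_minus and tau are assumed generic values.)"""
    dt = np.asarray(dt, dtype=float)
    return np.where(dt > 0,
                    a_plus * np.exp(-dt / tau),
                    -a_minus * np.exp(dt / tau))
```

Applied online to a neuron's input synapses, such a window makes inputs compete: those that reliably precede the postsynaptic spike are strengthened at the expense of the others, which is consistent with the competitive behaviour the model reports.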

15.
We assume that Hebbian learning dynamics (HLD) and spatiotemporal learning dynamics (SLD) are involved in the mechanism of synaptic plasticity in hippocampal neurons. While HLD is driven by pre- and postsynaptic spike timings through the backpropagating action potential, SLD is evoked by presynaptic spike timings alone. Since backpropagation attenuates as it nears the distal dendrites, we assume an extreme case as a neuron model in which HLD exists only at proximal dendrites and SLD exists only at distal dendrites. We examined how the synaptic weights change in response to three types of synaptic inputs in computer simulations. First, in response to a Poisson train with a constant mean frequency, the synaptic weights in HLD and SLD are qualitatively similar. Second, SLD responds more rapidly than HLD to synchronous input patterns, while both respond to them. Third, HLD responds more rapidly to more frequent inputs, while SLD shows fluctuating synaptic weights. These results suggest an encoding hypothesis: a transient synchronous structure in spatiotemporal input patterns will be encoded into distal dendrites through SLD, while persistent synchrony or firing-rate information will be encoded into proximal dendrites through HLD.

16.
Spike-timing-dependent plasticity is considered the neurophysiological basis of Hebbian learning and has been shown to be sensitive to both contingency and contiguity between pre- and postsynaptic activity. Here, we will examine how applying this Hebbian learning rule to a system of interconnected neurons in the presence of direct or indirect re-afference (e.g. seeing/hearing one's own actions) predicts the emergence of mirror neurons with predictive properties. In this framework, we analyse how mirror neurons become a dynamic system that performs active inferences about the actions of others and allows joint actions despite sensorimotor delays. We explore how this system performs a projection of the self onto others, with egocentric biases to contribute to mind-reading. Finally, we argue that Hebbian learning predicts mirror-like neurons for sensations and emotions and review evidence for the presence of such vicarious activations outside the motor system.

17.
Topographical and functional aspects of neuronal plasticity were studied in the primary somatosensory cortex of adult rats in acute electrophysiological experiments. Under these experimental conditions, we observed short-term reversible reorganization induced by intracortical microstimulation or by an associative pairing of peripheral tactile stimulation. Both types of stimulation generate large-scale and reversible changes of the representational topography and of single cell functional properties. We present a model to simulate the spatial and functional reorganizational aspects of this type of short-term and reversible plasticity. The columnar structure of the network architecture is described and discussed from a biological point of view. The simulated architecture contains three main levels of information processing. The first one is a sensor array corresponding to the sensory surface of the hind paw. The second level, a pre-cortical relay cell array, represents the thalamo-cortical projection with different levels of excitatory and inhibitory relay cells and inhibitory nuclei. The array of cortical columns, the third level, represents stellate, double bouquet, basket and pyramidal cell interactions. The dynamics of the network are ruled by two integro-differential equations of the lateral-inhibition type. In order to implement neuronal plasticity, synaptic weight parameters in those equations are variables. The learning rules are motivated by the original concept of Hebb, but include a combination of both Hebbian and non-Hebbian rules, which modifies different intra- and inter-columnar interactions. We discuss the implications of neuronal plasticity from a behavioral point of view in terms of information processing and computational resources.

18.
According to Hebb's postulate for learning, information presented to a neural net during a learning session is stored in the synaptic efficacies. Long-term potentiation occurs only if the postsynaptic neuron becomes active in a time window set up by the presynaptic one. We carefully interpret and mathematically implement the Hebb rule so as to handle both stationary and dynamic objects such as single patterns and cycles. Since the natural dynamics contains a rather broad distribution of delays, the key idea is to incorporate these delays in the learning session. As theory and numerical simulation show, the resulting procedure is surprisingly robust and faithful. It also turns out that pure Hebbian learning is by selection: the network produces synaptic representations that are selected according to their resonance with the input percepts.

19.
Most algorithms currently used to model synaptic plasticity in self-organizing cortical networks suppose that the change in synaptic efficacy is governed by the same structuring factor, i.e., the temporal correlation of activity between pre- and postsynaptic neurons. Functional predictions generated by such algorithms have been tested electrophysiologically in the visual cortex of anesthetized and paralyzed cats. Supervised learning procedures were applied at the cellular level to change receptive field (RF) properties during the time of recording of an individual functionally identified cell. The protocols were devised as cellular analogs of the plasticity of RF properties, which is normally expressed during a critical period of postnatal development. We summarize here evidence demonstrating that changes in covariance between afferent input and postsynaptic response imposed during extracellular and intracellular conditioning can acutely induce selective long-lasting up- and down-regulations of visual responses. The functional properties that could be modified in 40% of cells submitted to differential pairing protocols include ocular dominance, orientation selectivity and orientation preference, interocular orientation disparity, and the relative dominance of ON and OFF responses. Since changes in RF properties can be induced in the adult as well, our findings also suggest that similar activity-dependent processes may occur during development and during active phases of learning under the supervision of behavioral attention or contextual signals. Such potential for plasticity in primary visual cortical neurons suggests the existence of a hidden connectivity expressing a wider functional competence than the one revealed at the spiking level. In particular, in the spatial domain the sensory synaptic integration field is larger than the classical discharge field. 
It can be shaped by supervised learning and its subthreshold extent can be unmasked by the pharmacological blockade of intracortical inhibition.

20.
This paper presents one possible implementation of a transformation that performs a linear mapping to a lower-dimensional subspace; the principal component subspace is the one analyzed. The idea implemented in this paper is a generalization of the recently proposed infinity OH neural method for principal component extraction. The calculations in the newly proposed method are performed locally, a feature usually considered desirable from the biological point of view. Compared to some other well-known methods, the proposed synaptic-efficacy learning rule requires less information about the values of the other efficacies to make a single efficacy modification. Synaptic efficacies are modified by implementation of a Modulated Hebb-type (MH) learning rule. A slightly modified MH algorithm, named the Modulated Hebb-Oja (MHO) algorithm, is also introduced. The structural similarity of the proposed network to part of the retinal circuit is presented as well.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司). 京ICP备09084417号