Similar Articles
20 similar articles retrieved (search time: 281 ms)
1.
A new learning rule (Precise-Spike-Driven (PSD) Synaptic Plasticity) is proposed for processing and memorizing spatiotemporal patterns. PSD is a supervised learning rule that is analytically derived from the traditional Widrow-Hoff rule and can be used to train neurons to associate an input spatiotemporal spike pattern with a desired spike train. Synaptic adaptation is driven by the error between the desired and the actual output spikes, with positive errors causing long-term potentiation and negative errors causing long-term depression. The amount of modification is proportional to an eligibility trace that is triggered by afferent spikes. The PSD rule is both computationally efficient and biologically plausible. The properties of this learning rule are investigated extensively through experimental simulations, including its learning performance, its generality to different neuron models, its robustness against noisy conditions, its memory capacity, and the effects of its learning parameters. Experimental results show that the PSD rule is capable of spatiotemporal pattern classification, and can even outperform a well studied benchmark algorithm with the proposed relative confidence criterion. The PSD rule is further validated on a practical example of an optical character recognition problem. The results again show that it can achieve a good recognition performance with a proper encoding. Finally, a detailed discussion is provided about the PSD rule and several related algorithms including tempotron, SPAN, Chronotron and ReSuMe.  相似文献   
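A minimal discrete-time sketch of this kind of error-driven, trace-gated update is given below. It is illustrative only: the function name, the single-exponential eligibility trace, and all parameter values are assumptions rather than the published PSD implementation.

```python
import numpy as np

def psd_like_train_step(w, pre_spikes, desired, actual, lr=0.01, tau=5.0, dt=1.0):
    """One pass of a PSD-style supervised update (illustrative sketch).

    w          : (n_syn,) synaptic weights (float array, modified in place)
    pre_spikes : (n_syn, T) binary afferent spike trains
    desired    : (T,) binary desired output spike train
    actual     : (T,) binary actual output spike train
    """
    n_syn, T = pre_spikes.shape
    trace = np.zeros(n_syn)                      # eligibility trace per synapse
    decay = np.exp(-dt / tau)
    for t in range(T):
        trace = trace * decay + pre_spikes[:, t]  # trace triggered by afferent spikes
        err = desired[t] - actual[t]              # +1 -> LTP, -1 -> LTD, 0 -> no change
        w += lr * err * trace                     # modification proportional to the trace
    return w
```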

2.
Theta phase precession in rat hippocampal place cells is hypothesized to contribute to memory encoding of running experience in the sense that it provides the ideal timing for synaptic plasticity and enables asymmetric associative connections under the Hebbian learning rule with an asymmetric time window (Yamaguchi 2003). When the sequence of place fields is considered as the episodic memory of running experience, a given spatial route should be accurately stored despite differing degrees of overlap among place fields and varying running velocity. Using a hippocampal network model with phase precession and the Hebbian learning rule with an asymmetric time window, we investigate the memory encoding of place field sequences in a single traversal experience. Computer experiments show that place fields cannot be stored correctly until an input-dependent feature is introduced into the learning rule. These experiments further indicate that there exist optimum values for the saturation level and the speed of synaptic plasticity in the learning rule, which are correlated with, respectively, the overlap extent of the place field sequence and the running velocity of the animal during traversal. A comparison of these results with biological evidence shows good agreement and suggests that behavior-dependent regulation of the learning rule is necessary for memory encoding.  相似文献
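The asymmetric time window referred to here can be illustrated with a generic pair-based rule in which pre-before-post spiking potentiates and post-before-pre spiking depresses. This is a hedged sketch: the exponential window shape, amplitudes, and time constants are assumptions, not the parameters of the cited model.

```python
import numpy as np

def asymmetric_hebbian_dw(t_pre, t_post, A_plus=0.1, A_minus=0.05,
                          tau_plus=20.0, tau_minus=20.0):
    """Weight change for one pre/post spike pair under an asymmetric time window (ms)."""
    dt = t_post - t_pre
    if dt >= 0:                                 # pre before post: potentiation
        return A_plus * np.exp(-dt / tau_plus)
    return -A_minus * np.exp(dt / tau_minus)    # post before pre: depression
```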

3.
We assume that Hebbian learning dynamics (HLD) and spatiotemporal learning dynamics (SLD) are involved in the mechanism of synaptic plasticity in the hippocampal neurons. While HLD is driven by pre- and postsynaptic spike timings through the backpropagating action potential, SLD is evoked by presynaptic spike timings alone. Since the backpropagation attenuates as it nears the distal dendrites, we assume an extreme case as a neuron model where HLD exists only at proximal dendrites and SLD exists only at the distal dendrites. We examined how the synaptic weights change in response to three types of synaptic inputs in computer simulations. First, in response to a Poisson train having a constant mean frequency, the synaptic weights in HLD and SLD are qualitatively similar. Second, SLD responds more rapidly than HLD to synchronous input patterns, while each responds to them. Third, HLD responds more rapidly to more frequent inputs, while SLD shows fluctuating synaptic weights. These results suggest an encoding hypothesis in that a transient synchronous structure in spatiotemporal input patterns will be encoded into distal dendrites through SLD and that persistent synchrony or firing rate information will be encoded into proximal dendrites through HLD.  相似文献   

4.
A model of motion sensitivity as observed in some cells of area V1 of the visual cortex is proposed. Motion sensitivity is achieved by a combination of different spatiotemporal receptive fields, in particular, spatial and temporal differentiators. The receptive fields emerge if a Hebbian learning rule is applied to the network. Similar to a Linsker model the network has a spatially convergent, linear feedforward structure. Additionally, however, delays omnipresent in the brain are incorporated in the model. The emerging spatiotemporal receptive fields are derived explicitly by extending the approach of MacKay and Miller. The response characteristic of the network is calculated in frequency space and shows that the network can be considered as a spacetime filter for motion in one direction. The emergence of different types of receptive field requires certain structural constraints regarding the spatial and temporal arborisation. These requirements can be derived from the theoretical analysis and might be compared with neuroanatomical data. In this way an explicit link between structure and function of the network is established.  相似文献   

5.
A model for the development of spatiotemporal receptive fields of simple cells in the visual cortex is proposed. The model is based on the 1990 hypothesis of Saul and Humphrey that the convergence of four types of input onto a cortical cell, viz. non-lagged ON and OFF inputs and lagged ON and OFF inputs, underlies the spatial and temporal structure of the receptive fields. It therefore explains both orientation and direction selectivity of simple cells. The response properties of the four types of input are described by the product of linear spatial and temporal response functions. Extending the 1994 model of one of the authors (K.D. Miller), we describe the development of spatiotemporal receptive fields as a Hebbian learning process taking into account not only spatial but also temporal correlations between the different inputs. We derive the correlation functions that drive the development both for the period before and after eye-opening and demonstrate how the joint development of orientation and direction selectivity can be understood in the framework of correlation-based learning. Our investigation is split into two parts that are presented in two papers. In the first, the model for the response properties and for the development of direction-selective receptive fields is presented. In the second paper we present simulation results that are compared with experimental data, and also provide a first analysis of our model. Received: 18 June 1997 / Accepted: 16 September 1997  相似文献   

6.
In the last decades a standard model regarding the function of the hippocampus in memory formation has been established and tested computationally. It has been argued that the CA3 region works as an auto-associative memory and that its recurrent fibers are the actual storing place of the memories. Furthermore, to work properly CA3 requires memory patterns that are mutually uncorrelated. It has been suggested that the dentate gyrus orthogonalizes the patterns before storage, a process known as pattern separation. In this study we review the model when random input patterns are presented for storage and investigate whether it is capable of storing patterns of more realistic entorhinal grid cell input. Surprisingly, we find that an auto-associative CA3 net is redundant for random inputs up to moderate noise levels and is only beneficial at high noise levels. When grid cell input is presented, auto-association is even harmful for memory performance at all levels. Furthermore, we find that Hebbian learning in the dentate gyrus does not support its function as a pattern separator. These findings challenge the standard framework and support an alternative view where the simpler EC-CA1-EC network is sufficient for memory storage.  相似文献   

7.
This paper presents a possible context-sensitive mechanism in a neural network and at single neuron levels based on the experiments of hippocampal CA1 and their theoretical models. First, the spatiotemporal learning rule (STLR, non-Hebbian) and the Hebbian rule (HEBB) are experimentally shown to coexist in dendrite–soma interactions in single hippocampal pyramidal cells of CA1. Second, the functional differences between STLR and HEBB are theoretically shown in pattern separation and pattern completion. Third, the interaction between STLR and HEBB in neural levels is proposed to play an important role in forming a selective context determined by value information, which is related to expected reward and behavioral estimation.  相似文献   

8.
The aim of the present paper is to study the effects of Hebbian learning in random recurrent neural networks with biological connectivity, i.e. sparse connections and separate populations of excitatory and inhibitory neurons. We furthermore consider that the neuron dynamics may occur at a (shorter) time scale than synaptic plasticity and consider the possibility of learning rules with passive forgetting. We show that the application of such Hebbian learning leads to drastic changes in the network dynamics and structure. In particular, the learning rule contracts the norm of the weight matrix and yields a rapid decay of the dynamics complexity and entropy. In other words, the network is rewired by Hebbian learning into a new synaptic structure that emerges with learning on the basis of the correlations that progressively build up between neurons. We also observe that, within this emerging structure, the strongest synapses organize as a small-world network. The second effect of the decay of the weight matrix spectral radius consists in a rapid contraction of the spectral radius of the Jacobian matrix. This drives the system through the "edge of chaos" where sensitivity to the input pattern is maximal. Taken together, this scenario is remarkably predicted by theoretical arguments derived from dynamical systems and graph theory.  相似文献   
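The contraction of the weight-matrix norm described above follows naturally from a Hebbian rule with passive forgetting of the generic form sketched below (a rate-based sketch; the function name, parameter values, and zeroed self-connections are assumptions).

```python
import numpy as np

def hebbian_with_forgetting(W, x, lr=0.01, forget=0.001):
    """One Hebbian update with passive forgetting on a recurrent weight matrix.

    W : (N, N) recurrent weights; x : (N,) firing rates at the current step.
    The -forget * W term contracts the norm of the weight matrix over time,
    while the outer-product term builds up correlations between co-active neurons.
    """
    W += -forget * W + lr * np.outer(x, x)
    np.fill_diagonal(W, 0.0)   # no self-connections (assumption)
    return W
```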

9.
Bender VA  Feldman DE 《Neuron》2006,51(2):153-155
Backpropagating action potentials (bAPs) are an important signal for associative synaptic plasticity in many neurons, but they often fail to fully invade distal dendrites. In this issue of Neuron, Sjöström and Häusser show that distal propagation failure leads to a spatial gradient of Hebbian plasticity in neocortical pyramidal cells. This gradient can be overcome by cooperative distal synaptic input, leading to fundamentally distinct Hebbian learning rules for distal versus proximal synapses.  相似文献

10.
Synapses may undergo long-term increases or decreases in synaptic strength dependent on critical differences in the timing between pre- and postsynaptic activity. Such spike-timing-dependent plasticity (STDP) follows rules that govern how patterns of neural activity induce changes in synaptic strength. Synaptic plasticity in the dorsal cochlear nucleus (DCN) follows Hebbian and anti-Hebbian patterns in a cell-specific manner. Here we show that these opposing responses to synaptic activity result from differential expression of two signaling pathways. Ca2+/calmodulin-dependent protein kinase II (CaMKII) signaling underlies Hebbian postsynaptic LTP in principal cells. By contrast, in interneurons, a temporally precise anti-Hebbian synaptic spike-timing rule results from the combined effects of postsynaptic CaMKII-dependent LTP and endocannabinoid-dependent presynaptic LTD. Cell specificity in the circuit arises from selective targeting of presynaptic CB1 receptors in different axonal terminals. Hence, pre- and postsynaptic sites of expression determine both the sign and timing requirements of long-term plasticity in interneurons.  相似文献
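A pairwise STDP kernel whose sign flips with cell type illustrates the Hebbian versus anti-Hebbian timing rules described here. This is only a phenomenological sketch with assumed exponential windows; it does not model the CaMKII or endocannabinoid pathways themselves.

```python
import numpy as np

def stdp_dw(dt, hebbian=True, A=0.05, tau=20.0):
    """Weight change for one spike pair; dt = t_post - t_pre (ms).

    hebbian=True  : pre-before-post -> LTP (principal-cell-like rule)
    hebbian=False : pre-before-post -> LTD (anti-Hebbian, interneuron-like rule)
    """
    sign = 1.0 if hebbian else -1.0
    if dt >= 0:
        return sign * A * np.exp(-dt / tau)
    return -sign * A * np.exp(dt / tau)
```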

11.
We study the influence of nonlocal intraspecies prey competition on the spatiotemporal patterns arising behind predator invasions in two oscillatory reaction–diffusion integro-differential models. We use three common types of integral kernels as well as develop a caricature system, to describe the influence of the standard deviation and kurtosis of the kernel function on the patterns observed. We find that nonlocal competition can destabilize the spatially homogeneous state behind the invasion and lead to the formation of complex spatiotemporal patterns, including stationary spatially periodic patterns, wave trains and irregular spatiotemporal oscillations. In addition, the caricature system illustrates how large standard deviation and low kurtosis facilitate the formation of these spatiotemporal patterns. This suggests that nonlocal competition may be an important mechanism underlying spatial pattern formation, particularly in systems where the competition between individuals varies over space in a platykurtic manner.  相似文献   
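The nonlocal competition term in such models is a spatial convolution of the prey density with a kernel. The sketch below computes it on a one-dimensional grid with a Gaussian kernel; the grid, kernel choice, and normalisation are assumptions, and other kernel shapes would change the kurtosis discussed above.

```python
import numpy as np

def nonlocal_competition(u, dx, sigma=1.0):
    """Approximate the nonlocal competition term (K * u)(x) on a 1-D grid.

    u     : (M,) prey density samples with grid spacing dx
    sigma : standard deviation of the Gaussian kernel (assumption)
    """
    half_width = int(np.ceil(4 * sigma / dx))
    xs = np.arange(-half_width, half_width + 1) * dx
    K = np.exp(-xs**2 / (2 * sigma**2))
    K /= K.sum() * dx                            # normalise so the kernel integrates to 1
    return np.convolve(u, K, mode='same') * dx   # discrete approximation of the integral
```

The prey growth term would then use, e.g., `u * (1 - nonlocal_competition(u, dx))` in place of the local logistic term.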

13.
The cerebral cortex utilizes spatiotemporal continuity in the world to help build invariant representations. In vision, these might be representations of objects. The temporal continuity typical of objects has been used in an associative learning rule with a short-term memory trace to help build invariant object representations. In this paper, we show that spatial continuity can also provide a basis for helping a system to self-organize invariant representations. We introduce a new learning paradigm “continuous transformation learning” which operates by mapping spatially similar input patterns to the same postsynaptic neurons in a competitive learning system. As the inputs move through the space of possible continuous transforms (e.g. translation, rotation, etc.), the active synapses are modified onto the set of postsynaptic neurons. Because other transforms of the same stimulus overlap with previously learned exemplars, a common set of postsynaptic neurons is activated by the new transforms, and learning of the new active inputs onto the same postsynaptic neurons is facilitated. We demonstrate that a hierarchical model of cortical processing in the ventral visual system can be trained with continuous transform learning, and highlight differences in the learning of invariant representations to those achieved by trace learning.  相似文献   
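The core of continuous transformation learning can be sketched as competitive (winner-take-all) Hebbian learning: because neighbouring transforms of a stimulus share active inputs, they tend to recruit the same winning neuron, which then accumulates all transforms onto its weights. The function name, single-winner simplification, and normalisation step below are assumptions.

```python
import numpy as np

def ct_learning_step(W, x, lr=0.1):
    """One step of continuous-transformation-style competitive learning (sketch).

    W : (n_out, n_in) feedforward weights, assumed initialised with small random values
    x : (n_in,) current input pattern (one transform of the stimulus)
    """
    winner = np.argmax(W @ x)                   # competition: most activated output neuron
    W[winner] += lr * x                         # strengthen the currently active synapses
    W[winner] /= np.linalg.norm(W[winner])      # weight normalisation (assumption)
    return W, winner
```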

14.
Precise spatio-temporal patterns of neuronal action potentials underlie, for example, sensory representations and the control of muscle activities. However, it is not known how the synaptic efficacies in the neuronal networks of the brain adapt such that they can reliably generate spikes at specific points in time. Existing activity-dependent plasticity rules like Spike-Timing-Dependent Plasticity are agnostic to the goal of learning spike times. On the other hand, existing formal and supervised learning algorithms perform a temporally precise comparison of projected activity with the target, but there is no known biologically plausible implementation of this comparison. Here, we propose a simple and local unsupervised synaptic plasticity mechanism that is derived from the requirement of a balanced membrane potential. Since the relevant signal for synaptic change is the postsynaptic voltage rather than spike times, we call the plasticity rule Membrane Potential Dependent Plasticity (MPDP). Combining our plasticity mechanism with spike after-hyperpolarization makes synaptic change sensitive to pre- and postsynaptic spike times, which can reproduce Hebbian spike-timing-dependent plasticity for inhibitory synapses, as found in experiments. In addition, the sensitivity of MPDP to the time course of the voltage when generating a spike allows MPDP to distinguish between weak (spurious) and strong (teacher) spikes, which therefore provides a neuronal basis for the comparison of actual and target activity. For spatio-temporal input spike patterns, our conceptually simple plasticity rule achieves a surprisingly high storage capacity for spike associations. The sensitivity of MPDP to the subthreshold membrane potential during training allows robust memory retrieval after learning, even in the presence of activity corrupted by noise. We propose that MPDP represents a biophysically plausible mechanism to learn temporal target activity patterns.  相似文献
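As a rough illustration only: the sketch below is a generic voltage-threshold plasticity rule gated by a presynaptic trace, in the spirit of using the postsynaptic potential rather than spike times as the plasticity signal. It is not the published MPDP derivation; the thresholds, signs, and rates are all assumptions.

```python
def voltage_dependent_update(w, pre_trace, v, theta_p=-55.0, theta_d=-65.0,
                             lr_p=0.01, lr_d=0.005):
    """Generic voltage-dependent plasticity sketch (not the exact MPDP rule).

    pre_trace : presynaptic eligibility trace gating the change
    v         : postsynaptic membrane potential (mV)
    Strong depolarisation above theta_p potentiates; moderate depolarisation
    between theta_d and theta_p depresses; below theta_d nothing changes.
    """
    if v > theta_p:
        return w + lr_p * pre_trace * (v - theta_p)
    if v > theta_d:
        return w - lr_d * pre_trace * (v - theta_d)
    return w
```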

15.
Taking a global analogy with the structure of perceptual biological systems, we present a system composed of two layers of real-valued sigmoidal neurons. The primary layer receives stimulating spatiotemporal signals, and the secondary layer is a fully connected random recurrent network. This secondary layer spontaneously displays complex chaotic dynamics. All connections have a constant time delay. We use for our experiments a Hebbian (covariance) learning rule. This rule slowly modifies the weights under the influence of a periodic stimulus. The effect of learning is twofold: (i) it simplifies the secondary-layer dynamics, which eventually stabilizes to a periodic orbit; and (ii) it connects the secondary layer to the primary layer, and realizes a feedback from the secondary to the primary layer. This feedback signal is added to the incoming signal, and matches it (i.e., the secondary layer performs a one-step prediction of the forthcoming stimulus). After learning, a resonant behavior can be observed: the system resonates with familiar stimuli, which activates a feedback signal. In particular, this resonance allows the recognition and retrieval of partial signals, and dynamic maintenance of the memory of past stimuli. This resonance is highly sensitive to the temporal relationships and to the periodicity of the presented stimuli. When we present stimuli which do not match in time or space, the feedback remains silent. The number of different stimuli for which resonant behavior can be learned is analyzed. As with Hopfield networks, the capacity is proportional to the size of the second, recurrent layer. Moreover, the high capacity displayed allows the implementation of our model on real-time systems interacting with their environment. Such an implementation is reported in the case of a simple behavior-based recognition task on a mobile robot. Finally, we present some functional analogies with biological systems in terms of autonomy and dynamic binding, and present some hypotheses on the computational role of feedback connections. Received: 27 April 2001 / Accepted in revised form: 15 January 2002  相似文献
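The covariance form of the Hebbian rule used here can be sketched as follows. It is a minimal rate-based version: the constant delay is represented simply by pairing the current postsynaptic activity with presynaptic activity from an earlier time step, and all names, rates, and the running-mean estimates are assumptions.

```python
import numpy as np

def covariance_hebbian(W, x_pre_delayed, x_post, x_mean_pre, x_mean_post, lr=0.001):
    """Covariance (Hebbian) update with delayed presynaptic activity (sketch).

    x_pre_delayed : presynaptic activities from `delay` steps earlier
    x_mean_pre/post : running averages of the respective activities
    Weight growth is driven by the covariance of fluctuations around the means.
    """
    W += lr * np.outer(x_post - x_mean_post, x_pre_delayed - x_mean_pre)
    return W
```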

16.
The Hebbian rule (Hebb 1949), coupled with an appropriate mechanism to limit the growth of synaptic weights, allows a neuron to learn to respond to the first principal component of the distribution of its input signals (Oja 1982). Rubner and Schulten (1990) have recently suggested the use of an anti-Hebbian rule in a network with hierarchical lateral connections. When applied to neurons with linear response functions, this model allows additional neurons to learn to respond to additional principal components (Rubner and Tavan 1989). Here we apply the model to neurons with non-linear response functions characterized by a threshold and a transition width. We propose local, unsupervised learning rules for the threshold and the transition width, and illustrate the operation of these rules with some simple examples. A network using these rules sorts the input patterns into classes, which it identifies by a binary code, with the coarser structure coded by the earlier neurons in the hierarchy.  相似文献   
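Oja's (1982) single-neuron rule referenced above combines Hebbian growth with an implicit normalisation term; a minimal sketch for a linear neuron (the learning rate is an assumed value):

```python
import numpy as np

def oja_update(w, x, lr=0.01):
    """Oja's rule: Hebbian growth with implicit weight normalisation.

    With repeated presentations of zero-mean inputs x, w converges to the
    first principal component of the input distribution.
    """
    y = np.dot(w, x)            # linear neuron output
    w += lr * y * (x - y * w)   # Hebbian term minus a decay proportional to y^2
    return w
```

In the hierarchical extension described in the abstract, later neurons additionally receive anti-Hebbian lateral connections from earlier ones, which decorrelates their outputs so that they learn subsequent principal components.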

17.
Many cognitive and sensorimotor functions in the brain involve parallel and modular memory subsystems that are adapted by activity-dependent Hebbian synaptic plasticity. This is in contrast to the multilayer perceptron model of supervised learning where sensory information is presumed to be integrated by a common pool of hidden units through backpropagation learning. Here we show that Hebbian learning in parallel and modular memories is more advantageous than backpropagation learning in lumped memories in two respects: it is computationally much more efficient and structurally much simpler to implement with biological neurons. Accordingly, we propose a more biologically relevant neural network model, called a tree-like perceptron, which is a simple modification of the multilayer perceptron model to account for the general neural architecture, neuronal specificity, and synaptic learning rule in the brain. The model features a parallel and modular architecture in which adaptation of the input-to-hidden connection follows either a Hebbian or anti-Hebbian rule depending on whether the hidden units are excitatory or inhibitory, respectively. The proposed parallel and modular architecture and implicit interplay between the types of synaptic plasticity and neuronal specificity are exhibited by some neocortical and cerebellar systems. Received: 13 October 1996 / Accepted in revised form: 16 October 1997  相似文献   

18.
We investigate the memory structure and retrieval of the brain and propose a hybrid neural network of addressable and content-addressable memory, which is a special database model that can memorize and retrieve any piece of information (a binary pattern) both addressably and content-addressably. The architecture of this hybrid neural network is hierarchical and takes the form of a tree of slabs, which consist of binary neurons arranged in arrays of the same size. Simplex memory neural networks are considered as the slabs of basic memory units, distributed on the terminal vertices of the tree. It is shown by theoretical analysis that the hybrid neural network can be constructed with Hebbian and competitive learning rules, and some other important characteristics of its learning and memory behavior are also consistent with those of the brain. Moreover, we demonstrate the hybrid neural network on a set of ten binary numeral patterns.  相似文献
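A minimal sketch of the content-addressable component is given below: one Hopfield-style memory unit storing ±1 patterns with the Hebbian outer-product rule and retrieving from a partial probe. The addressable tree of slabs and the routing between them are not shown, and the synchronous-update retrieval is an assumption.

```python
import numpy as np

def hopfield_store(patterns):
    """Hebbian (outer-product) storage of P patterns of ±1 values, shape (P, N)."""
    P, N = patterns.shape
    W = patterns.T @ patterns / N   # sum of outer products, scaled by network size
    np.fill_diagonal(W, 0.0)        # no self-connections
    return W

def hopfield_recall(W, probe, steps=10):
    """Content-addressable retrieval from a (possibly partial or noisy) probe."""
    s = probe.astype(float).copy()
    for _ in range(steps):
        s = np.sign(W @ s)          # synchronous update (assumption)
        s[s == 0] = 1
    return s
```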

19.
Associative memory networks based on the quaternionic Hopfield neural network are investigated in this paper. These networks are composed of quaternionic neurons, and the inputs, outputs, thresholds, and connection weights are represented as quaternions, a class of hypercomplex numbers. The energy function of the network and the Hebbian rule for embedding patterns are introduced. The stable states and their basins are explored for networks with three neurons and four neurons. It is shown that there exist at most 16 stable states, called multiplet components, as the degenerate stored patterns, and each of these states has its own basin in the quaternionic networks.  相似文献

20.
Cantor coding provides an information coding scheme for temporal sequences of events. In the hippocampal CA3–CA1 network, a Cantor coding-like mechanism has been observed in pyramidal neurons, and the relationship between input patterns and recorded responses can be described as an iterated function system. However, detailed physiological properties of this system in CA1 remain unclear. Here, we performed a detailed analysis of the properties of the system related to the physiological basis of learning and memory. First, we investigated whether the system could be based simply on a series of on–off responses of excitatory postsynaptic potential (EPSP) amplitudes. We applied a series of three spatially distinct input patterns with similar EPSP peak amplitudes. The membrane responses showed significant differences in spatial clustering properties related to the iterated function system. These results suggest the existence of factors that depend not simply on a series of on–off responses but on the spatial structure of the inputs. Second, to confirm whether the system depends on the interval of sequential input, we applied spatiotemporal sequential inputs at several intervals. The optimal interval was 30 ms, similar to the physiological input from CA3 to CA1. Third, we analyzed the dependency of the system on the inhibitory network. After application of a GABAA receptor blocker (gabazine), the quality of code discrimination in the system was lower under subthreshold conditions and higher under suprathreshold conditions. These results suggest that the inhibitory network increases the difference between the responses under sub- and suprathreshold conditions. In summary, a Cantor coding-like iterated function system appears to be suitable for information expression in relation to learning and memory in the CA1 network.  相似文献
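The iterated-function-system view of Cantor coding can be sketched as follows: each successive input symbol selects one affine contraction of the unit interval, so the final state hierarchically encodes the whole temporal sequence. The symbol alphabet, contraction ratio, and scalar state are assumptions for illustration; the actual CA1 responses are high-dimensional membrane potentials rather than a single number.

```python
def cantor_code(symbol_sequence, n_symbols=3, ratio=1.0 / 3.0):
    """Iterated-function-system coding of a symbol sequence onto the unit interval.

    Each symbol s in {0, ..., n_symbols-1} applies the contraction
    x -> ratio * x + s / n_symbols, so more recent symbols determine the
    coarser structure of the final code and older symbols the finer structure.
    """
    x = 0.5
    for s in symbol_sequence:
        x = ratio * x + s / float(n_symbols)
    return x
```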
