20 similar documents retrieved; search time: 15 ms.
1.
In Hebbian neural models synaptic reinforcement occurs when the pre- and post-synaptic neurons are simultaneously active. This causes an instability toward unlimited growth of excitatory synapses. The system can be stabilized by recurrent inhibition via modifiable inhibitory synapses. When this process is included, it is possible to dispense with the non-linear normalization or cut-off conditions which were necessary for stability in previous models. The present formulation is response-linear if synaptic changes are slow. It is self-consistent because the stabilizing effects will tend to keep most neural activity in the middle range, where neural response is approximately linear. The linearized equations are tensor invariant under a class of rotations of the state space. Using this, the response to stimulation may be derived as a set of independent modes of activity distributed over the net, which may be identified with cell assemblies. A continuously infinite set of equivalent solutions exists.
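As a rough illustration of this stabilization idea (not the paper's actual equations), the sketch below contrasts pure Hebbian growth of excitatory weights with a version in which a modifiable recurrent inhibitory weight tracks output activity. The inhibitory update rule, the activity target, and all parameter values are assumptions chosen only so that the output stays in a bounded middle range.

```python
import numpy as np

def simulate(with_inhibition, steps=2000, n=20, seed=0):
    """Hebbian growth of excitatory weights, optionally held in check by a
    modifiable recurrent inhibitory synapse (illustrative parameters only)."""
    rng = np.random.default_rng(seed)
    w_exc = rng.uniform(0.0, 0.1, n)     # excitatory synaptic weights
    w_inh = 0.0                          # recurrent inhibitory weight
    eta_e, eta_i, target = 0.001, 0.05, 1.0
    y = 0.0
    for _ in range(steps):
        x = rng.poisson(1.0, n).astype(float)        # presynaptic activity
        y = max(w_exc @ x - w_inh, 0.0)              # postsynaptic response
        w_exc += eta_e * y * x                       # Hebbian growth (unstable alone)
        if with_inhibition:
            # the inhibitory synapse strengthens whenever activity exceeds the target,
            # pulling the response back toward the middle range
            w_inh = max(w_inh + eta_i * (y - target), 0.0)
    return y, w_exc.sum()

for flag in (False, True):
    y, total = simulate(flag)
    print(f"inhibition={flag}: final response {y:.3g}, total excitatory weight {total:.3g}")
```

Without the inhibitory term the response diverges; with it, activity settles near the target even though no explicit weight normalization or cut-off is applied.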
2.
Backpropagating action potentials (bAPs) are an important signal for associative synaptic plasticity in many neurons, but they often fail to fully invade distal dendrites. In this issue of Neuron, Sjöström and Häusser show that distal propagation failure leads to a spatial gradient of Hebbian plasticity in neocortical pyramidal cells. This gradient can be overcome by cooperative distal synaptic input, leading to fundamentally distinct Hebbian learning rules for distal versus proximal synapses.
3.
We assume that Hebbian learning dynamics (HLD) and spatiotemporal learning dynamics (SLD) are involved in the mechanism of synaptic plasticity in hippocampal neurons. While HLD is driven by pre- and postsynaptic spike timings through the backpropagating action potential, SLD is evoked by presynaptic spike timings alone. Since the backpropagation attenuates as it nears the distal dendrites, we consider, as an extreme case, a neuron model in which HLD exists only at the proximal dendrites and SLD only at the distal dendrites. We examined in computer simulations how the synaptic weights change in response to three types of synaptic inputs. First, in response to a Poisson train with a constant mean frequency, the synaptic weights in HLD and SLD are qualitatively similar. Second, both respond to synchronous input patterns, but SLD responds more rapidly than HLD. Third, HLD responds more rapidly to more frequent inputs, while SLD shows fluctuating synaptic weights. These results suggest an encoding hypothesis: a transient synchronous structure in spatiotemporal input patterns will be encoded into the distal dendrites through SLD, whereas persistent synchrony or firing-rate information will be encoded into the proximal dendrites through HLD.
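A minimal sketch of how such a two-rule neuron might be simulated is given below. The pair-based STDP form of HLD, the presynaptic-coincidence form of SLD, the firing rates, and the time constants are all illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(1)
n_syn, T, dt = 10, 1.0, 0.001              # synapses, duration (s), time step (s)
steps = int(round(T / dt))
tau, A_plus, A_minus = 0.02, 0.01, 0.012   # illustrative STDP constants
eta_sld = 0.01

# Poisson presynaptic spike trains and a toy postsynaptic train
pre = rng.random((n_syn, steps)) < 10 * dt    # ~10 Hz per synapse
post = rng.random(steps) < 20 * dt            # ~20 Hz output

w_hld = np.full(n_syn, 0.5)   # "proximal" weights: pre/post timing (HLD)
w_sld = np.full(n_syn, 0.5)   # "distal" weights: presynaptic timing only (SLD)
pre_trace = np.zeros(n_syn)   # exponential trace of presynaptic spikes
post_trace = 0.0

for t in range(steps):
    pre_trace *= np.exp(-dt / tau)
    post_trace *= np.exp(-dt / tau)
    pre_trace += pre[:, t]
    post_trace += post[t]
    # HLD: pair-based STDP; the postsynaptic spike stands in for the bAP
    w_hld += A_plus * post[t] * pre_trace - A_minus * pre[:, t] * post_trace
    # SLD: potentiation when a synapse's spike coincides with activity
    # at the other synapses (no postsynaptic term)
    coincidence = pre_trace.sum() - pre_trace
    w_sld += eta_sld * pre[:, t] * coincidence

print("HLD weights:", np.round(w_hld, 3))
print("SLD weights:", np.round(w_sld, 3))
```

With these choices, the HLD weights change only when presynaptic spikes are paired with output spikes, whereas the SLD weights change whenever several presynaptic inputs arrive together.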
4.
5.
Background: Recent work on long-term potentiation in brain slices shows that Hebb's rule is not completely synapse-specific, probably due to intersynapse diffusion of calcium or other factors. We previously suggested that such errors in Hebbian learning might be analogous to mutations in evolution. Methods and findings: We examine this proposal quantitatively, extending the classical Oja unsupervised model of learning by a single linear neuron to include Hebbian inspecificity. We introduce an error matrix E, which expresses possible crosstalk between updating at different connections. When there is no inspecificity, this gives the classical result of convergence to the first principal component of the input distribution (PC1). We show the modified algorithm converges to the leading eigenvector of the matrix EC, where C is the input covariance matrix. In the most biologically plausible case when there are no intrinsically privileged connections, E has diagonal elements Q and off-diagonal elements (1-Q)/(n-1), where Q, the quality, is expected to decrease with the number of inputs n and with a synaptic parameter b that reflects synapse density, calcium diffusion, etc. We study the dependence of the learning accuracy on b, n and the amount of input activity or correlation (analytically and computationally). We find that accuracy decreases (learning becomes gradually less useful) as b increases, particularly for intermediate (i.e., biologically realistic) correlation strength, although some useful learning always occurs up to the trivial limit Q=1/n. Conclusions and significance: We discuss the relation of our results to Hebbian unsupervised learning in the brain. When the mechanism lacks specificity, the network fails to learn the expected, and typically most useful, result, especially when the input correlation is weak. Hebbian crosstalk would reflect the very high density of synapses along dendrites, and inevitably degrades learning.
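In this setting the modified Oja update can be written as Δw = ηE(yx − y²w), with E the crosstalk matrix defined above. The sketch below (with assumed values of n, Q, and the learning rate; not the authors' code) checks numerically that the weight vector settles onto the leading eigenvector of EC rather than PC1.

```python
import numpy as np

rng = np.random.default_rng(0)
n, Q, eta = 5, 0.8, 0.005          # inputs, update quality, learning rate (assumed)

# Crosstalk matrix E: diagonal Q, off-diagonal (1 - Q) / (n - 1)
E = np.full((n, n), (1 - Q) / (n - 1))
np.fill_diagonal(E, Q)

# Correlated zero-mean inputs with covariance C
A = rng.normal(size=(n, n))
C = A @ A.T / n
L = np.linalg.cholesky(C)

w = rng.normal(size=n)
w /= np.linalg.norm(w)
for _ in range(100_000):
    x = L @ rng.normal(size=n)          # sample with covariance C
    y = w @ x
    w += eta * E @ (y * x - y * y * w)  # Oja update routed through the crosstalk matrix

# Compare with the leading eigenvector of EC
evals, evecs = np.linalg.eig(E @ C)
v = np.real(evecs[:, np.argmax(np.real(evals))])
v /= np.linalg.norm(v)
print("cosine to leading eigenvector of EC:", abs(w @ v) / np.linalg.norm(w))
```

With Q = 1 (no crosstalk) E is the identity and the same code recovers the classical convergence to PC1.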
6.
A new paradigm of neural network architecture is proposed that works as an associative memory with capabilities of pruning and order-sensitive learning. The network has a composite structure wherein each node of the network is itself a Hopfield network. The Hopfield network employs an order-sensitive learning technique and converges to user-specified stable states without any spurious states. This is based on the geometrical structure of the network and of the energy function. The network is designed so that it allows pruning in binary order as it progressively carries out associative memory retrieval. The capacity of the network is 2^n, where n is the number of basic nodes in the network. The capabilities of the network are demonstrated by experiments in three application areas, namely a Library Database, a Protein Structure Database and Natural Language Understanding.
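For reference, the sketch below shows a standard Hopfield associative memory, the building block used for each node of the proposed composite architecture; the order-sensitive learning and binary-order pruning described in the abstract are not reproduced here, and all sizes are assumptions.

```python
import numpy as np

def train_hopfield(patterns):
    """Hebbian (outer-product) weight matrix for a standard Hopfield network."""
    n = patterns.shape[1]
    W = patterns.T @ patterns / n
    np.fill_diagonal(W, 0.0)      # no self-connections
    return W

def recall(W, state, steps=20):
    """Synchronous updates until the state (hopefully) settles on a stored pattern."""
    for _ in range(steps):
        state = np.sign(W @ state)
        state[state == 0] = 1
    return state

rng = np.random.default_rng(0)
patterns = rng.choice([-1, 1], size=(3, 64))       # three stored patterns
W = train_hopfield(patterns)

probe = patterns[0].copy()
flip = rng.choice(64, size=8, replace=False)       # corrupt 8 of 64 bits
probe[flip] *= -1
print("overlap after recall:", recall(W, probe) @ patterns[0] / 64)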
7.
Torres JJ, Marro J, Garrido PL, Cortes JM, Ramos F, Muñoz MA (2005) Biophysical Chemistry 115(2-3):285-288
We report on both analytical and numerical results concerning stochastic Hopfield-like neural automata exhibiting the following (biologically inspired) features: (1) Neurons and synapses evolve in time as in contact with respective baths at different temperatures; (2) the connectivity between neurons may be tuned from full connection to high random dilution, or to the case of networks with the small-world property and/or scale-free architecture; and (3) there is synaptic kinetics simulating repeated scanning of the stored patterns. Although these features may apparently result in additional disorder, the model exhibits, for a wide range of parameter values, an extraordinary computational performance, and some of the qualitative behaviors observed in natural systems. In particular, we illustrate here very efficient and robust associative memory, and jumping between pattern attractors.
8.
Lightwave has attractive characteristics for signal processing, such as spatial parallelism, temporal rapidity, and a vast frequency bandwidth. In particular, the vast carrier frequency bandwidth promises novel information processing. In this paper, we propose a novel optical logic gate that learns multiple functions at frequencies different from one another, and analyze the frequency-domain multiplexing ability of learning based on a complex-valued Hebbian rule. We evaluate the averaged error-function values in the learning process and the error probabilities of the realized logic functions. We investigate optimal learning parameters as well as the dependence of performance on the number of learning iterations and the number of parallel paths per neuron. The results show a trade-off among learning parameters such as the learning time constant and the learning gain. We also find that when we prepare 10 optical path differences and conduct 200 learning iterations, the error probability drops completely to zero in a three-function multiplexing case. At the same time, the error probability is tolerant of the number of paths: even if the path number is halved, the error probability remains almost zero. These results can be used to determine neural parameters for future optical neural network systems and devices that exploit the vast frequency bandwidth for frequency-domain multiplexing.
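The core update can be illustrated with a generic complex-valued Hebbian (outer-product) rule, Δw = η t x̄, as sketched below; the optical paths, frequency multiplexing, and logic-gate encoding of the paper are not modeled, and all sizes and signals are assumptions.

```python
import numpy as np

# Generic complex-valued Hebbian association: weights accumulate
# eta * target * conj(input); recall is a plain matrix-vector product.
rng = np.random.default_rng(0)
n_in, n_out, eta = 8, 2, 1.0 / 8

def random_phasors(n):
    return np.exp(1j * rng.uniform(0, 2 * np.pi, n))   # unit-amplitude complex signals

pairs = [(random_phasors(n_in), random_phasors(n_out)) for _ in range(3)]

W = np.zeros((n_out, n_in), dtype=complex)
for x, t in pairs:
    W += eta * np.outer(t, np.conj(x))                 # complex Hebbian update

x, t = pairs[0]
y = W @ x
print("phase error (rad):", np.abs(np.angle(y * np.conj(t))))
```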
9.
A confusingly wide variety of temporally asymmetric learning rules exists, related to reinforcement learning and/or to spike-timing-dependent plasticity; many of them look exceedingly similar while displaying strongly different behavior. These rules are often used in control tasks, for example in robotics, where rigorous convergence and numerical stability are required. The goal of this article is to review these rules and compare them, to provide a better overview of their different properties. Two main classes will be discussed: temporal difference (TD) rules and correlation-based (differential Hebbian) rules, together with some transition cases. In general we will focus on neuronal implementations with changeable synaptic weights and a time-continuous representation of activity. In a machine-learning (non-neuronal) context, a solid mathematical theory for TD learning has existed for several years; this can partly be transferred to a neuronal framework. A more complete theory for differential Hebbian rules has emerged only recently. In general, the rules differ in their convergence conditions and their numerical stability, which can lead to very undesirable behavior in applications. For TD, convergence can be enforced with a certain output condition assuring that the δ-error drops on average to zero (output control). Correlation-based rules, on the other hand, converge when one input drops to zero (input control). Temporally asymmetric learning rules treat situations where incoming stimuli follow each other in time; it is therefore necessary to remember the first stimulus in order to relate it to the later-occurring second one. To this end, the two classes of rules use different types of so-called eligibility traces, which again leads to different properties of TD and differential Hebbian learning, as discussed here. Thus this paper, while also presenting several novel mathematical results, is mainly meant to provide a road map through the different neuronally emulated temporally asymmetric learning rules and their behavior, and to give some guidance for possible applications.
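The two rule families can be put side by side in a toy discrete-time simulation, as sketched below. The stimulus shapes, reward placement, filter time constant, and learning rates are illustrative assumptions; the differential Hebbian update follows the generic form dw/dt ∝ (filtered earlier input) × (derivative of the later signal).

```python
import numpy as np

# Two stimuli that follow each other in time: x2 arrives after x1.
T, dt = 200, 0.1
t = np.arange(T) * dt
x1 = np.exp(-((t - 5.0) ** 2) / 0.5)     # earlier stimulus
x2 = np.exp(-((t - 7.0) ** 2) / 0.5)     # later stimulus

# --- TD(lambda) with an eligibility trace on the inputs (value weights w_td) ---
w_td = np.zeros(2)
gamma, alpha, lam = 0.98, 0.05, 0.9
elig = np.zeros(2)
reward = np.zeros(T)
reward[int(round(7.0 / dt))] = 1.0       # reward coincides with the later stimulus
X = np.stack([x1, x2], axis=1)
for k in range(T - 1):
    v, v_next = w_td @ X[k], w_td @ X[k + 1]
    delta = reward[k] + gamma * v_next - v          # TD error (output control)
    elig = gamma * lam * elig + X[k]                # eligibility trace
    w_td += alpha * delta * elig

# --- Differential Hebbian rule: dw/dt ~ (low-passed x1) * d(x2)/dt ---
w_dh, mu, x1_trace = 0.0, 0.05, 0.0
for k in range(T - 1):
    x1_trace += dt * (-x1_trace / 1.0 + x1[k])      # "memory" of the first stimulus
    w_dh += mu * x1_trace * (x2[k + 1] - x2[k])     # correlation with the derivative

print("TD weights:", np.round(w_td, 3), " differential Hebbian weight:", round(w_dh, 3))
```

The sketch illustrates the point made above: both families need a trace of the earlier stimulus, but the TD weights are corrected by an output-driven δ-error, whereas the differential Hebbian weight grows purely from input correlations.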
10.
Szu H (1999) International Journal of Neural Systems 9(3):175-186
A unified Lyapunov function is given for the first time to prove the convergence of artificial neural network (ANN) learning methodologies, both supervised and unsupervised, from the viewpoint of minimization of the Helmholtz free energy at constant temperature. In 1982, Hopfield proved supervised learning by the energy minimization principle. In 1996, Bell & Sejnowski algorithmically demonstrated Independent Component Analysis (ICA), generalizing Principal Component Analysis (PCA), in which the continuing reduction of early-vision redundancy towards "sparse edge maps" is achieved by maximization of the ANN output entropy. We explore the combination of both as a Lyapunov function whose proven convergence yields both learning methodologies. The unification is possible because of the thermodynamic Helmholtz free energy at constant temperature. The blind de-mixing condition is given for more than two objects using two sensor measurements. We design two smart cameras with short-term working memory to achieve better image de-mixing of more than two objects. We consider a channel-communication application in which four images can be efficiently mixed using matrices [AO] and [Al] and sent through two channels.
11.
Dopamine is thought to play a major role in learning. However, while dopamine D1 receptors (D1Rs) in the prefrontal cortex (PFC) have been shown to modulate working memory-related neural activity, their role in the cellular basis of learning is unknown. We recorded activity from multiple electrodes while injecting the D1R antagonist SCH23390 in the lateral PFC as monkeys learned visuomotor associations. Blocking D1Rs impaired learning of novel associations and decreased cognitive flexibility but spared performance of already familiar associations. This suggests a greater role for prefrontal D1Rs in learning new, rather than performing familiar, associations. There was a corresponding greater decrease in neural selectivity and increase in alpha and beta oscillations in local field potentials for novel than for familiar associations. Our results suggest that weak stimulation of D1Rs observed in aging and psychiatric disorders may impair learning and PFC function by reducing neural selectivity and exacerbating neural oscillations associated with inattention and cognitive deficits.
12.
Vaibhav A. Diwadkar, Brad Flaugher, Trevor Jones, László Zalányi, Balázs Ujfalussy, Matcheri S. Keshavan, Péter Érdi (2008) Cognitive Neurodynamics 2(3):207-219
Associative learning is a central building block of human cognition and in large part depends on mechanisms of synaptic plasticity, memory capacity and fronto–hippocampal interactions. A disorder like schizophrenia is thought to be characterized by altered plasticity, and impaired frontal and hippocampal function. Understanding the expression of this dysfunction through appropriate experimental studies, and understanding the processes that may give rise to impaired behavior through biologically plausible computational models will help clarify the nature of these deficits. We present a preliminary computational model designed to capture learning dynamics in healthy control and schizophrenia subjects. Experimental data was collected on a spatial-object paired-associate learning task. The task evinces classic patterns of negatively accelerated learning in both healthy control subjects and patients, with patients demonstrating lower rates of learning than controls. Our rudimentary computational model of the task was based on biologically plausible assumptions, including the separation of dorsal/spatial and ventral/object visual streams, implementation of rules of learning, the explicit parameterization of learning rates (a plausible surrogate for synaptic plasticity), and learning capacity (a plausible surrogate for memory capacity). Reductions in learning dynamics in schizophrenia were well-modeled by reductions in learning rate and learning capacity. The synergy between experimental research and a detailed computational model of performance provides a framework within which to infer plausible biological bases of impaired learning dynamics in schizophrenia.
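The role of the two parameters can be illustrated with a one-line phenomenological surrogate for such negatively accelerated learning curves (not the authors' model); the rate and capacity values below are assumptions chosen only to mimic the qualitative control-versus-patient difference.

```python
import numpy as np

def learning_curve(trials, rate, capacity):
    """Negatively accelerated learning: performance approaches `capacity`
    at a speed set by `rate` (a phenomenological surrogate)."""
    return capacity * (1.0 - np.exp(-rate * trials))

trials = np.arange(1, 9)
controls = learning_curve(trials, rate=0.6, capacity=1.0)     # assumed values
patients = learning_curve(trials, rate=0.35, capacity=0.8)    # reduced rate and capacity
for n, c, p in zip(trials, controls, patients):
    print(f"block {n}: control {c:.2f}  patient {p:.2f}")
```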
13.
P. Teissier, B. Perret, E. Latrille, J. M. Barillere, G. Corrieu (1996) Bioprocess and Biosystems Engineering 14(5):231-235
The second fermentation is one of the most important steps in Champagne production. For this purpose, yeasts are grown on a wine-based medium to adapt their metabolism to ethanol. Several models built with various static and dynamic neural network configurations were investigated. The main objective was to achieve real-time estimation and prediction of the yeast concentration during growth. The model selected, based on recurrent neural networks, was first order with respect to the yeast concentration and to the volume of CO2 released. Temperature and pH were included as model parameters as well. The yeast concentration during growth could thus be estimated with an error lower than 3% (±1.7×10^6 yeasts/ml). From the measurement of the initial yeast population and the temperature, it was possible to predict the final yeast concentration (after 21 hours of growth) from the beginning of the growth, with about ±3×10^6 yeasts/ml accuracy. A predictive control strategy for this process could therefore be investigated.
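The selected model class can be sketched structurally as a first-order recurrent estimator: the next concentration is computed from the current estimate, the CO2 released during the step, and the temperature and pH. In the sketch below the inner growth law is a hand-written placeholder standing in for the trained recurrent network, and every number is an assumption.

```python
# First-order recurrent estimator of yeast concentration (structure only).
def predict_next(x, d_co2, temp, ph, dt=1.0):
    # placeholder growth law standing in for the trained network:
    # rate rises with temperature and CO2 release, drops outside a pH window
    mu = 0.08 * (temp / 18.0) * (1.0 + 0.5 * d_co2) * (1.0 if 2.9 < ph < 3.4 else 0.5)
    return x + dt * mu * x              # first order in the concentration

x, temp, ph = 1.5, 18.0, 3.1            # 10^6 cells/ml; assumed conditions
for hour in range(21):
    d_co2 = 0.02 * hour                 # placeholder CO2-release measurement
    x = predict_next(x, d_co2, temp, ph)
print(f"predicted concentration after 21 h: {x:.2f} x 10^6 cells/ml")
```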
14.
C Heyes (2012) Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences 367(1603):2695-2703
Using cooperation in chimpanzees as a case study, this article argues that research on animal minds needs to steer a course between 'association-blindness' (the failure to consider associative learning as a candidate explanation for complex behaviour) and 'simple-mindedness' (the assumption that associative explanations trump more cognitive hypotheses). Association-blindness is challenged by the evidence that associative learning occurs in a wide range of taxa and functional contexts, and is a major force guiding the development of complex human behaviour. Furthermore, contrary to a common view, association-blindness is not entailed by the rejection of behaviourism. Simple-mindedness is founded on Morgan's canon, a methodological principle recommending 'lower' over 'higher' explanations for animal behaviour. Studies in the history and philosophy of science show that Morgan failed to offer an adequate justification for his canon, and subsequent attempts to justify the canon using evolutionary arguments and appeals to simplicity have not been successful. The weaknesses of association-blindness and simple-mindedness imply that there are no short-cuts to finding out about animal minds. To decide between associative and yet more cognitive explanations for animal behaviour, we have to spell them out in sufficient detail to allow differential predictions, and to test these predictions through observation and experiment.
15.
Recent work in this laboratory has begun to cast light on the biochemical mechanisms by which a cell stores associatively acquired information. This appears to occur principally via two general pathways. The first seems to be a long-term activation of protein kinase C (resulting in long-term alterations in protein phosphorylation) while the second involves changes in RNA synthesis. One striking aspect of these mechanisms is that they seem to be conserved across the species we have studied (rabbit and Hermissenda). In the present paper we review some of the studies that support the role of protein kinase C activation and RNA synthesis in memory formation.
16.
Jack D. Cowan (1990) Bulletin of Mathematical Biology 52(1-2):73-97
The McCulloch-Pitts paper “A Logical Calculus of the Ideas Immanent in Nervous Activity” was published in the Bulletin of Mathematical Biophysics in 1943, a decade before the work of Hodgkin, Huxley, Katz and Eccles. The McCulloch-Pitts neuron is an extremely simplified representation of neural properties, based simply on the existence of a threshold for the activation of an action potential.
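The threshold abstraction can be stated in a few lines of code; the gate parameters below are the standard textbook illustration rather than anything specific to Cowan's paper.

```python
def mcculloch_pitts(inputs, weights, threshold):
    """McCulloch-Pitts unit: fires (1) iff the weighted input sum reaches the threshold."""
    return 1 if sum(w * x for w, x in zip(weights, inputs)) >= threshold else 0

# Threshold logic reproduces Boolean gates (a standard illustration):
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "AND:", mcculloch_pitts((a, b), (1, 1), 2),
              "OR:", mcculloch_pitts((a, b), (1, 1), 1))
```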
17.
In this paper we address the question of how interactions affect the formation and organization of receptive fields in a network composed of interacting neurons with Hebbian-type learning. We show how to partially decouple single cell effects from network effects, and how some phenomenological models can be seen as approximations to these learning networks. We show that the interaction affects the structure of receptive fields. We also demonstrate how the organization of different receptive fields across the cortex is influenced by the interaction term, and that the type of singularities depends on the symmetries of the receptive fields.
18.
An enduring theme for theories of associative learning is the problem of explaining how configural discriminations (ones in which the significance of combinations of cues is inconsistent with the significance of the individual cues themselves) are learned. One approach has been to assume that configurations are the basic representational form on which associative processes operate; another, in contrast, has tried to retain elementalism. We review evidence that human learning is representationally flexible in a way that challenges both configural and elemental theories. We describe research showing that task demands, prior experience, instructions, and stimulus properties all influence whether a particular problem is solved configurally or elementally. Lines of possible future theory development are discussed.
19.
20.
L. Dąbrowski (1993) Biological Cybernetics 68(5):451-454
In this paper a diffusion model of a neuron is treated. A new condition for the applicability of the diffusion model, less restrictive than the usual one, is presented. As a result, a point-process-to-point-process model of a neuron is obtained, which produces an output signal of the same kind as the accepted input signals.
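A generic diffusion-approximation neuron of this point-process-to-point-process type can be sketched as below: the Poisson input spikes are replaced by their drift and Gaussian fluctuation, and output spikes are emitted at threshold crossings. All parameter values are illustrative assumptions, not those of the paper.

```python
import numpy as np

# Diffusion approximation of a leaky integrator driven by Poisson input spikes of
# size `a` at rate `lam`: drift a*lam, noise variance a^2*lam per unit time.
rng = np.random.default_rng(0)
dt, T = 0.0005, 5.0                       # time step and duration (s)
steps = int(round(T / dt))
tau, v_thr, v_reset = 0.02, 1.0, 0.0      # membrane time constant, threshold, reset
lam, a = 800.0, 0.0625                    # input rate (Hz) and jump per input spike

v, out_spikes = 0.0, []
for k in range(steps):
    drift = -v / tau + a * lam
    noise = a * np.sqrt(lam) * np.sqrt(dt) * rng.normal()
    v += drift * dt + noise
    if v >= v_thr:                        # output spike: the outgoing point process
        out_spikes.append(k * dt)
        v = v_reset

print(f"output rate ~ {len(out_spikes) / T:.1f} Hz for {lam:.0f} Hz input")
```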