20 similar documents found (search time: 15 ms)
1.
2.
Learning in spiking neural networks by reinforcement of stochastic synaptic transmission (cited 6 times: 0 self-citations, 6 by others)
It is well-known that chemical synaptic transmission is an unreliable process, but the function of such unreliability remains unclear. Here I consider the hypothesis that the randomness of synaptic transmission is harnessed by the brain for learning, in analogy to the way that genetic mutation is utilized by Darwinian evolution. This is possible if synapses are "hedonistic," responding to a global reward signal by increasing their probabilities of vesicle release or failure, depending on which action immediately preceded reward. Hedonistic synapses learn by computing a stochastic approximation to the gradient of the average reward. They are compatible with synaptic dynamics such as short-term facilitation and depression and with the intricacies of dendritic integration and action potential generation. A network of hedonistic synapses can be trained to perform a desired computation by administering reward appropriately, as illustrated here through numerical simulations of integrate-and-fire model neurons.
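The idea lends itself to a compact sketch. The code below is not Seung's model itself, but a minimal REINFORCE-style reading of a "hedonistic" synapse, assuming a Bernoulli vesicle-release model with a logistic parameterisation and an arbitrary learning rate and reward convention:

```python
import numpy as np

rng = np.random.default_rng(0)

class HedonisticSynapse:
    """Bernoulli vesicle-release model with a REINFORCE-style update.
    The release probability p is parameterised through a logistic function
    of an internal variable q so that p always stays in (0, 1)."""

    def __init__(self, q=0.0, lr=0.1):
        self.q = q              # internal parameter (illustrative)
        self.lr = lr            # learning rate (illustrative)
        self.eligibility = 0.0

    @property
    def p(self):
        return 1.0 / (1.0 + np.exp(-self.q))   # release probability

    def transmit(self):
        """Called on a presynaptic spike; returns True if a vesicle is released."""
        released = rng.random() < self.p
        # d/dq log P(outcome): (1 - p) for a release, -p for a failure
        self.eligibility = (1.0 - self.p) if released else -self.p
        return released

    def reward(self, r):
        """Global reward signal; pushes p toward whichever outcome preceded reward."""
        self.q += self.lr * r * self.eligibility


# Toy usage: reward releases, punish failures, so p should grow toward 1.
syn = HedonisticSynapse()
for _ in range(2000):
    released = syn.transmit()
    syn.reward(+1.0 if released else -1.0)
print(f"learned release probability: {syn.p:.2f}")
```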
3.
The joint influence of recurrent feedback and noise on gain control in a network of globally coupled spiking leaky integrate-and-fire neurons is studied theoretically and numerically. The context of our work is the origin of divisive versus subtractive gain control, as mixtures of these effects are seen in a variety of experimental systems. We focus on changes in the slope of the mean firing frequency-versus-input bias (f–I) curve when the gain control signal to the cells comes from the cells’ output spikes. Feedback spikes are modeled as alpha functions that produce an additive current in the current balance equation. For generality, they occur after a fixed minimum delay. We show that purely divisive gain control, i.e. changes in the slope of the f–I curve, arises naturally with this additive negative or positive feedback, due to the linearizing action of feedback. Negative feedback alone lowers the gain, accounting in particular for gain changes in weakly electric fish upon pharmacological opening of the feedback loop as reported by Bastian (J Neurosci 6:553–562, 1986). When negative feedback is sufficiently strong it further causes oscillatory firing patterns which produce irregularities in the f–I curve. Small positive feedback alone increases the gain, but larger amounts cause abrupt jumps to higher firing frequencies. On the other hand, noise alone in open loop linearizes the f–I curve around threshold, and produces mixtures of divisive and subtractive gain control. With both noise and feedback, the combined gain control schemes produce a primarily divisive gain control shift, indicating the robustness of feedback gain control in stochastic networks. Similar results are found when the “input” parameter is the contrast of a time-varying signal rather than the bias current. Theoretical results are derived relating the slope of the f–I curve to feedback gain and noise strength. Good agreement with simulation results is found for inhibitory and excitatory feedback. Finally, divisive feedback is also found for conductance-based feedback (shunting or excitatory) with and without noise. This article is part of a special issue on Neuronal Dynamics of Sensory Coding.
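A rough numerical sketch of the single-neuron version of this setup is given below, assuming a leaky integrate-and-fire unit with alpha-function self-feedback, Gaussian current noise, and no transmission delay; all parameter values (time constants, per-spike feedback charge, noise strength) are illustrative placeholders rather than the paper's:

```python
import numpy as np

def lif_rate(bias, g_fb=0.0, noise=0.5, T=5.0, dt=1e-4, seed=1):
    """Mean firing rate of a leaky integrate-and-fire neuron whose own spikes
    are fed back through an alpha-function current of total charge g_fb
    (negative = inhibitory).  Units and parameter values are placeholders."""
    rng = np.random.default_rng(seed)
    tau_m, v_th, v_reset = 0.02, 1.0, 0.0   # membrane time constant, threshold, reset
    tau_s = 0.01                            # alpha-function time constant
    v, s, a = 0.0, 0.0, 0.0                 # membrane voltage and synaptic states
    n_spikes = 0
    for _ in range(int(T / dt)):
        dv = (-v + bias + g_fb * a) * dt / tau_m \
             + noise * np.sqrt(dt / tau_m) * rng.standard_normal()
        v += dv
        s += -s * dt / tau_s                # two-stage filter, i.e. an alpha kernel
        a += (s - a) * dt / tau_s
        if v >= v_th:
            v = v_reset
            s += 1.0 / tau_s                # feed the neuron's own spike back
            n_spikes += 1
    return n_spikes / T

# Crude f-I points with and without negative feedback; the closed-loop slope
# is typically reduced, which is the divisive part of the gain change.
for bias in (1.1, 1.5, 2.0):
    print(f"bias {bias}: open loop {lif_rate(bias):5.1f} Hz, "
          f"with feedback {lif_rate(bias, g_fb=-0.02):5.1f} Hz")
```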
4.
Networks of neurons produce diverse patterns of oscillations, arising from the network's global properties, the propensity of individual neurons to oscillate, or a mixture of the two. Here we describe noisy limit cycles and quasi-cycles, two related mechanisms underlying emergent oscillations in neuronal networks whose individual components, stochastic spiking neurons, do not themselves oscillate. Both mechanisms are shown to produce gamma band oscillations at the population level while individual neurons fire at a rate much lower than the population frequency. Spike trains in a network undergoing noisy limit cycles display a preferred period which is not found in the case of quasi-cycles, due to the even faster decay of phase information in quasi-cycles. These oscillations persist in sparsely connected networks, and variation of the network's connectivity results in variation of the oscillation frequency. A network of such neurons behaves as a stochastic perturbation of the deterministic Wilson-Cowan equations, and the network undergoes noisy limit cycles or quasi-cycles depending on whether these have limit cycles or a weakly stable focus. These mechanisms provide a new perspective on the emergence of rhythmic firing in neural networks, showing the coexistence of population-level oscillations with very irregular individual spike trains in a simple and general framework.
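The quasi-cycle mechanism can be illustrated at the population-rate level with a noise-driven Wilson-Cowan pair, as sketched below. Whether the deterministic system sits near a weakly stable focus (quasi-cycles) or on a limit cycle, and whether the spectral peak lands in the gamma band, depends on the coupling constants and time constants, which are placeholder values here:

```python
import numpy as np

def f(x):
    return 1.0 / (1.0 + np.exp(-x))        # population gain function

def wilson_cowan(noise=0.05, T=2.0, dt=1e-4, seed=0):
    """Euler-Maruyama simulation of a noise-driven Wilson-Cowan E-I pair.
    Near a weakly stable focus the noise keeps re-exciting the damped
    oscillation (a quasi-cycle); all parameters are illustrative, so the
    dynamical regime should be checked for any particular choice."""
    rng = np.random.default_rng(seed)
    tau_e, tau_i = 5e-3, 10e-3
    w_ee, w_ei, w_ie, w_ii = 12.0, 10.0, 10.0, 2.0
    e, i = 0.1, 0.1
    trace = []
    for _ in range(int(T / dt)):
        de = (-e + f(w_ee * e - w_ei * i - 2.0)) * dt / tau_e
        di = (-i + f(w_ie * e - w_ii * i - 3.5)) * dt / tau_i
        e += de + noise * np.sqrt(dt) * rng.standard_normal()
        i += di + noise * np.sqrt(dt) * rng.standard_normal()
        trace.append(e)
    return np.asarray(trace)

x = wilson_cowan()
spec = np.abs(np.fft.rfft(x - x.mean())) ** 2
freqs = np.fft.rfftfreq(len(x), d=1e-4)
print(f"power-spectral peak of E activity at ~{freqs[spec.argmax()]:.0f} Hz")
```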
5.
Synchrony-driven recruitment learning addresses the question of how arbitrary concepts, represented by synchronously active ensembles, may be acquired within a randomly connected static graph of neuron-like elements. Recruitment learning in hierarchies is an inherently unstable process. This paper presents conditions on parameters for a feedforward network that ensure stable recruitment hierarchies. The parameter analysis is conducted using a stochastic population approach to model a spiking neural network. The resulting network converges to activate a desired number of units at each stage of the hierarchy. The original recruitment method is modified first by increasing feedforward connection density to ensure sufficient activation, then by incorporating temporally distributed feedforward delays to separate inputs in time, and finally by limiting excess activation via lateral inhibition. The task of activating a desired number of units from a population is performed in a manner similar to a temporal k-winners-take-all network.
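A minimal sketch of the temporal k-winners-take-all selection mentioned at the end, assuming inputs are characterised only by their arrival times and that global inhibition simply cuts off recruitment after k winners; this illustrates the principle, not the paper's population model:

```python
import numpy as np

def temporal_kwta(arrival_times, threshold, k):
    """Select the units that accumulate `threshold` inputs first, stopping
    once `k` units have been recruited (lateral/global inhibition).
    `arrival_times[i]` is the sorted array of feedforward spike-arrival
    times converging on unit i.  Purely illustrative."""
    firing_time = []
    for i, times in enumerate(arrival_times):
        if len(times) >= threshold:
            # the unit fires when its threshold-th input arrives
            firing_time.append((times[threshold - 1], i))
        # units with too few inputs never fire
    firing_time.sort()
    return [i for _, i in firing_time[:k]]   # inhibition blocks later units

rng = np.random.default_rng(3)
# 10 candidate units, each receiving a random number of delayed inputs
arrivals = [np.sort(rng.uniform(0, 5, size=rng.integers(2, 8))) for _ in range(10)]
print("recruited units:", temporal_kwta(arrivals, threshold=3, k=4))
```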
6.
The construction of a Spiking Neural Network (SNN), i.e. the choice of an appropriate topology and the configuration of its internal parameters, represents a great challenge for SNN-based applications. Evolutionary Algorithms (EAs) offer an elegant solution to these challenges, and methods capable of exploring both types of search space simultaneously appear to be the most promising. A variety of such heterogeneous optimization algorithms have emerged recently, in particular in the field of probabilistic optimization. In this paper, a literature review on heterogeneous optimization algorithms is presented and an example of probabilistic optimization of an SNN is discussed in detail. The paper provides an experimental analysis of a novel Heterogeneous Multi-Model Estimation of Distribution Algorithm (hMM-EDA). First, practical guidelines for configuring the method are derived; then the performance of hMM-EDA is compared to state-of-the-art optimization algorithms. The results show that hMM-EDA is a lightweight, fast, and reliable optimization method that requires the configuration of only very few parameters. Its performance on a synthetic heterogeneous benchmark problem is highly competitive and suggests its suitability for the optimization of SNNs.
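For readers unfamiliar with estimation-of-distribution algorithms over mixed search spaces, the toy sketch below shows the general scheme: a Gaussian model for continuous parameters and independent Bernoulli probabilities for binary topology choices. It is a generic EDA, not the hMM-EDA evaluated in the paper, and the benchmark function is made up:

```python
import numpy as np

def toy_heterogeneous_eda(fitness, n_cont, n_bits, pop=60, elite=15, iters=40, seed=0):
    """Minimal EDA over a mixed search space: a diagonal Gaussian models the
    continuous part, independent Bernoulli probabilities the binary part.
    Generic sketch only; minimises `fitness`."""
    rng = np.random.default_rng(seed)
    mu, sigma = np.zeros(n_cont), np.ones(n_cont)
    p = np.full(n_bits, 0.5)
    for _ in range(iters):
        xs = mu + sigma * rng.standard_normal((pop, n_cont))     # sample continuous part
        bs = (rng.random((pop, n_bits)) < p).astype(float)       # sample binary part
        scores = np.array([fitness(x, b) for x, b in zip(xs, bs)])
        best = np.argsort(scores)[:elite]                        # select the elite
        mu, sigma = xs[best].mean(0), xs[best].std(0) + 1e-3      # refit the Gaussian
        p = 0.9 * p + 0.1 * bs[best].mean(0)                     # smooth the Bernoulli model
    return mu, p

# Toy benchmark: the continuous part should approach 1.0, the bits should switch on.
f = lambda x, b: np.sum((x - 1.0) ** 2) + np.sum(1.0 - b)
mu, p = toy_heterogeneous_eda(f, n_cont=3, n_bits=4)
print(np.round(mu, 2), np.round(p, 2))
```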
7.
Determining the physical structure of neuronal circuits that governs neuronal responses is an important goal of brain research. With fast advances in large-scale recording techniques, identifying a neuronal circuit with multiple neurons and stages or layers becomes possible and is in high demand. Although methods for mapping the connection structure of circuits have been greatly developed in recent years, they are mostly limited to simple scenarios of a few neurons analysed in a pairwise fashion, and dissecting dynamical circuits, particularly mapping out a complete functional circuit that converges onto a single neuron, remains a challenging question. Here, we show that a recent method, termed spike-triggered non-negative matrix factorization (STNMF), can address these issues. By simulating different scenarios of spiking neural networks with various connections between neurons and stages, we demonstrate that STNMF is a compelling method for dissecting functional connections within a circuit. Using spiking activity recorded at neurons of the output layer, STNMF can recover a complete circuit consisting of all cascaded computational components of the presynaptic neurons, as well as their spiking activities. For simulated simple and complex cells of the primary visual cortex, STNMF allows us to dissect the pathway of visual computation. Taken together, these results suggest that STNMF could provide a useful approach for investigating neuronal systems by leveraging recorded functional neuronal activity.
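The core STNMF step, factorising the spike-triggered stimulus ensemble with non-negative matrix factorization, can be sketched on a synthetic subunit circuit as below. The stimulus statistics, the spiking model, and the use of scikit-learn's NMF are assumptions of this illustration, and how cleanly the modules recover the subunits depends on those choices:

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)

# Synthetic "circuit": an output cell pools 3 rectified subunits,
# each sensitive to a different stretch of a 30-pixel stimulus.
n_pix, n_frames = 30, 20000
subunit_filters = np.zeros((3, n_pix))
for k in range(3):
    subunit_filters[k, 10 * k:10 * k + 10] = 1.0

stimulus = rng.standard_normal((n_frames, n_pix))
drive = np.maximum(stimulus @ subunit_filters.T, 0).sum(axis=1)   # rectified pooling
spikes = rng.random(n_frames) < 0.1 * drive / drive.max()         # Bernoulli spiking

# Spike-triggered ensemble, shifted to be non-negative for NMF
ste = stimulus[spikes]
ste -= ste.min()

# NMF on the spike-triggered ensemble: the components (rows of H) are the
# candidate presynaptic modules, ideally matching the subunit filters.
model = NMF(n_components=3, init="nndsvda", max_iter=500, random_state=0)
W = model.fit_transform(ste)
modules = model.components_
print("number of triggered stimuli:", ste.shape[0])
print("recovered module shape:", modules.shape)   # (3, 30)
```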
8.
Borisyuk R. Bio Systems, 2002, 67(1-3): 3-16
We study the dynamics of activity in neural networks of enhanced integrate-and-fire elements (with random noise, refractory periods, signal propagation delay, decay of postsynaptic potential, etc.). We consider networks composed of two interacting populations of excitatory and inhibitory neurons with all-to-all or random sparse connections. Computer simulations show that the regime of regular oscillations is very stable over a broad range of parameter values. In particular, oscillations are possible even in the case of very sparse, randomly distributed inhibitory connections and high background activity. We describe two scenarios for the onset of oscillations, similar to the Andronov-Hopf and saddle-node-on-limit-cycle bifurcations of dynamical systems. The role of oscillatory dynamics in information encoding and processing is discussed.
9.
The numerical simulation of spiking neural networks requires particular attention. On the one hand, time-stepping methods are generic, but they are prone to numerical errors and need specific treatments to deal with the discontinuities of integrate-and-fire models. On the other hand, event-driven methods are more precise, but they are restricted to a limited class of neuron models. We present here a voltage-stepping scheme that combines the advantages of these two approaches and consists of a discretization of the voltage state-space. The numerical simulation is reduced to a local event-driven method that induces an implicit, activity-dependent time discretization (time steps automatically increase when the neuron is varying slowly). We show analytically that such a scheme leads to a high-order algorithm, so it accurately approximates the neuronal dynamics. The voltage-stepping method is generic and can be used to simulate any kind of neuron model. We illustrate it on nonlinear integrate-and-fire models and show that it outperforms time-stepping schemes of Runge-Kutta type in terms of simulation time and accuracy.
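A first-order caricature of the voltage-stepping idea, applied to a quadratic integrate-and-fire model: the state advances by a fixed voltage increment and the matching time increment is computed from the current slope, so temporal resolution automatically sharpens where the voltage moves fast. The published scheme is higher order and more general; the parameters below are arbitrary:

```python
def qif_voltage_stepping(I, dv=0.01, v_reset=-0.2, v_spike=2.0, t_max=2.0):
    """First-order voltage stepping on a quadratic integrate-and-fire neuron
        dv/dt = v**2 + I.
    The voltage advances by +/- dv and the corresponding time increment
    dt = dv / |dv/dt| is accumulated, so the effective time step shrinks
    automatically as the neuron approaches its spike.  Simplified sketch."""
    v, t, spikes = v_reset, 0.0, []
    while t < t_max:
        dvdt = v * v + I
        if abs(dvdt) < 1e-9:            # (quasi-)fixed point: no spike will occur
            break
        step = dv if dvdt > 0 else -dv
        t += step / dvdt                # local, activity-dependent time step
        v += step
        if v >= v_spike:
            spikes.append(t)
            v = v_reset
    return spikes

print(qif_voltage_stepping(I=4.0))
```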
10.
Massive synaptic pruning following over-growth is a general feature of mammalian brain maturation. This article studies the synaptic pruning that occurs in large networks of simulated spiking neurons in the absence of specific input patterns of activity. The evolution of connections between neurons was governed by an original bioinspired spike-timing-dependent synaptic plasticity (STDP) modification rule that included a slow decay term. The network reached a steady state with a bimodal distribution of synaptic weights, which were either incremented to the maximum value or decremented to the lowest value. After 1×10^6 time steps, the final number of synapses that remained active was below 10% of the number of initially active synapses, independently of network size. The synaptic modification rule did not introduce spurious biases in the geometrical distribution of the remaining active projections. The results show that, under certain conditions, the model is capable of generating spontaneously emergent cell assemblies.
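A minimal pair-based STDP rule with an added slow decay term can be sketched as follows. With uncorrelated Poisson pre- and postsynaptic spikes, as assumed here (unlike the recurrent network of the paper), depression and decay dominate and most weights drift to the lower bound, i.e. become prunable; all rates and amplitudes are placeholders:

```python
import numpy as np

rng = np.random.default_rng(0)

n_syn, T, dt = 200, 200.0, 1e-3
tau, a_plus, a_minus = 0.02, 0.01, 0.012
decay = 1e-4                     # slow decay term added to the classic pair rule
w = rng.uniform(0.3, 0.7, n_syn)
x_pre = np.zeros(n_syn)          # presynaptic eligibility traces
x_post = 0.0                     # postsynaptic eligibility trace

for _ in range(int(T / dt)):
    pre = rng.random(n_syn) < 5 * dt             # 5 Hz Poisson presynaptic spikes
    post = rng.random() < 10 * dt                # 10 Hz Poisson postsynaptic spikes
    x_pre += -x_pre * dt / tau + pre
    x_post += -x_post * dt / tau + post
    # potentiation on post spikes, depression on pre spikes, plus slow decay
    w += post * a_plus * x_pre - pre * a_minus * x_post - decay * dt * w
    w = np.clip(w, 0.0, 1.0)

print(f"fraction of synapses decayed to near zero: {np.mean(w < 0.05):.0%}")
```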
11.
12.
Agnes Korcsak-Gorzo, Michael G. Müller, Andreas Baumbach, Luziwei Leng, Oliver J. Breitwieser, Sacha J. van Albada, Walter Senn, Karlheinz Meier, Robert Legenstein, Mihai A. Petrovici. PLoS Computational Biology, 2022, 18(3)
Being permanently confronted with an uncertain world, brains have faced evolutionary pressure to represent this uncertainty in order to respond appropriately. Often, this requires visiting multiple interpretations of the available information or multiple solutions to an encountered problem. This gives rise to the so-called mixing problem: since all of these “valid” states represent powerful attractors, but can be very dissimilar from one another, switching between such states can be difficult. We propose that cortical oscillations can be effectively used to overcome this challenge. By acting as an effective temperature, background spiking activity modulates exploration. Rhythmic changes induced by cortical oscillations can then be interpreted as a form of simulated tempering. We provide a rigorous mathematical discussion of this link and study some of its phenomenological implications in computer simulations. This identifies a new computational role of cortical oscillations and connects them to various phenomena in the brain, such as sampling-based probabilistic inference, memory replay, multisensory cue combination, and place cell flickering.
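The mixing benefit of an oscillating effective temperature can be illustrated with ordinary Gibbs sampling on a two-unit Boltzmann distribution with two deep attractors. This is a conceptual sketch of simulated tempering, not the spiking-network implementation; the coupling, temperature range, and oscillation period are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two strongly coupled binary units create two deep attractors, (0,0) and (1,1).
W = np.array([[0.0, 10.0], [10.0, 0.0]])
b = np.array([-5.0, -5.0])

def count_mode_switches(n_steps, temperature):
    """Gibbs-sample the two-unit Boltzmann distribution and count how often
    the chain switches between the attractors.  `temperature` is a scalar or
    a per-sweep array (the rhythmically modulated, 'tempered' case)."""
    temp = np.broadcast_to(temperature, (n_steps,))
    s = np.zeros(2)
    last_mode, switches = None, 0
    for t in range(n_steps):
        for i in (0, 1):
            u = (W[i] @ s + b[i]) / temp[t]
            s[i] = float(rng.random() < 1.0 / (1.0 + np.exp(-u)))
        mode = (int(s[0]), int(s[1]))
        if mode in ((0, 0), (1, 1)):
            if last_mode is not None and mode != last_mode:
                switches += 1
            last_mode = mode
    return switches

n = 20000
t_osc = 2.0 + 1.5 * np.sin(2 * np.pi * np.arange(n) / 500)   # oscillating "background"
print("mode switches, constant temperature:   ", count_mode_switches(n, 1.0))
print("mode switches, oscillating temperature:", count_mode_switches(n, t_osc))
```

The oscillating-temperature chain typically switches between the two attractors far more often, which is the tempering effect the abstract attributes to cortical oscillations.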
13.
Currently, fuzzy controllers are the most popular choice for hardware implementation of complex control surfaces because they are easy to design. Neural controllers are more complex and harder to train, but provide an outstanding control surface with much less error than that of a fuzzy controller. There are also some problems that have to be solved before such networks can be implemented on VLSI chips. First, an approximation function needs to be developed, because CMOS neural networks have an activation function different from any function used in neural network software. Next, this function has to be used to train the network. Finally, the last problem for VLSI designers is the quantization effect caused by the discrete values of the channel length (L) and width (W) of MOS transistor geometries. Two neural networks were designed in 1.5 μm technology. Using adequate approximation functions solved the activation-function problem. With this approach, the trained networks were characterized by very small errors. Unfortunately, when the weights were quantized, the errors increased by an order of magnitude. However, even though the errors were enlarged, the results obtained from the neural network hardware implementations were superior to those obtained with the fuzzy-system approach.
14.
15.
16.
We present a simulation environment called SPIKELAB, which incorporates a simulator that is able to simulate large networks of spiking neurons using a distributed event-driven simulation. In contrast to a time-driven simulation, which is usually used to simulate spiking neural networks, our simulation needs fewer computational resources because of the low average activity of typical networks. The paper addresses the speed-up of an event-driven versus a time-driven simulation, and how large networks can be simulated by distributing the simulation across already available computing resources. It also presents a solution for the integration of digital or analogue neuromorphic circuits into the simulation process.
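The computational advantage of event-driven simulation comes from the fact that, between incoming spikes, simple neuron models evolve in closed form. The sketch below shows a single-threaded, single-neuron version of this idea for a leaky integrate-and-fire neuron driven by delta-current inputs; SPIKELAB itself is a distributed engine, so this illustrates only the underlying principle, with made-up inputs and parameters:

```python
import heapq
import math

def event_driven_lif(input_spikes, weights, tau=0.02, v_th=1.0, delay=0.002):
    """Minimal event-driven simulation of one leaky integrate-and-fire neuron
    receiving delta-current inputs.  Between events the membrane decays in
    closed form, v(t) = v0 * exp(-dt/tau), so no time stepping is needed.
    `input_spikes` maps a source id to a list of spike times."""
    events = []                                   # priority queue of (arrival_time, weight)
    for src, times in input_spikes.items():
        for t in times:
            heapq.heappush(events, (t + delay, weights[src]))
    v, t_last, out = 0.0, 0.0, []
    while events:
        t, w = heapq.heappop(events)
        v *= math.exp(-(t - t_last) / tau)        # exact decay since the last event
        v += w
        t_last = t
        if v >= v_th:                             # threshold can only be crossed at events
            out.append(t)
            v = 0.0
    return out

spikes = {"a": [0.010, 0.012, 0.013], "b": [0.030, 0.050, 0.051, 0.052]}
print(event_driven_lif(spikes, weights={"a": 0.4, "b": 0.4}))
```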
17.
An important open problem of computational neuroscience is the generic organization of computations in networks of neurons in the brain. We show here through rigorous theoretical analysis that inherent stochastic features of spiking neurons, in combination with simple nonlinear computational operations in specific network motifs and dendritic arbors, enable networks of spiking neurons to carry out probabilistic inference through sampling in general graphical models. In particular, this enables them to carry out probabilistic inference in Bayesian networks with converging arrows ("explaining away") and with undirected loops, which occur in many real-world tasks. Ubiquitous stochastic features of networks of spiking neurons, such as trial-to-trial variability and spontaneous activity, are necessary ingredients of the underlying computational organization. We demonstrate through computer simulations that this approach can be scaled up to neural emulations of probabilistic inference in fairly large graphical models, yielding some of the most complex computations that have been carried out so far in networks of spiking neurons.
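The "explaining away" effect that such networks are claimed to handle can be stated concretely with a tiny burglary/earthquake/alarm network. The sketch below computes the target posteriors by brute-force enumeration; the probabilities are illustrative and there is no neural implementation here:

```python
import itertools

# Tiny Bayesian network with converging arrows:  Burglary -> Alarm <- Earthquake.
# All probabilities are illustrative, not taken from the paper.
p_b, p_e = 0.01, 0.02
p_alarm = {(0, 0): 0.001, (1, 0): 0.94, (0, 1): 0.29, (1, 1): 0.95}

def p_burglary_given_alarm(earthquake=None):
    """P(Burglary = 1 | Alarm = 1 [, Earthquake = earthquake]) by enumeration."""
    num = den = 0.0
    for b, e in itertools.product((0, 1), repeat=2):
        if earthquake is not None and e != earthquake:
            continue
        joint = (p_b if b else 1 - p_b) * (p_e if e else 1 - p_e) * p_alarm[(b, e)]
        den += joint
        num += joint * b
    return num / den

# Observing the alarm makes a burglary likely; additionally observing an
# earthquake "explains away" the alarm and the burglary posterior collapses.
print(f"P(B | alarm)             = {p_burglary_given_alarm():.3f}")
print(f"P(B | alarm, earthquake) = {p_burglary_given_alarm(earthquake=1):.3f}")
```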
18.
This paper demonstrates how knowledge can be extracted from evolving spiking neural networks with rank order population coding. Knowledge discovery is a very important feature of intelligent systems. Yet, a disproportionately small amount of research is centered on the issue of knowledge extraction from spiking neural networks, which are considered to be the third generation of artificial neural networks. The lack of knowledge-representation compatibility is becoming a major detriment to end users of these networks. We show that high-level knowledge can be obtained from evolving spiking neural networks. More specifically, we propose a method for fuzzy rule extraction from an evolving spiking network with rank order population coding. The proposed method was used for knowledge discovery on two benchmark taste recognition problems, where the knowledge learnt by an evolving spiking neural network was extracted in the form of zero-order Takagi-Sugeno fuzzy IF-THEN rules.
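Zero-order Takagi-Sugeno rules of the kind extracted here have a simple evaluation semantics: each rule fires with the product of its antecedent memberships and contributes a constant consequent to a weighted average. The sketch below evaluates two made-up rules; it illustrates the rule format only, not the extraction procedure:

```python
import numpy as np

def gaussian_mf(x, center, sigma):
    """Gaussian fuzzy membership function."""
    return np.exp(-0.5 * ((x - center) / sigma) ** 2)

def sugeno_zero_order(x, rules):
    """Evaluate zero-order Takagi-Sugeno rules of the form
       IF x1 is A1 AND x2 is A2 ... THEN y = c.
    Firing strength = product of memberships; output = weighted average of
    the constant consequents.  The rules are illustrative, not extracted."""
    strengths, consequents = [], []
    for antecedents, c in rules:
        w = np.prod([gaussian_mf(xi, m, s) for xi, (m, s) in zip(x, antecedents)])
        strengths.append(w)
        consequents.append(c)
    strengths = np.asarray(strengths)
    return float(np.dot(strengths, consequents) / strengths.sum())

# two toy rules over two input features
rules = [
    ([(0.2, 0.1), (0.8, 0.1)], 0.0),   # IF x1 ~ 0.2 AND x2 ~ 0.8 THEN class 0
    ([(0.7, 0.1), (0.3, 0.1)], 1.0),   # IF x1 ~ 0.7 AND x2 ~ 0.3 THEN class 1
]
print(sugeno_zero_order([0.65, 0.35], rules))   # close to 1.0
```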
19.
With the various simulators for spiking neural networks developed in recent years, a variety of numerical solution methods for the underlying differential equations are available. In this article, we introduce an approach to systematically assess the accuracy of these methods. In contrast to previous investigations, our approach focuses on a completely deterministic comparison and uses an analytically solved model as a reference. This enables the identification of typical sources of numerical inaccuracy in state-of-the-art simulation methods. In particular, our approach allows us to separate the error of the numerical integration from the timing error of spike detection and propagation, the latter being prominent in simulations with a fixed time step. To verify the correctness of the testing procedure, we relate the numerical deviations to theoretical predictions for the employed numerical methods. Finally, we give an example of the influence of simulation artefacts on network behaviour and spike-timing-dependent plasticity (STDP), underlining the importance of spike-time accuracy for the simulation of STDP.
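The flavour of such a deterministic accuracy test can be sketched for the simplest case: a leaky integrate-and-fire neuron with constant suprathreshold drive, whose first spike time is known analytically, compared against a fixed-step forward-Euler simulation that also reports spikes on the grid. Parameter values below are arbitrary:

```python
import math

tau, r_i, v_th = 0.02, 1.5, 1.0          # membrane time constant, drive R*I, threshold

# Exact first-passage time of the LIF membrane  tau*dv/dt = -v + R*I  from v = 0
t_exact = tau * math.log(r_i / (r_i - v_th))

def euler_spike_time(dt):
    """Fixed-step forward Euler with the spike reported on the grid, so the
    result mixes integration error and spike-timing (grid) error."""
    v, t = 0.0, 0.0
    while v < v_th:
        v += (-v + r_i) * dt / tau
        t += dt
    return t

for dt in (1e-3, 1e-4, 1e-5):
    err = abs(euler_spike_time(dt) - t_exact)
    print(f"dt = {dt:.0e} s  spike-time error = {err:.2e} s")
```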
20.
Tosh CR, Ruxton GD. Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences, 2007, 362(1479): 455-460
Artificial neural networks are becoming increasingly popular as predictive statistical tools in ecosystem ecology and as models of signal processing in behavioural and evolutionary ecology. We demonstrate here that a commonly used network in ecology, the three-layer feed-forward network trained with the backpropagation algorithm, can be extremely sensitive to the stochastic variation in training data that results from random sampling of the same underlying statistical distribution, with networks converging to several distinct predictive states. Using a random walk procedure to sample error-weight space, and Sammon dimensional reduction of weight arrays, we demonstrate that these different predictive states are not artefacts of local minima but lie at the base of major error troughs in the error-weight surface. We further demonstrate that various gross weight compositions can produce the same predictive state, suggesting the analogy of weight space as a 'patchwork' of multiple predictive states. Our results argue for increased inclusion of stochastic training replication and analysis in ecological and behavioural applications of artificial neural networks.