Similar Articles
1.
Principal Component Analysis (PCA) and Principal Subspace Analysis (PSA) are classic techniques in statistical data analysis, feature extraction and data compression. Given a set of multivariate measurements, PCA and PSA provide a smaller set of "basis vectors" with less redundancy, and a subspace spanned by them, respectively. Artificial neurons and neural networks have been shown to perform PSA and PCA when gradient ascent (descent) learning rules are used, which is related to the constrained maximization (minimization) of statistical objective functions. Due to their low complexity, such algorithms and their implementation in neural networks are potentially useful for tracking slow changes in the correlations of the input data or for updating eigenvectors with new samples. In this paper we propose a PCA learning algorithm that is fully homogeneous with respect to neurons. The algorithm is obtained by modifying one of the best-known PSA learning algorithms, the Subspace Learning Algorithm (SLA). The modification is based on the Time-Oriented Hierarchical Method (TOHM), which uses two distinct time scales. On the faster time scale, the PSA algorithm governs the behavior of all output neurons. On the slower scale, the output neurons compete to fulfill their "own interests": the basis vectors in the principal subspace are rotated toward the principal eigenvectors. The paper closes with a brief analysis of how (and why) the time-oriented hierarchical method can turn any existing neural-network PSA method into a PCA method.
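As context, a minimal NumPy sketch of the baseline SLA update (Oja's subspace rule) that the paper modifies; the TOHM two-time-scale rotation itself is not reproduced, and the learning rate and toy data are illustrative.

```python
import numpy as np

def sla_step(W, x, lr=0.01):
    """One update of the Subspace Learning Algorithm (Oja's subspace rule).

    W : (d, m) weight matrix whose columns span the estimated principal
        subspace; x : (d,) input sample.
    The rule dW = lr * (x y^T - W y y^T), with y = W^T x, converges to an
    orthonormal basis of the m-dimensional principal subspace (PSA, not PCA:
    the basis is not rotated onto the individual eigenvectors).
    """
    y = W.T @ x                                   # outputs of the m neurons
    W += lr * (np.outer(x, y) - W @ np.outer(y, y))
    return W

# toy usage: track the 2-D principal subspace of correlated Gaussian data
rng = np.random.default_rng(0)
C = np.array([[3.0, 1.0, 0.0], [1.0, 2.0, 0.0], [0.0, 0.0, 0.5]])
X = rng.multivariate_normal(np.zeros(3), C, size=5000)
W = rng.standard_normal((3, 2)) * 0.1
for x in X:
    W = sla_step(W, x)
```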

2.
The autoencoder algorithm is a simple but powerful unsupervised method for training neural networks. Autoencoder networks can learn sparse distributed codes similar to those seen in cortical sensory areas such as visual area V1, but they can also be stacked to learn increasingly abstract representations. Several computational neuroscience models of sensory areas, including Olshausen & Field’s Sparse Coding algorithm, can be seen as autoencoder variants, and autoencoders have seen extensive use in the machine learning community. Despite their power and versatility, autoencoders have been difficult to implement in a biologically realistic fashion. The challenges include their need to calculate differences between two neuronal activities and their requirement for learning rules which lead to identical changes at feedforward and feedback connections. Here, we study a biologically realistic network of integrate-and-fire neurons with anatomical connectivity and synaptic plasticity that closely matches that observed in cortical sensory areas. Our choice of synaptic plasticity rules is inspired by recent experimental and theoretical results suggesting that learning at feedback connections may have a different form from learning at feedforward connections, and our results depend critically on this novel choice of plasticity rules. Specifically, we propose that plasticity rules at feedforward versus feedback connections are temporally opposed versions of spike-timing dependent plasticity (STDP), leading to a symmetric combined rule we call Mirrored STDP (mSTDP). We show that with mSTDP, our network follows a learning rule that approximately minimizes an autoencoder loss function. When trained with whitened natural image patches, the learned synaptic weights resemble the receptive fields seen in V1. Our results use realistic synaptic plasticity rules to show that the powerful autoencoder learning algorithm could be within the reach of real biological networks.
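The mSTDP network itself is spike-based and not reproduced here; the sketch below shows only the tied-weight linear autoencoder loss that, per the abstract, the network approximately minimizes. The function name and learning rate are illustrative.

```python
import numpy as np

def autoencoder_grad_step(W, x, lr=0.01):
    """Gradient step on the tied-weight linear autoencoder loss
    L = 0.5 * ||x - W.T @ (W @ x)||^2  (a simplification of the objective
    the paper's spiking network approximately minimizes).

    W : (h, d) weights shared by feedforward and feedback paths; x : (d,).
    """
    h = W @ x                  # feedforward "hidden" activity
    x_hat = W.T @ h            # feedback reconstruction
    e = x - x_hat              # reconstruction error
    # dL/dW = -(h e^T + (W e) x^T); weight tying makes the update
    # identical at the feedforward and feedback connections.
    W += lr * (np.outer(h, e) + np.outer(W @ e, x))
    return W
```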

3.
We derive a linear neural network model of the chemotaxis control circuit in the nematode Caenorhabditis elegans and demonstrate that this model is capable of producing nematode-like chemotaxis. By expanding the analytic solution for the network output in time-derivatives of the network input, we extract simple computational rules that reveal how the model network controls chemotaxis. Based on these rules we find that optimized linear networks typically control chemotaxis by computing the first time-derivative of the chemical concentration and modulating the body turning rate in response to this derivative. We argue that this is consistent with behavioral studies and a plausible mechanism for at least one component of chemotaxis in real nematodes.
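A toy illustration of the extracted rule, with the Gaussian concentration field and all parameter values chosen purely for illustration: the agent's turning rate drops when the sensed concentration is rising, which biases its random walk up the gradient.

```python
import numpy as np

def simulate_chemotaxis(steps=2000, dt=0.1, speed=0.1, base_turn=0.5, gain=40.0):
    """Toy worm steered by the rule the paper extracts: turn less when
    dC/dt > 0 (moving up-gradient), more when dC/dt < 0. Gaussian
    concentration peak at the origin; all parameters are illustrative."""
    rng = np.random.default_rng(1)
    pos, heading = np.array([5.0, 5.0]), 0.0
    conc = lambda p: np.exp(-(p @ p) / 50.0)
    c_prev = conc(pos)
    for _ in range(steps):
        pos += speed * dt * np.array([np.cos(heading), np.sin(heading)])
        c = conc(pos)
        dc_dt = (c - c_prev) / dt
        turn_rate = max(base_turn - gain * dc_dt, 0.0)   # derivative modulates turning
        heading += rng.normal(0.0, np.sqrt(turn_rate * dt))
        c_prev = c
    return pos  # should end near the concentration peak at the origin

print(simulate_chemotaxis())
```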

4.
This article highlights specific features of biological neurons and their dendritic trees, whose adoption may help advance artificial neural networks used in various machine learning applications. Advancements could take the form of increased computational capabilities and/or reduced power consumption. Proposed features include dendritic anatomy, dendritic nonlinearities, and compartmentalized plasticity rules, all of which shape learning and information processing in biological networks. We discuss the computational benefits provided by these features in biological neurons and suggest ways to adopt them in artificial neurons in order to exploit the respective benefits in machine learning.
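The article's proposals are qualitative; as one common formalization of dendritic nonlinearities (an illustration, not taken from the article), a neuron can be modeled as a two-layer "sum of nonlinear subunits" unit:

```python
import numpy as np

def dendritic_neuron(x, subunit_w, soma_w, f=np.tanh):
    """Two-layer abstraction of a dendritic tree: each dendritic subunit
    applies its own nonlinearity f to its local weighted input, and the
    soma combines the subunit outputs. The names and the choice of tanh
    are assumptions for illustration only.

    x         : (n_subunits, n_syn) inputs grouped by dendritic branch
    subunit_w : (n_subunits, n_syn) synaptic weights
    soma_w    : (n_subunits,) branch-to-soma coupling weights
    """
    branch_out = f(np.sum(subunit_w * x, axis=1))  # local dendritic nonlinearity
    return f(soma_w @ branch_out)                  # somatic output
```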

5.
We propose a working hypothesis, supported by numerical simulations, that brain networks evolve according to the principle of maximizing their internal information flow capacity. We find that the synchronous behavior and information flow capacity of the evolved networks reproduce well the behaviors observed in the brain dynamical networks of Caenorhabditis elegans and humans, modeled as networks of Hindmarsh-Rose neurons with graphs given by these brain networks. We make a strong case for our hypothesis by showing that the neural networks with the closest graph distance to the brain networks of Caenorhabditis elegans and humans are the Hindmarsh-Rose networks evolved with coupling strengths that maximize information flow capacity. Surprisingly, we find that global neural synchronization levels decrease during brain evolution, reflecting an underlying globally non-Hebbian evolution process that is driven by non-Hebbian learning behaviors in some clusters and by Hebbian-like learning rules in clusters whose neurons increase their synchronization.
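The node dynamics referenced are the standard Hindmarsh-Rose equations; a minimal Euler-integration sketch with textbook parameter values follows (the paper's network coupling and evolutionary procedure are omitted):

```python
import numpy as np

def hindmarsh_rose_step(x, y, z, I=3.25, dt=0.01,
                        a=1.0, b=3.0, c=1.0, d=5.0, r=0.005, s=4.0, x_rest=-1.6):
    """Euler step of the Hindmarsh-Rose neuron:
        dx/dt = y - a*x^3 + b*x^2 - z + I
        dy/dt = c - d*x^2 - y
        dz/dt = r*(s*(x - x_rest) - z)
    Standard bursting-regime parameters; coupling terms would be added to dx/dt.
    """
    dx = y - a * x**3 + b * x**2 - z + I
    dy = c - d * x**2 - y
    dz = r * (s * (x - x_rest) - z)
    return x + dt * dx, y + dt * dy, z + dt * dz

x, y, z = -1.6, 0.0, 0.0
trace = []
for _ in range(100000):
    x, y, z = hindmarsh_rose_step(x, y, z)
    trace.append(x)  # bursting membrane-potential-like time series
```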

6.
Two observations about the cortex have puzzled neuroscientists for a long time. First, neural responses are highly variable. Second, the level of excitation and inhibition received by each neuron is tightly balanced at all times. Here, we demonstrate that both properties are necessary consequences of neural networks that represent information efficiently in their spikes. We illustrate this insight with spiking networks that represent dynamical variables. Our approach is based on two assumptions: We assume that information about dynamical variables can be read out linearly from neural spike trains, and we assume that neurons only fire a spike if that improves the representation of the dynamical variables. Based on these assumptions, we derive a network of leaky integrate-and-fire neurons that is able to implement arbitrary linear dynamical systems. We show that the membrane voltage of the neurons is equivalent to a prediction error about a common population-level signal. Among other things, our approach allows us to construct an integrator network of spiking neurons that is robust against many perturbations. Most importantly, neural variability in our networks cannot be equated to noise. Despite exhibiting the same single unit properties as widely used population code models (e.g. tuning curves, Poisson distributed spike trains), balanced networks are orders of magnitude more reliable. Our approach suggests that spikes do matter when considering how the brain computes, and that the reliability of cortical representations could have been strongly underestimated.
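A simplified sketch of the spike-coding principle described, for a static tracked signal only (the paper's full construction for arbitrary linear dynamical systems is not reproduced): each neuron's voltage is the readout prediction error projected onto its decoding weights, and it fires only when a spike would reduce that error.

```python
import numpy as np

def spike_coding_network(x_t, D, dt=1e-3, lam=10.0):
    """Greedy spike-coding sketch: the readout is x_hat = D @ r with leaky
    rates r, and neuron i fires iff spiking reduces ||x - x_hat||^2, i.e.
    when V_i = D[:,i] @ (x - x_hat) crosses T_i = ||D[:,i]||^2 / 2.

    x_t : iterable of (d,) signal samples; D : (d, n) decoding weights.
    """
    n = D.shape[1]
    r = np.zeros(n)
    T = 0.5 * np.sum(D**2, axis=0)          # thresholds from decoder norms
    spikes = []
    for x in x_t:
        x_hat = D @ r
        V = D.T @ (x - x_hat)               # voltage = projected prediction error
        i = int(np.argmax(V - T))
        if V[i] > T[i]:
            r[i] += 1.0                     # instantaneous spike effect on readout
            spikes.append(i)
        else:
            spikes.append(None)
        r *= np.exp(-lam * dt)              # leak of the readout rates
    return spikes
```

Because the threshold equals half the squared norm of a neuron's decoding column, a spike fires exactly when it lowers the readout error, which is what ties the voltage to a prediction error in the abstract's sense.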

7.
The properties that arise from the location of neurons, synapses, and possibly even synaptic channels in neural networks are still unknown. Our preliminary results suggest that not only the interconnections but also the relative positions of the different elements in the network matter for learning in the cerebellar cortex. We have used neural field equations to investigate the mechanisms of learning in this hierarchical neural network. The numerical solution of these equations reveals two important properties: (i) the hierarchical structure of the network has the expected effect on learning, because the flow of information at the neuronal level is controlled by the heterosynaptic effect through the synaptic density-connectivity function, i.e. the action-potential field variable is controlled by the synaptic-efficacy field variable at different points of the neuron; (ii) the geometry of the system entails different propagation velocities along different fibers, i.e. different delays between cells, which has a stabilizing effect on the dynamics, allowing the Purkinje output to reach a given value. The proposed field model should be useful for studying the spatial properties of hierarchical biological systems.
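The abstract does not give its field equations; a generic delayed neural field equation of the kind it refers to, with a propagation velocity v producing the distance-dependent delays credited with stabilizing the dynamics, has the form

```latex
\frac{\partial u(x,t)}{\partial t} = -u(x,t)
  + \int_{\Omega} w(x,y)\, f\!\Big(u\big(y,\; t - \tfrac{|x-y|}{v}\big)\Big)\, dy
  + I(x,t)
```

where u is the activity field, w the synaptic connectivity kernel, f the firing-rate nonlinearity and I the external input; this is a standard Amari-type form assumed here for illustration, not the paper's exact equations.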

8.
The organization of computations in networks of spiking neurons in the brain is still largely unknown, in particular in view of the inherently stochastic features of their firing activity and the experimentally observed trial-to-trial variability of neural systems in the brain. In principle there exists a powerful computational framework for stochastic computations, probabilistic inference by sampling, which can explain a large number of macroscopic experimental data in neuroscience and cognitive science. But it has turned out to be surprisingly difficult to create a link between these abstract models for stochastic computations and more detailed models of the dynamics of networks of spiking neurons. Here we create such a link and show that under some conditions the stochastic firing activity of networks of spiking neurons can be interpreted as probabilistic inference via Markov chain Monte Carlo (MCMC) sampling. Since common methods for MCMC sampling in distributed systems, such as Gibbs sampling, are inconsistent with the dynamics of spiking neurons, we introduce a different approach based on non-reversible Markov chains that is able to reflect inherent temporal processes of spiking neuronal activity through a suitable choice of random variables. We propose a neural network model and show by a rigorous theoretical analysis that its neural activity implements MCMC sampling of a given distribution, for both discrete and continuous time. This provides a step towards closing the gap between abstract functional models of cortical computation and more detailed models of networks of spiking neurons.
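For contrast, a sketch of the conventional baseline the paper argues against: Gibbs sampling over binary units of a Boltzmann distribution. The paper's non-reversible chains are not reproduced here.

```python
import numpy as np

def gibbs_sample(W, b, n_steps=1000, rng=None):
    """Gibbs sampling from p(z) ∝ exp(0.5 * z' W z + b' z) over binary
    units z in {0,1}^n, with W symmetric and zero-diagonal. This is the
    reversible-chain baseline the paper says is inconsistent with
    spiking dynamics."""
    rng = rng or np.random.default_rng(0)
    n = len(b)
    z = rng.integers(0, 2, size=n).astype(float)
    samples = []
    for _ in range(n_steps):
        for i in range(n):
            u = W[i] @ z - W[i, i] * z[i] + b[i]     # input from the other units
            z[i] = float(rng.random() < 1.0 / (1.0 + np.exp(-u)))
        samples.append(z.copy())
    return np.array(samples)
```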

9.
In this paper, we propose a genetic-algorithm-based design procedure for multilayer feedforward neural networks. A hierarchical genetic algorithm is used to evolve both the network topology and the weighting parameters. Compared with traditional genetic-algorithm-based designs for neural networks, the hierarchical approach addresses several deficiencies, including a feasibility check highlighted in the literature. A multi-objective cost function is used to optimize the performance and topology of the evolved network simultaneously. In predicting the Mackey-Glass chaotic time series, the networks designed by the proposed approach prove competitive with, or even superior to, traditional learning algorithms for multilayer perceptron networks and radial basis function networks. Based on the chosen cost function, a linear weight-combination decision-making approach is applied to derive an approximate Pareto-optimal solution set. Designing a set of neural networks can therefore be treated as solving a two-objective optimization problem.
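A heavily simplified sketch of the idea, with a single linear layer standing in for the evolved multilayer network and all parameters illustrative: each genome carries a topology mask (control genes) and a weight vector (parameter genes), and the two objectives (error and complexity) are scalarized with the linear weight combination the abstract mentions.

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(mask, w, X, y, alpha=0.9):
    """Scalarized two-objective cost: prediction error vs. topology size."""
    pred = X @ (w * mask)                          # mask disables pruned connections
    err = np.mean((pred - y) ** 2)
    complexity = mask.sum() / mask.size
    return alpha * err + (1 - alpha) * complexity

def evolve(X, y, pop=40, gens=200):
    d = X.shape[1]
    genomes = [(rng.integers(0, 2, d).astype(float), rng.standard_normal(d))
               for _ in range(pop)]
    for _ in range(gens):
        genomes.sort(key=lambda g: fitness(*g, X, y))
        parents = genomes[: pop // 2]
        children = []
        for m, w in parents:
            m2 = np.where(rng.random(d) < 0.05, 1 - m, m)   # flip control genes
            w2 = w + 0.1 * rng.standard_normal(d)           # perturb parameter genes
            children.append((m2, w2))
        genomes = parents + children
    return min(genomes, key=lambda g: fitness(*g, X, y))
```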

10.
R. Yuste. Neuron 2011, 71(5):772-781.
Dendritic spines receive most excitatory connections in pyramidal cells and many other principal neurons. But why do neurons use spines, when they could accommodate excitatory contacts directly on their dendritic shafts? One suggestion is that spines serve to connect with passing axons, thus increasing the connectivity of the dendrites. Another hypothesis is that spines are biochemical compartments that enable input-specific synaptic plasticity. A third possibility is that spines have an electrical role, filtering synaptic potentials and electrically isolating inputs from each other. In this review, I argue that, when viewed from the perspective of circuit function, these three functions dovetail with one another to achieve a single overarching goal: to implement a distributed circuit with widespread connectivity. Spines would endow these circuits with nonsaturating, linear integration and input-specific learning rules, which would enable them to function as neural networks, with emergent encoding and processing of information.

11.
Our nervous system can efficiently recognize objects in spite of changes in contextual variables such as perspective or lighting conditions. Several lines of research have proposed that this ability for invariant recognition is learned by exploiting the fact that object identities typically vary more slowly in time than contextual variables or noise. Here, we study the question of how this "temporal stability" or "slowness" approach can be implemented within the limits of biologically realistic spike-based learning rules. We first show that slow feature analysis, an algorithm that is based on slowness, can be implemented in linear continuous model neurons by means of a modified Hebbian learning rule. This approach provides a link to the trace rule, which is another implementation of slowness learning. Then, we show analytically that for linear Poisson neurons, slowness learning can be implemented by spike-timing-dependent plasticity (STDP) with a specific learning window. By studying the learning dynamics of STDP, we show that for functional interpretations of STDP, it is not the learning window alone that is relevant but rather the convolution of the learning window with the postsynaptic potential. We then derive STDP learning windows that implement slow feature analysis and the "trace rule." The resulting learning windows are compatible with physiological data both in shape and timescale. Moreover, our analysis shows that the learning window can be split into two functionally different components that are sensitive to reversible and irreversible aspects of the input statistics, respectively. The theory indicates that irreversible input statistics are not in favor of stable weight distributions but may generate oscillatory weight dynamics. Our analysis offers a novel interpretation for the functional role of STDP in physiological neurons.
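For reference, a batch NumPy sketch of linear slow feature analysis, the objective whose spike-based implementation the paper derives; the online Hebbian and STDP versions discussed in the abstract are not reproduced.

```python
import numpy as np

def linear_sfa(X, n_components=2):
    """Batch linear Slow Feature Analysis: whiten the signal, then take
    the directions in which the time derivative has least variance
    (assumes a full-rank covariance). X : (T, d) time series."""
    X = X - X.mean(axis=0)
    evals, evecs = np.linalg.eigh(np.cov(X.T))
    S = evecs / np.sqrt(evals)            # whitening matrix (column-scaled)
    Z = X @ S                             # whitened signal, cov(Z) ≈ I
    dZ = np.diff(Z, axis=0)               # discrete time derivative
    d_evals, d_evecs = np.linalg.eigh(np.cov(dZ.T))
    W = S @ d_evecs[:, :n_components]     # slowest directions (smallest eigenvalues)
    return W                              # slow features: (X - X.mean(0)) @ W
```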

12.
We studied the interactions between two nonequilibrium neural networks, each of which stores memories different from the other's. To this end, we developed a form of hetero-interaction that is a crucial ingredient for communication. We propose a new learning algorithm that maintains distinct neural activity both while a network sustains its own memories and while it learns the other network's memories. We call it novelty-induced learning.

13.
Towards an artificial brain
M. Conrad, R. R. Kampfner, K. G. Kirby, E. N. Rizki, G. Schleis, R. Smalz, R. Trenary. Bio Systems 1989, 23(2-3):175-215; discussion 216-8.
Three components of a brain model operating on neuromolecular computing principles are described. The first component comprises neurons whose input-output behavior is controlled by significant internal dynamics. Models of discrete enzymatic neurons, reaction-diffusion neurons operating on the basis of the cyclic nucleotide cascade, and neurons controlled by cytoskeletal dynamics are described. The second component of the model is an evolutionary learning algorithm which is used to mold the behavior of enzyme-driven neurons or small networks of these neurons for specific function, usually pattern recognition or target seeking tasks. The evolutionary learning algorithm may be interpreted either as representing the mechanism of variation and natural selection acting on a phylogenetic time scale, or as a conceivable ontogenetic adaptation mechanism. The third component of the model is a memory manipulation scheme, called the reference neuron scheme. In principle it is capable of orchestrating a repertoire of enzyme-driven neurons for coherent function. The existing implementations, however, utilize simple neurons without internal dynamics. Spatial navigation and simple game playing (using tic-tac-toe) provide the task environments that have been used to study the properties of the reference neuron model. A memory-based evolutionary learning algorithm has been developed that can assign credit to the individual neurons in a network. It has been run on standard benchmark tasks, and appears to be quite effective both for conventional neural nets and for networks of discrete enzymatic neurons. The models have the character of artificial worlds in that they map the hierarchy of processes in the brain (at the molecular, neuronal, and network levels), provide a task environment, and use this relatively self-contained setup to develop and evaluate learning and adaptation algorithms.

14.
R. V. Florian. PLoS ONE 2012, 7(8):e40233.
In many cases, neurons process information carried by the precise timings of spikes. Here we show how neurons can learn to generate specific temporally precise output spikes in response to input patterns of spikes having precise timings, thus processing and memorizing information that is entirely temporally coded, both as input and as output. We introduce two new supervised learning rules for spiking neurons with temporal coding of information (chronotrons), one that provides high memory capacity (E-learning), and one that has a higher biological plausibility (I-learning). With I-learning, the neuron learns to fire the target spike trains through synaptic changes that are proportional to the synaptic currents at the timings of real and target output spikes. We study these learning rules in computer simulations where we train integrate-and-fire neurons. Both learning rules allow neurons to fire at the desired timings, with sub-millisecond precision. We show how chronotrons can learn to classify their inputs, by firing identical, temporally precise spike trains for different inputs belonging to the same class. When the input is noisy, the classification also leads to noise reduction. We compute lower bounds for the memory capacity of chronotrons and explore the influence of various parameters on chronotrons' performance. The chronotrons can model neurons that encode information in the time of the first spike relative to the onset of salient stimuli or neurons in oscillatory networks that encode information in the phases of spikes relative to the background oscillation. Our results show that firing one spike per cycle optimizes memory capacity in neurons encoding information in the phase of firing relative to a background rhythm.
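One plausible reading of the I-learning rule as described in the abstract (the sign convention and the precomputed current traces are assumptions, not taken verbatim from the paper): target spike times potentiate each synapse in proportion to its current, actual output spike times depress it, so the update vanishes once the two spike trains coincide.

```python
import numpy as np

def i_learning_update(w, eps, target_spikes, actual_spikes, gamma=0.01):
    """Sketch of chronotron I-learning: weight changes proportional to
    each synapse's current at the target and actual output spike times.

    w : (n_syn,) weights; eps : (T, n_syn) synaptic current traces;
    target_spikes, actual_spikes : integer time indices of the spikes.
    """
    t_idx = np.asarray(target_spikes, dtype=int)
    a_idx = np.asarray(actual_spikes, dtype=int)
    dw = gamma * (eps[t_idx].sum(axis=0) - eps[a_idx].sum(axis=0))
    return w + dw
```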

16.
The brain’s activity is characterized by the interaction of a very large number of neurons that are strongly affected by noise. However, signals often arise at macroscopic scales, integrating the effect of many neurons into a reliable pattern of activity. To study such large neuronal assemblies, one is often led to derive mean-field limits that summarize the effect of the interaction of a large number of neurons as an effective signal. Classical mean-field approaches consider the evolution of a deterministic variable, the mean activity, thus neglecting the stochastic nature of neural behavior. In this article, we build upon two recent approaches that include correlations and higher-order moments in mean-field equations, and study how these stochastic effects influence the solutions of the mean-field equations, both in the limit of an infinite number of neurons and for large yet finite networks. We introduce a new model, the infinite model, which arises from both sets of equations by a rescaling of the variables, is invertible for finite-size networks, and hence provides equations equivalent to the previously derived models. The study of this model allows us to understand the qualitative behavior of such large-scale networks. We show that, although the solutions of the deterministic mean-field equation constitute uncorrelated solutions of the new mean-field equations, the stability properties of limit cycles are modified by the presence of correlations, and additional nontrivial behaviors, including periodic orbits, appear where there were none in the mean field. The origin of these behaviors is then explored in finite-size networks, where interesting mesoscopic-scale effects appear. This study shows that the infinite-size system is a singular limit of the network equations: any finite network will differ from the infinite system.
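As a reference point, the classical deterministic mean-field description that the paper extends can be written, in a generic form with notation assumed here rather than taken from the paper, as

```latex
\tau \,\frac{d\nu(t)}{dt} = -\nu(t) + f\big(J\,\nu(t) + I(t)\big)
```

where ν is the mean activity, J the effective coupling, f the gain function and I the external input; the paper's equations augment this deterministic description with correlations and higher-order moments.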

16.
In this paper, we present an online self-organizing scheme for Parsimonious and Accurate Fuzzy Neural Networks (PAFNN), together with a novel structure-learning algorithm that incorporates a pruning strategy into novel growth criteria. The proposed growing procedure, which requires no separate pruning step, not only simplifies the online learning process but also yields a more parsimonious fuzzy neural network. By virtue of optimal parameter identification, high performance and accuracy can be obtained. The learning phase of the PAFNN involves two stages: structure learning and parameter learning. In structure learning, the PAFNN starts with no hidden neurons and parsimoniously generates new hidden units according to the proposed growth criteria as learning proceeds. In parameter learning, the parameters in the premises and consequents of the fuzzy rules, whether newly created or already in existence, are updated by the extended Kalman filter (EKF) method and the linear least squares (LLS) algorithm, respectively. This parameter-adjustment paradigm optimizes the parameters in each learning epoch so that high performance can be achieved. The effectiveness and superiority of the PAFNN paradigm are demonstrated by comparison with state-of-the-art methods. Simulation results on various benchmark problems in function approximation, nonlinear dynamic system identification and chaotic time-series prediction show that the proposed PAFNN algorithm simultaneously achieves a more parsimonious network structure, higher approximation accuracy and better generalization.

17.
A major goal of bio-inspired artificial intelligence is to design artificial neural networks with abilities that resemble those of animal nervous systems. It is commonly believed that two keys for evolving nature-like artificial neural networks are (1) the developmental process that links genes to nervous systems, which enables the evolution of large, regular neural networks, and (2) synaptic plasticity, which allows neural networks to change during their lifetime. So far, these two topics have been mainly studied separately. The present paper shows that they are actually deeply connected. Using a simple operant conditioning task and a classic evolutionary algorithm, we compare three ways to encode plastic neural networks: a direct encoding, a developmental encoding inspired by computational neuroscience models, and a developmental encoding inspired by morphogen gradients (similar to HyperNEAT). Our results suggest that using a developmental encoding could improve the learning abilities of evolved, plastic neural networks. Complementary experiments reveal that this result is likely the consequence of the bias of developmental encodings towards regular structures: (1) in our experimental setup, encodings that tend to produce more regular networks yield networks with better general learning abilities; (2) whatever the encoding, the most regular networks are statistically those with the best learning abilities.

18.
The vestibuloocular reflex and other oculomotor functions are subserved by populations of neurons operating in parallel. This distributed aspect of the system's organization has been largely ignored in previous block diagram models. Neurons that transmit oculomotor signals, such as those in the vestibular nucleus (VN), actually combine the different types of signals in a diverse, seemingly random way that could not be predicted from a block diagram. We used the backpropagation learning algorithm to program distributed neural-network models of the vestibulo-oculomotor system. Networks were trained to combine vestibular, pursuit and saccadic eye velocity command signals. The model neurons in these neural networks have diverse combinations of vestibulo-oculomotor signals that are qualitatively similar to those reported for actual VN neurons in the monkey. This similarity implicates a learning mechanism as an organizing influence on the vestibulo-oculomotor system and demonstrates how VN neurons can encode vestibulo-oculomotor signals in a diverse, distributed manner.
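A minimal NumPy sketch of the kind of training setup described, with the network size, data and the exact combination task chosen purely for illustration: a small network is trained by backpropagation to combine three eye-velocity command signals, after which hidden units carry diverse mixtures of the inputs.

```python
import numpy as np

rng = np.random.default_rng(0)

# inputs: [vestibular, pursuit, saccadic] velocity commands; target: their
# combined eye-velocity command (an illustrative stand-in for the task)
X = rng.uniform(-1, 1, size=(1000, 3))
y = X.sum(axis=1, keepdims=True)

W1 = rng.standard_normal((3, 8)) * 0.5   # 8 hidden "VN-like" units
W2 = rng.standard_normal((8, 1)) * 0.5
lr = 0.05
for _ in range(2000):
    H = np.tanh(X @ W1)                  # hidden layer
    out = H @ W2
    err = out - y
    gW2 = H.T @ err / len(X)             # backpropagated squared-error gradient
    gH = (err @ W2.T) * (1 - H**2)
    gW1 = X.T @ gH / len(X)
    W2 -= lr * gW2
    W1 -= lr * gW1

# after training, rows of W1.T show diverse mixes of the three signals,
# qualitatively like the distributed coding reported for VN neurons
print(np.round(W1.T, 2))
```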

19.
Responses of multisensory neurons to combinations of sensory cues are generally enhanced or depressed relative to single cues presented alone, but the rules that govern these interactions have remained unclear. We examined integration of visual and vestibular self-motion cues in macaque area MSTd in response to unimodal as well as congruent and conflicting bimodal stimuli in order to evaluate hypothetical combination rules employed by multisensory neurons. Bimodal responses were well fit by weighted linear sums of unimodal responses, with weights typically less than one (subadditive). Surprisingly, our results indicate that weights change with the relative reliabilities of the two cues: visual weights decrease and vestibular weights increase when visual stimuli are degraded. Moreover, both modulation depth and neuronal discrimination thresholds improve for matched bimodal compared to unimodal stimuli, which might allow for increased neural sensitivity during multisensory stimulation. These findings establish important new constraints for neural models of cue integration.
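The weighted-linear-sum model test can be sketched as a least-squares regression of bimodal on unimodal responses (variable names are illustrative):

```python
import numpy as np

def fit_combination_weights(r_visual, r_vestibular, r_bimodal):
    """Fit the weighted-linear-sum model the paper tests:
        r_bimodal ≈ w_vis * r_visual + w_vest * r_vestibular + c
    via least squares. Weights below 1 indicate subadditivity; refitting
    on responses to degraded visual stimuli should lower w_vis and raise
    w_vest, per the abstract. Inputs are firing-rate arrays over matched
    stimulus conditions."""
    A = np.column_stack([r_visual, r_vestibular, np.ones_like(r_visual)])
    (w_vis, w_vest, c), *_ = np.linalg.lstsq(A, r_bimodal, rcond=None)
    return w_vis, w_vest, c
```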
