Similar Literature
20 similar records found.
1.
A system with some degree of biological plausibility is developed to categorise items from a widely used machine learning benchmark. The system uses fatiguing leaky integrate-and-fire neurons, a relatively coarse point model that roughly duplicates biological spiking properties; this allows spontaneous firing based on hypo-fatigue, so that neurons not directly stimulated by the environment may be included in the circuit. A novel compensatory Hebbian learning algorithm is used that considers the total synaptic weight coming into a neuron. The network is unsupervised and entirely self-organising. It is relatively effective as a machine learning algorithm, categorising with spiking neurons alone, and its performance is comparable with a Kohonen map. However, the learning algorithm is not stable, and performance decays as the length of training increases. Variables including learning rate, inhibition and topology are explored, leading to stable systems driven by the environment. The model is thus a reasonable next step toward a full neural memory model.
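A minimal sketch of the two ingredients named above: a fatiguing leaky integrate-and-fire point neuron and a compensatory Hebbian step keyed to the neuron's total incoming synaptic weight. All constants, and the exact form of the compensation, are illustrative assumptions rather than the paper's specification.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in = 20
w = rng.uniform(0.0, 0.1, n_in)   # incoming synaptic weights
v, fatigue = 0.0, 0.0
decay, theta = 0.9, 1.0           # leak factor and base firing threshold
f_up, f_down = 0.1, 0.05          # fatigue growth on firing / recovery otherwise
w_total = 1.0                     # assumed target for the neuron's summed weight
eta = 0.01

for step in range(1000):
    x = (rng.random(n_in) < 0.2).astype(float)   # presynaptic spike vector
    v = decay * v + w @ x                        # leaky integration
    if v >= theta + fatigue:                     # fatigue raises the effective threshold
        v = 0.0
        fatigue += f_up
        # Compensatory Hebbian step: potentiate active synapses, scaled by how
        # far the total incoming weight sits below its target.
        w = np.clip(w + eta * x * (w_total - w.sum()), 0.0, None)
    else:
        fatigue = max(0.0, fatigue - f_down)
```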

2.
Recent physiological findings have revealed that long-term adaptation of the synaptic strengths between cortical pyramidal neurons depends on the temporal order of presynaptic and postsynaptic spikes, which is called spike-timing-dependent plasticity (STDP) or temporally asymmetric Hebbian (TAH) learning. Here I prove by analytical means that a physiologically plausible variant of STDP adapts synaptic strengths such that the presynaptic spikes predict the postsynaptic spikes with minimal error. This prediction-error model of STDP implies a mechanism for cortical memory: cortical tissue learns temporal spike patterns if these spike patterns are repeatedly elicited in a set of pyramidal neurons. The trained network completes these patterns when their beginnings are presented, thereby recalling the memory. Implementations of the proposed algorithms may be useful for applications in voice recognition and computer vision.
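For reference, the standard pairwise temporally asymmetric (TAH) window that this family of models builds on. The exponential kernel and constants below are common textbook choices, assumed here rather than taken from the cited analysis.

```python
import numpy as np

A_plus, A_minus = 0.010, 0.012    # potentiation / depression amplitudes
tau_plus, tau_minus = 20.0, 20.0  # time constants in ms

def stdp_dw(t_post, t_pre):
    """Weight change for one pre/post spike pair (times in ms)."""
    dt = t_post - t_pre
    if dt >= 0:   # pre before post: causal pairing, potentiate
        return A_plus * np.exp(-dt / tau_plus)
    else:         # post before pre: anti-causal pairing, depress
        return -A_minus * np.exp(dt / tau_minus)

w = 0.5
for t_pre, t_post in [(10.0, 15.0), (40.0, 38.0), (70.0, 72.0)]:
    w = float(np.clip(w + stdp_dw(t_post, t_pre), 0.0, 1.0))
print(w)
```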

3.
An unsupervised neural network is proposed to learn and recall complex robot trajectories. Two cases are considered: (i) a single trajectory in which a particular arm configuration (state) may occur more than once, and (ii) trajectories sharing states with each other. Ambiguities occur in both cases during recall of such trajectories. The proposed model consists of two groups of synaptic weights trained by competitive and Hebbian learning laws. They are responsible for encoding spatial and temporal features of the input sequences, respectively. Three mechanisms allow the network to deal with repeated or shared states: local and global context units, neurons disabled from learning, and redundancy. The network reproduces the current and the next state of the learned sequences and is able to resolve ambiguities. The model was simulated over various sets of robot trajectories in order to evaluate learning and recall, trajectory sampling effects and robustness.
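A toy sketch of the temporal (Hebbian) weight group described above. The competitive spatial stage is assumed to have already quantised arm configurations into discrete state indices, and context handling is reduced to conditioning each transition on the previous winner — a deliberate simplification of the paper's local/global context units.

```python
import numpy as np

# Trajectory as a sequence of state indices; state 2 is visited twice,
# creating exactly the recall ambiguity discussed above.
seq = [0, 1, 2, 3, 2, 4]
n = 5

# Hebbian temporal weights conditioned on the previous state, so the two
# visits to state 2 lead to different successors.
T = np.zeros((n, n, n))  # T[prev, cur, next]
for prev, cur, nxt in zip(seq, seq[1:], seq[2:]):
    T[prev, cur, nxt] += 1.0

def recall_next(prev, cur):
    return int(np.argmax(T[prev, cur]))

# Replay from the first two states: resolves 2 -> 3 and, later, 2 -> 4.
prev, cur = seq[0], seq[1]
out = [prev, cur]
for _ in range(len(seq) - 2):
    prev, cur = cur, recall_next(prev, cur)
    out.append(cur)
print(out)  # [0, 1, 2, 3, 2, 4]
```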

4.
Kim Y, Wood J, Moghaddam B. PLoS ONE 2012, 7(1): e29766
Our understanding of how value-related information is encoded in the ventral tegmental area (VTA) is based mainly on the responses of individual putative dopamine neurons. In contrast to cortical areas, the nature of coordinated interactions between groups of VTA neurons during motivated behavior is largely unknown. These interactions can strongly affect information processing, highlighting the importance of investigating network-level activity. We recorded the activity of multiple single units and local field potentials (LFP) in the VTA during a task in which rats learned to associate novel stimuli with different outcomes. We found that coordinated activity of VTA units with either putative dopamine or GABA waveforms was influenced differently by rewarding versus aversive outcomes. Specifically, after learning, stimuli paired with a rewarding outcome increased the correlation in activity levels between unit pairs, whereas stimuli paired with an aversive outcome decreased the correlation. Paired single-unit responses also became more redundant after learning. These response patterns flexibly tracked the reversal of contingencies, suggesting that learning is associated with changing correlations and enhanced functional connectivity between VTA neurons. Analysis of LFP recorded simultaneously with unit activity showed an increase in the power of theta oscillations when stimuli predicted reward but not an aversive outcome. With learning, a higher proportion of putative GABA units were phase locked to the theta oscillations than putative dopamine units. These patterns also adapted when task contingencies were changed. Taken together, these data demonstrate that VTA neurons organize flexibly as functional networks to support appetitive and aversive learning.
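The core network-level measure used above — trial-by-trial correlation of spike counts between unit pairs — takes only a few lines; the counts here are synthetic stand-ins, with a shared drive playing the role of the learned, reward-paired stimulus.

```python
import numpy as np

rng = np.random.default_rng(1)
n_trials = 200

# Synthetic spike counts for two units: a common input component raises
# their trial-by-trial (noise) correlation, as after appetitive learning.
shared = rng.poisson(3.0, n_trials)
unit_a = shared + rng.poisson(2.0, n_trials)
unit_b = shared + rng.poisson(2.0, n_trials)

r = np.corrcoef(unit_a, unit_b)[0, 1]
print(f"pairwise spike-count correlation: {r:.2f}")
```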

5.
An open problem in the field of computational neuroscience is how to link synaptic plasticity to system-level learning. A promising framework in this context is temporal-difference (TD) learning. Experimental evidence that supports the hypothesis that the mammalian brain performs temporal-difference learning includes the resemblance of the phasic activity of the midbrain dopaminergic neurons to the TD error and the discovery that cortico-striatal synaptic plasticity is modulated by dopamine. However, as the phasic dopaminergic signal does not reproduce all the properties of the theoretical TD error, it is unclear whether it is capable of driving behavior adaptation in complex tasks. Here, we present a spiking temporal-difference learning model based on the actor-critic architecture. The model dynamically generates a dopaminergic signal with realistic firing rates and exploits this signal to modulate the plasticity of synapses as a third factor. The predictions of our proposed plasticity dynamics are in good agreement with experimental results with respect to dopamine, pre- and post-synaptic activity. An analytical mapping from the parameters of our proposed plasticity dynamics to those of the classical discrete-time TD algorithm reveals that the biological constraints of the dopaminergic signal entail a modified TD algorithm with self-adapting learning parameters and an adapting offset. We show that the neuronal network is able to learn a task with sparse positive rewards as fast as the corresponding classical discrete-time TD algorithm. However, the performance of the neuronal network is impaired with respect to the traditional algorithm on a task with both positive and negative rewards and breaks down entirely on a task with purely negative rewards. Our model demonstrates that the asymmetry of a realistic dopaminergic signal enables TD learning when learning is driven by positive rewards but not when driven by negative rewards.
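A discrete-time actor-critic sketch of the TD machinery the spiking model implements, on a toy chain task with a sparse positive reward at the goal; the TD error delta stands in for the phasic dopaminergic third factor that gates plasticity. Everything here is the classical algorithm, not the spiking network.

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, gamma, alpha_v, alpha_p = 6, 0.9, 0.1, 0.1
V = np.zeros(n_states)                 # critic: state values
prefs = np.zeros((n_states, 2))        # actor preferences: move left / right

def policy(s):
    p = np.exp(prefs[s] - prefs[s].max()); p /= p.sum()
    return rng.choice(2, p=p)

for episode in range(500):
    s = 0
    while s != n_states - 1:
        a = policy(s)
        s2 = max(0, s - 1) if a == 0 else s + 1
        r = 1.0 if s2 == n_states - 1 else 0.0          # sparse positive reward
        delta = r + gamma * V[s2] * (s2 != n_states - 1) - V[s]  # TD error ~ phasic DA
        V[s] += alpha_v * delta                         # critic update
        prefs[s, a] += alpha_p * delta                  # actor update gated by the third factor
        s = s2
```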

6.
The evolutionary selection circuits model of learning has been specified algorithmically. The basic structural components of the selection circuits model are enzymatic neurons, that is, neurons whose firing behavior is controlled by membrane-bound macromolecules called excitases. Learning involves changes in the excitase contents of neurons through a process of variation and selection. In this paper we report on the behavior of a basic version of the learning algorithm which has been developed through extensive interactive experiments with the model. This algorithm is effective in that it enables single neurons or networks of neurons to learn simple pattern classification tasks in a number of time steps which appears experimentally to be a linear function of problem size, as measured by the number of patterns of presynaptic input. The experimental behavior of the algorithm establishes that evolutionary mechanisms of learning are competent to serve as major mechanisms of neuronal adaptation. As an example, we show how the evolutionary learning algorithm can contribute to adaptive motor control processes in which the learning system develops the ability to reach a target in the presence of randomly imposed disturbances.
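A toy variation-and-selection loop in the spirit described above: each candidate "excitase content" is reduced to a binary mask plus a threshold that determines which presynaptic patterns make the neuron fire, and fitness is classification accuracy. The representation and mutation scheme are illustrative assumptions, not the model's actual biochemistry.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(0)
patterns = np.array(list(product([0, 1], repeat=4)))   # all 4-bit presynaptic inputs
target = (patterns.sum(axis=1) >= 2).astype(int)       # classification task to learn

def fitness(mask, thresh):
    fired = (patterns @ mask >= thresh).astype(int)
    return (fired == target).mean()

# Variation and selection over the neuron's excitase content (mask + threshold).
mask, thresh = rng.integers(0, 2, 4), 1
best = fitness(mask, thresh)
for step in range(200):
    m2 = mask.copy()
    m2[rng.integers(4)] ^= 1                           # flip one "excitase"
    t2 = int(np.clip(thresh + rng.integers(-1, 2), 0, 4))
    f2 = fitness(m2, t2)
    if f2 >= best:                                     # selection: keep non-worse variants
        mask, thresh, best = m2, t2, f2
print(best)  # typically reaches 1.0 (mask all-ones, threshold 2)
```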

7.
Neural learning algorithms generally involve a number of identical processing units, fully or partially connected, and an update function such as a ramp, a sigmoid or a Gaussian. Variations also exist in which units are heterogeneous, or where an alternative update technique is employed, such as a pulse-stream generator. Associated with connections are numerical values that must be adjusted using a learning rule and are dictated by learning-rule-specific parameters, such as momentum, a learning rate, or a temperature. Usually, neural learning algorithms involve local updates, and global interaction between units is discouraged, except where units are fully connected or updates are synchronous. In all of these instances, concurrency within a neural algorithm cannot be fully exploited without a suitable implementation strategy. A design scheme is described for translating a neural learning algorithm from inception to implementation on a parallel machine using PVM or MPI libraries, or onto programmable logic such as FPGAs. A designer first describes the algorithm using a specialised Neural Language, from which a Petri net (PN) model is constructed automatically for verification and for building a performance model. The PN model can be used to study issues such as synchronisation points, resource sharing and concurrency within a learning rule. Specialised constructs are provided to enable a designer to express various aspects of a learning rule, such as the number and connectivity of neural nodes, the interconnection strategies, and the information flows required by the learning algorithm. A scheduling and mapping strategy is then used to translate the PN model onto a multiprocessor template. We demonstrate our technique using Kohonen and backpropagation learning rules, implemented on a loosely coupled workstation cluster and on a dedicated parallel machine with PVM libraries.
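A minimal Petri-net step function of the kind such a model is built from: places hold tokens, and a transition fires only when all its input places are marked. This is generic PN semantics sketched in plain Python, not the paper's specialised Neural Language; the place and transition names are hypothetical.

```python
# Marking: tokens per place. Transitions: (input places, output places).
marking = {"weights_ready": 1, "inputs_ready": 1, "updated": 0}
transitions = {
    "apply_learning_rule": (["weights_ready", "inputs_ready"], ["updated"]),
}

def enabled(t):
    ins, _ = transitions[t]
    return all(marking[p] > 0 for p in ins)

def fire(t):
    ins, outs = transitions[t]
    for p in ins:
        marking[p] -= 1
    for p in outs:
        marking[p] += 1

# A synchronisation point: the rule fires only when both resources are present.
if enabled("apply_learning_rule"):
    fire("apply_learning_rule")
print(marking)  # {'weights_ready': 0, 'inputs_ready': 0, 'updated': 1}
```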

8.
Perceptual learning of visual features occurs when multiple stimuli are presented in a fixed sequence (temporal patterning), but not when they are presented in random order (roving). This points to the need for proper stimulus coding in order for learning of multiple stimuli to occur. We examined the stimulus coding rules for learning with multiple stimuli. Our results demonstrate that: (1) stimulus rhythm is necessary for temporal patterning to take effect during practice; (2) learning consolidation is subject to disruption by roving up to 4 h after each practice session; (3) importantly, after completion of temporal-patterned learning, performance is undisrupted by extended roving training; (4) roving is ineffective if each stimulus is presented for five or more consecutive trials; and (5) roving is also ineffective if each stimulus has a distinct identity. We propose that for multi-stimulus learning to occur, the brain needs to conceptually “tag” each stimulus, in order to switch attention to the appropriate perceptual template. Stimulus temporal patterning assists in tagging stimuli and switching attention through its rhythmic stimulus sequence.

9.
The brain performs various cognitive functions by learning the spatiotemporal salient features of the environment. This learning requires unsupervised segmentation of hierarchically organized spike sequences, but the underlying neural mechanism is only poorly understood. Here, we show that a recurrent gated network of neurons with dendrites can efficiently solve difficult segmentation tasks. In this model, multiplicative recurrent connections learn a context-dependent gating of dendro-somatic information transfers to minimize error in the prediction of somatic responses by the dendrites. Consequently, these connections filter the redundant input features represented by the dendrites but unnecessary in the given context. The model was tested on both synthetic and real neural data. In particular, the model was successful for segmenting multiple cell assemblies repeating in large-scale calcium imaging data containing thousands of cortical neurons. Our results suggest that recurrent gating of dendro-somatic signal transfers is crucial for cortical learning of context-dependent segmentation tasks.
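One way to read the gating mechanism above, sketched under strong simplifying assumptions: a multiplicative gate driven by recurrent context scales a single dendro-somatic transfer, and the gate weights follow the gradient of the somatic prediction error. The scalar dendrite and the context rule are hypothetical stand-ins for the full model.

```python
import numpy as np

rng = np.random.default_rng(0)
n_ctx, eta = 8, 0.1
w_gate = rng.normal(0, 0.1, n_ctx)        # recurrent (contextual) gating weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(2000):
    ctx = rng.normal(0, 1, n_ctx)         # recurrent context activity
    dend = rng.normal(0, 1)               # dendritic input feature
    relevant = ctx[0] > 0                 # the feature only matters in some contexts
    soma_target = dend if relevant else 0.0
    g = sigmoid(w_gate @ ctx)             # multiplicative gate on the transfer
    err = soma_target - g * dend          # prediction error at the soma
    # Gradient step on 0.5 * err**2 with respect to the gate weights:
    w_gate += eta * err * dend * g * (1 - g) * ctx
```

With training, the gate opens when the context marks the dendritic feature as relevant and closes otherwise — the "filtering of redundant features" the abstract describes.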

10.
Reward-modulated spike-timing-dependent plasticity (STDP) has recently emerged as a candidate for a learning rule that could explain how behaviorally relevant adaptive changes in complex networks of spiking neurons could be achieved in a self-organizing manner through local synaptic plasticity. However, the capabilities and limitations of this learning rule could so far only be tested through computer simulations. This article provides tools for an analytic treatment of reward-modulated STDP, which allows us to predict under which conditions reward-modulated STDP will achieve a desired learning effect. These analytical results imply that neurons can learn through reward-modulated STDP to classify not only spatial but also temporal firing patterns of presynaptic neurons. They also can learn to respond to specific presynaptic firing patterns with particular spike patterns. Finally, the resulting learning theory predicts that even difficult credit-assignment problems, where it is very hard to tell which synaptic weights should be modified in order to increase the global reward for the system, can be solved in a self-organizing manner through reward-modulated STDP. This yields an explanation for a fundamental experimental result on biofeedback in monkeys by Fetz and Baker. In this experiment monkeys were rewarded for increasing the firing rate of a particular neuron in the cortex and were able to solve this extremely difficult credit-assignment problem. Our model for this experiment relies on a combination of reward-modulated STDP with variable spontaneous firing activity. Hence it also provides a possible functional explanation for trial-to-trial variability, which is characteristic of cortical networks of neurons but has no analogue in currently existing artificial computing systems. In addition our model demonstrates that reward-modulated STDP can be applied to all synapses in a large recurrent neural network without endangering the stability of the network dynamics.
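The standard three-factor reading of reward-modulated STDP, as a sketch: the STDP update is first deposited in a decaying eligibility trace, and a later scalar reward converts the trace into an actual weight change. The constants and trace time constant are illustrative, not taken from the cited analysis.

```python
import numpy as np

A_plus, A_minus, tau = 0.01, 0.012, 20.0
trace_decay, eta = 0.95, 0.5

def stdp(dt):  # dt = t_post - t_pre, in ms
    return A_plus * np.exp(-dt / tau) if dt >= 0 else -A_minus * np.exp(dt / tau)

w, elig = 0.5, 0.0
# (t_pre, t_post, reward) events; reward arrives only after the third pairing.
events = [(5.0, 9.0, 0.0), (30.0, 33.0, 0.0), (55.0, 58.0, 1.0)]
for t_pre, t_post, reward in events:
    elig = trace_decay * elig + stdp(t_post - t_pre)  # second factor: spike timing
    w += eta * reward * elig                          # third factor: delayed reward
    w = float(np.clip(w, 0.0, 1.0))
print(w)
```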

11.
Many redundancies play functional roles in motor control and motor learning. For example, kinematic and muscle redundancies contribute to stabilizing posture and impedance control, respectively. Another redundancy is the number of neurons themselves; there are overwhelmingly more neurons than muscles, and many combinations of neural activation can generate identical muscle activity. The functional roles of this neuronal redundancy remain unknown. Analysis of a redundant neural network model makes it possible to investigate these functional roles while varying the number of model neurons and holding constant the number of output units. Our analysis reveals that learning speed reaches its maximum value if and only if the model includes sufficient neuronal redundancy. This analytical result does not depend on whether the distribution of the preferred direction is uniform or skewed bimodal, both of which have been reported in neurophysiological studies. Neuronal redundancy maximizes learning speed even if the neural network model includes recurrent connections, a nonlinear activation function, or nonlinear muscle units. Furthermore, our results do not rely on the shape of the generalization function. The results of this study suggest that one of the functional roles of neuronal redundancy is to maximize learning speed.

12.
The neural basis of perceptual learning
Gilbert CD, Sigman M, Crist RE. Neuron 2001, 31(5): 681-697
Perceptual learning is a lifelong process. We begin by encoding information about the basic structure of the natural world and continue to assimilate information about specific patterns with which we become familiar. The specificity of the learning suggests that all areas of the cerebral cortex are plastic and can represent various aspects of learned information. The neural substrate of perceptual learning relates to the nature of the neural code itself, including changes in cortical maps, in the temporal characteristics of neuronal responses, and in modulation of contextual influences. Top-down control of these representations suggests that learning involves an interaction between multiple cortical areas.

13.
We derive generalized spin models for the development of feedforward cortical architecture from a Hebbian synaptic learning rule in a two layer neural network with nonlinear weight constraints. Our model takes into account the effects of lateral interactions in visual cortex combining local excitation and long range effective inhibition. Our approach allows the principled derivation of developmental rules for low-dimensional feature maps, starting from high-dimensional synaptic learning rules. We incorporate the effects of smooth nonlinear constraints on net synaptic weight projected from units in the thalamic layer (the fan-out) and on the net synaptic weight received by units in the cortical layer (the fan-in). These constraints naturally couple together multiple feature maps such as orientation preference and retinotopic organization. We give a detailed illustration of the method applied to the development of the orientation preference map as a special case, in addition to deriving a model for joint pattern formation in cortical maps of orientation preference, retinotopic location, and receptive field width. We show that the combination of Hebbian learning and center-surround cortical interaction naturally leads to an orientation map development model that is closely related to the XY magnetic lattice model from statistical physics. The results presented here provide justification for phenomenological models studied in Cowan and Friedman (Advances in neural information processing systems 3, 1991), Thomas and Cowan (Phys Rev Lett 92(18):e188101, 2004) and provide a developmental model realizing the synaptic weight constraints previously assumed in Thomas and Cowan (Math Med Biol 23(2):119–138, 2006).
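A sketch of the starting ingredient above — Hebbian learning under joint fan-out and fan-in weight constraints. Here the smooth nonlinear constraints are replaced by alternating divisive normalisation of rows and columns, and the lateral excitation/inhibition is omitted; a crude stand-in for the paper's formulation, not its derivation.

```python
import numpy as np

rng = np.random.default_rng(0)
n_thal, n_ctx, eta = 16, 16, 0.05
W = rng.uniform(0.1, 0.2, (n_ctx, n_thal))   # thalamus -> cortex weights

for step in range(500):
    x = rng.random(n_thal)                   # thalamic activity pattern
    y = W @ x                                # linear cortical response (no lateral terms)
    W += eta * np.outer(y, x)                # plain Hebbian step
    # Constrain fan-in (rows: weight received by each cortical unit) and
    # fan-out (columns: weight projected by each thalamic unit):
    W /= W.sum(axis=1, keepdims=True)
    W /= W.sum(axis=0, keepdims=True)
    W *= n_thal / W.sum()                    # restore overall scale
```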

14.
Perceptual learning has been used to probe the mechanisms of cortical plasticity in the adult brain. Feedback projections are ubiquitous in the cortex, but little is known about their role in cortical plasticity. Here we explore the hypothesis that learning visual orientation discrimination involves learning-dependent plasticity of top-down feedback inputs from higher cortical areas, serving a different function from plasticity due to changes in recurrent connections within a cortical area. In a Hodgkin-Huxley-based spiking neural network model of visual cortex, we show that modulation of feedback inputs to V1 from higher cortical areas results in shunting inhibition in V1 neurons, which changes the response properties of V1 neurons. The orientation selectivity of V1 neurons is enhanced without changing orientation preference, preserving the topographic organizations in V1. These results provide new insights to the mechanisms of plasticity in the adult brain, reconciling apparently inconsistent experiments and providing a new hypothesis for a functional role of the feedback connections.

15.
Cellular mechanisms underlying synaptic plasticity are in line with the Hebbian concept. In contrast, data linking Hebbian learning to altered perception are rare. Combining functional magnetic resonance imaging with psychophysical tests, we studied cortical reorganization in primary and secondary somatosensory cortex (SI and SII) and the resulting changes of tactile perception before and after tactile coactivation, a simple type of Hebbian learning. Coactivation on the right index finger (IF) for 3 hr lowered its spatial discrimination threshold. In parallel, blood-oxygen level-dependent (BOLD) signals from the right IF representation in SI and SII enlarged. The individual threshold reduction was linearly correlated with the enlargement in SI, implying a close relation between altered discrimination and cortical reorganization. Controls consisting of a single-site stimulation did not affect thresholds and cortical maps. Accordingly, changes within distributed cortical networks based on Hebbian mechanisms alter the individual percept.

16.
We statistically characterize the population spiking activity obtained from simultaneous recordings of neurons across all layers of a cortical microcolumn. Three types of models are compared: an Ising model which captures pairwise correlations between units, a Restricted Boltzmann Machine (RBM) which allows for modeling of higher-order correlations, and a semi-Restricted Boltzmann Machine which is a combination of Ising and RBM models. Model parameters were estimated in a fast and efficient manner using minimum probability flow, and log likelihoods were compared using annealed importance sampling. The higher-order models reveal localized activity patterns which reflect the laminar organization of neurons within a cortical column. The higher-order models also outperformed the Ising model in log-likelihood: on populations of 20 cells, the RBM had 10% higher log-likelihood (relative to an independent model) than a pairwise model, increasing to 45% gain in a larger network with 100 spatiotemporal elements, consisting of 10 neurons over 10 time steps. We further removed the need to model stimulus-induced correlations by incorporating a peri-stimulus time histogram term, in which case the higher order models continued to perform best. These results demonstrate the importance of higher-order interactions to describe the structure of correlated activity in cortical networks. Boltzmann Machines with hidden units provide a succinct and effective way to capture these dependencies without increasing the difficulty of model estimation and evaluation.
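A brute-force illustration of the pairwise (Ising) baseline above: for a handful of binary units the normaliser is computable exactly, so the fields h and couplings J can be fit by gradient ascent on the exact log-likelihood via moment matching. (The paper needs minimum probability flow precisely because this enumeration is hopeless at scale; the spike words here are synthetic.)

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(0)
n, eta = 5, 0.1
states = np.array(list(product([0, 1], repeat=n)), dtype=float)  # all 2^n patterns
data = (rng.random((500, n)) < 0.3).astype(float)                # stand-in spike words

h = np.zeros(n)
J = np.zeros((n, n))
for step in range(300):
    # Model distribution: p(s) proportional to exp(h.s + s.J.s / 2), J symmetric, zero diagonal.
    E = states @ h + 0.5 * np.einsum('si,ij,sj->s', states, J, states)
    p = np.exp(E - E.max()); p /= p.sum()
    # Log-likelihood gradient = data moments minus model moments.
    h += eta * (data.mean(0) - p @ states)
    dJ = data.T @ data / len(data) - states.T @ (p[:, None] * states)
    np.fill_diagonal(dJ, 0.0)
    J += eta * (dJ + dJ.T) / 2
```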

17.
Many real-world optimization problems are dynamic: the optima change over time, so an algorithm must track the moving optima across a changing environment rather than converge once to a single global solution, as in the static case. This paper proposes a novel comprehensive learning artificial bee colony optimizer (CLABC) for optimization in dynamic environments, which employs a pool of foraging strategies to balance the exploration-exploitation tradeoff. The main idea of CLABC is to enrich the foraging behaviors of the ABC model by combining Powell’s pattern search method, a life-cycle mechanism, and a crossover-based social learning strategy. The result is a more realistic bee-colony model in which bees can reproduce and die dynamically throughout the foraging process, so the population size varies as the algorithm runs. CLABC is evaluated on the dynamic moving-peaks benchmarks and is further applied to a real-world problem of dynamic RFID network optimization. Statistical analysis of all these cases highlights the significant performance improvement due to the combined strategies and demonstrates the performance superiority of the proposed algorithm.
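For orientation, a minimal static-environment ABC skeleton: employed bees perturb food sources, onlookers reselect sources proportionally to fitness, and scouts replace exhausted sources. CLABC's pattern-search, life-cycle and crossover extensions are layered on top of a loop like this; the objective and constants here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
dim, n_food, limit = 2, 10, 20
lo, hi = -5.0, 5.0
f = lambda x: np.sum(x ** 2)                     # objective to minimise (sphere)

foods = rng.uniform(lo, hi, (n_food, dim))       # candidate food sources
trials = np.zeros(n_food, dtype=int)             # stagnation counters

def try_neighbor(i):
    j, k = rng.integers(n_food), rng.integers(dim)
    cand = foods[i].copy()
    cand[k] += rng.uniform(-1, 1) * (foods[i][k] - foods[j][k])
    if f(cand) < f(foods[i]):                    # greedy selection
        foods[i], trials[i] = cand, 0
    else:
        trials[i] += 1

for it in range(200):
    for i in range(n_food):                      # employed bees
        try_neighbor(i)
    fit = 1.0 / (1.0 + np.array([f(x) for x in foods]))
    for _ in range(n_food):                      # onlookers: fitness-proportional choice
        try_neighbor(rng.choice(n_food, p=fit / fit.sum()))
    worn = trials >= limit                       # scouts: reinitialise exhausted sources
    foods[worn] = rng.uniform(lo, hi, (worn.sum(), dim))
    trials[worn] = 0

print(min(f(x) for x in foods))
```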

18.
We investigated the roles of feedback and attention in training a vernier discrimination task as an example of perceptual learning. Human learning even of simple stimuli, such as verniers, relies on more complex mechanisms than previously expected – ruling out simple neural network models. These findings are not just an empirical oddity but are evidence that present models fail to reflect some important characteristics of the learning process. We will list some of the problems of neural networks and develop a new model that solves them by incorporating top-down mechanisms. Contrary to neural networks, in our model learning is not driven by the set of stimuli only. Internal estimations of performance and knowledge about the task are also incorporated. Our model implies that under certain conditions the detectability of only some of the stimuli is enhanced while the overall improvement of performance is attributed to a change of decision criteria. An experiment confirms this prediction. Received: 23 May 1996 / Accepted in revised form: 16 October 1997

19.
The interplay between hippocampus and prefrontal cortex (PFC) is fundamental to spatial cognition. Complementing hippocampal place coding, prefrontal representations provide more abstract and hierarchically organized memories suitable for decision making. We model a prefrontal network mediating distributed information processing for spatial learning and action planning. Specific connectivity and synaptic adaptation principles shape the recurrent dynamics of the network arranged in cortical minicolumns. We show how the PFC columnar organization is suitable for learning sparse topological-metrical representations from redundant hippocampal inputs. The recurrent nature of the network supports multilevel spatial processing, allowing structural features of the environment to be encoded. An activation diffusion mechanism spreads the neural activity through the column population leading to trajectory planning. The model provides a functional framework for interpreting the activity of PFC neurons recorded during navigation tasks. We illustrate the link from single unit activity to behavioral responses. The results suggest plausible neural mechanisms subserving the cognitive "insight" capability originally attributed to rodents by Tolman & Honzik. Our time course analysis of neural responses shows how the interaction between hippocampus and PFC can yield the encoding of manifold information pertinent to spatial planning, including prospective coding and distance-to-goal correlates.
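A sketch of the activation-diffusion planning step described above: activity injected at the goal column spreads through learned topological links, and greedy ascent on the resulting field from the start node yields a trajectory. The graph and decay constant are illustrative assumptions.

```python
import numpy as np

# Topological links between place/column nodes (a chain with a learned shortcut 1 -> 4).
n = 5
adj = {0: [1], 1: [0, 2, 4], 2: [1, 3], 3: [2, 4], 4: [1, 3]}

goal, start, alpha = 4, 0, 0.8
act = np.zeros(n)
act[goal] = 1.0
for _ in range(n):                       # activation diffusion from the goal
    for node in range(n):
        if node != goal:
            act[node] = max(act[node], alpha * max(act[m] for m in adj[node]))

path, node = [start], start
while node != goal:                      # greedy ascent on the activation field
    node = max(adj[node], key=lambda m: act[m])
    path.append(node)
print(path)  # takes the shortcut: [0, 1, 4]
```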

20.
In this paper, we present a mathematical foundation, including a convergence analysis, for cascade-architecture neural networks. Our analysis shows that convergence of the cascade architecture is assured because it satisfies Lyapunov criteria in an added-hidden-unit domain rather than in the time domain. This analysis provides a mathematical foundation for the cascade correlation learning algorithm and makes apparent that the cascade correlation scheme is a special case of it; from the same analysis, an efficient hardware learning algorithm called Cascade Error Projection (CEP) is proposed. CEP provides efficient learning in hardware and is faster to train, because part of the weights are obtained deterministically, and the learning of the remaining weights from the inputs to the hidden unit is performed as single-layer perceptron learning with the previously determined weights kept frozen. In addition, one can start with zero weight values (rather than random finite weight values) when the learning of each layer commences. Further, unlike the cascade correlation algorithm (where a pool of candidate hidden units is added), only a single hidden unit is added at a time, so simplicity in hardware implementation is also achieved. Finally, 5- to 8-bit parity and chaotic time-series prediction problems are investigated; the simulation results demonstrate that 4-bit or higher weight quantization is sufficient for learning neural networks using CEP. It is also demonstrated that the technique can compensate for lower-bit weight resolution by incorporating additional hidden units, although generalization results may suffer somewhat with lower-bit weight quantization.
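A constructive-training sketch in the cascade spirit: hidden units are added one at a time, each new unit sees the inputs plus all frozen earlier hidden outputs, and only the output layer is retrained (here by least squares) after each addition. This is a generic cascade loop under assumed simplifications — the new unit's input weights are random here, where cascade correlation would train them against the residual error and CEP would assign part of them deterministically.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(0)
X = np.array(list(product([0.0, 1.0], repeat=2)))   # XOR (2-bit parity) task
y = (X.sum(axis=1) % 2).astype(float)

feats = X.copy()
for unit in range(8):                               # grow the cascade one unit at a time
    # New hidden unit: inputs plus all earlier (frozen) hidden outputs.
    w = rng.normal(0.0, 2.0, feats.shape[1])
    h = np.tanh(feats @ w + rng.normal())
    feats = np.column_stack([feats, h])
    # Retrain only the output layer on the enlarged feature set (least squares).
    A = np.column_stack([feats, np.ones(len(X))])
    out_w, *_ = np.linalg.lstsq(A, y, rcond=None)
    pred = (A @ out_w > 0.5).astype(float)
    if np.array_equal(pred, y):                     # stop once the task is solved
        break
print("hidden units used:", unit + 1)
```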
