Similar Documents (20 results)
1.
Intelligence is our ability to learn appropriate responses to new stimuli and situations. Neurons in association cortex are thought to be essential for this ability. During learning these neurons become tuned to relevant features and start to represent them with persistent activity during memory delays. This learning process is not well understood. Here we develop a biologically plausible learning scheme that explains how trial-and-error learning induces neuronal selectivity and working memory representations for task-relevant information. We propose that the response selection stage sends attentional feedback signals to earlier processing levels, forming synaptic tags at those connections responsible for the stimulus-response mapping. Globally released neuromodulators then interact with tagged synapses to determine their plasticity. The resulting learning rule endows neural networks with the capacity to create new working memory representations of task-relevant information as persistent activity. It is remarkably generic: it explains how association neurons learn to store task-relevant information for linear as well as non-linear stimulus-response mappings, how they become tuned to category boundaries or analog variables, depending on the task demands, and how they learn to integrate probabilistic evidence for perceptual decisions.
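A minimal sketch of such a tag-and-modulate (three-factor) rule on a hypothetical two-choice task: the chosen response tags its own active synapses via feedback, and a globally broadcast reward-prediction error gates plasticity at the tagged synapses. The softmax response stage, the task rule, and all parameter values are illustrative assumptions, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(6)
n_in, n_out = 10, 2
W = rng.normal(0.0, 0.1, (n_out, n_in))

def run_trial(lr=0.2):
    """One trial: the selected response feeds back to tag its active synapses,
    then a globally released prediction-error signal gates their plasticity."""
    global W
    x = rng.integers(0, 2, n_in).astype(float)
    logits = W @ x
    p = np.exp(logits - logits.max())
    p /= p.sum()
    action = rng.choice(n_out, p=p)
    correct = int(x[0])                 # hypothetical rule: input bit 0 selects the response
    reward = 1.0 if action == correct else 0.0
    tags = np.zeros_like(W)
    tags[action] = x                    # attentional feedback tags the used synapses
    delta = reward - p[action]          # neuromodulatory reward-prediction error
    W += lr * delta * tags              # plasticity only at tagged synapses
    return reward

rewards = [run_trial() for _ in range(5000)]
early, late = np.mean(rewards[:200]), np.mean(rewards[-200:])
```

Trial-and-error learning under this rule drives the reward rate from chance toward ceiling, without any explicit supervised target.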

2.
In standard attractor neural network models, specific patterns of activity are stored in the synaptic matrix, so that they become fixed point attractors of the network dynamics. The storage capacity of such networks has been quantified in two ways: the maximal number of patterns that can be stored, and the stored information measured in bits per synapse. In this paper, we compute both quantities in fully connected networks of N binary neurons with binary synapses, storing patterns with coding level f, in the large-N and sparse-coding limits (N → ∞, f → 0). We also derive finite-size corrections that accurately reproduce the results of simulations in networks of tens of thousands of neurons. These methods are applied to three different scenarios: (1) the classic Willshaw model, (2) networks with stochastic learning in which patterns are shown only once (one-shot learning), (3) networks with stochastic learning in which patterns are shown multiple times. The storage capacities are optimized over network parameters, which allows us to compare the performance of the different models. We show that finite-size effects strongly reduce the capacity, even for networks of realistic sizes. We discuss the implications of these results for memory storage in the hippocampus and cerebral cortex.
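A toy sketch of scenario (1), the Willshaw model with clipped Hebbian binary synapses; the network size, pattern count, and coding level below are illustrative, not the paper's optimized values.

```python
import numpy as np

def willshaw_store(patterns):
    """Clipped Hebbian storage: a binary synapse switches to 1 if its pre- and
    postsynaptic neurons are both active in any stored pattern."""
    n = patterns.shape[1]
    W = np.zeros((n, n), dtype=int)
    for xi in patterns:
        W = np.maximum(W, np.outer(xi, xi))
    return W

def willshaw_recall(W, cue):
    """One-step recall: a neuron fires if it receives input from every
    active neuron in the cue (threshold = number of active cue neurons)."""
    return (W @ cue >= cue.sum()).astype(int)

rng = np.random.default_rng(0)
N, M, f = 200, 30, 0.05                       # neurons, patterns, coding level
patterns = (rng.random((M, N)) < f).astype(int)
W = willshaw_store(patterns)
out = willshaw_recall(W, patterns[0])         # cue with a full stored pattern
```

With this threshold choice every active unit of a stored pattern is always recovered; capacity is limited instead by spurious activations as the clipped matrix fills up, which is what the optimization in the paper trades off.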

3.
Rose M, Haider H, Weiller C, Büchel C. Neuron. 2002;36(6):1221-1231
The medial temporal lobe (MTL) has been associated with declarative learning of flexible relational rules and the basal ganglia with implicit learning of stimulus-response mappings. It remains an open question whether the MTL or the basal ganglia are involved when learning flexible relational contingencies without awareness. We studied learning of an explicit stimulus-response association with fMRI. Embedded in this explicit task was a hidden structure that was learnt implicitly. Implicit learning of the sequential regularities of the "hidden rule" activated the ventral perirhinal cortex, within the MTL, whereas learning the fixed stimulus-response associations activated the basal ganglia, indicating that the function of the MTL and the basal ganglia depends on the learned material and not necessarily on the participants' awareness.

4.
The complexity of biological neural networks does not allow one to directly relate their biophysical properties to the dynamics of their electrical activity. We present a reservoir computing approach for functionally identifying a biological neural network, i.e. for building an artificial system that is functionally equivalent to the reference biological network. Employing feed-forward and recurrent networks with fading memory, i.e. reservoirs, we propose a point-process-based learning algorithm to train the internal parameters of the reservoir and the connectivity between the reservoir and the memoryless readout neurons. Specifically, the model is an Echo State Network (ESN) with leaky integrator neurons, whose individual leakage time constants are also adapted. The proposed ESN algorithm learns a predictive model of stimulus-response relations in in vitro and simulated networks, i.e. it models their response dynamics. Receiver Operating Characteristic (ROC) curve analysis indicates that these ESNs can imitate the response signal of a reference biological network. Reservoir adaptation improved the performance of an ESN over readout-only training methods in many cases. This also held for adaptive feed-forward reservoirs, which had no recurrent dynamics. We demonstrate the predictive power of these ESNs on various tasks with cultured and simulated biological neural networks.
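A bare-bones leaky-integrator ESN of the kind described, with per-neuron leak rates and a ridge-regression readout. The next-step sine-prediction task and all sizes are illustrative assumptions; the paper trains on point-process stimulus-response data and also adapts the reservoir itself.

```python
import numpy as np

rng = np.random.default_rng(1)
n_res = 100
W_in = rng.uniform(-0.5, 0.5, (n_res, 1))
W = rng.normal(0.0, 1.0, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # spectral radius < 1: fading memory
leak = rng.uniform(0.1, 1.0, n_res)               # per-neuron leakage rates

def run_reservoir(u):
    """Leaky-integrator update: x <- (1 - a) x + a tanh(W x + W_in u)."""
    x = np.zeros(n_res)
    states = []
    for u_t in u:
        x = (1 - leak) * x + leak * np.tanh(W @ x + W_in @ np.atleast_1d(u_t))
        states.append(x.copy())
    return np.array(states)

u = np.sin(np.linspace(0, 8 * np.pi, 400))
X = run_reservoir(u)

# memoryless linear readout trained by ridge regression to predict the next input
ridge = 1e-6
S, target = X[:-1], np.roll(u, -1)[:-1]
w_out = np.linalg.solve(S.T @ S + ridge * np.eye(n_res), S.T @ target)
pred = S @ w_out
```

Keeping the spectral radius below one is what gives the reservoir its fading memory: the influence of old inputs decays, so the readout sees a stable nonlinear expansion of the recent input history.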

5.
Providing the neurobiological basis of information processing in higher animals, spiking neural networks must be able to learn a variety of complicated computations, including the generation of appropriate, possibly delayed reactions to inputs and the self-sustained generation of complex activity patterns, e.g. for locomotion. Many such computations require previous building of intrinsic world models. Here we show how spiking neural networks may solve these different tasks. Firstly, we derive constraints under which classes of spiking neural networks lend themselves to substrates of powerful general purpose computing. The networks contain dendritic or synaptic nonlinearities and have a constrained connectivity. We then combine such networks with learning rules for outputs or recurrent connections. We show that this allows such networks to learn even difficult benchmark tasks such as the self-sustained generation of desired low-dimensional chaotic dynamics or memory-dependent computations. Furthermore, we show how spiking networks can build models of external world systems and use the acquired knowledge to control them.

6.
Adrenal corticosteroid hormones act via mineralocorticoid (MR) and glucocorticoid receptors (GR) in the brain, influencing learning and memory. MRs have been implicated in the initial behavioral response in novel situations, which includes behavioral strategies in learning tasks. Different strategies can be used to solve navigational tasks, for example hippocampus-dependent spatial or striatum-dependent stimulus-response strategies. Previous studies suggested that MRs are involved in spatial learning and induce a shift between learning strategies when animals are allowed a choice between both strategies. In the present study, we further explored the role of MRs in spatial and stimulus-response learning in two separate circular holeboard tasks using female mice with forebrain-specific MR deficiency and MR overexpression and their wildtype control littermates. In addition, we studied sex-specific effects using male and female MR-deficient mice. First, we found that MR-deficient compared to control littermates and MR-overexpressing mice display altered exploratory and searching behavior indicative of impaired acquisition of novel information. Second, female (but not male) MR-deficient mice were impaired in the spatial task, while MR-overexpressing female mice showed improved performance in the spatial task. Third, MR-deficient mice were also impaired in the stimulus-response task compared to controls and (in the case of females) MR-overexpressing mice. We conclude that MRs are important for coordinating the processing of information relevant for spatial as well as stimulus-response learning.

7.
Learning is often understood as an organism's gradual acquisition of the association between a given sensory stimulus and the correct motor response. Mathematically, this corresponds to regressing a mapping between the set of observations and the set of actions. Recently, however, it has been shown both in cognitive and motor neuroscience that humans are not only able to learn particular stimulus-response mappings, but are also able to extract abstract structural invariants that facilitate generalization to novel tasks. Here we show how such structure learning can enhance facilitation in a sensorimotor association task performed by human subjects. Using regression and reinforcement learning models we show that the observed facilitation cannot be explained by these basic models of learning stimulus-response associations. We show, however, that the observed data can be explained by a hierarchical Bayesian model that performs structure learning. In line with previous results from cognitive tasks, this suggests that hierarchical Bayesian inference might provide a common framework to explain both the learning of specific stimulus-response associations and the learning of abstract structures that are shared by different task environments.

8.
In this paper a new learning rule for tuning the coupling weights of Hopfield-like chaotic neural networks is developed in such a way that all neurons behave in a synchronous manner, while the desirable structure of the network is preserved during the learning process. The proposed learning rule is based on sufficient synchronization criteria, on the eigenvalues of the weight matrix belonging to the neural network, and on the idea of the Structured Inverse Eigenvalue Problem. Our learning rule not only synchronizes all neurons' outputs with each other in a desirable topology, but also enables us to enhance the synchronizability of the networks by choosing the appropriate set of weight matrix eigenvalues. Specifically, this method is evaluated by performing simulations on the scale-free topology.

9.
We study asymmetric stochastic networks from two points of view: combinatorial optimization and learning algorithms based on relative entropy minimization. We show that there are non-trivial classes of asymmetric networks which admit a Lyapunov function L under deterministic parallel evolution, and prove that the stochastic augmentation of such networks amounts to a stochastic search for global minima of L. The problem of minimizing L for a totally antisymmetric parallel network is shown to be associated to an NP-complete decision problem. The study of entropic learning for general asymmetric networks, performed in the non-equilibrium, time-dependent formalism, leads to a Hebbian rule based on time averages over the past history of the system. The general algorithm for asymmetric networks is tested on a feed-forward architecture. (This research was supported in part by C.N.R. under grants 88.03556.12 and 89.05261.CT12.)

10.
Antzoulatos EG, Miller EK. Neuron. 2011;71(2):243-249
Learning to classify diverse experiences into meaningful groups, like categories, is fundamental to normal cognition. To understand its neural basis, we simultaneously recorded from multiple electrodes in lateral prefrontal cortex and dorsal striatum, two interconnected brain structures critical for learning. Each day, monkeys learned to associate novel abstract, dot-based categories with a right versus left saccade. Early on, when they could acquire specific stimulus-response associations, striatum activity was an earlier predictor of the corresponding saccade. However, as the number of exemplars increased and monkeys had to learn to classify them, PFC activity began to predict the saccade associated with each category before the striatum. While monkeys were categorizing novel exemplars at a high rate, PFC activity was a strong predictor of their corresponding saccade early in the trial before the striatal neurons. These results suggest that striatum plays a greater role in stimulus-response association and PFC in abstraction of categories.

11.
Doyon B. Acta Biotheoretica. 1992;40(2-3):113-119
Chaos theory is a rapidly growing field. As a technical term, “chaos” refers to deterministic but unpredictable processes that are sensitively dependent upon initial conditions. Neurobiological models and experimental results are very complicated, and some research groups have pursued “neuronal chaos”. Babloyantz's group has studied the fractal dimension (d) of electroencephalograms (EEG) in various physiological and pathological states. From deep sleep (d=4) to full awakening (d>8), a hierarchy of “strange” attractors parallels the hierarchy of states of consciousness. In epilepsy (petit mal), despite the turbulent aspect of a seizure, the attractor dimension was near 2. In Creutzfeldt-Jakob disease, the regular EEG activity corresponded to an attractor dimension less than the one measured in deep sleep. Is it healthy to be chaotic? An “active desynchronisation” could be favourable to a physiological system. Rapp's group reported variations of fractal dimension according to particular tasks. During a mental arithmetic task, this dimension increased. In another task, a P300 fractal index decreased when a target was identified. It is clear that the EEG does not represent noise. Its underlying dynamics depends on only a few degrees of freedom, although it is difficult to compute the relevant parameters accurately. What is the cognitive role of such chaotic dynamics? Freeman has studied the olfactory bulb in rabbits and rats for 15 years. Multi-electrode recordings over a few mm² showed a chaotic hierarchy from deep anaesthesia to the alert state. When an animal identified a previously learned odour, the fractal dimension of the dynamics dropped (toward near-limit-cycle behaviour). The chaotic activity corresponding to an alert-and-waiting state seems to be a field of all possibilities, and a focused activity corresponds to a reduction of the attractor in state space. For a couple of years, Freeman has been developing a model of the olfactory bulb-cortex system. The behaviour of the simple model “without learning” was quite similar to the real behaviour, and a model “with learning” is being developed. Recently, more and more authors have insisted on the importance of the dynamic aspect of nervous functioning in cognitive modelling. Most models in the neural-network field are designed to converge to a stable state (fixed point) because such behaviour is easy to understand and control. However, some theoretical studies in physics try to understand how chaotic behaviour can emerge from neural networks. Sompolinsky's group showed that a sharp transition from a stable state to a chaotic state occurs in totally interconnected networks depending on the value of one control parameter. Learning in such systems is an open field. In conclusion, chaos does exist in neurophysiological processes. It is neither a kind of noise nor a pathological sign. Its main role could be to provide diversity and flexibility to physiological processes. Could “strange” attractors in the nervous system embody mental forms? This is a difficult but fascinating question.

12.
The organization of computations in networks of spiking neurons in the brain is still largely unknown, in particular in view of the inherently stochastic features of their firing activity and the experimentally observed trial-to-trial variability of neural systems in the brain. In principle there exists a powerful computational framework for stochastic computations, probabilistic inference by sampling, which can explain a large number of macroscopic experimental data in neuroscience and cognitive science. But it has turned out to be surprisingly difficult to create a link between these abstract models for stochastic computations and more detailed models of the dynamics of networks of spiking neurons. Here we create such a link and show that under some conditions the stochastic firing activity of networks of spiking neurons can be interpreted as probabilistic inference via Markov chain Monte Carlo (MCMC) sampling. Since common methods for MCMC sampling in distributed systems, such as Gibbs sampling, are inconsistent with the dynamics of spiking neurons, we introduce a different approach based on non-reversible Markov chains that is able to reflect inherent temporal processes of spiking neuronal activity through a suitable choice of random variables. We propose a neural network model and show by a rigorous theoretical analysis that its neural activity implements MCMC sampling of a given distribution, for both discrete and continuous time. This provides a step towards closing the gap between abstract functional models of cortical computation and more detailed models of networks of spiking neurons.
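For contrast with the paper's non-reversible chains, here is the baseline the authors argue against: a standard Gibbs sampler over binary units drawing from a Boltzmann distribution p(z) ∝ exp(b·z + z·Wz/2). With zero coupling the sampled marginals must approach the logistic function of the biases, which makes the sketch easy to check; the network size and biases are illustrative.

```python
import numpy as np

def gibbs_sample(W, b, n_steps, rng):
    """Gibbs sampling over binary units: each unit is resampled in turn from
    its conditional p(z_i = 1 | rest) = sigmoid(b_i + sum_j W_ij z_j)."""
    n = len(b)
    z = rng.integers(0, 2, n)
    samples = []
    for _ in range(n_steps):
        for i in range(n):
            field = b[i] + W[i] @ z - W[i, i] * z[i]   # exclude self-coupling
            z[i] = rng.random() < 1.0 / (1.0 + np.exp(-field))
        samples.append(z.copy())
    return np.array(samples)

rng = np.random.default_rng(2)
W = np.zeros((3, 3))                      # uncoupled units: marginals = sigmoid(b)
b = np.array([1.0, -1.0, 0.0])
samples = gibbs_sample(W, b, 5000, rng)
rates = samples[1000:].mean(axis=0)       # empirical firing probabilities after burn-in
```

The sequential "freeze everything else, resample one unit" scheduling is exactly what clashes with asynchronous spiking dynamics, motivating the paper's non-reversible construction.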

13.
Recent experimental measurements have demonstrated that spontaneous neural activity in the absence of explicit external stimuli has remarkable spatiotemporal structure. This spontaneous activity has also been shown to play a key role in the response to external stimuli. To better understand this role, we proposed a viewpoint, “memories-as-bifurcations,” that differs from the traditional “memories-as-attractors” viewpoint. Memory recall from the memories-as-bifurcations viewpoint occurs when the spontaneous neural activity is changed to an appropriate output activity upon application of an input, known as a bifurcation in dynamical systems theory, wherein the input modifies the flow structure of the neural dynamics. Learning, then, is a process that helps create neural dynamical systems such that a target output pattern is generated as an attractor upon a given input. Based on this novel viewpoint, we introduce in this paper an associative memory model with a sequential learning process. Using a simple Hebbian-type learning, the model is able to memorize a large number of input/output mappings. The neural dynamics shaped through the learning exhibit different bifurcations to make the requested targets stable upon an increase in the input, and the neural activity in the absence of input shows chaotic dynamics with occasional approaches to the memorized target patterns. These results suggest that these dynamics facilitate the bifurcations to each target attractor upon application of the corresponding input, which thus increases the capacity for learning. This theoretical finding about the behavior of the spontaneous neural activity is consistent with recent experimental observations in which the neural activity without stimuli wanders among patterns evoked by previously applied signals. In addition, the neural networks shaped by learning properly reflect the correlations of input and target-output patterns in a similar manner to those designed in our previous study.  

14.
It has been shown that, by adding a chaotic sequence to the weight update during the training of neural networks, the chaos injection-based gradient method (CIBGM) is superior to the standard backpropagation algorithm. This paper presents the theoretical convergence analysis of CIBGM for training feedforward neural networks. We consider both the case of batch learning as well as the case of online learning. Under mild conditions, we prove the weak convergence, i.e., the training error tends to a constant and the gradient of the error function tends to zero. Moreover, the strong convergence of CIBGM is also obtained with the help of an extra condition. The theoretical results are substantiated by a simulation example.
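A hedged sketch of the idea, not the paper's exact algorithm: batch gradient descent on a toy one-layer sigmoid network, with a decaying logistic-map perturbation injected into each weight update. The decay of the injection is what lets the training error still settle, in the spirit of the weak-convergence result.

```python
import numpy as np

def logistic_map(x):
    return 4.0 * x * (1.0 - x)            # fully developed chaos

def train(X, y, epochs=5000, lr=0.5, chaos_scale=0.01):
    """Gradient descent with a chaos injection that decays over time,
    so the perturbation vanishes and the error can converge."""
    rng = np.random.default_rng(4)
    w = rng.normal(0.0, 0.1, X.shape[1])
    c, errors = 0.3, []
    for t in range(epochs):
        pred = 1.0 / (1.0 + np.exp(-X @ w))
        grad = X.T @ ((pred - y) * pred * (1.0 - pred)) / len(y)
        c = logistic_map(c)               # next term of the chaotic sequence
        w -= lr * grad + chaos_scale / (1 + t) * (2.0 * c - 1.0)
        errors.append(np.mean((pred - y) ** 2))
    return w, errors

X = np.array([[0., 0, 1], [0, 1, 1], [1, 0, 1], [1, 1, 1]])  # bias column appended
y = np.array([0., 0, 0, 1])                                  # logical AND of the first two inputs
w, errors = train(X, y)
```

Early on the injection jiggles the weights enough to escape flat regions; because its amplitude shrinks as 1/(1+t), the trajectory behaves like plain gradient descent in the limit.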

15.
We propose a general framework for integrating theory and empiricism in human evolutionary ecology. We specifically emphasize the joint use of stochastic nonlinear dynamics and information theory. To illustrate critical ideas associated with historical contingency and complex dynamics, we review recent research on social preferences and social learning from behavioral economics. We additionally examine recent work on ecological approaches in history, the modeling of chaotic populations, and statistical application of information theory.

16.
Can noise induce chaos?
An important component of the mathematical definition of chaos is sensitivity to initial conditions. Sensitivity to initial conditions is usually measured in a deterministic model by the dominant Lyapunov exponent (LE), with chaos indicated by a positive LE. The sensitivity measure has been extended to stochastic models; however, it is possible for the stochastic Lyapunov exponent (SLE) to be positive when the LE of the underlying deterministic model is negative, and vice versa. This occurs because the LE is a long-term average over the deterministic attractor while the SLE is the long-term average over the stationary probability distribution. The property of sensitivity to initial conditions, uniquely associated with chaotic dynamics in deterministic systems, is widespread in stochastic systems because of time spent near repelling invariant sets (such as unstable equilibria and unstable cycles). Such sensitivity is due to a mechanism fundamentally different from deterministic chaos. Positive SLEs should therefore not be viewed as a hallmark of chaos. We develop examples of ecological population models in which contradictory LE and SLE values lead to confusion about whether or not the population fluctuations are primarily the result of chaotic dynamics. We suggest that "chaos" should retain its deterministic definition in light of the origins and spirit of the topic in ecology. While a stochastic system cannot then strictly be chaotic, chaotic dynamics can be revealed in stochastic systems through the strong influence of underlying deterministic chaotic invariant sets.
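The deterministic LE discussed above can be computed directly for a one-dimensional map as the long-run orbit average of log|f'(x)|. A sketch for the logistic map, a standard stand-in for the paper's ecological models (the parameter values are illustrative):

```python
import numpy as np

def lyapunov_logistic(r, n=100_000, x0=0.3, burn=1_000):
    """Dominant Lyapunov exponent of x -> r x (1 - x), estimated as the
    long-run average of log|f'(x)| = log|r (1 - 2x)| along the orbit."""
    x = x0
    for _ in range(burn):                 # discard the transient
        x = r * x * (1.0 - x)
    acc = 0.0
    for _ in range(n):
        acc += np.log(abs(r * (1.0 - 2.0 * x)))
        x = r * x * (1.0 - x)
    return acc / n

le_chaotic = lyapunov_logistic(4.0)       # chaotic regime: LE converges to ln 2
le_stable  = lyapunov_logistic(2.5)       # stable fixed point: LE is negative
```

A positive estimate flags exponential divergence of nearby orbits on the deterministic attractor; the paper's point is that the stochastic analogue of this average can change sign even when the underlying deterministic exponent does not.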

17.
The efficiency of various patterns of pulsatile stimulation is determined in a model in which a receptor becomes desensitized in the presence of its stimulatory ligand. The effect of stochastic or chaotic changes in the duration and/or interval between successive pulses in a series of square-wave stimuli is investigated. Before addressing the effect of random variations in the pulsatile signal, we first extend the results of a previous analysis (Li, Y.X., and A. Goldbeter. 1989. Biophys. J. 55:125-145) by demonstrating the existence of an optimal periodic signal that maximizes target cell responsiveness whatever the magnitude of stimulation. As to the effect of stochastic or chaotic variations in the pulsatile stimulus, three kinds of random distributions are used, namely, a Gaussian and a white-noise distribution, and a chaotic time series generated by the logistic map. All these random distributions are symmetrically centered around the reference value of the duration or interval that characterizes the optimal periodic stimulus yielding maximal responsiveness in target cells. Stochastically or chaotically varying pulses are less effective than the periodic signal that corresponds to the optimal pattern of pulsatile stimulation. The response of the receptor system is most sensitive to changes in the time interval that separates successive stimuli. Similar conclusions hold when stochastic or chaotic signals are compared to a reference periodic stimulus differing from the optimal one, although the effect of random variations is then reduced. The decreased efficiency of stochastic pulses accounts for the observed superiority of periodic versus stochastic pulses of cyclic AMP (cAMP) in Dictyostelium amoebae. The results are also discussed with respect to the efficiency of periodic versus stochastic or chaotic patterns of hormone secretion.

18.
This paper reveals the relationship between the stochastic integer-multiple and the chaotic multi-peak firing rhythms generated by neuronal systems under external periodic pulse stimulation. The statistical histogram of the stochastic rhythm shows a multi-peak distribution with exponentially decaying peaks; it is unpredictable and its complexity is close to 1. The statistical histogram of the chaotic rhythm shows a different multi-peak distribution with non-exponentially decaying peaks; it is partially predictable and its complexity is less than 1. The chaotic rhythm arises when the stimulation pulse period is shorter than the system's intrinsic period and the stimulus intensity is relatively large, over a small range of parameters; the stochastic rhythm arises, in conjunction with random factors, when the pulse period is longer than the intrinsic period and the stimulus intensity is small, over a larger range of parameters. These results reveal the dynamical characteristics of the two classes of rhythms and provide practical indices for distinguishing between them.

19.
Pancreatic beta-cells in an intact Islet of Langerhans exhibit bursting electrical behavior. The Chay-Keizer model describes this using a calcium-activated potassium (K-Ca) channel, but cannot account for the irregular spiking of isolated beta-cells. Atwater, Rosario, and Rojas (Cell Calcium. 4:451-461) proposed that the K-Ca channels, which are rarely open, are shared by several cells. This suggests that the chaotic behavior of isolated cells is stochastic. We have revised the Chay-Keizer model to incorporate voltage clamp data of Rorsman and Trube and extended it to include stochastic K-Ca channels. This model can describe the behavior of single cells, as well as that of clusters of cells tightly coupled by gap junctions. As the size of the clusters is increased, the electrical activity shows a transition from chaotic spiking to regular bursting. Although the model of coupling is over-simplified, the simulations lend support to the hypothesis that bursting is the result of channel sharing.
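The channel-sharing argument rests on a simple statistical point that a toy binomial model makes explicit: fluctuations in the fraction of open K-Ca channels scale as 1/sqrt(n), so a coupled cluster sharing many channels sees a much smoother conductance than an isolated cell with few. The channel counts and open probability below are illustrative, not the revised Chay-Keizer model itself.

```python
import numpy as np

def open_fraction_noise(n_channels, p_open, n_steps, rng):
    """Standard deviation of the open fraction when n_channels stochastic
    K-Ca channels are shared; binomial noise scales as sqrt(p(1-p)/n)."""
    k = rng.binomial(n_channels, p_open, n_steps)
    return (k / n_channels).std()

rng = np.random.default_rng(5)
noise_cell    = open_fraction_noise(10, 0.1, 10_000, rng)     # isolated cell: few channels
noise_cluster = open_fraction_noise(1000, 0.1, 10_000, rng)   # coupled cluster: many channels
```

A hundred-fold increase in shared channels shrinks the conductance noise roughly ten-fold, which is the qualitative route from irregular single-cell spiking to regular bursting in large clusters.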

20.
The implications of probabilistic secretion of quanta for the functioning of neural networks in the central nervous system have been explored. A model of stochastic secretion at synapses in simple networks, consisting of large numbers of granule cells and a relatively small number of inhibitory interneurons, has been analysed. Such networks occur in the input to the cerebellum Purkinje cells as well as to hippocampal CA3 pyramidal cells and to pyramidal cells in the visual cortex. In this model the input axons terminate on granule cells as well as on an inhibitory interneuron that projects to the granule cells. Stochastic secretion at these synapses involves both temporal variability in secretion at single synapses in the network as well as spatial variability in the secretion at different synapses. The role of this stochastic variability in controlling the size of the granule cell output to a level independent of the size of the input and in separating overlapping inputs has been determined analytically as well as by simulation. The regulation of granule-cell output activity to a reasonably constant value for different size inputs does not occur in the absence of an inhibitory interneuron when both spatial and temporal stochastic variability occurs at the remaining synapses; it is still very poor in the presence of such an interneuron but in the absence of stochastic variability. However, quite good regulation is achieved when the inhibitory interneuron is present with spatial and temporal stochastic variability of secretion at synapses in the network. Excellent regulation is achieved if, in addition, allowance is made for the nonlinear behaviour of the input-output characteristics of inhibitory interneurons. 
The capacity of granule-cell networks to separate overlapping patterns of activity on their inputs is adequate, with spatial variability in the secretion at synapses, but is improved if there is also temporal variability in the stochastic secretion at individual synapses, although this is at the expense of reliability in the network. Other factors which improve pattern separation are control of the output to very low activity levels, and a restriction on the cumulative size of the excitatory input terminals of each granule cell. Application of the theory to the input neural networks of the cerebellum and the hippocampus shows the role of stochastic variability in quantal transmission in determining the capacity of these networks for pattern separation and activity regulation.

