Similar Articles
20 similar articles found (search time: 218 ms)
1.
In standard attractor neural network models, specific patterns of activity are stored in the synaptic matrix, so that they become fixed-point attractors of the network dynamics. The storage capacity of such networks has been quantified in two ways: the maximal number of patterns that can be stored, and the stored information measured in bits per synapse. In this paper, we compute both quantities in fully connected networks of N binary neurons with binary synapses, storing patterns with coding level f, in the large-N and sparse-coding limits (N → ∞, f → 0). We also derive finite-size corrections that accurately reproduce the results of simulations in networks of tens of thousands of neurons. These methods are applied to three different scenarios: (1) the classic Willshaw model, (2) networks with stochastic learning in which patterns are shown only once (one-shot learning), and (3) networks with stochastic learning in which patterns are shown multiple times. The storage capacities are optimized over network parameters, which allows us to compare the performance of the different models. We show that finite-size effects strongly reduce the capacity, even for networks of realistic sizes. We discuss the implications of these results for memory storage in the hippocampus and cerebral cortex.
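The classic Willshaw model mentioned above admits a very compact sketch: binary synapses are set by clipped Hebbian learning (a synapse becomes 1 if its two neurons were ever coactive), and retrieval thresholds each neuron's input at the number of active cue bits. The following is a minimal illustration under assumed names and sizes, not code from the paper:

```python
import numpy as np

def willshaw_store(patterns, n):
    """Clipped Hebbian storage: W_ij = 1 if neurons i and j are
    coactive in at least one stored binary pattern."""
    W = np.zeros((n, n), dtype=np.uint8)
    for p in patterns:
        W |= np.outer(p, p)   # OR of outer products clips counts at 1
    return W

def willshaw_recall(W, cue):
    """One-step retrieval: a neuron fires if its summed input reaches
    the number of active bits in the cue."""
    theta = cue.sum()
    return (W @ cue >= theta).astype(np.uint8)

n = 20
p1 = np.zeros(n, dtype=np.uint8); p1[[0, 5, 9]] = 1
p2 = np.zeros(n, dtype=np.uint8); p2[[2, 7, 14]] = 1
W = willshaw_store([p1, p2], n)
recalled = willshaw_recall(W, p1)   # recovers p1 (no crosstalk here)
```

With sparse patterns and few stored memories, as here, recall is exact; capacity analysis in the paper concerns the regime where overlapping patterns begin to interfere.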

2.
We investigate an artificial neural network model with a modified Hebb rule. It is an auto-associative neural network similar to the Hopfield model and to the Willshaw model, and it has properties of both. A further property is that the patterns are sparsely coded and are stored in cycles of synchronous neural activity. For some parameter ranges, these cycles of activity increase the capacity of the model. We discuss basic properties of the model and some implementation issues, namely optimization of the algorithms. We describe the modification of the Hebb learning rule, the learning algorithm, the generation of patterns, the decomposition of patterns into cycles, and pattern recall.

3.
Long-term modification of synaptic strength is thought to be the basic mechanism underlying the activity-dependent refinement of neural circuits and the formation of the memory engrams stored within them. Studies ranging from cell culture preparations to human subjects indicate that whether a synapse undergoes strengthening or weakening depends critically on the temporal order of presynaptic and postsynaptic activity. At many synapses, potentiation is induced only when the presynaptic neuron fires an action potential within milliseconds before the postsynaptic neuron fires, whereas weakening occurs when the postsynaptic neuron fires first. Such processes might be important for the remodeling of neural circuits by activity during development and for network functions such as sequence learning and prediction. Ultimately, this synaptic property might also be fundamental to the cognitive process by which we structure our experience through cause-and-effect relations.

4.
Synaptic plasticity is widely believed to constitute a key mechanism for modifying functional properties of neuronal networks. This belief implicitly implies, however, that synapses, when not driven to change their characteristics by physiologically relevant stimuli, will maintain these characteristics over time. How tenacious are synapses over behaviorally relevant time scales? To begin to address this question, we developed a system for continuously imaging the structural dynamics of individual synapses over many days, while recording network activity in the same preparations. We found that in spontaneously active networks, distributions of synaptic sizes were generally stable over days. Following individual synapses revealed, however, that the apparently static distributions were actually steady states of synapses exhibiting continual and extensive remodeling. In active networks, large synapses tended to grow smaller, whereas small synapses tended to grow larger, mainly during periods of particularly synchronous activity. Suppression of network activity only mildly affected the magnitude of synaptic remodeling, but dependence on synaptic size was lost, leading to the broadening of synaptic size distributions and increases in mean synaptic size. From the perspective of individual neurons, activity drove changes in the relative sizes of their excitatory inputs, but such changes continued, albeit at lower rates, even when network activity was blocked. Our findings show that activity strongly drives synaptic remodeling, but they also show that significant remodeling occurs spontaneously. Whereas such spontaneous remodeling provides an explanation for "synaptic homeostasis"-like processes, it also raises significant questions concerning the reliability of individual synapses as sites for persistently modifying network function.

5.
Brunel N  Hakim V  Isope P  Nadal JP  Barbour B 《Neuron》2004,43(5):745-757
It is widely believed that synaptic modifications underlie learning and memory. However, few studies have examined what can be deduced about the learning process from the distribution of synaptic weights. We analyze the perceptron, a prototypical feedforward neural network, and obtain the optimal synaptic weight distribution for a perceptron with excitatory synapses. It contains more than 50% silent synapses, and this fraction increases with storage reliability: silent synapses are therefore a necessary byproduct of optimizing learning and reliability. Exploiting the classical analogy between the perceptron and the cerebellar Purkinje cell, we fitted the optimal weight distribution to that measured for granule cell-Purkinje cell synapses. The two distributions agreed well, suggesting that the Purkinje cell can learn up to 5 kilobytes of information, in the form of 40,000 input-output associations.
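The key constraint in the abstract above is that all synapses are excitatory (weights cannot go negative), which naturally produces silent (exactly zero) synapses. A minimal sketch of perceptron learning under that constraint follows; the clipping step, learning rate, and toy data are illustrative assumptions, not the paper's procedure:

```python
import numpy as np

def train_nonneg_perceptron(X, y, epochs=100, eta=0.1):
    """Standard perceptron updates, but with weights clipped at zero
    after each step to enforce the excitatory-synapse constraint.
    Weights driven to the boundary become silent synapses."""
    w = np.zeros(X.shape[1])
    b = 0.0                       # free threshold term
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            pred = 1 if w @ xi + b > 0 else 0
            if pred != yi:
                w = np.maximum(w + eta * (yi - pred) * xi, 0.0)
                b += eta * (yi - pred)
    return w, b

# Toy linearly separable task that admits a nonnegative solution.
X = np.array([[1, 0], [0, 1], [1, 1], [0, 0]], dtype=float)
y = np.array([1, 0, 1, 0])
w, b = train_nonneg_perceptron(X, y)
preds = [1 if w @ xi + b > 0 else 0 for xi in X]
```

Even in this tiny example, one of the two weights ends at exactly zero: the nonnegativity constraint, not noise, creates the silent synapse.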

6.
In the developing hippocampus, functional excitatory synaptic connections seem to be recruited from a preformed, initially silent synaptic network. This functional synapse induction requires presynaptic action potentials paired with postsynaptic depolarization, thus obeying Hebb's rule of association. During early postnatal development the hippocampus exhibits an endogenous form of patterned neuronal activity that is driven by GABAergic depolarization. We propose that this recurrent activity promotes the input-specific induction of functional synapses in the CA1 region. Thus, activity-dependent synaptic reorganization in the developing hippocampus appears to be dominated by an active recruitment of new synapses rather than an active elimination of redundant connections.  相似文献   

7.
8.
Accurate models of synaptic plasticity are essential to understand the adaptive properties of the nervous system and for realistic models of learning and memory. Experiments have shown that synaptic plasticity depends not only on pre- and post-synaptic activity patterns, but also on the strength of the connection itself. Namely, weaker synapses are more easily strengthened than already strong ones. This so-called soft-bound plasticity automatically constrains the synaptic strengths. It is known that this has important consequences for the dynamics of plasticity and the synaptic weight distribution, but its impact on information storage is unknown. In this modeling study, we introduce an information-theoretic framework to analyse memory storage in an online learning setting. We show that soft-bound plasticity increases a variety of performance criteria by about 18% over hard-bound plasticity, and likely maximizes the storage capacity of synapses.
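The hard-bound versus soft-bound distinction above reduces to how the update step depends on the current weight. A minimal sketch (step sizes and bounds are illustrative assumptions):

```python
def hard_bound_update(w, potentiate, eta=0.1, w_max=1.0):
    """Hard bounds: fixed-size step, clipped at 0 and w_max."""
    w += eta if potentiate else -eta
    return min(max(w, 0.0), w_max)

def soft_bound_update(w, potentiate, eta=0.1, w_max=1.0):
    """Soft bounds: the step shrinks as the weight approaches its bound,
    so weak synapses are easier to strengthen than strong ones and the
    bounds are approached smoothly rather than hit."""
    return w + (eta * (w_max - w) if potentiate else -eta * w)
```

Under soft bounds the weight never reaches 0 or w_max exactly; the bound acts as an attracting limit, which is what reshapes the weight distribution discussed in the abstract.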

9.
Neuronal networks can generate complex patterns of activity that depend on membrane properties of individual neurons as well as on functional synapses. To decipher the impact of synaptic properties and connectivity on neuronal network behavior, we investigated the responses of neuronal ensembles from small (5–30 cells in a restricted sphere) and large (acute hippocampal slice) networks to a single electrical stimulus: in both cases, a single stimulus generated synchronous, long-lasting bursting activity. While an initial spike triggered reverberating network activity that lasted 2–5 seconds in small networks, we found here that it lasted only up to 300 milliseconds in slices. To explain this phenomenon, present at different scales, we generalized the depression-facilitation model and extracted the network time constants. The model predicts that the reverberation time has a bell-shaped relation with the synaptic density, revealing that the bursting time cannot exceed a maximum value. Furthermore, before reaching its maximum, the reverberation time increases sub-linearly with the synaptic density of the network. We conclude that synaptic dynamics and connectivity shape the mean burst duration, a property present at various scales of the networks. Thus, bursting reverberation is a property of sufficiently connected neural networks, and can be generated by collective depression and facilitation of the underlying functional synapses.

10.
Here we explore the possibility that a core function of sensory cortex is the generation of an internal simulation of the sensory environment in real time. A logical elaboration of this idea leads to a dynamical neural architecture that oscillates between two fundamental network states, one driven by external input, and the other by recurrent synaptic drive in the absence of sensory input. Synaptic strength is modified by a proposed synaptic state matching (SSM) process that ensures equivalence of spike statistics between the two network states. Remarkably, SSM, operating locally at individual synapses, generates accurate and stable network-level predictive internal representations, enabling pattern completion and unsupervised feature detection from noisy sensory input. SSM is a biologically plausible substrate for learning and memory because it brings together sequence learning, feature detection, synaptic homeostasis, and network oscillations under a single unifying computational framework.

11.
For long-lasting memory traces, structural synaptic changes remain a probable mechanism. However, in higher animals it has proved difficult to provide positive evidence for this notion. The main reason may be that the changes are subtle and are to be found in a relatively small subset of synapses and in a distributed manner in the cellular network in question. Here, we discuss possible strategies for finding structural changes in the hippocampus associated with spatial learning, an activity for which this structure is important. Spatial learning may induce new excitatory synapses in a small subset of hippocampal CA1 neurons because we observe a higher spine density without alteration in dendritic length or branching. The dendritic synapses are regularly spaced, irrespective of spine density, suggesting the operation of an intersynaptic dispersing force. © 1995 John Wiley & Sons, Inc.

12.
M Kaufman  MA Corner  NE Ziv 《PloS one》2012,7(7):e40980
Cholinergic neuromodulation plays key roles in the regulation of neuronal excitability, network activity, arousal, and behavior. On longer time scales, cholinergic systems play essential roles in cortical development, maturation, and plasticity. Presumably, these processes are associated with substantial synaptic remodeling, yet to date, long-term relationships between cholinergic tone and synaptic remodeling remain largely unknown. Here we used automated microscopy combined with multielectrode array recordings to study long-term relationships between cholinergic tone, excitatory synapse remodeling, and network activity characteristics in networks of cortical neurons grown on multielectrode array substrates. Experimental elevations of cholinergic tone led to the abrupt suppression of episodic synchronous bursting activity (but not of general activity), followed by a gradual growth of excitatory synapses over hours. Subsequent blockade of cholinergic receptors led to an immediate restoration of synchronous bursting and the gradual reversal of synaptic growth. Neither synaptic growth nor downsizing was governed by multiplicative scaling rules. Instead, these occurred in a subset of synapses, irrespective of initial synaptic size. Synaptic growth seemed to depend on intrinsic network activity, but not on the degree to which bursting was suppressed. Intriguingly, sustained elevations of cholinergic tone were associated with a gradual recovery of synchronous bursting but not with a reversal of synaptic growth. These findings show that cholinergic tone can strongly affect synaptic remodeling and synchronous bursting activity, but do not support a strict coupling between the two. Finally, the reemergence of synchronous bursting in the presence of elevated cholinergic tone indicates that the capacity of cholinergic neuromodulation to indefinitely suppress synchronous bursting might be inherently limited.

13.
14.
It is generally believed that associative memory in the brain depends on multistable synaptic dynamics, which enable the synapses to maintain their value for extended periods of time. However, multistable dynamics are not restricted to synapses. In particular, the dynamics of some genetic regulatory networks are multistable, raising the possibility that even single cells, in the absence of a nervous system, are capable of learning associations. Here we study a standard genetic regulatory network model with bistable elements and stochastic dynamics. We demonstrate that such a genetic regulatory network model is capable of learning multiple, general, overlapping associations. The capacity of the network, defined as the number of associations that can be simultaneously stored and retrieved, is proportional to the square root of the number of bistable elements in the genetic regulatory network. Moreover, we compute the capacity of a clonal population of cells, such as in a colony of bacteria or a tissue, to store associations. We show that even if the cells do not interact, the capacity of the population to store associations substantially exceeds that of a single cell and is proportional to the number of bistable elements. Thus, we show that even single cells are endowed with the computational power to learn associations, a power that is substantially enhanced when these cells form a population.

15.
Dendritic spines are the main postsynaptic site of excitatory contacts between neurons in the central nervous system. On cortical neurons, spines undergo a continuous turnover regulated by development and sensory activity. However, the functional implications of this synaptic remodeling for network properties remain currently unknown. Using repetitive confocal imaging on hippocampal organotypic cultures, we find that learning-related patterns of activity that induce long-term potentiation act as a selection mechanism for the stabilization and localization of spines. Through a lasting N-methyl-D-aspartate receptor and protein synthesis–dependent increase in protrusion growth and turnover, induction of plasticity promotes a pruning and replacement of nonactivated spines by new ones together with a selective stabilization of activated synapses. Furthermore, most newly formed spines preferentially grow in close proximity to activated synapses and become functional within 24 h, leading to a clustering of functional synapses. Our results indicate that synaptic remodeling associated with induction of long-term potentiation favors the selection of inputs showing spatiotemporal interactions on a given neuron.

16.
Highly ordered neuronal activity within the brain's neural circuits is the basis of higher brain functions, and the synaptic connections between neurons are the key functional nodes of these circuits. The ability of synapses to adjust their transmission efficacy according to neuronal activity, that is, synaptic plasticity, is considered the basis of neural circuit development and of learning and memory. Its dysfunction may lead to psychiatric and neurological disorders such as depression and Alzheimer's disease. This review introduces the relationship between these two diseases and synaptic plasticity, focusing on the relevant molecular and cellular mechanisms as well as advances in new research methods and therapeutic approaches.

17.
Understanding the theoretical foundations of how memories are encoded and retrieved in neural populations is a central challenge in neuroscience. A popular theoretical scenario for modeling memory function is the attractor neural network scenario, whose prototype is the Hopfield model. The model simplicity and the locality of the synaptic update rules come at the cost of a poor storage capacity, compared with the capacity achieved with perceptron learning algorithms. Here, by transforming the perceptron learning rule, we present an online learning rule for a recurrent neural network that achieves near-maximal storage capacity without an explicit supervisory error signal, relying only upon locally accessible information. The fully-connected network consists of excitatory binary neurons with plastic recurrent connections and non-plastic inhibitory feedback stabilizing the network dynamics; the memory patterns to be memorized are presented online as strong afferent currents, producing a bimodal distribution for the neuron synaptic inputs. Synapses corresponding to active inputs are modified as a function of the value of the local fields with respect to three thresholds. Above the highest threshold, and below the lowest threshold, no plasticity occurs. In between these two thresholds, potentiation/depression occurs when the local field is above/below an intermediate threshold. We simulated and analyzed a network of binary neurons implementing this rule and measured its storage capacity for different sizes of the basins of attraction. The storage capacity obtained through numerical simulations is shown to be close to the value predicted by analytical calculations. We also measured the dependence of capacity on the strength of external inputs. Finally, we quantified the statistics of the resulting synaptic connectivity matrix, and found that both the fraction of zero-weight synapses and the degree of symmetry of the weight matrix increase with the number of stored patterns.
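The three-threshold rule described in the abstract above can be sketched directly. The threshold values, learning rate, and use of a continuous weight are illustrative assumptions (the paper's synapses and neurons are binary):

```python
def three_threshold_update(w, x, h, th_low=0.0, th_mid=1.0, th_high=2.0,
                           eta=0.05):
    """Plasticity as described in the abstract: only synapses with an
    active input (x == 1) change, and only when the local field h lies
    between the lowest and highest thresholds; within that band the
    synapse is potentiated if h exceeds the intermediate threshold and
    depressed otherwise."""
    if x == 0 or h < th_low or h > th_high:
        return w                          # outside the band: no plasticity
    if h > th_mid:
        return min(w + eta, 1.0)          # potentiation
    return max(w - eta, 0.0)              # depression
```

The outer band implements the "no plasticity when the field is already decisive" property that makes the rule supervision-free: strongly correct neurons leave their synapses untouched.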

18.
During brain development, before sensory systems become functional, neuronal networks spontaneously generate repetitive bursts of neuronal activity, which are typically synchronized across many neurons. Such activity patterns have been described on the level of networks and cells, but the fine structure of inputs received by an individual neuron during spontaneous network activity has not been studied. Here, we used calcium imaging to record activity at many synapses of hippocampal pyramidal neurons simultaneously to establish the activity patterns in the majority of synapses of an entire cell. Analysis of the spatiotemporal patterns of synaptic activity revealed a fine-scale connectivity rule: neighboring synapses (<16 μm intersynapse distance) are more likely to be coactive than synapses that are farther away from each other. Blocking spiking activity or NMDA receptor activation revealed that the clustering of synaptic inputs required neuronal activity, demonstrating a role of developmentally expressed spontaneous activity for connecting neurons with subcellular precision.

19.
It is believed that energy efficiency is an important constraint in brain evolution. As synaptic transmission dominates energy consumption, energy can be saved by ensuring that only a few synapses are active. It is therefore likely that the formation of sparse codes and sparse connectivity are fundamental objectives of synaptic plasticity. In this work we study how sparse connectivity can result from a synaptic learning rule of excitatory synapses. Information is maximised when potentiation and depression are balanced according to the mean presynaptic activity level and the resulting fraction of zero-weight synapses is around 50%. However, an imbalance towards depression increases the fraction of zero-weight synapses without significantly affecting performance. We show that imbalanced plasticity corresponds to imposing a regularising constraint on the L1-norm of the synaptic weight vector, a procedure that is well-known to induce sparseness. Imbalanced plasticity is biophysically plausible and leads to more efficient synaptic configurations than a previously suggested approach that prunes synapses after learning. Our framework gives a novel interpretation to the high fraction of silent synapses found in brain regions like the cerebellum.
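The correspondence claimed above is easy to see in a single update step: a constant bias toward depression, applied to nonnegative weights, is the gradient of an L1 penalty. A minimal sketch with illustrative step sizes (not the paper's parameters):

```python
import numpy as np

def imbalanced_step(w, grad, eta=0.01, bias=0.002):
    """One learning step imbalanced toward depression: a small constant
    'bias' is subtracted from every update. For nonnegative (excitatory)
    weights this equals gradient descent on an added L1 penalty
    lam * sum(w) with lam = bias / eta, which drives a fraction of
    weights to exactly zero (silent synapses)."""
    return np.maximum(w + eta * grad - bias, 0.0)
```

Weights whose learning signal cannot outpace the constant depression are pinned at exactly zero, which is how the imbalance raises the fraction of silent synapses without pruning after learning.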

20.
Top-down synapses are ubiquitous throughout neocortex and play a central role in cognition, yet little is known about their development and specificity. During sensory experience, lower neocortical areas are activated before higher ones, causing top-down synapses to experience a preponderance of post-synaptic activity preceding pre-synaptic activity. This timing pattern is the opposite of that experienced by bottom-up synapses, which suggests that different versions of spike-timing dependent synaptic plasticity (STDP) rules may be required at top-down synapses. We consider a two-layer neural network model and investigate which STDP rules can lead to a distribution of top-down synaptic weights that is stable, diverse and avoids strong loops. We introduce a temporally reversed rule (rSTDP) where top-down synapses are potentiated if post-synaptic activity precedes pre-synaptic activity. Combining analytical work and integrate-and-fire simulations, we show that only depression-biased rSTDP (and not classical STDP) produces stable and diverse top-down weights. The conclusions did not change upon addition of homeostatic mechanisms, multiplicative STDP rules or weak external input to the top neurons. Our prediction for rSTDP at top-down synapses, which are distally located, is supported by recent neurophysiological evidence showing the existence of temporally reversed STDP in synapses that are distal to the post-synaptic cell body.
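The depression-biased rSTDP window described above can be written as a single function of the spike-time difference. The amplitudes and time constant below are illustrative assumptions, not values from the paper:

```python
import math

def rstdp(dt, a_plus=0.008, a_minus=0.010, tau=20.0):
    """Temporally reversed, depression-biased STDP window
    (a_minus > a_plus). dt = t_post - t_pre in milliseconds:
    potentiation when the POSTsynaptic spike precedes the
    presynaptic one (dt < 0), the reverse of classical STDP."""
    if dt < 0:
        return a_plus * math.exp(dt / tau)     # post before pre: potentiate
    return -a_minus * math.exp(-dt / tau)      # pre before post: depress
```

Because top-down synapses mostly see post-before-pre timing, this window potentiates them on average, while the depression bias (a_minus > a_plus) keeps the weight distribution stable, as the abstract reports.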


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号