Similar Documents
20 similar documents found (search time: 31 ms)
1.
Long-term memories are likely stored in the synaptic weights of neuronal networks in the brain. The storage capacity of such networks depends on the degree of plasticity of their synapses. Highly plastic synapses allow for strong memories, but these are quickly overwritten. On the other hand, less labile synapses result in long-lasting but weak memories. Here we show that the trade-off between memory strength and memory lifetime can be overcome by partitioning the memory system into multiple regions characterized by different levels of synaptic plasticity and transferring memory information from the more plastic to the less plastic regions. The improvement in memory lifetime is proportional to the number of memory regions, and the initial memory strength can be orders of magnitude larger than in a non-partitioned memory system. This model provides a fundamental computational reason for memory consolidation processes at the systems level.
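The trade-off the abstract describes can be caricatured in a few lines. In this sketch, all rates and the transfer fraction are invented for illustration (not taken from the paper): a fast, highly plastic region encodes a strong trace that decays quickly, while a consolidated copy in a slow region decays at a tenth of the rate.

```python
import math

def memory_trace(t, eta_fast=0.5, eta_slow=0.05, transfer=0.8):
    """Toy two-region memory. A highly plastic region encodes a strong
    trace that decays at rate eta_fast; a fraction `transfer` of it is
    consolidated into a less plastic region decaying at eta_slow."""
    fast = math.exp(-eta_fast * t)             # strong but short-lived
    slow = transfer * math.exp(-eta_slow * t)  # weaker but long-lived
    return fast + slow

# At encoding, the partitioned trace is nearly as strong as the fast
# region alone; long after encoding, it is dominated by the slow copy.
late = 50.0
single_region = math.exp(-0.5 * late)   # fast region only
partitioned = memory_trace(late)
```

With more regions spanning a range of decay rates, the same construction extends the lifetime further, consistent with the abstract's claim that the improvement scales with the number of regions.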

2.
Although William James and, more explicitly, Donald Hebb's theory of cell assemblies already suggested that activity-dependent rewiring of neuronal networks is the substrate of learning and memory, most theoretical work on memory over the last six decades has focused on plasticity of existing synapses in prewired networks. Research in the last decade has emphasized that structural modification of synaptic connectivity is common in the adult brain and tightly correlated with learning and memory. Here we present a parsimonious computational model for learning by structural plasticity. The basic modeling units are “potential synapses”, defined as locations in the network where synapses can potentially grow to connect two neurons. This model generalizes well-known previous models for associative learning based on weight plasticity; therefore, existing theory can be applied to analyze how many memories and how much information structural plasticity can store in a synapse. Surprisingly, we find that structural plasticity largely outperforms weight plasticity and can achieve a much higher storage capacity per synapse. The effect of structural plasticity on the structure of sparsely connected networks is quite intuitive: structural plasticity increases the “effectual network connectivity”, that is, the network wiring that specifically supports storage and recall of the memories. Further, this model of structural plasticity produces gradients of effectual connectivity in the course of learning, thereby explaining various cognitive phenomena including graded amnesia, catastrophic forgetting, and the spacing effect.

3.
Fusi S, Drew PJ, Abbott LF. Neuron. 2005;45(4):599-611
Storing memories of ongoing, everyday experiences requires a high degree of plasticity, but retaining these memories demands protection against changes induced by further activity and experience. Models in which memories are stored through switch-like transitions in synaptic efficacy are good at storing but bad at retaining memories if these transitions are likely, and they are poor at storage but good at retention if they are unlikely. We construct and study a model in which each synapse has a cascade of states with different levels of plasticity, connected by metaplastic transitions. This cascade model combines high levels of memory storage with long retention times and significantly outperforms alternative models. As a result, we suggest that memory storage requires synapses with multiple states exhibiting dynamics over a wide range of timescales, and we suggest experimental tests of this hypothesis.
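A minimal sketch of the cascade idea follows; the state layout and transition probabilities here are a simplification for illustration, not the paper's exact scheme. Each synapse carries a binary efficacy plus a metaplastic depth, and both switching and deepening become geometrically less likely with depth.

```python
import random

def cascade_step(state, pre_post_agree, n=4, x=0.5, rng=random):
    """One plasticity event for a simplified cascade synapse.
    `state` is (sign, depth): sign in {+1, -1} is the binary efficacy,
    depth in 1..n indexes increasingly rigid metaplastic states.
    `pre_post_agree` True means the event favours the current sign."""
    sign, depth = state
    if pre_post_agree:
        # push deeper into the cascade: the synapse becomes harder to change
        if depth < n and rng.random() < x ** depth:
            depth += 1
    else:
        # flip efficacy with a probability that shrinks with depth
        if rng.random() < x ** (depth - 1):
            sign, depth = -sign, 1
    return (sign, depth)
```

A synapse at depth 1 flips on every opposing event, while one at depth 4 flips only about one time in eight, which is the storage/retention compromise the cascade is built to manage.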

4.
A long-standing problem is how memories can be stored for very long times despite the volatility of the underlying neural substrate, most notably the high turnover of dendritic spines and synapses. To address this problem, here we use a generic and simple probabilistic model for the creation and removal of synapses. We show that information can be stored for several months when utilizing the intrinsic dynamics of multi-synapse connections. In such systems, single synapses can still show high turnover, which enables fast learning of new information, but this does not perturb previously stored information (slow forgetting), which is represented by the compound state of the connections. The model matches the time course of recent experimental spine data during learning and memory in mice, supporting the assumption of multi-synapse connections as the basis for long-term storage.
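The compound-state idea can be illustrated with a toy birth-death process (the slot count and probabilities are invented, not fitted to the paper's data): individual synapses of a connection turn over every few steps, yet the total synapse count, which is what carries the stored information, fluctuates around a stable mean.

```python
import random

def step(count, slots=8, p_new=0.3, p_del=0.1, rng=random):
    """One time step for a multi-synapse connection with `slots` potential
    synapse locations: each existing synapse is removed with probability
    p_del, and each empty slot grows a new synapse with probability p_new."""
    survivors = sum(rng.random() > p_del for _ in range(count))
    new = sum(rng.random() < p_new for _ in range(slots - survivors))
    return survivors + new
```

The equilibrium count is roughly slots * p_new / (p_new + p_del) = 6 here, so the compound state is stable even though each individual synapse survives only about 1/p_del = 10 steps on average.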

5.
Understanding the theoretical foundations of how memories are encoded and retrieved in neural populations is a central challenge in neuroscience. A popular theoretical scenario for modeling memory function is the attractor neural network scenario, whose prototype is the Hopfield model. The model's simplicity and the locality of the synaptic update rules come at the cost of a poor storage capacity, compared with the capacity achieved with perceptron learning algorithms. Here, by transforming the perceptron learning rule, we present an online learning rule for a recurrent neural network that achieves near-maximal storage capacity without an explicit supervisory error signal, relying only upon locally accessible information. The fully-connected network consists of excitatory binary neurons with plastic recurrent connections and non-plastic inhibitory feedback stabilizing the network dynamics; the memory patterns to be memorized are presented online as strong afferent currents, producing a bimodal distribution for the neuron synaptic inputs. Synapses corresponding to active inputs are modified as a function of the value of the local fields with respect to three thresholds. Above the highest threshold, and below the lowest threshold, no plasticity occurs. In between these two thresholds, potentiation/depression occurs when the local field is above/below an intermediate threshold. We simulated and analyzed a network of binary neurons implementing this rule and measured its storage capacity for different sizes of the basins of attraction. The storage capacity obtained through numerical simulations is shown to be close to the value predicted by analytical calculations. We also measured the dependence of capacity on the strength of external inputs. Finally, we quantified the statistics of the resulting synaptic connectivity matrix, and found that both the fraction of zero-weight synapses and the degree of symmetry of the weight matrix increase with the number of stored patterns.
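The plasticity rule spelled out in the abstract can be written down almost verbatim. The sketch below applies it to a single postsynaptic neuron; the threshold values and the step size dw used in the test are placeholders.

```python
def three_threshold_update(w_row, inputs, th_low, th_mid, th_high, dw=0.1):
    """One application of the three-threshold rule for one postsynaptic
    neuron. Only synapses with active (nonzero) inputs are eligible.
    Outside (th_low, th_high) the local field triggers no plasticity;
    inside, the field potentiates above th_mid and depresses below it."""
    h = sum(wi * xi for wi, xi in zip(w_row, inputs))  # local field
    if not (th_low < h < th_high):
        return w_row                      # outside the plastic band
    sign = +1 if h > th_mid else -1       # potentiate above, depress below
    return [wi + sign * dw if xi != 0 else wi
            for wi, xi in zip(w_row, inputs)]
```

Applied neuron-by-neuron during online pattern presentation, this local rule is what drives the network toward the near-maximal capacity reported in the abstract.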

6.
Experimental investigations have revealed that synapses possess interesting and, in some cases, unexpected properties. We propose a theoretical framework that accounts for three of these properties: typical central synapses are noisy, the distribution of synaptic weights among central synapses is wide, and synaptic connectivity between neurons is sparse. We also comment on the possibility that synaptic weights may vary in discrete steps. Our approach is based on maximizing information storage capacity of neural tissue under resource constraints. Based on previous experimental and theoretical work, we use volume as a limited resource and utilize the empirical relationship between volume and synaptic weight. Solutions of our constrained optimization problems are not only consistent with existing experimental measurements but also make nontrivial predictions.

7.
Two facts about the hippocampus have been common currency among neuroscientists for several decades. First, lesions of the hippocampus in humans prevent the acquisition of new episodic memories; second, activity-dependent synaptic plasticity is a prominent feature of hippocampal synapses. Given this background, the hypothesis that hippocampus-dependent memory is mediated, at least in part, by hippocampal synaptic plasticity has seemed as cogent in theory as it has been difficult to prove in practice. Here we argue that the recent development of transgenic molecular devices will encourage a shift from mechanistic investigations of synaptic plasticity in single neurons towards an analysis of how networks of neurons encode and represent memory, and we suggest ways in which this might be achieved. In the process, the hypothesis that synaptic plasticity is necessary and sufficient for information storage in the brain may finally be validated.

8.
The information storing capacity of certain associative and auto-associative memories is calculated. For example, in a 100×100 matrix of 1-bit storage elements more than 6,500 bits can be stored associatively, and more than 688,000 bits in a 1,000×1,000 matrix. Asymptotically, the storage capacity of an associative memory increases proportionally to the number of storage elements. The usefulness of associative memories, as opposed to conventional listing memories, is discussed, especially in connection with brain modelling.
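The kind of associative matrix memory analyzed here can be sketched with clipped Hebbian (Willshaw-style) storage in a binary matrix; recall thresholds each output unit at the number of active input bits. The 8×8 size and the patterns in the test are illustrative only.

```python
def store(M, x, y):
    """Store the association x -> y in binary matrix M by clipped
    Hebbian learning: set M[i][j] to 1 wherever y[i] and x[j] are both 1."""
    for i, yi in enumerate(y):
        if yi:
            for j, xj in enumerate(x):
                if xj:
                    M[i][j] = 1

def recall(M, x):
    """Recall y from cue x: each output fires if its row's overlap with
    the cue reaches the number of active cue bits."""
    k = sum(x)
    return [1 if sum(M[i][j] * x[j] for j in range(len(x))) >= k else 0
            for i in range(len(M))]
```

Because each 1-bit element can be shared by many stored pairs, sparse patterns let the matrix hold far more associations than a conventional listing memory of the same size, which is the effect the capacity figures above quantify.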

9.
Working memory (WM) is the part of the brain's memory system that provides temporary storage and manipulation of information necessary for cognition. Although WM has limited capacity at any given time, it has vast memory content in the sense that it acts on the brain's nearly infinite repertoire of lifetime long-term memories. Using simulations, we show that large memory content and WM functionality emerge spontaneously if we take the spike-timing nature of neuronal processing into account. Here, memories are represented by extensively overlapping groups of neurons that exhibit stereotypical time-locked spatiotemporal spike-timing patterns, called polychronous patterns; and synapses forming such polychronous neuronal groups (PNGs) are subject to associative synaptic plasticity in the form of both long-term and short-term spike-timing dependent plasticity. While long-term potentiation is essential in PNG formation, we show how short-term plasticity can temporarily strengthen the synapses of selected PNGs and lead to an increase in the spontaneous reactivation rate of these PNGs. This increased reactivation rate, consistent with in vivo recordings during WM tasks, results in high interspike interval variability and irregular, yet systematically changing, elevated firing rate profiles within the neurons of the selected PNGs. Additionally, our theory explains the relationship between such slowly changing firing rates and precisely timed spikes, and it reveals a novel relationship between WM and the perception of time on the order of seconds.

10.
11.
Memory reconsolidation is a central process enabling adaptive memory and the perception of a constantly changing reality. It causes memories to be strengthened, weakened or changed following their recall. A computational model of memory reconsolidation is presented. Unlike Hopfield-type memory models, our model introduces an unbounded number of attractors that are updatable and can process real-valued, large, realistic stimuli. Our model replicates three characteristic effects of the reconsolidation process on human memory: increased association, extinction of fear memories, and the ability to track and follow gradually changing objects. In addition to this behavioral validation, a continuous time version of the reconsolidation model is introduced. This version extends average rate dynamic models of brain circuits exhibiting persistent activity to include adaptivity and an unbounded number of attractors.

12.
Working memory (WM) is limited in its temporal length and capacity. Classic conceptions of WM capacity assume the system possesses a finite number of slots, but recent evidence suggests WM may be a continuous resource. Resource models typically assume there is no hard upper bound on the number of items that can be stored, but WM fidelity decreases with the number of items. We analyze a neural field model of multi-item WM that associates each item with the location of a bump in a finite spatial domain, considering items that span a one-dimensional continuous feature space. Our analysis relates the neural architecture of the network to accumulated errors and capacity limitations arising during the delay period of a multi-item WM task. Networks with stronger synapses support wider bumps that interact more, whereas networks with weaker synapses support narrower bumps that are more susceptible to noise perturbations. There is an optimal synaptic strength that limits both bump interaction events and the effects of noise perturbations. This optimum shifts to weaker synapses as the number of items stored in the network is increased. Our model not only provides a circuit-based explanation for WM capacity, but also speaks to how capacity relates to the arrangement of stored items in a feature space.

13.
Smolen P. PLoS ONE. 2007;2(5):e445
Late long-term potentiation (L-LTP) denotes long-lasting strengthening of synapses between neurons. L-LTP appears essential for the formation of long-term memory, with memories at least partly encoded by patterns of strengthened synapses. How memories are preserved for months or years, despite molecular turnover, is not well understood. Ongoing recurrent neuronal activity, during memory recall or during sleep, has been hypothesized to preferentially potentiate strong synapses, preserving memories. This hypothesis has not been evaluated in the context of a mathematical model representing ongoing activity and biochemical pathways important for L-LTP. In this study, ongoing activity was incorporated into two such models: a reduced model that represents some of the essential biochemical processes, and a more detailed published model. The reduced model represents synaptic tagging and gene induction simply and intuitively, and the detailed model adds activation of essential kinases by Ca(2+). Ongoing activity was modeled as continual brief elevations of Ca(2+). In each model, two stable states of synaptic strength/weight resulted. Positive feedback between synaptic weight and the amplitude of ongoing Ca(2+) transients underlies this bistability. A tetanic or theta-burst stimulus switches a model synapse from a low basal weight to a high weight that is stabilized by ongoing activity. Bistability was robust to parameter variations in both models. Simulations illustrated that prolonged periods of decreased activity reset synaptic strengths to low values, suggesting a plausible forgetting mechanism. However, episodic activity with shorter inactive intervals maintained strong synapses. Both models yield experimental predictions. Tests of these predictions are expected to further understanding of how neuronal activity is coupled to maintenance of synaptic strength. Further investigations that examine the dynamics of activity and synaptic maintenance can be expected to help in understanding how memories are preserved for up to a lifetime in animals including humans.
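The bistability mechanism (positive feedback between synaptic weight and the potentiating effect of ongoing Ca(2+) transients, opposed by passive decay) can be caricatured in one dimension. The specific rate functions below are invented for illustration; only the qualitative behavior follows the abstract.

```python
def relax(w0, activity=1.0, steps=2000, dt=0.05):
    """Evolve a synaptic weight under caricature dynamics: ongoing activity
    drives Ca transients whose potentiating effect grows with the weight
    itself (positive feedback, saturating), opposed by passive decay."""
    w = w0
    for _ in range(steps):
        potentiation = activity * w * w / (w * w + 0.25)  # saturating in w
        decay = 0.5 * w
        w += dt * (potentiation - decay)
    return w
```

With activity on, this toy system has stable states near w = 0 and w ≈ 1.87, separated by an unstable point near 0.13: a tetanus that lifts the weight over the threshold leaves it maintained by ongoing activity, while prolonged silence (activity = 0) resets it, matching the proposed forgetting mechanism.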

14.
Accurate models of synaptic plasticity are essential to understand the adaptive properties of the nervous system and for realistic models of learning and memory. Experiments have shown that synaptic plasticity depends not only on pre- and post-synaptic activity patterns, but also on the strength of the connection itself. Namely, weaker synapses are more easily strengthened than already strong ones. This so-called soft-bound plasticity automatically constrains the synaptic strengths. It is known that this has important consequences for the dynamics of plasticity and the synaptic weight distribution, but its impact on information storage is unknown. In this modeling study we introduce an information theoretic framework to analyse memory storage in an online learning setting. We show that soft-bound plasticity increases a variety of performance criteria by about 18% over hard-bound plasticity, and likely maximizes the storage capacity of synapses.
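The distinction between soft and hard bounds reduces to how the potentiation step depends on the current weight. A schematic version follows; the paper's exact learning kernel may differ.

```python
def potentiate(w, eta=0.1, w_max=1.0, soft=True):
    """One potentiation step. With soft bounds, the increment shrinks as
    w approaches w_max, so strong synapses change little; with hard
    bounds, the step is fixed and simply clipped at w_max."""
    if soft:
        return w + eta * (w_max - w)       # weight-dependent step
    return min(w + eta * w_max, w_max)     # fixed step, clipped
```

The weight-dependent step means soft-bound synapses never pile up at the bound, which keeps the weight distribution wide; this is the property whose information-storage consequences the study quantifies.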

15.
This letter presents an experimental study showing that a third physical dimension may be used to further increase information packing density in magnetic storage devices. We demonstrate the feasibility of at least quadrupling the magnetic states of magnetic-based data storage devices by recording and reading information from nanopillars with three magnetically decoupled layers. Magneto-optical Kerr effect microscopy and magnetic force microscopy analysis show that both continuous (thin film) and patterned triple-stack magnetic media can generate eight magnetically stable states, compared to only two states in conventional magnetic recording. Our work further reveals that ferromagnetic interaction between magnetic layers can be reduced by combining Co/Pt and Co/Pd multilayer media. Finally, we show for the first time an MFM image of multilevel 3D bit-patterned media with 8 discrete signal levels.

16.
Theories of neural coding seek to explain how states of the world are mapped onto states of the brain. Here, we compare how an animal's location in space can be encoded by two different kinds of brain states: population vectors stored by patterns of neural firing rates, versus synchronization vectors stored by patterns of synchrony among neural oscillators. It has previously been shown that a population code stored by spatially tuned ‘grid cells’ can exhibit desirable properties such as high storage capacity and strong fault tolerance; here it is shown that similar properties are attainable with a synchronization code stored by rhythmically bursting ‘theta cells’ that lack spatial tuning. Simulations of a ring attractor network composed from theta cells suggest how a synchronization code might be implemented using fewer neurons and synapses than a population code with similar storage capacity. It is conjectured that reciprocal connections between grid and theta cells might control phase noise to correct two kinds of errors that can arise in the code: path integration and teleportation errors. Based upon these analyses, it is proposed that a primary function of spatially tuned neurons might be to couple the phases of neural oscillators in a manner that allows them to encode spatial locations as patterns of neural synchrony.

17.
Our daily experiences are stored in the form of memories. These experiences trigger synaptic plasticity and persistent structural and functional changes in neuronal synapses. Cellular studies of memory storage and engrams have emerged over the last decade. Engram cells are neurons interconnected via modified synapses. However, the structural changes arising from synaptic plasticity could not previously be observed, because it was not possible to distinguish the synapses between engram cells. To overcome this barrier, dual-eGRASP (enhanced green fluorescent protein reconstitution across synaptic partners) technology can label specific synapses among multiple synaptic ensembles. Selective labeling of engram synapses has elucidated their role by revealing structural changes in synapses that track the memory state. Dual-eGRASP thus extends engram studies from the cellular level to the synaptic level. Here, we review this concept and possible applications of dual-eGRASP, including recent studies that provided visual evidence of structural plasticity at the engram synapse.

18.
Persistent neural activity is observed in many systems, and is thought to be a neural substrate for holding memories over time delays of a few seconds. Recent work has addressed two issues. First, how can networks of neurons robustly hold such an active memory? Computer systems obtain significant robustness to noise by approximating analogue quantities with discrete digital representations. In a similar manner, theoretical models of persistent activity in spiking neurons have shown that the most robust and stable way to store the short-term memory of a continuous parameter is to approximate it with a discrete representation. This general idea applies very broadly to mechanisms that range from biochemical networks to single cells and to large circuits of neurons. Second, why is it commonly observed that persistent activity in the cortex can be strongly time-varying? This observation is almost ubiquitous, and therefore must be taken into account in our models and our understanding of how short-term memories are held in the cortex.

19.
Molecular switches have been implicated in the storage of information in biological systems. For small structures such as synapses, these switches are composed of only a few molecules and stochastic fluctuations are therefore of importance. Such fluctuations could potentially lead to spontaneous switch reset that would limit the lifetime of information storage. We have analyzed a model of the calcium/calmodulin-dependent protein kinase II (CaMKII) switch implicated in long-term memory in the nervous system. The bistability of this switch arises from autocatalytic autophosphorylation of CaMKII, a reaction that is countered by a saturable phosphatase-1-mediated dephosphorylation. We sought to understand the factors that control switch stability and to determine the functional relationship between stability and the number of molecules involved. Using Monte Carlo simulations, we found that the lifetime of states of the switch increases exponentially with the number of CaMKII holoenzymes. Switch stability requires a balance between the kinase and phosphatase rates, and the kinase rate must remain high relative to the rate of protein turnover. Thus, a critical limit on switch stability is set by the observed turnover rate (one per 30 h on average). Our computational results show that, depending on the timescale of fluctuations in enzyme numbers, for a switch composed of about 15 CaMKII holoenzymes, the stable persistent activation can span from a few years to a human lifetime.
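The dependence of switch lifetime on holoenzyme number can be illustrated with a birth-death caricature of the switch, using the standard first-passage recursion T_k = 1/d_k + (b_k/d_k) * T_{k+1} for the expected time to reach the absorbing fully-OFF state. The rates below are invented; only the qualitative steep growth of lifetime with switch size follows the abstract.

```python
def switch_lifetime(n, k_auto=0.5, k_phos=0.2):
    """Expected time for a switch of n holoenzymes to lose all
    phosphorylated (ON) units, modelled as a birth-death chain: from k ON
    units, the up-rate b_k = k_auto*k*(n-k)/n (autocatalysis limited by
    the pool of OFF units) and the down-rate d_k = k_phos*k."""
    # h[k] = expected time to step from k ON units down to k-1
    h = [0.0] * (n + 2)
    for k in range(n, 0, -1):
        b_k = k_auto * k * (n - k) / n
        d_k = k_phos * k
        h[k] = 1.0 / d_k + (b_k / d_k) * h[k + 1]
    return sum(h[1:n + 1])  # total time from fully ON down to zero
```

Because each extra holoenzyme multiplies the escape time by roughly the up/down rate ratio, the expected lifetime grows steeply with n, which is the size-stability relationship the Monte Carlo study quantifies for the real switch.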

20.