Similar Articles
20 similar articles found (search time: 484 ms)
1.
The ability to associate some stimuli while differentiating between others is an essential characteristic of biological memory. Theoretical models identify memories as attractors of neural network activity, with learning based on Hebb-like synaptic modifications. Our analysis shows that when network inputs are correlated, this mechanism results in overassociations, even up to several memories "merging" into one. To counteract this tendency, we introduce a learning mechanism that involves novelty-facilitated modifications, accentuating synaptic changes proportionally to the difference between network input and stored memories. This mechanism introduces a dependency of synaptic modifications on previously acquired memories, enabling a wide spectrum of memory associations, ranging from absolute discrimination to complete merging. The model predicts that memory representations should be sensitive to learning order, consistent with recent psychophysical studies of face recognition and electrophysiological experiments on hippocampal place cells. The proposed mechanism is compatible with a recent biological model of novelty-facilitated learning in hippocampal circuitry.
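The novelty-facilitated rule described above can be sketched in a few lines of Python. This is a minimal illustration under assumed details: binary patterns, a Hamming-distance novelty measure, and a scalar gain `gamma` are illustrative choices, not the authors' exact formulation.

```python
import numpy as np

def store_novelty_facilitated(W, memories, x, gamma=1.0):
    """Hebbian outer-product update scaled by novelty: the distance
    between the input x and its closest already-stored memory."""
    if memories:
        # Novelty = normalized Hamming distance to the nearest stored memory.
        novelty = min(np.mean(x != m) for m in memories)
    else:
        novelty = 1.0  # the first memory is fully novel
    s = 2 * x - 1                       # map {0,1} -> {-1,+1}
    W += gamma * novelty * np.outer(s, s)
    np.fill_diagonal(W, 0.0)
    memories.append(x.copy())
    return novelty

rng = np.random.default_rng(0)
N = 50
W = np.zeros((N, N))
mems = []
a = rng.integers(0, 2, N)
b = a.copy()
b[:5] = 1 - b[:5]                       # correlated variant of a (10% flipped)
n1 = store_novelty_facilitated(W, mems, a)
n2 = store_novelty_facilitated(W, mems, b)
# The correlated second pattern produces a much weaker update,
# counteracting the tendency of similar memories to merge.
print(n1, n2)
```

Note how the novelty factor makes each synaptic update depend on previously acquired memories, which is the order-sensitivity the abstract highlights.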

2.
Maintaining memories by reactivation (cited by 3)
According to a widely held concept, the formation of long-term memories relies on a reactivation and redistribution of newly acquired memory representations from temporary storage to neuronal networks supporting long-term storage. Here, we review evidence showing that this process of system consolidation takes place preferentially during sleep as an 'off-line' period during which memories are spontaneously reactivated and redistributed in the absence of interfering external inputs. Moreover, postlearning sleep leads to a reorganization of neuronal representations and qualitative changes of memory content. We propose that memory reactivations during sleep are accompanied by a transient destabilization of memory traces. Unlike wake reactivations, which form part of an updating of memories with respect to current perceptual input, reactivations during sleep allow newly acquired memories to be gradually adapted to pre-existing long-term memories, whereby invariants and other core features of these memories are extracted.

3.
Over the last decades, a standard model of the function of the hippocampus in memory formation has been established and tested computationally. It has been argued that the CA3 region works as an auto-associative memory and that its recurrent fibers are the actual storage site of memories. Furthermore, to work properly CA3 requires memory patterns that are mutually uncorrelated. It has been suggested that the dentate gyrus orthogonalizes the patterns before storage, a process known as pattern separation. In this study we review the model when random input patterns are presented for storage and investigate whether it is capable of storing patterns of more realistic entorhinal grid cell input. Surprisingly, we find that an auto-associative CA3 net is redundant for random inputs up to moderate noise levels and is only beneficial at high noise levels. When grid cell input is presented, auto-association is even harmful for memory performance at all noise levels. Furthermore, we find that Hebbian learning in the dentate gyrus does not support its function as a pattern separator. These findings challenge the standard framework and support an alternative view in which the simpler EC-CA1-EC network is sufficient for memory storage.
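For reference, the auto-associative (Hopfield-style) storage and recall that the standard CA3 account assumes can be sketched as follows. This is the generic textbook construction, not the specific networks evaluated in the study; sizes and noise level are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
N, P = 200, 10
patterns = rng.choice([-1, 1], size=(P, N))

# Hebbian outer-product storage, as in the standard CA3 auto-association story.
W = sum(np.outer(p, p) for p in patterns) / N
np.fill_diagonal(W, 0.0)

def recall(W, cue, steps=20):
    """Synchronous recall dynamics: iterate sign(W @ state)."""
    s = cue.copy().astype(float)
    for _ in range(steps):
        s = np.sign(W @ s)
        s[s == 0] = 1
    return s

# Cue with a noisy version of pattern 0 (10% of bits flipped).
cue = patterns[0].copy()
flip = rng.choice(N, size=N // 10, replace=False)
cue[flip] *= -1
out = recall(W, cue)
overlap = np.mean(out == patterns[0])
print(overlap)
```

With uncorrelated random patterns and a load well below capacity, recall from a degraded cue is nearly perfect; correlated inputs (such as grid-cell codes) are exactly the regime where this construction degrades, which is the tension the abstract examines.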

4.
Several reports have shown that after specific reminders are presented, consolidated memories pass from a stable state to one in which the memory is reactivated. This reactivation implies that memories are labile and susceptible to amnesic agents. This susceptibility decreases over time and leads to a re-stabilization phase usually known as reconsolidation. Two biological functions have been proposed for reconsolidation. First, the reconsolidation process allows new information to be integrated into the background of the original memory; second, it strengthens the original memory. We have previously demonstrated that both of these functions occur in the reconsolidation of human declarative memories. Our paradigm consisted of learning verbal material (lists of five pairs of nonsense syllables) acquired by a training process (L1-training) on Day 1 of our experiment. After this declarative memory is consolidated, it can be made labile by presenting a specific reminder, after which it passes through a subsequent stabilization process. Strengthening creates a new scenario for the reconsolidation process; this function represents a new factor that may transform the dynamics of memories. First, we analyzed whether repeated labilization-reconsolidation processes maintain the memory for longer periods of time. We showed that even a single labilization-reconsolidation cycle strengthens a memory, as evaluated 5 days after its re-stabilization. We also demonstrated that this effect is not triggered by retrieval alone. We then analyzed how strengthening modified the effect of an amnesic agent presented immediately after repeated labilizations. The repeated labilization-reconsolidation processes made the memory more resistant to interference during re-stabilization. Finally, we evaluated whether the effect of strengthening depends on the age of the memory. We found that it did: forgetting may represent a process that weakens the effect of strengthening.

5.
Inspired by certain functions of the human brain and by the mechanism of presynaptic inhibition in the nervous system, this paper proposes a new neural network model: the conditional associative neural network. The model is an associative memory network with presynaptic inhibition. Preliminary analysis and computer simulations show that the model has several new properties not found in ordinary associative memory models: it can respond differently to the same input under different conditions, and it can also give the same response to the same input under different conditions. These properties should help in understanding information processing in the nervous system. In addition, the paper points out a possible neural structure that could implement the model.

6.
Kurikawa T, Kaneko K. PLoS ONE 2011, 6(3): e17432
Learning is a process that shapes neural dynamical systems so that an appropriate output pattern is generated for a given input. Often, such a memory is considered to reside in one of the attractors of the neural dynamical system, selected by the initial neural state that an input specifies. In the past, neither neural activity observed in the absence of inputs nor the changes in neural activity caused when an input is provided were studied extensively. However, recent experimental studies have reported the existence of structured spontaneous neural activity and its changes when an input is provided. With this background, we propose that memory recall occurs when the spontaneous neural activity changes to an appropriate output activity upon the application of an input; this phenomenon corresponds to a bifurcation in dynamical systems theory. We introduce a reinforcement-learning-based layered neural network model with two synaptic time scales; in this network, I/O relations are successively memorized when the difference between the time scales is appropriate. After the learning process is complete, the neural dynamics are shaped so that they change appropriately with each input. As the number of memorized patterns increases, the spontaneous neural activity generated after learning itinerates over the previously learned output patterns. This theoretical finding agrees remarkably well with recent experimental reports in which spontaneous neural activity in the visual cortex, in the absence of stimuli, itinerates over patterns evoked by previously applied signals. Our results suggest that itinerant spontaneous activity can be a natural outcome of successive learning of several patterns, and that it facilitates bifurcation of the network when an input is provided.

7.
Recent theories in cognitive neuroscience suggest that semantic memory is a distributed process, which involves many cortical areas and is based on a multimodal representation of objects. The aim of this work is to extend a previous model of object representation to realize a semantic memory, in which sensory-motor representations of objects are linked with words. The model assumes that each object is described as a collection of features, coded in different cortical areas via a topological organization. Features in different objects are segmented via γ-band synchronization of neural oscillators. The feature areas are further connected with a lexical area, devoted to the representation of words. Synapses among the feature areas, and among the lexical area and the feature areas, are trained via a time-dependent Hebbian rule, during a period in which individual objects are presented together with the corresponding words. Simulation results demonstrate that, during the retrieval phase, the network can deal with the simultaneous presence of objects (from sensory-motor inputs) and words (from acoustic inputs), can correctly associate objects with words, and can segment objects even in the presence of incomplete information. Moreover, the network can realize some semantic links among words representing objects with shared features. These results support the idea that semantic memory can be described as an integrated process, whose content is retrieved by the co-activation of different multimodal regions. Looking ahead, extended versions of this model may be used to test conceptual theories and to provide a quantitative assessment of existing data (for instance, concerning patients with neural deficits).

8.
Although William James and, more explicitly, Donald Hebb's theory of cell assemblies already suggested that activity-dependent rewiring of neuronal networks is the substrate of learning and memory, over the last six decades most theoretical work on memory has focused on plasticity of existing synapses in prewired networks. Research in the last decade has emphasized that structural modification of synaptic connectivity is common in the adult brain and tightly correlated with learning and memory. Here we present a parsimonious computational model for learning by structural plasticity. The basic modeling units are "potential synapses", defined as locations in the network where synapses can potentially grow to connect two neurons. This model generalizes well-known previous models for associative learning based on weight plasticity, so existing theory can be applied to analyze how many memories and how much information structural plasticity can store per synapse. Surprisingly, we find that structural plasticity largely outperforms weight plasticity and can achieve a much higher storage capacity per synapse. The effect of structural plasticity on the structure of sparsely connected networks is quite intuitive: structural plasticity increases the "effectual network connectivity", that is, the network wiring that specifically supports storage and recall of the memories. Further, this model of structural plasticity produces gradients of effectual connectivity in the course of learning, thereby explaining various cognitive phenomena including graded amnesia, catastrophic forgetting, and the spacing effect.
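One way to picture the "potential synapse" idea is a Willshaw-style binary associative memory in which learning decides which of a fixed set of potential connections actually becomes real. The sketch below is a toy illustration under that assumption; the densities, sizes, and top-K readout are arbitrary choices, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(2)
N, P, K = 100, 30, 10            # neurons, patterns, active units per pattern

# Sparse binary memory patterns.
patterns = np.zeros((P, N), dtype=int)
for p in patterns:
    p[rng.choice(N, K, replace=False)] = 1

# "Potential synapses": candidate locations where a connection may grow.
potential = rng.random((N, N)) < 0.5

# Structural learning: a potential synapse becomes real if its pre- and
# postsynaptic units are coactive in any stored pattern (binary, Willshaw-like).
coactive = (patterns.T @ patterns) > 0
realized = potential & coactive
np.fill_diagonal(realized, False)

def recall(cue):
    """One-step threshold recall through the realized connectivity."""
    drive = realized.astype(int) @ cue
    out = np.zeros(N, dtype=int)
    out[np.argsort(drive)[-K:]] = 1   # keep the K most strongly driven units
    return out

# Fraction of active pattern bits recovered when each pattern cues itself.
hits = sum(np.sum(recall(p) & p) for p in patterns) / (P * K)
print(hits)
```

The `realized` matrix plays the role of "effectual connectivity": only the subset of potential locations recruited by the stored patterns carries recall.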

9.
The learning mechanism in the hippocampus has almost universally been assumed to be Hebbian in nature, in which individual neurons in an engram are joined together by synaptic weight increases to support facilitated recall of memories later. However, it is also widely known that Hebbian learning mechanisms impose significant capacity constraints and are generally less computationally powerful than learning mechanisms that take advantage of error signals. We show that the differential phase relationships of hippocampal subfields within the overall theta rhythm enable a powerful form of error-driven learning, which results in significantly greater capacity, as shown in computer simulations. In one phase of the theta cycle, the bidirectional connectivity between CA1 and entorhinal cortex can be trained in an error-driven fashion to learn to effectively encode the cortical inputs in a compact and sparse form over CA1. In a subsequent portion of the theta cycle, the system attempts to recall an existing memory, via the pathway from entorhinal cortex to CA3 and CA1. Finally, the theta cycle completes when a strong target encoding representation of the current input is imposed onto CA1 via direct projections from entorhinal cortex. The difference between this target encoding and the attempted recall of the same representation on CA1 constitutes an error signal that can drive the learning of CA3-to-CA1 synapses. This CA3-to-CA1 pathway is critical for enabling full reinstatement of recalled hippocampal memories in the cortex. Taken together, these new learning dynamics enable a much more robust, high-capacity model of hippocampal learning than was available previously under the classical Hebbian model.
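In its simplest reading, the error signal described here reduces to a delta rule on the CA3-to-CA1 weights: the target encoding imposed from entorhinal cortex minus the attempted recall. The sketch below illustrates that reading with rate-coded toy vectors; the sizes, learning rate, and single fixed pattern are illustrative assumptions, not the authors' full model.

```python
import numpy as np

rng = np.random.default_rng(3)
n_ca3, n_ca1 = 80, 40
W = np.zeros((n_ca1, n_ca3))        # CA3 -> CA1 synapses

def theta_cycle_update(W, ca3_act, ca1_target, lr=0.01):
    """One theta cycle: attempted recall via CA3->CA1, then a target
    encoding imposed from entorhinal cortex; their difference drives learning."""
    attempted = W @ ca3_act          # recall-phase prediction on CA1
    err = ca1_target - attempted     # target phase minus recall phase
    return W + lr * np.outer(err, ca3_act), err

ca3 = rng.random(n_ca3)             # rate-coded CA3 activity for one memory
target = rng.random(n_ca1)          # target CA1 encoding from EC
errs = []
for _ in range(200):
    W, e = theta_cycle_update(W, ca3, target)
    errs.append(np.abs(e).mean())
print(errs[0], errs[-1])            # error shrinks across theta cycles
```

Across repeated cycles the attempted recall converges on the target encoding, which is the error-driven advantage over a purely Hebbian update.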

10.
The basal nucleus of the amygdala (BA) is involved in the formation of context-dependent conditioned fear and extinction memories. To understand the underlying neural mechanisms we developed a large-scale neuron network model of the BA, composed of excitatory and inhibitory leaky-integrate-and-fire neurons. Excitatory BA neurons received conditioned stimulus (CS)-related input from the adjacent lateral nucleus (LA) and contextual input from the hippocampus or medial prefrontal cortex (mPFC). We implemented a plasticity mechanism according to which CS and contextual synapses were potentiated if CS and contextual inputs temporally coincided on the afferents of the excitatory neurons. Our simulations revealed a differential recruitment of two distinct subpopulations of BA neurons during conditioning and extinction, mimicking the activation of experimentally observed cell populations. We propose that these two subgroups encode contextual specificity of fear and extinction memories, respectively. Mutual competition between them, mediated by feedback inhibition and driven by contextual inputs, regulates the activity in the central amygdala (CEA) thereby controlling amygdala output and fear behavior. The model makes multiple testable predictions that may advance our understanding of fear and extinction memories.
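The coincidence-based plasticity rule can be sketched directly: CS and contextual synapses are potentiated only on neurons where both inputs arrive together. The vector sizes and learning rate below are illustrative assumptions, and the rate-coded vectors stand in for the spike-timing coincidence of the full leaky-integrate-and-fire model.

```python
import numpy as np

def coincidence_update(w_cs, w_ctx, cs, ctx, lr=0.05):
    """Potentiate CS and contextual synapses only on neurons where both
    the CS input (from LA) and the contextual input (from HPC/mPFC) coincide."""
    coincident = (cs > 0) & (ctx > 0)
    w_cs = w_cs + lr * coincident * cs
    w_ctx = w_ctx + lr * coincident * ctx
    return w_cs, w_ctx

n = 6
w_cs = np.zeros(n)
w_ctx = np.zeros(n)
cs  = np.array([1, 1, 0, 0, 1, 0], dtype=float)   # CS input from LA
ctx = np.array([1, 0, 0, 1, 1, 0], dtype=float)   # contextual input
w_cs, w_ctx = coincidence_update(w_cs, w_ctx, cs, ctx)
print(w_cs, w_ctx)  # only neurons 0 and 4 receive both inputs and are potentiated
```

This conjunctive requirement is what ties each fear or extinction memory to its context in the model.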

11.
In manufacturing monoclonal antibodies (mAbs), it is crucial to be able to predict how process conditions and supplements affect productivity and quality attributes, especially glycosylation. Supplemental inputs, such as amino acids and trace metals in the media, are reported to affect cell metabolism and glycosylation; quantifying their effects is essential for effective process development. We aim to present and validate, through a commercially relevant cell culture process, a technique for modeling such effects efficiently. While existing models can predict mAb production or glycosylation dynamics under specific process configurations, adapting them to new processes remains challenging, because it involves modifying the model structure and often requires some mechanistic understanding. Here, a modular modeling technique is presented for adapting an existing model of a fed-batch Chinese hamster ovary (CHO) cell culture process without structural modifications or mechanistic insight. Instead, data obtained from designed experimental perturbations in media supplementation are used to train and validate a supplemental input effect model, which is used to "patch" the existing model. The combined model can be used for model-based process development to improve productivity and to meet product quality targets more efficiently. The methodology and analysis are generally applicable to other CHO cell lines and cell types.

12.
A popular model of visual perception states that coarse information (carried by low spatial frequencies) along the dorsal stream is rapidly transmitted to prefrontal and medial temporal areas, activating contextual information from memory, which can in turn constrain detailed input carried by high spatial frequencies arriving at a slower rate along the ventral visual stream, thus facilitating the processing of ambiguous visual stimuli. We were interested in testing whether this model contributes to memory-guided orienting of attention. In particular, we asked whether global, low-spatial frequency (LSF) inputs play a dominant role in triggering contextual memories in order to facilitate the processing of the upcoming target stimulus. We explored this question over four experiments. The first experiment replicated the LSF advantage reported in perceptual discrimination tasks by showing that participants were faster and more accurate at matching a low spatial frequency version of a scene, compared to a high spatial frequency version, to its original counterpart in a forced-choice task. The subsequent three experiments tested the relative contributions of low versus high spatial frequencies during memory-guided covert spatial attention orienting tasks. Replicating the effects of memory-guided attention, pre-exposure to scenes associated with specific spatial memories for target locations (memory cues) led to higher perceptual discrimination and faster response times to identify targets embedded in the scenes. However, high and low spatial frequency cues were equally effective; LSF signals did not selectively or preferentially contribute to the memory-driven attention benefits to performance. Our results challenge a generalized model in which LSFs activate contextual memories, which in turn bias attention and facilitate perception.

13.
The requirement that memory be superposition-free imposes a restrictive condition on the structure of neural models. A simple model which satisfies this condition consists of two levels of neurons: primary-level neurons fire in response to external input, and secondary (reference) neurons modify activated primaries in such a way that the reference neurons can later fire those primaries. Reference neurons are activated either by other reference neurons (time-ordered memories) or by primaries (content-ordered memories). The model allows for general powers of memory manipulation (e.g. formation of associative memories) through rememorization and has a number of implications which correspond to features of the brain.

14.
Understanding the theoretical foundations of how memories are encoded and retrieved in neural populations is a central challenge in neuroscience. A popular theoretical scenario for modeling memory function is the attractor neural network scenario, whose prototype is the Hopfield model. The model's simplicity and the locality of its synaptic update rules come at the cost of a poor storage capacity, compared with the capacity achieved with perceptron learning algorithms. Here, by transforming the perceptron learning rule, we present an online learning rule for a recurrent neural network that achieves near-maximal storage capacity without an explicit supervisory error signal, relying only upon locally accessible information. The fully-connected network consists of excitatory binary neurons with plastic recurrent connections and non-plastic inhibitory feedback stabilizing the network dynamics; the memory patterns to be memorized are presented online as strong afferent currents, producing a bimodal distribution for the neuron synaptic inputs. Synapses corresponding to active inputs are modified as a function of the value of the local fields with respect to three thresholds. Above the highest threshold, and below the lowest threshold, no plasticity occurs. In between these two thresholds, potentiation/depression occurs when the local field is above/below an intermediate threshold. We simulated and analyzed a network of binary neurons implementing this rule and measured its storage capacity for different sizes of the basins of attraction. The storage capacity obtained through numerical simulations is shown to be close to the value predicted by analytical calculations. We also measured the dependence of capacity on the strength of external inputs. Finally, we quantified the statistics of the resulting synaptic connectivity matrix, and found that both the fraction of zero-weight synapses and the degree of symmetry of the weight matrix increase with the number of stored patterns.
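The three-threshold rule is concrete enough to state in code. The sketch below follows the verbal description (no plasticity outside the outer thresholds; potentiation or depression of active-input synapses depending on the intermediate threshold); the threshold values and learning rate are illustrative, not taken from the paper.

```python
import numpy as np

def three_threshold_update(w, x, h, theta_low, theta_mid, theta_high, lr=0.01):
    """Plasticity rule from the abstract: only synapses with active inputs
    change, and only when the local field h lies strictly between the outer
    thresholds; the sign of the change depends on the intermediate threshold."""
    w = w.copy()
    if theta_low < h < theta_high:
        if h >= theta_mid:
            w[x > 0] += lr      # potentiate active-input synapses
        else:
            w[x > 0] -= lr      # depress active-input synapses
    return w                    # outside the band: no plasticity

w = np.zeros(4)
x = np.array([1, 0, 1, 0])

# Field inside the band, above the middle threshold -> potentiation.
w1 = three_threshold_update(w, x, h=0.5, theta_low=-1, theta_mid=0, theta_high=1)
# Field inside the band, below the middle threshold -> depression.
w2 = three_threshold_update(w, x, h=-0.5, theta_low=-1, theta_mid=0, theta_high=1)
# Field above the highest threshold -> no change (pattern already well stored).
w3 = three_threshold_update(w, x, h=2.0, theta_low=-1, theta_mid=0, theta_high=1)
print(w1, w2, w3)
```

The outer no-plasticity zones are what stop learning once a pattern's basin of attraction is large enough, which is how the rule avoids an explicit supervisory error signal.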

15.
Models of circuit action in the mammalian hippocampus have led us to a study of habituation circuits. In order to help model the process of habituation we consider here a memory network designed to learn sequences of inputs separated by various time intervals and to repeat these sequences when cued by their initial portions. The structure of the memory is based on the anatomy of the dentate gyrus region of the mammalian hippocampus. The model consists of a number of arrays of cells called lamellae. Each array consists of four lines of model cells coupled uniformly to neighbors within the array and with some randomness to cells in other lamellae. All model cells operate according to first-order differential equations. Two of the lines of cells in each lamella are coupled such that sufficient excitation by a system input generates a wave of activity that travels down the lamella. Such waves effect dynamic storage of the representation of each input, allowing association connections to form that code both the set of cells stimulated by each input and the time interval between successive inputs. Results of simulation of two networks are presented illustrating the model's operating characteristics and memory capacity.

16.
In the hippocampus, episodic memories are thought to be encoded by the formation of ensembles of synaptically coupled CA3 pyramidal cells driven by sparse but powerful mossy fiber inputs from dentate gyrus granule cells. The neuromodulators acetylcholine and noradrenaline are separately proposed as saliency signals that dictate memory encoding but it is not known if they represent distinct signals with separate mechanisms. Here, we show experimentally that acetylcholine, and to a lesser extent noradrenaline, suppress feed-forward inhibition and enhance Excitatory–Inhibitory ratio in the mossy fiber pathway but CA3 recurrent network properties are only altered by acetylcholine. We explore the implications of these findings on CA3 ensemble formation using a hierarchy of models. In reconstructions of CA3 pyramidal cells, mossy fiber pathway disinhibition facilitates postsynaptic dendritic depolarization known to be required for synaptic plasticity at CA3-CA3 recurrent synapses. We further show in a spiking neural network model of CA3 how acetylcholine-specific network alterations can drive rapid overlapping ensemble formation. Thus, through these distinct sets of mechanisms, acetylcholine and noradrenaline facilitate the formation of neuronal ensembles in CA3 that encode salient episodic memories in the hippocampus but acetylcholine selectively enhances the density of memory storage.

17.
Albright TD. Neuron 2012, 74(2): 227-245
Perception is influenced both by the immediate pattern of sensory inputs and by memories acquired through prior experiences with the world. Throughout much of its illustrious history, however, study of the cellular basis of perception has focused on neuronal structures and events that underlie the detection and discrimination of sensory stimuli. Relatively little attention has been paid to the means by which memories interact with incoming sensory signals. Building upon recent neurophysiological/behavioral studies of the cortical substrates of visual associative memory, I propose a specific functional process by which stored information about the world supplements sensory inputs to yield neuronal signals that can account for visual perceptual experience. This perspective represents a significant shift in the way we think about the cellular bases of perception.

18.
Communicative interactions involve a kind of procedural knowledge that is used by the human brain for processing verbal and nonverbal inputs and for language production. Although considerable work has been done on modeling human language abilities, it has been difficult to bring them together into a comprehensive tabula rasa system compatible with current knowledge of how verbal information is processed in the brain. This work presents a cognitive system, entirely based on a large-scale neural architecture, which was developed to shed light on the procedural knowledge involved in language elaboration. The main component of this system is the central executive, a supervising system that coordinates the other components of the working memory. In our model, the central executive is a neural network that takes as input the neural activation states of the short-term memory and yields as output mental actions, which control the flow of information among the working memory components through neural gating mechanisms. The proposed system is capable of learning to communicate through natural language starting from tabula rasa, without any a priori knowledge of the structure of phrases, meaning of words, or role of the different classes of words, only by interacting with a human through a text-based interface, using an open-ended incremental learning process. It is able to learn nouns, verbs, adjectives, pronouns and other word classes, and to use them in expressive language. The model was validated on a corpus of 1587 input sentences, based on the literature on early language assessment, at the level of about a 4-year-old child, and produced 521 output sentences, expressing a broad range of language processing functionalities.

19.
Learning new facts and skills in succession can be frustrating because no sooner has new knowledge been acquired than its retention is being jeopardized by learning another set of skills or facts. Interference between memories has recently provided important new insights into the neural and psychological systems responsible for memory processing. For example, interference not only occurs between the same types of memories, but can also occur between different types of memories, which has important implications for our understanding of memory organization. Converging evidence has begun to reveal that the brain produces interference independently from other aspects of memory processing, which suggests that interference may have an important but previously overlooked function. A memory's initial susceptibility to interference and subsequent resistance to interference after its acquisition has revealed that memories continue to be processed 'off-line' during consolidation. Recent work has demonstrated that off-line processing is not limited to just the stabilization of a memory, which was once the defining characteristic of consolidation; instead, off-line processing can have a rich diversity of effects, from enhancing performance to making hidden rules explicit. Off-line processing also occurs after memory retrieval, when memories are destabilized and then subsequently restabilized during reconsolidation. Studies are beginning to reveal the function of reconsolidation, its mechanistic relationship to consolidation and its potential as a therapeutic target for the modification of memories.

20.
On the dynamics of operant conditioning (cited by 1)
Simple psychological postulates are presented which are used to derive possible anatomical and physiological substrates of operant conditioning. These substrates are compatible with much psychological data about operants. A main theme is that aspects of operant and respondent conditioning share a single learning process. Among the phenomena which arise are the following: UCS-activated arousal; formation of conditioned, or secondary, reinforcers; a non-specific arousal system distinct from sensory and motor representations whose activation is required for sensory processing; polyvalent cells responsive to the sum of CS and UCS inputs and anodal d.c. potential shifts; neural loci responsive to the combined effect of sensory events and drive deprivation; "go"-like or "now print"-like mechanisms which, for example, influence incentive-motivational increases in general activity; a mechanism for learning repetitively to press a bar which electrically stimulates suitable arousal loci in the absence of drive reduction; uniformly distributed potentials, driven by the CS, in the "cerebral cortex" of a trained network; the distinction between short-term and long-term memory, and the possibility of eliminating transfer from short-term to long-term memory in the absence of suitable arousal; networks that can learn and perform arbitrarily complex sequences of acts or sensory memories, without continuous control by sensory feedback, whose rate of performance can be regulated by the level of internal arousal; networks with eidetic memory; network analogs of "therapeutic resistance" and "repression"; the possibility of conditioning the sensory feedback created by a motor act to the neural controls of this act, with consequences for sensory-motor adaptation and child development. This paper introduces explicit minimal anatomies and physiological rules that formally give rise to analogous phenomena. These networks, which consider only aspects of positive conditioning, are derived from simple psychological facts.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (京ICP备09084417号)