Similar Literature
20 similar records found
1.
During development, biological neural networks produce more synapses and neurons than needed. Many of these synapses and neurons are later removed in a process known as neural pruning. Why networks should initially be over-populated, and which processes determine which synapses and neurons are ultimately pruned, remain unclear. We study the mechanisms and significance of neural pruning in model neural networks. In a deep Boltzmann machine model of sensory encoding, we find that (1) synaptic pruning is necessary to learn efficient network architectures that retain computationally-relevant connections, (2) pruning by synaptic weight alone does not optimize network size, and (3) pruning based on a locally-available measure of importance based on Fisher information allows the network to identify structurally important vs. unimportant connections and neurons. This locally-available measure of importance has a biological interpretation in terms of the correlations between presynaptic and postsynaptic neurons, and implies an efficient activity-driven pruning rule. Overall, we show how local activity-dependent synaptic pruning can solve the global problem of optimizing a network architecture. We relate these findings to biology as follows: (I) Synaptic over-production is necessary for activity-dependent connectivity optimization. (II) In networks that have more neurons than needed, cells compete for activity, and only the most important and selective neurons are retained. (III) Cells may also be pruned due to a loss of synapses on their axons. This occurs when the information they convey is not relevant to the target population.
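The locally-available importance measure can be caricatured in a few lines of code. The weight-times-coactivity proxy below is an assumption of this sketch, not the paper's exact Fisher-information estimator; network sizes and the pruning fraction are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy feedforward layer: 50 presynaptic, 20 postsynaptic units.
n_pre, n_post, n_samples = 50, 20, 5000
W = rng.normal(0.0, 1.0, (n_post, n_pre))
pre = rng.binomial(1, 0.1, (n_samples, n_pre)).astype(float)
post = (pre @ W.T > 1.0).astype(float)  # thresholded postsynaptic responses

# Local importance proxy: |weight| scaled by pre/post co-activity,
# standing in for the Fisher-information measure described above.
coactivity = post.T @ pre / n_samples   # shape (n_post, n_pre)
importance = np.abs(W) * coactivity

# Prune the 80% of synapses with the lowest importance score.
threshold = np.quantile(importance, 0.8)
W_pruned = np.where(importance >= threshold, W, 0.0)

kept = np.count_nonzero(W_pruned) / W.size
print(f"fraction of synapses kept: {kept:.2f}")
```

Because the rule uses only quantities available at each synapse (its weight and the co-activity of its two neurons), it is activity-driven and local in the sense the abstract describes.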

2.
In standard attractor neural network models, specific patterns of activity are stored in the synaptic matrix, so that they become fixed point attractors of the network dynamics. The storage capacity of such networks has been quantified in two ways: the maximal number of patterns that can be stored, and the stored information measured in bits per synapse. In this paper, we compute both quantities in fully connected networks of N binary neurons with binary synapses, storing patterns with a low coding level, in the large-N and sparse-coding limits. We also derive finite-size corrections that accurately reproduce the results of simulations in networks of tens of thousands of neurons. These methods are applied to three different scenarios: (1) the classic Willshaw model, (2) networks with stochastic learning in which patterns are shown only once (one-shot learning), (3) networks with stochastic learning in which patterns are shown multiple times. The storage capacities are optimized over network parameters, which allows us to compare the performance of the different models. We show that finite-size effects strongly reduce the capacity, even for networks of realistic sizes. We discuss the implications of these results for memory storage in the hippocampus and cerebral cortex.
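The classic Willshaw scenario in (1), binary synapses switched on by co-active sparse patterns, can be sketched directly; the parameter values and the half-cue recall test below are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Willshaw network: N binary neurons, binary (0/1) synapses.
N, f, P = 1000, 0.02, 50      # network size, coding level, number of patterns
k = int(f * N)                # active units per pattern

patterns = np.zeros((P, N), dtype=np.uint8)
for p in range(P):
    patterns[p, rng.choice(N, k, replace=False)] = 1

# Learning: a synapse is switched on if pre and post are ever co-active.
J = np.zeros((N, N), dtype=np.uint8)
for x in patterns:
    J |= np.outer(x, x)

# Retrieval from a degraded cue: keep half the active units, threshold at k/2.
cue = patterns[0].copy()
cue[np.flatnonzero(cue)[: k // 2]] = 0
h = J @ cue
recalled = (h >= k // 2).astype(np.uint8)

overlap = (recalled & patterns[0]).sum() / k
print(f"overlap with stored pattern: {overlap:.2f}")
```

At this loading, every unit of the stored pattern receives input from all surviving cue units, so the degraded cue is completed; pushing P far higher fills the synaptic matrix and recall degrades, which is the capacity limit the abstract quantifies.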

3.
Spike-timing-dependent synaptic plasticity (STDP) is a simple and effective learning rule for sequence learning. However, synapses governed by STDP rules are easily perturbed in noisy circumstances, because synaptic conductances are modified by pre- and postsynaptic spikes elicited within a few tens of milliseconds, regardless of whether those spikes convey information. Noisy firing, present everywhere in the brain, may induce irrelevant enhancement of synaptic connections through STDP rules, resulting in uncertain memory encoding and obscure memory patterns. Here we show that the LTD windows of STDP rules enable robust sequence learning amid background noise, in cooperation with a large signal transmission delay between neurons and a theta rhythm, using a network model of entorhinal cortex layer II with entorhinal-hippocampal loop connections. The essential element of the present model for robust sequence learning amid background noise is a symmetric STDP rule with LTD windows on both sides of the LTP window, in addition to loop connections with a large signal transmission delay and a theta rhythm pacing the activity of stellate cells. Above all, the LTD window at positive spike timings is important for preventing the influence of noise as sequence learning progresses.
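A symmetric STDP rule with LTD windows flanking a central LTP window can be written as a simple difference of Gaussians. The paper's exact window shape and time constants are not given here, so this parameterization is only an illustrative assumption:

```python
import numpy as np

def stdp_window(dt, a_ltp=1.0, a_ltd=0.5, tau_ltp=10.0, tau_ltd=40.0):
    """Symmetric STDP window: LTP for small |dt|, LTD lobes on both sides.

    dt = t_post - t_pre in milliseconds. Difference-of-Gaussians form,
    an assumption of this sketch rather than the model's exact rule.
    """
    return (a_ltp * np.exp(-dt**2 / (2 * tau_ltp**2))
            - a_ltd * np.exp(-dt**2 / (2 * tau_ltd**2)))

# Near-coincident spikes potentiate; spikes ~25 ms apart (either sign) depress.
dts = np.array([0.0, 25.0, -25.0, 100.0])
print(stdp_window(dts))
```

The LTD lobe at positive spike timings is the feature the abstract emphasizes: a noise spike arriving shortly after a postsynaptic spike weakens the synapse instead of strengthening it.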

4.
The information processing abilities of neural circuits arise from their synaptic connection patterns. Understanding the laws governing these connectivity patterns is essential for understanding brain function. The overall distribution of synaptic strengths of local excitatory connections in cortex and hippocampus is long-tailed, exhibiting a small number of synaptic connections of very large efficacy. At the same time, new synaptic connections are constantly being created and individual synaptic connection strengths show substantial fluctuations across time. It remains unclear through what mechanisms these properties of neural circuits arise and how they contribute to learning and memory. In this study we show that fundamental characteristics of excitatory synaptic connections in cortex and hippocampus can be explained as a consequence of self-organization in a recurrent network combining spike-timing-dependent plasticity (STDP), structural plasticity and different forms of homeostatic plasticity. In the network, associative synaptic plasticity in the form of STDP induces a rich-get-richer dynamics among synapses, while homeostatic mechanisms induce competition. Under distinctly different initial conditions, the ensuing self-organization produces long-tailed synaptic strength distributions matching experimental findings. We show that this self-organization can take place with a purely additive STDP mechanism and that multiplicative weight dynamics emerge as a consequence of network interactions. The observed patterns of fluctuation of synaptic strengths, including elimination and generation of synaptic connections and long-term persistence of strong connections, are consistent with the dynamics of dendritic spines found in rat hippocampus. Beyond this, the model predicts an approximately power-law scaling of the lifetimes of newly established synaptic connection strengths during development. 
Our results suggest that the combined action of multiple forms of neuronal plasticity plays an essential role in the formation and maintenance of cortical circuits.

5.
This review analyzes the fundamental problem of the neuronal mechanisms underlying learning and memory. As a neuronal model of these phenomena, it considers long-term potentiation (LTP), a cellular phenomenon whose characteristics resemble those of memorization. LTP is readily induced at certain synapses of the central nervous system, specifically at synapses of the hippocampus and amygdala. As the behavioral model of learning, conditioned learning was used, within which the production of context-dependent and context-independent conditioned reactions was considered. Analysis of the literature shows that the various stages of LTP induced at hippocampal or amygdalar synapses are comparable to particular phases of learning. On this basis, the authors conclude that plastic changes at synapses of the hippocampus and amygdala may represent the morphological substrate of some kinds of learning and memory.

6.
Almost all the information that is needed to specify thalamocortical and neocortical wiring derives from patterned electrical activity induced by the environment. Wiring accuracy must be limited by the anatomical specificity of the cascade of events triggered by neural activity and culminating in synaptogenesis. We present a simple model of learning in the presence of plasticity errors. One way to achieve learning specificity is to build better synapses. We discuss an alternative, circuit-based, approach that only allows plasticity at connections that support highly selective correlations. This circuit resembles some of the more puzzling aspects of thalamocorticothalamic circuitry.

7.
Long-term memories are likely stored in the synaptic weights of neuronal networks in the brain. The storage capacity of such networks depends on the degree of plasticity of their synapses. Highly plastic synapses allow for strong memories, but these are quickly overwritten. On the other hand, less labile synapses result in long-lasting but weak memories. Here we show that the trade-off between memory strength and memory lifetime can be overcome by partitioning the memory system into multiple regions characterized by different levels of synaptic plasticity and transferring memory information from the more plastic to the less plastic regions. The improvement in memory lifetime is proportional to the number of memory regions, and the initial memory strength can be orders of magnitude larger than in a non-partitioned memory system. This model provides a fundamental computational reason for memory consolidation processes at the systems level.
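The partitioning argument can be illustrated with a toy two-region simulation: a fast, plastic region encodes a strong but quickly fading trace, which is copied once into a slow region before it vanishes. The decay constants and the transfer step are arbitrary assumptions of this sketch:

```python
import numpy as np

# Two-region sketch: fast region decays quickly, slow region slowly.
steps = 200
fast_decay, slow_decay = 0.90, 0.99
transfer_step = 20

fast = np.zeros(steps)
slow = np.zeros(steps)
fast[0] = 1.0                         # strong initial encoding
for t in range(1, steps):
    fast[t] = fast[t - 1] * fast_decay
    slow[t] = slow[t - 1] * slow_decay
    if t == transfer_step:
        slow[t] += fast[t]            # consolidation: copy the residual trace

# Non-partitioned baseline: a single fast region.
single = fast_decay ** np.arange(steps)
print(slow[-1], single[-1])
```

After 200 steps the consolidated trace is many orders of magnitude stronger than the single-region baseline, while the initial encoding strength was the same; adding more regions of intermediate plasticity extends the lifetime further, in line with the abstract's claim.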

8.
Jun JK, Jin DZ. PLoS ONE. 2007;2(8):e723.
Temporally precise sequences of neuronal spikes that span hundreds of milliseconds are observed in many brain areas, including songbird premotor nucleus, cat visual cortex, and primary motor cortex. Synfire chains, networks in which groups of neurons are connected via excitatory synapses into a unidirectional chain, are thought to underlie the generation of such sequences. It is unknown, however, how synfire chains can form in local neural circuits, especially long chains. Here, we show through computer simulation that long synfire chains can develop through spike-timing-dependent synaptic plasticity and axon remodeling, the pruning of prolific weak connections that follows the emergence of a finite number of strong connections. The formation process begins with a random network. A subset of neurons, called training neurons, intermittently receive superthreshold external input. Gradually, a synfire chain emerges through a recruiting process, in which neurons within the network connect to the tail of the chain started by the training neurons. The model is robust to varying parameters, as well as to natural events like neuronal turnover and massive lesions. Our model suggests that long synfire chains can form during development through self-organization, and that axon remodeling, which is ubiquitous in developing neural circuits, is essential to the process.

9.
It is generally believed that associative memory in the brain depends on multistable synaptic dynamics, which enable the synapses to maintain their value for extended periods of time. However, multistable dynamics are not restricted to synapses. In particular, the dynamics of some genetic regulatory networks are multistable, raising the possibility that even single cells, in the absence of a nervous system, are capable of learning associations. Here we study a standard genetic regulatory network model with bistable elements and stochastic dynamics. We demonstrate that such a genetic regulatory network model is capable of learning multiple, general, overlapping associations. The capacity of the network, defined as the number of associations that can be simultaneously stored and retrieved, is proportional to the square root of the number of bistable elements in the genetic regulatory network. Moreover, we compute the capacity of a clonal population of cells, such as in a colony of bacteria or a tissue, to store associations. We show that even if the cells do not interact, the capacity of the population to store associations substantially exceeds that of a single cell and is proportional to the number of bistable elements. Thus, we show that even single cells are endowed with the computational power to learn associations, a power that is substantially enhanced when these cells form a population.

10.
Taha S, Hanover JL, Silva AJ, Stryker MP. Neuron. 2002;36(3):483-491.
Experience is a powerful sculptor of developing neural connections. In the primary visual cortex (V1), cortical connections are particularly susceptible to the effects of sensory manipulation during a postnatal critical period. At the molecular level, this activity-dependent plasticity requires the transformation of synaptic depolarization into changes in synaptic weight. The molecule alpha calcium-calmodulin kinase type II (αCaMKII) is known to play a central role in this transformation. Importantly, αCaMKII function is modulated by autophosphorylation, which promotes Ca2+-independent kinase activity. Here we show that mice possessing a mutant form of αCaMKII that is unable to autophosphorylate show impairments in ocular dominance plasticity. These results confirm the importance of αCaMKII in visual cortical plasticity and suggest that synaptic changes induced by monocular deprivation are stored specifically in glutamatergic synapses made onto excitatory neurons.

11.
Spike-timing-dependent plasticity (STDP) determines the evolution of the synaptic weights according to their pre- and post-synaptic activity, which in turn changes the neuronal activity on a (much) slower time scale. This paper examines the effect of STDP in a recurrently connected network stimulated by external pools of input spike trains, where both input and recurrent synapses are plastic. Our previously developed theoretical framework is extended to incorporate weight-dependent STDP and dendritic delays. The weight dynamics is determined by an interplay between the neuronal activation mechanisms, the input spike-time correlations, and the learning parameters. For the case of two external input pools, the resulting learning scheme can exhibit a symmetry breaking of the input connections such that two neuronal groups emerge, each specialized to one input pool only. In addition, we show how the recurrent connections within each neuronal group can be strengthened by STDP at the expense of those between the two groups. This neuronal self-organization can be seen as a basic dynamical ingredient for the emergence of neuronal maps induced by activity-dependent plasticity.

12.
Classic cadherins function as key organizers during the formation and remodeling of synapses in the vertebrate central nervous system. Cadherins are Ca2+-dependent homophilic adhesion molecules whose adhesive strength can be regulated by conformational changes, through cadherin's association with intracellular binding proteins, and by the regulation of cadherin turnover and internalization. In this mini-review, we will highlight recent studies on the role of cadherins and their associated partners in regulating synaptic architecture. Moreover, we will discuss molecular mechanisms underlying cadherin turnover and the subsequent impact on synaptic connections.

13.
A neuron model with the ability to learn is examined by means of mathematical and statistical methods. Using established anatomical concepts, the main features of the model can be described as follows. The synapses are randomly distributed on the dendrites in a way that can be described by Poisson processes. The afferent connections to the synapses are also random. The input signals are divided into excitatory, inhibitory and unspecified signals; the latter, whose detailed action is not specified, may involve excitatory as well as inhibitory action on the cell. Signals are described in terms of impulse frequencies. Learning takes place through facilitation of excitatory synapses. The condition for facilitation is the occurrence of simultaneous presynaptic and postsynaptic activity. The synaptic changes occurring during repeated learning are superimposed. Inhibitory synapses are capable of influencing learning by blocking dendritic transmission. It is shown that, under certain conditions, a collection of model cells is able to work as an associative memory. This means that a pattern of output signals that once occurred through the combined action of the excitatory, inhibitory, and unspecified signals may later be recalled by applying just the two former signal patterns. It is shown that excitatory and inhibitory signals are similar in their ability to evoke associations. However, there is also a difference between excitation and inhibition, because the pattern of inhibitory signals is subject to a non-linear transformation. This implies that great similarity is required between the inhibitory pattern present during learning and the inhibitory pattern fed in later in order to obtain an associative recall. This phenomenon is called pattern separation and is supposed to be of importance when discriminating between patterns.

14.
Synaptic clustering on neuronal dendrites has been hypothesized to play an important role in implementing pattern recognition. Neighboring synapses on a dendritic branch can interact in a synergistic, cooperative manner via nonlinear voltage-dependent mechanisms, such as NMDA receptors. Inspired by the NMDA receptor, the single-branch clusteron learning algorithm takes advantage of location-dependent multiplicative nonlinearities to solve classification tasks by randomly shuffling the locations of “under-performing” synapses on a model dendrite during learning (“structural plasticity”), eventually resulting in synapses with correlated activity being placed next to each other on the dendrite. We propose an alternative model, the gradient clusteron, or G-clusteron, which uses an analytically-derived gradient descent rule where synapses are “attracted to” or “repelled from” each other in an input- and location-dependent manner. We demonstrate the classification ability of this algorithm by testing it on the MNIST handwritten digit dataset and show that, when using a softmax activation function, the accuracy of the G-clusteron on the all-versus-all MNIST task (~85%) approaches that of logistic regression (~93%). In addition to the location update rule, we also derive a learning rule for the synaptic weights of the G-clusteron (“functional plasticity”) and show that a G-clusteron that utilizes the weight update rule can achieve ~89% accuracy on the MNIST task. We also show that a G-clusteron with both the weight and location update rules can learn to solve the XOR problem from arbitrary initial conditions.

15.
The study of experience-dependent plasticity has been dominated by questions of how Hebbian plasticity mechanisms act during learning and development. This is unsurprising as Hebbian plasticity constitutes the most fully developed and influential model of how information is stored in neural circuits and how neural circuitry can develop without extensive genetic instructions. Yet Hebbian plasticity may not be sufficient for understanding either learning or development: the dramatic changes in synapse number and strength that can be produced by this kind of plasticity tend to threaten the stability of neural circuits. Recent work has suggested that, in addition to Hebbian plasticity, homeostatic regulatory mechanisms are active in a variety of preparations. These mechanisms alter both the synaptic connections between neurons and the intrinsic electrical properties of individual neurons, in such a way as to maintain some constancy in neuronal properties despite the changes wrought by Hebbian mechanisms. Here we review the evidence for homeostatic plasticity in the central nervous system, with special emphasis on results from cortical preparations.

16.
The brain performs various cognitive functions by learning the spatiotemporal salient features of the environment. This learning requires unsupervised segmentation of hierarchically organized spike sequences, but the underlying neural mechanism is only poorly understood. Here, we show that a recurrent gated network of neurons with dendrites can efficiently solve difficult segmentation tasks. In this model, multiplicative recurrent connections learn a context-dependent gating of dendro-somatic information transfers to minimize error in the prediction of somatic responses by the dendrites. Consequently, these connections filter the redundant input features represented by the dendrites but unnecessary in the given context. The model was tested on both synthetic and real neural data. In particular, the model was successful for segmenting multiple cell assemblies repeating in large-scale calcium imaging data containing thousands of cortical neurons. Our results suggest that recurrent gating of dendro-somatic signal transfers is crucial for cortical learning of context-dependent segmentation tasks.

17.
The activity of the single synapse is the basis of information processing and transmission in the brain, as well as of important phenomena such as long-term potentiation (LTP), the main mechanism for learning and memory. Although single-quantum release events are usually considered independent, they give variable postsynaptic responses that depend not only on the properties of the synapse but can also be strongly influenced by the activity of other synapses. In the present paper we show the results of a series of computational experiments in which pools of synapses, active within a compatible time window, influence the response of a single synapse of the considered pool. Moreover, our results show that the activity of the pool, by influencing the membrane potential, can be a significant factor in the unblocking of NMDA receptors from Mg2+, increasing the contribution of this receptor type to the excitatory postsynaptic current. We consequently suggest that phenomena like LTP, which depend on NMDA receptor activation, can occur also in subthreshold conditions due to the integration of dendritic synaptic activity.

18.
19.
Connectionist models of memory storage have been studied for many years, and aim to provide insight into potential mechanisms of memory storage by the brain. A problem faced by these systems is that as the number of items to be stored increases across a finite set of neurons/synapses, the cumulative changes in synaptic weight eventually lead to a sudden and dramatic loss of the stored information (catastrophic interference, CI), as the previous changes in synaptic weight are effectively lost. This effect does not occur in the brain, where information loss is gradual. Various attempts have been made to overcome the effects of CI, but these generally use schemes that impose restrictions on the system or its inputs rather than allowing the system to intrinsically cope with increasing storage demands. We show here that catastrophic interference arises from interference among stored patterns once their number exceeds a critical limit. However, when Gram-Schmidt orthogonalization is combined with the Hebb-Hopfield model, the model attains the ability to eliminate CI. This approach differs from previous orthogonalization schemes used in connectionist networks, which essentially reflect sparse coding of the input. Here CI is avoided in a network of a fixed size without setting limits on the rate or number of patterns encoded, and without separating encoding and retrieval, thus offering the advantage of allowing associations between incoming and stored patterns.
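Combining Gram-Schmidt orthogonalization with Hebbian outer-product storage can be sketched in a few lines. The network size, pattern count, and single-step recall test below are assumptions of this illustration, not the authors' exact protocol:

```python
import numpy as np

rng = np.random.default_rng(3)

def gram_schmidt(patterns):
    """Orthonormalize pattern vectors before Hebbian storage."""
    ortho = []
    for v in patterns.astype(float):
        for u in ortho:
            v = v - (v @ u) * u      # remove components along stored directions
        norm = np.linalg.norm(v)
        if norm > 1e-10:             # skip linearly dependent patterns
            ortho.append(v / norm)
    return np.array(ortho)

# Random +/-1 patterns, orthogonalized, then stored with the Hebb rule.
N, P = 64, 10
patterns = rng.choice([-1.0, 1.0], size=(P, N))
ortho = gram_schmidt(patterns)
W = ortho.T @ ortho                  # Hebbian outer-product storage
np.fill_diagonal(W, 0.0)

# One synchronous recall step recovers the sign pattern of a stored item.
recalled = np.sign(W @ ortho[0])
overlap = (recalled * np.sign(ortho[0])).mean()
print(f"recall overlap: {overlap:.2f}")
```

Because the stored vectors are mutually orthogonal, each pattern's Hebbian contribution produces no crosstalk on the others, which is the mechanism by which orthogonalization removes the interference that otherwise accumulates and becomes catastrophic.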

20.
In order to cross a street without being run over, we need to be able to extract very fast hidden causes of dynamically changing multi-modal sensory stimuli, and to predict their future evolution. We show here that a generic cortical microcircuit motif, pyramidal cells with lateral excitation and inhibition, provides the basis for this difficult but all-important information processing capability. This capability emerges in the presence of noise automatically through effects of STDP on connections between pyramidal cells in Winner-Take-All circuits with lateral excitation. In fact, one can show that these motifs endow cortical microcircuits with functional properties of a hidden Markov model, a generic model for solving such tasks through probabilistic inference. Whereas in engineering applications this model is adapted to specific tasks through offline learning, we show here that a major portion of the functionality of hidden Markov models arises already from online applications of STDP, without any supervision or rewards. We demonstrate the emergent computing capabilities of the model through several computer simulations. The full power of hidden Markov model learning can be attained through reward-gated STDP. This is due to the fact that these mechanisms enable a rejection sampling approximation to theoretically optimal learning. We investigate the possible performance gain that can be achieved with this more accurate learning method for an artificial grammar task.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号