Similar Literature
20 similar records found.
1.
Leaky integrate-and-fire (LIF) network models are commonly used to study how the spiking dynamics of neural networks changes with stimuli, tasks or dynamic network states. However, neurophysiological studies in vivo often rather measure the mass activity of neuronal microcircuits with the local field potential (LFP). Given that LFPs are generated by spatially separated currents across the neuronal membrane, they cannot be computed directly from quantities defined in models of point-like LIF neurons. Here, we explore the best approximation for predicting the LFP based on standard output from point-neuron LIF networks. To search for this best “LFP proxy”, we compared LFP predictions from candidate proxies based on LIF network output (e.g., firing rates, membrane potentials, synaptic currents) with “ground-truth” LFP obtained when the LIF network synaptic input currents were injected into an analogous three-dimensional (3D) network model of multi-compartmental neurons with realistic morphologies and spatial distributions of somata and synapses. We found that a specific fixed linear combination of the LIF synaptic currents provided an accurate LFP proxy, accounting for most of the variance of the LFP time course observed in the 3D network for all recording locations. This proxy performed well over a broad set of conditions, including substantial variations of the neuronal morphologies. Our results provide a simple formula for estimating the time course of the LFP from LIF network simulations in cases where a single pyramidal population dominates the LFP generation, and thereby facilitate quantitative comparison between computational models and experimental LFP recordings in vivo.
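As a concrete illustration, a weighted-sum proxy of this kind can be computed from standard LIF simulation output in a few lines of Python. The delay and relative weight below are illustrative assumptions, not the fitted values reported in the study:

```python
import numpy as np

def lfp_proxy(i_ampa, i_gaba, dt=0.1, delay_ms=6.0, alpha=1.65):
    """Weighted-sum LFP proxy from the summed synaptic currents of a LIF
    network's pyramidal population. delay_ms and alpha are illustrative;
    the fitted coefficients in the original study may differ.
    i_ampa, i_gaba: 1D arrays of total AMPA/GABA current per time step (dt in ms)."""
    shift = int(round(delay_ms / dt))
    ampa = np.abs(np.asarray(i_ampa, dtype=float))
    gaba = np.abs(np.asarray(i_gaba, dtype=float))
    delayed = np.empty_like(ampa)
    delayed[shift:] = ampa[:len(ampa) - shift]   # delay the AMPA contribution
    delayed[:shift] = ampa[0]                    # pad the startup transient
    return delayed - alpha * gaba
```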

2.
The concept of reverberation proposed by Lorente de Nó and Hebb is key to understanding strongly recurrent cortical networks. In particular, synaptic reverberation is now viewed as a likely mechanism for the active maintenance of working memory in the prefrontal cortex. Theoretically, this has spurred a debate as to how such a potentially explosive mechanism can provide stable working-memory function given the synaptic and cellular mechanisms at play in the cerebral cortex. We present here new evidence for the participation of NMDA receptors in the stabilization of persistent delay activity in a biophysical network model of conductance-based neurons. We show that the stability of working-memory function, and the required NMDA/AMPA ratio at recurrent excitatory synapses, depend on physiological properties of neurons and synaptic interactions, such as the time constants of excitation and inhibition, mutual inhibition between interneurons, differential NMDA receptor participation at excitatory projections to pyramidal neurons and interneurons, or the presence of slow intrinsic ion currents in pyramidal neurons. We review other mechanisms proposed to enhance the dynamical stability of synaptically generated attractor states of a reverberatory circuit. This recent work represents a necessary and significant step towards testing attractor network models by cortical electrophysiology.
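For reference, the slow NMDA gating that underlies this stabilization is typically modeled as follows in conductance-based network models (a standard formulation with the Jahr-Stevens magnesium block; the time constant is a common default rather than a value specific to this study):

```latex
% Slow NMDA gating variable s, driven by a fast rise variable x
\frac{ds}{dt} = -\frac{s}{\tau_{\mathrm{NMDA}}} + \alpha\, x\,(1-s),
\qquad \tau_{\mathrm{NMDA}} \approx 100\ \mathrm{ms}

% NMDA current with the voltage-dependent magnesium block (Jahr & Stevens)
I_{\mathrm{NMDA}} = g_{\mathrm{NMDA}}\, s\,
\frac{V - V_E}{1 + [\mathrm{Mg}^{2+}]\, e^{-0.062\,V}/3.57}
```

Because the NMDA decay time constant greatly exceeds the AMPA and membrane time constants, recurrent reverberation can be sustained between spikes without runaway excitation.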

3.
Brain networks store new memories using functional and structural synaptic plasticity. Memory formation is generally attributed to Hebbian plasticity, while homeostatic plasticity is thought to have an ancillary role in stabilizing network dynamics. Here we report that homeostatic plasticity alone can also lead to the formation of stable memories. We analyze this phenomenon using a new theory of network remodeling, combined with numerical simulations of recurrent spiking neural networks that exhibit structural plasticity based on firing rate homeostasis. These networks are able to store repeatedly presented patterns and recall them upon the presentation of incomplete cues. Storage is fast, governed by the homeostatic drift. In contrast, forgetting is slow, driven by a diffusion process. Joint stimulation of neurons induces the growth of associative connections between them, leading to the formation of memory engrams. These memories are stored in a distributed fashion throughout the connectivity matrix, and individual synaptic connections have only a small influence. Although memory-specific connections are increased in number, the total numbers of inputs and outputs of neurons undergo only small changes during stimulation. We find that homeostatic structural plasticity induces a specific type of “silent memories”, different from conventional attractor states.
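A minimal sketch of such a rule, in the spirit of the model rather than its exact implementation, grows and retracts synaptic elements according to a firing-rate set point and pairs free elements at random (all parameters are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
N, target, nu = 100, 5.0, 0.05            # neurons, target rate (Hz), growth rate

conn = np.zeros((N, N))                   # synapse counts between neurons
free_ax = np.zeros(N)                     # free axonal (output) elements
free_dn = np.zeros(N)                     # free dendritic (input) elements

def rewire_step(rates):
    """One step of a firing-rate-homeostasis rewiring rule (a sketch, not the
    paper's exact model). Neurons below the target rate grow synaptic
    elements, neurons above it retract them; free elements are then paired
    at random into new synapses."""
    drift = nu * (target - np.asarray(rates, dtype=float))
    free_ax[:] = np.clip(free_ax + drift, 0.0, None)
    free_dn[:] = np.clip(free_dn + drift, 0.0, None)
    while free_ax.sum() >= 1.0 and free_dn.sum() >= 1.0:
        i = rng.choice(N, p=free_ax / free_ax.sum())   # presynaptic partner
        j = rng.choice(N, p=free_dn / free_dn.sum())   # postsynaptic partner
        conn[i, j] += 1
        free_ax[i] = max(free_ax[i] - 1.0, 0.0)
        free_dn[j] = max(free_dn[j] - 1.0, 0.0)
```

Neurons whose rates drop below target together grow free elements at the same time, and random pairing then wires them preferentially to each other; this is how joint stimulation can leave behind associative connections.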

4.
We explore and analyze the nonlinear switching dynamics of neuronal networks with non-homogeneous connectivity. The general significance of such transient dynamics for brain function is unclear; however, decision-making processes in perception and cognition, for instance, have been linked to them. The network under study here comprises three subnetworks of either excitatory or inhibitory leaky integrate-and-fire neurons, of which two are of the same type. The synaptic weights are arranged to establish and maintain a balance between excitation and inhibition in case of a constant external drive. Each subnetwork is randomly connected, where all neurons belonging to a particular population have the same in-degree and the same out-degree. Neurons in different subnetworks are also randomly connected with the same probability; however, depending on the type of the pre-synaptic neuron, the synaptic weight is scaled by a factor. We observed that for a certain range of the “within” versus “between” connection weights (bifurcation parameter), the network activation spontaneously switches between the two subnetworks of the same type. This kind of dynamics has been termed “winnerless competition”, which also has a random component here. In our model, this phenomenon is well described by a set of coupled stochastic differential equations of Lotka-Volterra type that imply a competition between the subnetworks. The associated mean-field model shows the same dynamical behavior as observed in simulations of large networks comprising thousands of spiking neurons. The deterministic phase portrait is characterized by two attractors and a saddle node, its stochastic component is essentially given by the multiplicative inherent noise of the system. We find that the dwell time distribution of the active states is exponential, indicating that the noise drives the system randomly from one attractor to the other. A similar model for a larger number of populations might suggest a general approach to study the dynamics of interacting populations of spiking networks.
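The following Euler-Maruyama sketch of a competitive Lotka-Volterra system shows the noise-driven switching. Parameters are illustrative, chosen so that switches occur within the simulated window; for brevity the noise here is additive, whereas the mean-field derivation yields multiplicative noise:

```python
import numpy as np

rng = np.random.default_rng(1)
dt, T, beta, sigma = 1e-3, 500.0, 1.4, 0.2   # beta > 1: winner-take-all regime
n = int(T / dt)
x = np.empty(n); y = np.empty(n)
x[0], y[0] = 1.0, 0.0

for k in range(n - 1):
    fx = x[k] * (1.0 - x[k] - beta * y[k])   # competitive Lotka-Volterra drift
    fy = y[k] * (1.0 - y[k] - beta * x[k])
    x[k + 1] = max(x[k] + fx * dt + sigma * np.sqrt(dt) * rng.normal(), 0.0)
    y[k + 1] = max(y[k] + fy * dt + sigma * np.sqrt(dt) * rng.normal(), 0.0)

# dwell times in each "winner" state; roughly exponential when noise-driven
winner = x > y
switch_idx = np.flatnonzero(np.diff(winner.astype(int)))
dwell_times = np.diff(switch_idx) * dt
print("mean dwell time:",
      dwell_times.mean() if len(dwell_times) else "no complete dwell observed")
```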

5.
Although William James and, more explicitly, Donald Hebb's theory of cell assemblies already suggested that activity-dependent rewiring of neuronal networks is the substrate of learning and memory, most theoretical work on memory over the last six decades has focused on plasticity of existing synapses in prewired networks. Research in the last decade has emphasized that structural modification of synaptic connectivity is common in the adult brain and tightly correlated with learning and memory. Here we present a parsimonious computational model for learning by structural plasticity. The basic modeling units are “potential synapses” defined as locations in the network where synapses can potentially grow to connect two neurons. This model generalizes well-known previous models for associative learning based on weight plasticity. Therefore, existing theory can be applied to analyze how many memories and how much information structural plasticity can store in a synapse. Surprisingly, we find that structural plasticity largely outperforms weight plasticity and can achieve a much higher storage capacity per synapse. The effect of structural plasticity on the structure of sparsely connected networks is quite intuitive: Structural plasticity increases the “effectual network connectivity”, that is, the network wiring that specifically supports storage and recall of the memories. Further, this model of structural plasticity produces gradients of effectual connectivity in the course of learning, thereby explaining various cognitive phenomena including graded amnesia, catastrophic forgetting, and the spacing effect.

6.
During the last few decades we have seen a convergence among ideas and hypotheses regarding functional principles underlying human memory. Hebb's conjecture concerning synaptic plasticity and cell assemblies, now more than fifty years old and formalized mathematically as attractor neural networks, has remained among the most viable and productive theoretical frameworks. It suggests plausible explanations for Gestalt aspects of active memory like perceptual completion, reconstruction and rivalry. We review the biological plausibility of these theories and discuss some critical issues concerning their associative memory functionality in the light of simulation studies of models with palimpsest memory properties. The focus is on memory properties and dynamics of networks modularized in terms of cortical minicolumns and hypercolumns. Biophysical compartmental models demonstrate attractor dynamics that support cell assembly operations with fast convergence and low firing rates. Using a scaling model we obtain reasonable relative connection densities and amplitudes. An abstract attractor network model reproduces systems-level psychological phenomena seen in human memory experiments, such as the Sternberg and von Restorff effects. We conclude that there is today considerable substance in Hebb's theory of cell assemblies and its attractor network formulations, and that they have contributed to increasing our understanding of cortical associative memory function. The criticisms raised with regard to biological and psychological plausibility, as well as low storage capacity and slow retrieval, have largely been refuted. Rather, this paradigm has gained further support from new experimental data as well as computational modeling.

7.
Zero-lag synchronization between distant cortical areas has been observed in a diversity of experimental data sets and between many different regions of the brain. Several computational mechanisms have been proposed to account for such isochronous synchronization in the presence of long conduction delays: Of these, the phenomenon of “dynamical relaying” – a mechanism that relies on a specific network motif – has proven to be the most robust with respect to parameter mismatch and system noise. Surprisingly, despite a contrary belief in the community, the common driving motif is an unreliable means of establishing zero-lag synchrony. Although dynamical relaying has been validated in empirical and computational studies, the deeper dynamical mechanisms and comparison to dynamics on other motifs is lacking. By systematically comparing synchronization on a variety of small motifs, we establish that the presence of a single reciprocally connected pair – a “resonance pair” – plays a crucial role in disambiguating those motifs that foster zero-lag synchrony in the presence of conduction delays (such as dynamical relaying) from those that do not (such as the common driving triad). Remarkably, minor structural changes to the common driving motif that incorporate a reciprocal pair recover robust zero-lag synchrony. The findings are observed in computational models of spiking neurons, populations of spiking neurons and neural mass models, and arise whether the oscillatory systems are periodic, chaotic, noise-free or driven by stochastic inputs. The influence of the resonance pair is also robust to parameter mismatch and asymmetrical time delays amongst the elements of the motif. We call this manner of facilitating zero-lag synchrony resonance-induced synchronization, outline the conditions for its occurrence, and propose that it may be a general mechanism to promote zero-lag synchrony in the brain.
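A toy delay-coupled Kuramoto simulation of the relay motif illustrates the counterintuitive zero-lag locking of the outer pair despite the round-trip conduction delay (illustrative parameters; this sketch shows only the outer-pair locking, not the paper's full motif comparison):

```python
import numpy as np

rng = np.random.default_rng(2)
dt, T, tau, K = 1e-3, 60.0, 0.010, 8.0        # 10 ms conduction delay, coupling K
n, d = int(T / dt), int(tau / dt)
omega = 2 * np.pi * (40.0 + rng.normal(0.0, 0.5, 3))   # ~40 Hz, slight mismatch

# dynamical relaying motif: outer node 1 <-> hub 0 <-> outer node 2
A = np.array([[0, 1, 1],
              [1, 0, 0],
              [1, 0, 0]])                      # A[i, j] = 1 means j projects to i

th = np.zeros((n, 3))
th[0] = rng.uniform(0.0, 2 * np.pi, 3)
for k in range(n - 1):
    past = th[max(k - d, 0)]                   # presynaptic phases, one delay ago
    coup = (A * np.sin(past[None, :] - th[k][:, None])).sum(axis=1)
    th[k + 1] = (th[k] + (omega + K * coup) * dt
                 + 0.2 * np.sqrt(dt) * rng.normal(size=3))

# the two outer oscillators, 2*tau apart along the path, still lock near zero lag
diff = th[n // 2:, 1] - th[n // 2:, 2]
print("outer-pair phase-locking:", np.abs(np.exp(1j * diff).mean()).round(2))
print("outer-pair mean lag (rad):", np.angle(np.exp(1j * diff).mean()).round(2))
```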

8.
We show how anomalous time reversal of stimuli and their associated responses can exist in very small connectionist models. These networks are built from dynamical toy model neurons which adhere to a minimal set of biologically plausible properties. The appearance of a “ghost” response, temporally and spatially located in between responses caused by actual stimuli, as in the phi phenomenon, is demonstrated in a similar small network, where it is caused by priming and long-distance feedforward paths. We then demonstrate that the color phi phenomenon can be present in an echo state network, a recurrent neural network, without explicitly training for the presence of the effect, such that it emerges as an artifact of the dynamical processing. Our results suggest that the color phi phenomenon might simply be a feature of the inherent dynamical and nonlinear sensory processing in the brain and in and of itself is not related to consciousness.

9.
Attractor networks successfully account for psychophysical and neurophysiological data in various decision-making tasks. Especially their ability to model persistent activity, a property of many neurons involved in decision-making, distinguishes them from other approaches. Stable decision attractors are, however, difficult to reconcile with changes of mind. Here we demonstrate that a biophysically-realistic attractor network with spiking neurons, in its itinerant transients towards the choice attractors, can replicate changes of mind observed recently during a two-alternative random-dot motion (RDM) task. Based on the assumption that the brain continues to evaluate available evidence after the initiation of a decision, the network predicts neural activity during changes of mind and accurately simulates reaction times, performance, and the percentage of changes of mind as a function of task difficulty. Moreover, the model suggests a low decision threshold and high incoming activity that drives the brain region involved in the decision-making process into a dynamical regime close to a bifurcation, for which physiological evidence has so far been lacking. Thereby, we further affirmed the general conformance of attractor networks with higher level neural processes and offer experimental predictions to distinguish nonlinear attractor from linear diffusion models.

10.
The cerebral cortex is continuously active in the absence of external stimuli. An example of this spontaneous activity is the voltage transition between an Up and a Down state, observed simultaneously at individual neurons. Since this phenomenon could be of critical importance for working memory and attention, its explanation could reveal some fundamental properties of cortical organization. To identify a possible scenario for the dynamics of Up–Down states, we analyze a reduced stochastic dynamical system that models an interconnected network of excitatory neurons with activity-dependent synaptic depression. The model reveals that when the total synaptic connection strength exceeds a certain threshold, the phase space of the dynamical system contains two attractors, interpreted as Up and Down states. In that case, synaptic noise causes transitions between the states. Moreover, an external stimulation producing a depolarization increases the time spent in the Up state, as observed experimentally. We therefore propose that the existence of Up–Down states is a fundamental and inherent property of a noisy neural ensemble with sufficiently strong synaptic connections.
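A generic reduced model of this class (a sketch of the model family, not the paper's exact equations) couples the population activity E to a synaptic resource variable x:

```latex
% population activity with activity-dependent synaptic depression and noise
\tau \frac{dE}{dt} = -E + f\!\big(J\,u\,x\,E + I_0\big) + \sigma\,\xi(t)

% fraction x of available synaptic resources: recovery versus use
\frac{dx}{dt} = \frac{1 - x}{\tau_r} - u\,x\,E
```

For sufficiently strong coupling J the noise-free system is bistable, with a low-rate Down state and a depression-limited Up state separated by a saddle; the noise term ξ(t) then produces the observed transitions, and a depolarizing input biases the system toward the Up state.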

11.
Neuronal avalanches are a form of spontaneous activity widely observed in cortical slices and other types of nervous tissue, both in vivo and in vitro. They are characterized by irregular, isolated population bursts in which many neurons fire together, where the number of spikes per burst obeys a power law distribution. We simulate, using the Gillespie algorithm, a model of neuronal avalanches based on stochastic single neurons. The network consists of excitatory and inhibitory neurons, first with all-to-all connectivity and later with random sparse connectivity. Analyzing our model using the system size expansion, we show that the model obeys the standard Wilson-Cowan equations for large network sizes. When excitation and inhibition are closely balanced, networks of thousands of neurons exhibit irregular synchronous activity, including the characteristic power law distribution of avalanche size. We show that these avalanches are due to the balanced network having weakly stable functionally feedforward dynamics, which amplifies some small fluctuations into the large population bursts. Balanced networks are thought to underlie a variety of observed network behaviours and have useful computational properties, such as responding quickly to changes in input. Thus, the appearance of avalanches in such functionally feedforward networks indicates that avalanches may be a simple consequence of a widely present network structure, when neuron dynamics are noisy. An important implication is that a network need not be “critical” for the production of avalanches, so experimentally observed power laws in burst size may be a signature of noisy functionally feedforward structure rather than of, for example, self-organized criticality.
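A compact Gillespie implementation of this kind of stochastic, balanced two-population model can be written as follows (population sizes, rates and the response function are illustrative, not the paper's exact parameters):

```python
import numpy as np

rng = np.random.default_rng(4)
N, alpha, h, w = 400, 0.1, 1e-3, 1.0         # population size, decay, drive, weight
f = lambda s: np.tanh(s) if s > 0 else 0.0   # firing response function

t, nE, nI, trace = 0.0, 0, 0, []
while t < 1000.0:
    s = w * nE / N - w * nI / N + h          # net input; excitation ~ inhibition
    rates = np.array([(N - nE) * f(s),       # quiescent E neuron becomes active
                      alpha * nE,            # active E neuron becomes quiescent
                      (N - nI) * f(s),       # quiescent I neuron becomes active
                      alpha * nI])           # active I neuron becomes quiescent
    R = rates.sum()
    t += rng.exponential(1.0 / R)            # Gillespie: exponential waiting time
    r = rng.choice(4, p=rates / R)           # which transition occurs
    nE += int(r == 0) - int(r == 1)
    nI += int(r == 2) - int(r == 3)
    trace.append((t, nE + nI))
```

Bursts in the total activity nE + nI play the role of avalanches; their sizes can be collected from the trace and histogrammed to inspect the power-law claim.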

12.
The notion of attractor networks is the leading hypothesis for how associative memories are stored and recalled. A defining anatomical feature of such networks is excitatory recurrent connections. These “attract” the firing pattern of the network to a stored pattern, even when the external input is incomplete (pattern completion). The CA3 region of the hippocampus has been postulated to be such an attractor network; however, the experimental evidence has been ambiguous, leading to the suggestion that CA3 is not an attractor network. In order to resolve this controversy and to better understand how CA3 functions, we simulated CA3 and its input structures. In our simulation, we could reproduce critical experimental results and establish the criteria for identifying attractor properties. Notably, under conditions in which there is continuous input, the output should be “attracted” to a stored pattern. However, contrary to previous expectations, as a pattern is gradually “morphed” from one stored pattern to another, a sharp transition between output patterns is not expected. The observed firing patterns of CA3 meet these criteria and can be quantitatively accounted for by our model. Notably, as morphing proceeds, the activity pattern in the dentate gyrus changes; in contrast, the activity pattern in the downstream CA3 network is attracted to a stored pattern and thus undergoes little change. We furthermore show that other aspects of the observed firing patterns can be explained by learning that occurs during behavioral testing. The CA3 thus displays both the learning and recall signatures of an attractor network. These observations, taken together with existing anatomical and behavioral evidence, make the strong case that CA3 constructs associative memories based on attractor dynamics.
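For reference, the classical attractor behavior against which such criteria are framed — attraction of a morphed cue to the nearer stored pattern, with a sharp transition — can be seen in a minimal Hopfield-style sketch (not the paper's CA3 model, which predicts a less abrupt transition under continuous input):

```python
import numpy as np

rng = np.random.default_rng(5)
N = 200
p1, p2 = rng.choice([-1, 1], N), rng.choice([-1, 1], N)
W = (np.outer(p1, p1) + np.outer(p2, p2)) / N   # Hebbian storage of two patterns
np.fill_diagonal(W, 0)

def recall(cue, steps=20):
    s = cue.astype(float).copy()
    for _ in range(steps):
        s = np.sign(W @ s + 1e-9)               # synchronous update; break ties to +1
    return s

# morph the cue from p1 toward p2 and watch which stored pattern the output matches
for frac in np.linspace(0, 1, 11):
    flip = rng.random(N) < frac
    cue = np.where(flip, p2, p1)
    out = recall(cue)
    print(round(frac, 1), "overlap with p1:", round(out @ p1 / N, 2),
          "with p2:", round(out @ p2 / N, 2))
```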

13.
It has recently been shown that networks of spiking neurons with noise can emulate simple forms of probabilistic inference through “neural sampling”, i.e., by treating spikes as samples from a probability distribution of network states that is encoded in the network. Deficiencies of the existing model are its reliance on single neurons for sampling from each random variable, and the resulting limitation in representing quickly varying probabilistic information. We show that both deficiencies can be overcome by moving to a biologically more realistic encoding of each salient random variable through the stochastic firing activity of an ensemble of neurons. The resulting model demonstrates that networks of spiking neurons with noise can easily track and carry out basic computational operations on rapidly varying probability distributions, such as the odds of getting rewarded for a specific behavior. We demonstrate the viability of this new approach towards neural coding and computation, which makes use of the inherent parallelism of generic neural circuits, by showing that this model can explain experimentally observed firing activity of cortical neurons for a variety of tasks that require rapid temporal integration of sensory information.

14.
We propose a top-down approach to the symptoms of schizophrenia based on a statistical dynamical framework. We show that a reduced depth in the basins of attraction of cortical attractor states destabilizes the activity at the network level due to the constant statistical fluctuations caused by the stochastic spiking of neurons. In integrate-and-fire network simulations, a decrease in the NMDA receptor conductances, which reduces the depth of the attractor basins, decreases the stability of short-term memory states and increases distractibility. The cognitive symptoms of schizophrenia such as distractibility, working memory deficits, or poor attention could be caused by this instability of attractor states in prefrontal cortical networks. Lower firing rates are also produced, which in the orbitofrontal and anterior cingulate cortex could account for the negative symptoms, including a reduction of emotions. Decreasing the GABA as well as the NMDA conductances produces not only switches between the attractor states, but also jumps from spontaneous activity into one of the attractors. We relate this to the positive symptoms of schizophrenia, including delusions, paranoia, and hallucinations, which may arise because the basins of attraction are shallow and there is instability in temporal lobe semantic memory networks, leading thoughts to move too freely round the attractor energy landscape.

15.
Renart A, Song P, Wang XJ. Neuron. 2003;38(3):473-485.
The concept of bell-shaped persistent neural activity represents a cornerstone of the theory for the internal representation of analog quantities, such as spatial location or head direction. Previous models, however, relied on the unrealistic assumption of network homogeneity. We investigate this issue in a network model where fine tuning of parameters is destroyed by heterogeneities in cellular and synaptic properties. Heterogeneities result in the loss of stored spatial information in a few seconds. Accurate encoding is recovered when a homeostatic mechanism scales the excitatory synapses to each cell to compensate for the heterogeneity in cellular excitability and synaptic inputs. Moreover, the more realistic model produces a wide diversity of tuning curves, as commonly observed in recordings from prefrontal neurons. We conclude that recurrent attractor networks in conjunction with appropriate homeostatic mechanisms provide a robust, biologically plausible theoretical framework for understanding the neural circuit basis of spatial working memory.

16.
Persistent activity states (attractors), observed in several neocortical areas after the removal of a sensory stimulus, are believed to be the neuronal basis of working memory. One of the possible mechanisms that can underlie persistent activity is recurrent excitation mediated by intracortical synaptic connections. A recent experimental study revealed that connections between pyramidal cells in prefrontal cortex exhibit various degrees of synaptic depression and facilitation. Here we analyze the effect of synaptic dynamics on the emergence and persistence of attractor states in interconnected neural networks. We show that different combinations of synaptic depression and facilitation result in qualitatively different network dynamics with respect to the emergence of the attractor states. This analysis raises the possibility that the framework of attractor neural networks can be extended to represent time-dependent stimuli.
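One common formulation of such dynamic synapses is the Tsodyks-Markram short-term plasticity model; the sketch below computes per-spike synaptic efficacies for a given spike train (parameter values are illustrative — depression dominates when the recovery time constant is long relative to facilitation, and vice versa):

```python
import numpy as np

def tm_amplitudes(spike_times, U=0.2, tau_f=0.6, tau_d=0.3, A=1.0):
    """Per-spike efficacies of a Tsodyks-Markram synapse (one common variant).
    u: utilization (facilitation), x: available resources (depression)."""
    u, x, last = U, 1.0, None
    amps = []
    for t in spike_times:
        if last is not None:
            dt = t - last
            u = U + (u - U) * np.exp(-dt / tau_f)     # facilitation decays to U
            x = 1.0 + (x - 1.0) * np.exp(-dt / tau_d) # resources recover to 1
        u = u + U * (1.0 - u)    # spike-triggered facilitation jump
        amps.append(A * u * x)   # released fraction sets the PSP amplitude
        x = x * (1.0 - u)        # resources depleted by release
        last = t
    return amps

print(tm_amplitudes(np.arange(10) * 0.05))  # response to a 20 Hz train
```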

17.
18.
The ability to simultaneously record from large numbers of neurons in behaving animals has ushered in a new era for the study of the neural circuit mechanisms underlying cognitive functions. One promising approach to uncovering the dynamical and computational principles governing population responses is to analyze model recurrent neural networks (RNNs) that have been optimized to perform the same tasks as behaving animals. Because the optimization of network parameters specifies the desired output but not the manner in which to achieve this output, “trained” networks serve as a source of mechanistic hypotheses and a testing ground for data analyses that link neural computation to behavior. Complete access to the activity and connectivity of the circuit, and the ability to manipulate them arbitrarily, make trained networks a convenient proxy for biological circuits and a valuable platform for theoretical investigation. However, existing RNNs lack basic biological features such as the distinction between excitatory and inhibitory units (Dale’s principle), which are essential if RNNs are to provide insights into the operation of biological circuits. Moreover, trained networks can achieve the same behavioral performance but differ substantially in their structure and dynamics, highlighting the need for a simple and flexible framework for the exploratory training of RNNs. Here, we describe a framework for gradient descent-based training of excitatory-inhibitory RNNs that can incorporate a variety of biological knowledge. We provide an implementation based on the machine learning library Theano, whose automatic differentiation capabilities facilitate modifications and extensions. We validate this framework by applying it to well-known experimental paradigms such as perceptual decision-making, context-dependent integration, multisensory integration, parametric working memory, and motor sequence generation. Our results demonstrate the wide range of neural activity patterns and behavior that can be modeled, and suggest a unified setting in which diverse cognitive computations and mechanisms can be studied.
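The core device for imposing Dale's principle in such trained networks — optimizing unconstrained parameters that are passed through a rectification and a fixed sign matrix — can be sketched in a few lines (NumPy here rather than Theano, and details are illustrative; see the described framework for the full training setup):

```python
import numpy as np

rng = np.random.default_rng(6)
N, n_exc = 100, 80                                   # 80% excitatory, 20% inhibitory
D = np.diag([1.0] * n_exc + [-1.0] * (N - n_exc))    # fixed sign per presynaptic unit

W_free = rng.normal(0.0, 1.0 / np.sqrt(N), (N, N))   # unconstrained trainable params

def effective_weights(W_free):
    """Dale's principle: gradients act on W_free, but the network uses
    relu(W_free) @ D, so each column (one presynaptic unit) keeps a fixed sign."""
    return np.maximum(W_free, 0.0) @ D

def euler_step(x, ext, tau=0.1, dt=0.01):
    """One step of a standard continuous-time rate RNN: tau dx/dt = -x + W r + ext,
    with threshold-linear rates r = max(x, 0)."""
    r = np.maximum(x, 0.0)
    return x + dt / tau * (-x + effective_weights(W_free) @ r + ext)
```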

19.
In standard attractor neural network models, specific patterns of activity are stored in the synaptic matrix, so that they become fixed point attractors of the network dynamics. The storage capacity of such networks has been quantified in two ways: the maximal number of patterns that can be stored, and the stored information measured in bits per synapse. In this paper, we compute both quantities in fully connected networks of N binary neurons with binary synapses, storing patterns with coding level f, in the large-N and sparse-coding limits (N → ∞, f → 0). We also derive finite-size corrections that accurately reproduce the results of simulations in networks of tens of thousands of neurons. These methods are applied to three different scenarios: (1) the classic Willshaw model, (2) networks with stochastic learning in which patterns are shown only once (one shot learning), (3) networks with stochastic learning in which patterns are shown multiple times. The storage capacities are optimized over network parameters, which allows us to compare the performance of the different models. We show that finite-size effects strongly reduce the capacity, even for networks of realistic sizes. We discuss the implications of these results for memory storage in the hippocampus and cerebral cortex.
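A minimal Willshaw-model sketch makes the setup concrete (sizes and coding level are illustrative; in the optimal sparse limit the classic Willshaw capacity is ln 2 ≈ 0.69 bits per synapse):

```python
import numpy as np

rng = np.random.default_rng(7)
N, f, P = 1000, 0.02, 200                 # neurons, coding level, stored patterns
M = int(f * N)                            # active units per pattern

patterns = np.zeros((P, N), dtype=bool)
for p in patterns:
    p[rng.choice(N, M, replace=False)] = True

# Willshaw storage: a binary synapse switches on iff some pattern coactivates the pair
W = np.zeros((N, N), dtype=bool)
for p in patterns:
    W |= np.outer(p, p)

# recall from a half cue, firing a unit iff it connects to every active cue unit
cue = patterns[0].copy()
cue[np.flatnonzero(cue)[: M // 2]] = False
h = W.astype(int) @ cue.astype(int)       # dendritic sums
out = h >= cue.sum()
print("fraction of pattern recovered:", (out & patterns[0]).sum() / M)
print("spurious active units:", (out & ~patterns[0]).sum())
```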

20.
Neurons display a high degree of variability and diversity in the expression and regulation of their voltage-dependent ionic channels. Under low levels of synaptic background activity, a number of physiologically distinct cell types can be identified in most brain areas that display different responses to standard forms of intracellular current stimulation. Nevertheless, it is not well understood how biophysically different neurons process synaptic inputs in natural conditions, i.e., when experiencing intense synaptic bombardment in vivo. While distinct cell types might process synaptic inputs into different patterns of action potentials representing specific “motifs” of network activity, standard methods of electrophysiology are not well suited to resolve such questions. In the current paper we performed dynamic clamp experiments with simulated synaptic inputs that were presented to three types of neurons in the juxtacapsular bed nucleus of stria terminalis (jcBNST) of the rat. Our analysis of the temporal structure of firing showed that the three types of jcBNST neurons did not produce qualitatively different spike responses under identical patterns of input. However, we observed consistent, cell type dependent variations in the fine structure of firing, at the level of single spikes. In the millisecond-resolution structure of firing we found a high degree of diversity across the entire spectrum of neurons irrespective of their type. Additionally, we identified a new cell type with intrinsic oscillatory properties that produced a rhythmic and regular firing under synaptic stimulation that distinguishes it from the previously described jcBNST cell types. Our findings suggest a sophisticated, cell type dependent regulation of spike dynamics of neurons when experiencing a complex synaptic background. The high degree of their dynamical diversity has implications for their cooperative dynamics and synchronization.
