Similar Documents
20 similar documents found (search time: 421 ms)
1.
It has recently been shown that networks of spiking neurons with noise can emulate simple forms of probabilistic inference through “neural sampling”, i.e., by treating spikes as samples from a probability distribution of network states that is encoded in the network. Deficiencies of the existing model are its reliance on single neurons for sampling from each random variable, and the resulting limitation in representing quickly varying probabilistic information. We show that both deficiencies can be overcome by moving to a biologically more realistic encoding of each salient random variable through the stochastic firing activity of an ensemble of neurons. The resulting model demonstrates that networks of spiking neurons with noise can easily track and carry out basic computational operations on rapidly varying probability distributions, such as the odds of getting rewarded for a specific behavior. We demonstrate the viability of this new approach towards neural coding and computation, which makes use of the inherent parallelism of generic neural circuits, by showing that this model can explain experimentally observed firing activity of cortical neurons for a variety of tasks that require rapid temporal integration of sensory information.
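The ensemble-coding idea in this abstract can be illustrated with a minimal sketch: if each neuron in an ensemble fires within a short time bin independently with the encoded probability, then the fraction of active neurons is an instantaneous, parallel read-out of that probability. The function name and all parameters below are hypothetical illustrations, not taken from the paper.

```python
import random

def ensemble_readout(p_true, n_neurons=200, seed=0):
    """Estimate an encoded probability from one time bin of ensemble activity.

    Hypothetical encoding: each neuron fires in the bin independently with
    probability p_true, so the fraction of active neurons estimates p_true
    from a single bin, with no averaging over time.
    """
    rng = random.Random(seed)
    spikes = [rng.random() < p_true for _ in range(n_neurons)]
    return sum(spikes) / n_neurons

# Read out a rapidly varying probability from a single snapshot of activity.
estimate = ensemble_readout(0.3)
```

Because the estimate comes from one bin rather than a spike count accumulated over time, such an ensemble code can in principle track probabilities that change on fast timescales.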

2.
Collective rhythmic dynamics from neurons is vital for cognitive functions such as memory formation, but how neurons self-organize to produce such activity is not well understood. Attractor-based computational models have been successfully implemented as a theoretical framework for memory storage in networks of neurons. Additionally, activity-dependent modification of synaptic transmission is thought to be the physiological basis of learning and memory. The goal of this study is to demonstrate that a pharmacological treatment shown to increase synaptic strength in in vitro networks of hippocampal neurons produces dynamics that follow the postulates of attractor models. We use a grid of extracellular electrodes to study changes in network activity after this perturbation and show that there is a persistent increase in overall spiking and bursting activity after treatment. This increase in activity appears to recruit more “errant” spikes into bursts. Phase plots indicate a conserved activity pattern, suggesting that a synaptic potentiation perturbation to the attractor leaves it unchanged. Lastly, we construct a computational model to demonstrate that these synaptic perturbations can account for the dynamical changes seen within the network.

3.
Brain networks store new memories using functional and structural synaptic plasticity. Memory formation is generally attributed to Hebbian plasticity, while homeostatic plasticity is thought to have an ancillary role in stabilizing network dynamics. Here we report that homeostatic plasticity alone can also lead to the formation of stable memories. We analyze this phenomenon using a new theory of network remodeling, combined with numerical simulations of recurrent spiking neural networks that exhibit structural plasticity based on firing rate homeostasis. These networks are able to store repeatedly presented patterns and recall them upon the presentation of incomplete cues. Storage is fast, governed by the homeostatic drift. In contrast, forgetting is slow, driven by a diffusion process. Joint stimulation of neurons induces the growth of associative connections between them, leading to the formation of memory engrams. These memories are stored in a distributed fashion throughout the connectivity matrix, and individual synaptic connections have only a small influence. Although memory-specific connections increase in number, the total numbers of inputs and outputs of neurons undergo only small changes during stimulation. We find that homeostatic structural plasticity induces a specific type of “silent memories”, different from conventional attractor states.

4.
Zero-lag synchronization between distant cortical areas has been observed in a diversity of experimental data sets and between many different regions of the brain. Several computational mechanisms have been proposed to account for such isochronous synchronization in the presence of long conduction delays: Of these, the phenomenon of “dynamical relaying” – a mechanism that relies on a specific network motif – has proven to be the most robust with respect to parameter mismatch and system noise. Surprisingly, despite a contrary belief in the community, the common driving motif is an unreliable means of establishing zero-lag synchrony. Although dynamical relaying has been validated in empirical and computational studies, the deeper dynamical mechanisms and comparison to dynamics on other motifs is lacking. By systematically comparing synchronization on a variety of small motifs, we establish that the presence of a single reciprocally connected pair – a “resonance pair” – plays a crucial role in disambiguating those motifs that foster zero-lag synchrony in the presence of conduction delays (such as dynamical relaying) from those that do not (such as the common driving triad). Remarkably, minor structural changes to the common driving motif that incorporate a reciprocal pair recover robust zero-lag synchrony. The findings are observed in computational models of spiking neurons, populations of spiking neurons and neural mass models, and arise whether the oscillatory systems are periodic, chaotic, noise-free or driven by stochastic inputs. The influence of the resonance pair is also robust to parameter mismatch and asymmetrical time delays amongst the elements of the motif. We call this manner of facilitating zero-lag synchrony resonance-induced synchronization, outline the conditions for its occurrence, and propose that it may be a general mechanism to promote zero-lag synchrony in the brain.

5.
Autapses are connections between a neuron and itself. These connections are morphologically similar to “normal” synapses between two different neurons, and thus were long thought to have similar properties of synaptic transmission. However, this has not been directly tested. Here, using a micro-island culture assay in which we can define the number of interconnected cells, we directly compared synaptic transmission in excitatory autapses and in two-neuron micronetworks consisting of two excitatory neurons, in which a neuron is connected to one other neuron and to itself. We discovered that autaptic synapses are optimized for maximal transmission, exhibiting enhanced EPSC amplitude, charge, and readily releasable pool (RRP) size compared to interneuronal synapses. However, autapses are deficient in several aspects of synaptic plasticity. Short-term potentiation only became apparent when a neuron was connected to another neuron. This acquisition of plasticity required only reciprocal innervation with one other neuron; micronetworks consisting of just two interconnected neurons exhibited enhanced short-term plasticity in terms of paired-pulse ratio (PPR) and release probability (Pr), compared to autapses. Interestingly, when a neuron was connected to another neuron, not only the interneuronal synapses but also the autaptic synapses on itself exhibited a trend toward enhanced short-term plasticity in terms of PPR and Pr. Thus neurons can distinguish whether they are connected via “self” or “non-self” synapses and have the ability to adjust their plasticity parameters when connected to other neurons.

6.
Leaky integrate-and-fire (LIF) network models are commonly used to study how the spiking dynamics of neural networks changes with stimuli, tasks or dynamic network states. However, neurophysiological studies in vivo often rather measure the mass activity of neuronal microcircuits with the local field potential (LFP). Given that LFPs are generated by spatially separated currents across the neuronal membrane, they cannot be computed directly from quantities defined in models of point-like LIF neurons. Here, we explore the best approximation for predicting the LFP based on standard output from point-neuron LIF networks. To search for this best “LFP proxy”, we compared LFP predictions from candidate proxies based on LIF network output (e.g., firing rates, membrane potentials, synaptic currents) with “ground-truth” LFP obtained when the LIF network synaptic input currents were injected into an analogous three-dimensional (3D) network model of multi-compartmental neurons with realistic morphologies and spatial distributions of somata and synapses. We found that a specific fixed linear combination of the LIF synaptic currents provided an accurate LFP proxy, accounting for most of the variance of the LFP time course observed in the 3D network for all recording locations. This proxy performed well over a broad set of conditions, including substantial variations of the neuronal morphologies. Our results provide a simple formula for estimating the time course of the LFP from LIF network simulations in cases where a single pyramidal population dominates the LFP generation, and thereby facilitate quantitative comparison between computational models and experimental LFP recordings in vivo.
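A minimal sketch of the "fixed linear combination" idea: combine the LIF network's summed excitatory and inhibitory synaptic currents with a fixed weight and a fixed delay on the inhibitory part. The coefficient and delay below are illustrative placeholders, not the fitted values reported in the study.

```python
import math

def lfp_proxy(exc_current, inh_current, alpha=1.65, delay_steps=6):
    """Weighted-sum LFP proxy from LIF synaptic current time series.

    Sketch only: proxy(t) = exc(t) - alpha * inh(t - delay); alpha and
    delay_steps are illustrative, not the paper's fitted parameters.
    """
    proxy = []
    for t in range(len(exc_current)):
        inh = inh_current[t - delay_steps] if t >= delay_steps else 0.0
        proxy.append(exc_current[t] - alpha * inh)
    return proxy

# Toy currents: a brief excitatory volley followed by delayed inhibition.
exc = [math.exp(-((t - 20) / 5.0) ** 2) for t in range(100)]
inh = [0.5 * math.exp(-((t - 26) / 8.0) ** 2) for t in range(100)]
lfp = lfp_proxy(exc, inh)
```

The point of such a proxy is that it needs only quantities a point-neuron simulation already produces, so no multi-compartmental model is required at simulation time.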

7.
Spike-timing-dependent plasticity (STDP) is believed to structure neuronal networks by slowly changing the strengths (or weights) of the synaptic connections between neurons depending upon their spiking activity, which in turn modifies the neuronal firing dynamics. In this paper, we investigate the change in synaptic weights induced by STDP in a recurrently connected network in which the input weights are plastic but the recurrent weights are fixed. The inputs are divided into two pools with identical constant firing rates and equal within-pool spike-time correlations, but with no between-pool correlations. Our analysis uses the Poisson neuron model in order to predict the evolution of the input synaptic weights and focuses on the asymptotic weight distribution that emerges due to STDP. The learning dynamics induces a symmetry breaking for the individual neurons, namely for sufficiently strong within-pool spike-time correlation each neuron specializes to one of the input pools. We show that the presence of fixed excitatory recurrent connections between neurons induces a group symmetry-breaking effect, in which neurons tend to specialize to the same input pool. Consequently STDP generates a functional structure on the input connections of the network.
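A generic pair-based additive STDP rule of the kind analyzed in such studies can be sketched as follows; the amplitudes and time constant are illustrative textbook values, not the paper's, and the abstract's Poisson-neuron analysis is not reproduced here.

```python
import math

def stdp_dw(dt, a_plus=0.005, a_minus=0.00525, tau=20.0):
    """Additive pair-based STDP kernel (illustrative parameters).

    dt = t_post - t_pre in ms: pre-before-post (dt > 0) potentiates,
    post-before-pre (dt < 0) depresses, both decaying exponentially.
    """
    if dt > 0:
        return a_plus * math.exp(-dt / tau)
    if dt < 0:
        return -a_minus * math.exp(dt / tau)
    return 0.0

def apply_stdp(w, pre_spikes, post_spikes, w_min=0.0, w_max=1.0):
    """Accumulate all-pairs STDP updates and clip to [w_min, w_max]."""
    for t_pre in pre_spikes:
        for t_post in post_spikes:
            w += stdp_dw(t_post - t_pre)
    return min(max(w, w_min), w_max)

# Consistent pre-before-post timing strengthens the synapse; the reverse
# timing weakens it (spike times in ms).
w_up = apply_stdp(0.5, pre_spikes=[10, 50, 90], post_spikes=[15, 55, 95])
w_down = apply_stdp(0.5, pre_spikes=[15, 55, 95], post_spikes=[10, 50, 90])
```

With correlated input pools, repeated application of such a rule is what drives the weight specialization the abstract describes.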

8.
We investigate the efficient transmission and processing of weak, subthreshold signals in a realistic neural medium in the presence of different levels of underlying noise. Assuming Hebbian weights for the maximal synaptic conductances—which naturally balance the network's excitatory and inhibitory synapses—and considering short-term synaptic plasticity affecting these conductances, we found several dynamic phases in the system. These include a memory phase in which a population of neurons remains synchronized, an oscillatory phase in which transitions between different synchronized populations appear, and an asynchronous or noisy phase. When a weak stimulus is applied to each neuron while the level of noise in the medium is increased, we found efficient transmission of the stimuli around the transition and critical points separating the different phases, at well-defined levels of stochasticity in the system. We showed that this intriguing phenomenon is quite robust, as it occurs in different situations including several types of synaptic plasticity, different types and numbers of stored patterns, and diverse network topologies, namely diluted networks and complex topologies such as scale-free and small-world networks. We conclude that the robustness of the phenomenon in different realistic scenarios, including spiking neurons, short-term synaptic plasticity and complex network topologies, makes it very likely that it could also occur in actual neural systems, as recent psychophysical experiments suggest.

9.
Recent studies have emphasized the importance of multiplex networks – interdependent networks with shared nodes and different types of connections – in systems primarily outside of neuroscience. Though the multiplex properties of networks are frequently not considered, most networks are actually multiplex networks, and the multiplex-specific features of networks can greatly affect network behavior (e.g. fault tolerance). Thus, the study of networks of neurons could potentially be greatly enhanced using a multiplex perspective. Given the wide range of temporally dependent rhythms and phenomena present in neural systems, we chose to examine multiplex networks of individual neurons with time-scale-dependent connections. To study these networks, we used transfer entropy – an information theoretic quantity that can be used to measure linear and nonlinear interactions – to systematically measure the connectivity between individual neurons at different time scales in cortical and hippocampal slice cultures. We recorded the spiking activity of almost 12,000 neurons across 60 tissue samples using a 512-electrode array with 60 micrometer inter-electrode spacing and 50 microsecond temporal resolution. To the best of our knowledge, this preparation and recording method combines a number of recorded neurons and temporal and spatial recording resolutions superior to those of any currently available in vivo system. We found that highly connected neurons (“hubs”) were localized to certain time scales, which, we hypothesize, increases the fault tolerance of the network. Conversely, a large proportion of non-hub neurons were not localized to certain time scales. In addition, we found that long and short time scale connectivity was uncorrelated. Finally, we found that long time scale networks were significantly less modular and more disassortative than short time scale networks in both tissue types.
As far as we are aware, this analysis represents the first systematic study of temporally dependent multiplex networks among individual neurons.
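Transfer entropy, the connectivity measure used in this study, can be sketched for binary spike trains with a history length of one bin. This is a bare plug-in estimator for illustration; real analyses like the one described require longer histories, bias correction, and significance testing against shuffled surrogates.

```python
import math
from collections import Counter

def transfer_entropy(source, target, lag=1):
    """TE(source -> target) in bits for binary spike trains, history 1.

    Plug-in estimator of
    sum p(y_t, y_{t-lag}, x_{t-lag}) *
        log2[ p(y_t | y_{t-lag}, x_{t-lag}) / p(y_t | y_{t-lag}) ].
    """
    triples = Counter()
    for t in range(lag, len(target)):
        triples[(target[t], target[t - lag], source[t - lag])] += 1
    n = sum(triples.values())
    # Marginal counts needed for the two conditional probabilities.
    pair_yy, pair_yx, single_y = Counter(), Counter(), Counter()
    for (yt, yp, xp), c in triples.items():
        pair_yy[(yt, yp)] += c
        pair_yx[(yp, xp)] += c
        single_y[yp] += c
    te = 0.0
    for (yt, yp, xp), c in triples.items():
        p_joint = c / n
        p_cond_full = c / pair_yx[(yp, xp)]
        p_cond_hist = pair_yy[(yt, yp)] / single_y[yp]
        te += p_joint * math.log2(p_cond_full / p_cond_hist)
    return te

# Target copies the source with a one-step delay, so information flows
# forward: TE(src -> tgt) should clearly exceed TE(tgt -> src).
src = [1, 0, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 1, 0, 0] * 10
tgt = [0] + src[:-1]
te_forward = transfer_entropy(src, tgt)
te_backward = transfer_entropy(tgt, src)
```

Computing this quantity at different bin widths (lags) is what yields the time-scale-dependent connection layers of the multiplex network.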

10.
Episodic memory depends on interactions between the hippocampus and interconnected neocortical regions. Here, using data-driven analyses of resting-state functional magnetic resonance imaging (fMRI) data, we identified the networks that interact with the hippocampus—the default mode network (DMN) and a “medial temporal network” (MTN) that included regions in the medial temporal lobe (MTL) and precuneus. We observed that the MTN plays a critical role in connecting the visual network to the DMN and hippocampus. The DMN could be further divided into 3 subnetworks: a “posterior medial” (PM) subnetwork comprised of posterior cingulate and lateral parietal cortices; an “anterior temporal” (AT) subnetwork comprised of regions in the temporopolar and dorsomedial prefrontal cortex; and a “medial prefrontal” (MP) subnetwork comprised of regions primarily in the medial prefrontal cortex (mPFC). These networks vary in their functional connectivity (FC) along the hippocampal long axis and represent different kinds of information during memory-guided decision-making. Finally, a Neurosynth meta-analysis of fMRI studies suggests new hypotheses regarding the functions of the MTN and DMN subnetworks, providing a framework to guide future research on the neural architecture of episodic memory.

Episodic memory depends on interactions between the hippocampus and interconnected neocortical regions. This study uses network analyses of intrinsic brain networks at rest to identify and characterize brain networks that interact with the hippocampus and have distinct functions during memory-guided decision making.

11.
Stochastic resonance is said to be observed when increases in levels of unpredictable fluctuations—e.g., random noise—cause an increase in a metric of the quality of signal transmission or detection performance, rather than a decrease. This counterintuitive effect relies on system nonlinearities and on some parameter ranges being “suboptimal”. Stochastic resonance has been observed, quantified, and described in a plethora of physical and biological systems, including neurons. Being a topic of widespread multidisciplinary interest, the definition of stochastic resonance has evolved significantly over the last decade or so, leading to a number of debates, misunderstandings, and controversies. Perhaps the most important debate is whether the brain has evolved to utilize random noise in vivo, as part of the “neural code”. Surprisingly, this debate has been for the most part ignored by neuroscientists, despite much indirect evidence of a positive role for noise in the brain. We explore some of the reasons for this and argue why it would be more surprising if the brain did not exploit randomness provided by noise—via stochastic resonance or otherwise—than if it did. We also challenge neuroscientists and biologists, both computational and experimental, to embrace a very broad definition of stochastic resonance in terms of signal-processing “noise benefits”, and to devise experiments aimed at verifying that random variability can play a functional role in the brain, nervous system, or other areas of biology.  
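The basic "noise benefit" can be demonstrated with a textbook toy model: a subthreshold sinusoid crosses a hard threshold only when noise is added. Full stochastic resonance additionally shows a non-monotonic dependence of detection quality on noise level, which this sketch omits; all parameters are illustrative.

```python
import math
import random

def detection_count(noise_sd, n_steps=2000, threshold=1.0, seed=1):
    """Count threshold crossings that coincide with signal peaks.

    A subthreshold sinusoid (amplitude 0.5 < threshold 1.0) is passed
    through a hard threshold; Gaussian noise of SD noise_sd is added.
    Crossings are counted only near signal peaks (signal > 0.25) so that
    they reflect signal-correlated detections, not arbitrary noise events.
    """
    rng = random.Random(seed)
    hits = 0
    for t in range(n_steps):
        signal = 0.5 * math.sin(2 * math.pi * t / 100.0)
        x = signal + rng.gauss(0.0, noise_sd)
        if x > threshold and signal > 0.25:
            hits += 1
    return hits

no_noise = detection_count(0.0)    # subthreshold: threshold never crossed
some_noise = detection_count(0.4)  # noise lifts the signal over threshold
```

Without noise the detector is silent; with moderate noise the crossings cluster around the signal peaks, which is the counterintuitive benefit the abstract discusses.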

12.
The dynamics of local cortical networks are irregular, but correlated. Dynamic excitatory–inhibitory balance is a plausible mechanism that generates such irregular activity, but it remains unclear how balance is achieved and maintained in plastic neural networks. In particular, it is not fully understood how plasticity-induced changes in the network affect balance, and in turn, how correlated, balanced activity impacts learning. How do the dynamics of balanced networks change under different plasticity rules? How does correlated spiking activity in recurrent networks change the evolution of weights, their eventual magnitude, and structure across the network? To address these questions, we develop a theory of spike-timing-dependent plasticity in balanced networks. We show that balance can be attained and maintained under plasticity-induced weight changes. We find that correlations in the input mildly affect the evolution of synaptic weights. Under certain plasticity rules, we find an emergence of correlations between firing rates and synaptic weights. Under these rules, synaptic weights converge to a stable manifold in weight space with their final configuration dependent on the initial state of the network. Lastly, we show that our framework can also describe the dynamics of plastic balanced networks when subsets of neurons receive targeted optogenetic input.

13.
It has been a long-standing goal in systems biology to find relations between the topological properties and functional features of protein networks. However, most of the focus in network studies has been on highly connected proteins (“hubs”). As a complementary notion, it is possible to define bottlenecks as proteins with a high betweenness centrality (i.e., network nodes that have many “shortest paths” going through them, analogous to major bridges and tunnels on a highway map). Bottlenecks are, in fact, key connector proteins with surprising functional and dynamic properties. In particular, they are more likely to be essential proteins. In fact, in regulatory and other directed networks, betweenness (i.e., “bottleneck-ness”) is a much more significant indicator of essentiality than degree (i.e., “hub-ness”). Furthermore, bottlenecks correspond to the dynamic components of the interaction network—they are significantly less well coexpressed with their neighbors than nonbottlenecks, implying that expression dynamics is wired into the network topology.
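Betweenness centrality, the quantity that defines "bottlenecks" here, can be computed with Brandes' algorithm for unweighted graphs. The toy graph below is illustrative: a node of low degree that bridges two clusters gets the highest betweenness even though it is not a hub.

```python
from collections import deque

def betweenness(adj):
    """Brandes' algorithm for unweighted, undirected graphs.

    adj maps each node to a list of neighbours; returns node -> score.
    """
    bc = {v: 0.0 for v in adj}
    for s in adj:
        # BFS from s, counting shortest paths (sigma) and predecessors.
        sigma = {v: 0 for v in adj}; sigma[s] = 1
        dist = {v: -1 for v in adj}; dist[s] = 0
        preds = {v: [] for v in adj}
        order = []
        q = deque([s])
        while q:
            v = q.popleft()
            order.append(v)
            for w in adj[v]:
                if dist[w] < 0:
                    dist[w] = dist[v] + 1
                    q.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]
                    preds[w].append(v)
        # Back-propagate path dependencies from the leaves inward.
        delta = {v: 0.0 for v in adj}
        for w in reversed(order):
            for v in preds[w]:
                delta[v] += sigma[v] / sigma[w] * (1 + delta[w])
            if w != s:
                bc[w] += delta[w]
    # Each undirected pair is counted twice, so halve the totals.
    return {v: c / 2 for v, c in bc.items()}

# Two triangles joined by a low-degree "bottleneck" node B.
adj = {"A1": ["A2", "A3", "B"], "A2": ["A1", "A3"], "A3": ["A1", "A2"],
       "B": ["A1", "C1"],
       "C1": ["C2", "C3", "B"], "C2": ["C1", "C3"], "C3": ["C1", "C2"]}
scores = betweenness(adj)
bottleneck = max(scores, key=scores.get)
```

Here `B` has degree 2 (no hub by degree) yet the highest betweenness, which is exactly the hub-versus-bottleneck distinction the abstract draws.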

14.
Graph representations of brain connectivity have attracted a lot of recent interest, but existing methods for dividing such graphs into connected subnetworks have a number of limitations in the context of neuroimaging. This is an important problem because most cognitive functions would be expected to involve some but not all brain regions. In this paper we outline a simple approach for decomposing graphs, which may be based on any measure of interregional association, into coherent “principal networks”. The technique is based on an eigendecomposition of the association matrix, and is closely related to principal components analysis. We demonstrate the technique using cortical thickness and diffusion tractography data, showing that the subnetworks which emerge are stable, meaningful and reproducible. Graph-theoretic measures of network cost and efficiency may be calculated separately for each principal network. Unlike some other approaches, all available connectivity information is taken into account, and vertices may appear in none or several of the subnetworks. Subject-by-subject “scores” for each principal network may also be obtained, under certain circumstances, and related to demographic or cognitive variables of interest.
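The eigendecomposition underlying "principal networks" can be sketched with power iteration on a toy association matrix: the leading eigenvector's entries are the loadings of each region on the first principal network. This only recovers the dominant component; the actual method derives several subnetworks from multiple eigenvectors, and the matrix below is purely illustrative.

```python
def leading_eigenvector(A, iters=500):
    """Power iteration for the leading eigenvector of a symmetric matrix.

    A is a list-of-lists association matrix; returns a unit-norm vector
    of loadings, one per region.
    """
    n = len(A)
    v = [1.0 / n] * n
    for _ in range(iters):
        w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

# Toy association matrix: regions 0-2 strongly inter-associated,
# regions 3-4 only weakly coupled to them.
A = [[1.0, 0.8, 0.7, 0.1, 0.0],
     [0.8, 1.0, 0.9, 0.0, 0.1],
     [0.7, 0.9, 1.0, 0.1, 0.0],
     [0.1, 0.0, 0.1, 1.0, 0.2],
     [0.0, 0.1, 0.0, 0.2, 1.0]]
loadings = leading_eigenvector(A)
```

The tightly associated block dominates the loadings, so thresholding them identifies the first coherent subnetwork.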

15.
Inferring connectivity in neuronal networks remains a key challenge in statistical neuroscience. The “common input” problem presents a major roadblock: it is difficult to reliably distinguish causal connections between pairs of observed neurons from correlations induced by common input from unobserved neurons. Available techniques allow us to simultaneously record, with sufficient temporal resolution, only a small fraction of the network. Consequently, naive connectivity estimators that neglect these common input effects are highly biased. This work proposes a “shotgun” experimental design, in which we observe multiple sub-networks briefly, in a serial manner. Thus, while the full network cannot be observed simultaneously at any given time, we may be able to observe much larger subsets of the network over the course of the entire experiment, thus ameliorating the common input problem. Using a generalized linear model for a spiking recurrent neural network, we develop a scalable approximate expected log-likelihood-based Bayesian method to perform network inference given this type of data, in which only a small fraction of the network is observed in each time bin. We demonstrate in simulation that the shotgun experimental design can eliminate the biases induced by common input effects. Networks with thousands of neurons, in which only a small fraction of the neurons is observed in each time bin, can be quickly and accurately estimated, achieving an orders-of-magnitude speedup over previous approaches.

16.
Structural inhomogeneities in synaptic efficacies have a strong impact on population response dynamics of cortical networks and are believed to play an important role in their functioning. However, little is known about how such inhomogeneities could evolve by means of synaptic plasticity. Here we present an adaptive model of a balanced neuronal network that combines two different types of plasticity, STDP and synaptic scaling. The plasticity rules yield both long-tailed distributions of synaptic weights and firing rates. Simultaneously, a highly connected subnetwork of driver neurons with strong synapses emerges. Coincident spiking activity of several driver cells can evoke population bursts and driver cells have similar dynamical properties as leader neurons found experimentally. Our model allows us to observe the delicate interplay between structural and dynamical properties of the emergent inhomogeneities. It is simple, robust to parameter changes and able to explain a multitude of different experimental findings in one basic network.

17.
We studied the detailed structure of a neuronal network model in which spontaneous spike activity is optimized to match experimental data, and we discuss the reliability of the optimized spike transmission. Two stochastic properties of the spontaneous activity were calculated: the spike-count rate and the synchrony size. The synchrony size, expected to be an important factor for the optimization of spike transmission in the network, represents the percentage of observed coactive neurons within a time bin, whose probability approximately follows a power law. We systematically investigated how these stochastic properties could be matched to those calculated from the experimental data in terms of the log-normally distributed synaptic weights between excitatory and inhibitory neurons and the synaptic background activity induced by the input current noise in the network model. To ensure reliably optimized spike transmission, the synchrony size as well as the spike-count rate were optimized simultaneously. This required suitably balanced log-normal distributions of synaptic weights between excitatory and inhibitory neurons and appropriately amplified synaptic background activity. Our results suggest that inhibitory neurons with a hub-like structure, driven by intensive feedback from excitatory neurons, are a key factor in the simultaneous optimization of the spike-count rate and synchrony size, regardless of the different spiking types of excitatory and inhibitory neurons.

18.
DNA strand displacement technology performs well in sensing and programming DNA segments. In this work, we construct DNA molecular systems based on DNA strand displacement that perform the computation of logic gates. Specifically, a class of so-called “DNA neurons” is achieved, in which information is encoded and delivered by DNA molecules in a manner inspired by how biological neurons encode information. The “DNA neuron” is bistable: it can sense DNA molecules as input signals and release “negative” or “positive” DNA signal molecules. We design intelligent DNA molecular systems, constructed by cascading particularly organized “DNA neurons”, that automatically perform logic computation, including AND, OR, and XOR gates. Both simulation results using Visual DSD (DNA strand displacement) software and experimental results were obtained, showing that the proposed systems can detect DNA signals with high sensitivity and accuracy; moreover, the systems can process input signals automatically with complex nonlinear logic. The method proposed in this work may provide a new way to construct a sensitive molecular signal detection system with neuron-like spiking behavior in vitro, and could be used to develop intelligent molecular processing systems in vivo.

19.
Understanding how neurons transform fluctuations of membrane potential, reflecting input activity, into spike responses, which communicate the ultimate results of single-neuron computation, is one of the central challenges for cellular and computational neuroscience. To study this transformation under controlled conditions, previous work has used a signal-immersed-in-noise paradigm in which neurons are injected with a current consisting of fluctuating noise that mimics ongoing synaptic activity and a systematic signal whose transmission is studied. One limitation of this established paradigm is that it is designed to examine the encoding of only one signal under a specific, repeated condition. As a result, characterizing how encoding depends on neuronal properties, signal parameters, and the interaction of multiple inputs is cumbersome. Here we introduce a novel fully-defined signal mixture paradigm, which allows us to overcome these problems. In this paradigm, the current for injection is synthesized as a sum of artificial postsynaptic currents (PSCs) resulting from the activity of a large population of model presynaptic neurons. PSCs from any presynaptic neuron(s) can now be considered as the “signal”, while the sum of all other inputs is considered as “noise”. This allows us to study the encoding of a large number of different signals in a single experiment, thus dramatically increasing the throughput of data acquisition. Using this novel paradigm, we characterize the detection of excitatory and inhibitory PSCs from neuronal spike responses over a wide range of amplitudes and firing rates. We show that, for moderately sized neuronal populations, the detectability of individual inputs is higher for excitatory than for inhibitory inputs during the 2–5 ms following PSC onset, but becomes comparable after 7–8 ms. This transient imbalance of sensitivity in favor of excitation may enhance propagation of balanced signals through neuronal networks.
Finally, we discuss several open questions that this novel high-throughput paradigm may address.

20.
Random network models have been a popular tool for investigating cortical network dynamics. On the scale of roughly a cubic millimeter of cortex, containing about 100,000 neurons, cortical anatomy suggests a more realistic architecture. In this locally connected random network, the connection probability decreases in a Gaussian fashion with the distance between neurons. Here we present three main results from a simulation study of the activity dynamics in such networks. First, for a broad range of parameters these dynamics exhibit a stationary state of asynchronous network activity with irregular single-neuron spiking. This state can be used as a realistic model of ongoing network activity. Parametric dependence of this state and the nature of the network dynamics in other regimes are described. Second, a synchronous excitatory stimulus to a fraction of the neurons results in a strong activity response that easily dominates the network dynamics. And third, due to that activity response an embedding of a divergent-convergent feed-forward subnetwork (as in synfire chains) does not naturally lead to a stable propagation of synchronous activity in the subnetwork; this is in contrast to our earlier findings in isolated subnetworks of that type. Possible mechanisms for stabilizing the interplay of volleys of synchronous spikes and network dynamics by specific learning rules or generalizations of the subnetworks are discussed.
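The single-neuron dynamics underlying these network models can be sketched with Euler integration of a leaky integrate-and-fire unit; units and parameters below are arbitrary and illustrative, and the spatial Gaussian connectivity of the full model is not reproduced.

```python
def simulate_lif(i_ext, n_steps=1000, dt=0.1, tau=20.0,
                 v_thresh=1.0, v_reset=0.0):
    """Euler integration of a leaky integrate-and-fire neuron.

    dv/dt = (-v + i_ext) / tau; when v crosses v_thresh a spike time is
    recorded and v is reset. All units are arbitrary.
    """
    v = 0.0
    spikes = []
    for step in range(n_steps):
        v += dt * (-v + i_ext) / tau
        if v >= v_thresh:
            spikes.append(step * dt)
            v = v_reset
    return spikes

sub = simulate_lif(i_ext=0.8)    # drive below threshold: no spikes
supra = simulate_lif(i_ext=1.5)  # drive above threshold: regular firing
```

In the network setting, `i_ext` is replaced by the summed synaptic input from distance-dependent random connections, which is what produces the asynchronous irregular state the abstract describes.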


Copyright © Beijing Qinyun Technology Development Co., Ltd. (京ICP备09084417号)