Similar Documents
20 similar documents found (search time: 78 ms)
1.
2.
The circuitry of cortical networks involves interacting populations of excitatory (E) and inhibitory (I) neurons whose relationships are now known to a large extent. Inputs to E- and I-cells may have their origins in remote or local cortical areas. We consider a rudimentary model involving E- and I-cells. One of our goals is to test an analytic approach to finding firing rates in neural networks without using a diffusion approximation, and to this end we consider in detail networks of excitatory neurons with leaky integrate-and-fire (LIF) dynamics. A simple measure of synchronization, denoted S(q), where q ranges from 0 to 100, is introduced. Fully connected E-networks have a strong tendency to become dominated by synchronously firing groups of cells, except when inputs are relatively weak. We observed random or asynchronous firing in such networks with diverse sets of parameter values. When such firing patterns were found, the analytical approach was often able to accurately predict average neuronal firing rates. We also examined several properties of E-E networks, distinguishing different kinds of firing pattern, including those with silences before or after periods of intense activity and those with periodic synchronization. We investigated the occurrence of synchronized firing with respect to changes in the internal excitatory postsynaptic potential (EPSP) magnitude in a network of 100 neurons with fixed values of the remaining parameters. When the internal EPSP size was less than a certain value, synchronization was absent. The amount of synchronization then increased slowly as the EPSP amplitude increased until, at a particular EPSP size, the amount of synchronization abruptly increased, with S(5) attaining the maximum value of 100%. We also found network frequency transfer characteristics for various network sizes and found a linear dependence of firing frequency over wide ranges of the external afferent frequency, with non-linear effects at lower input frequencies. The theory may also be applied to sparsely connected networks, whose firing behaviour was found to change abruptly as the probability of a connection passed through a critical value. The analytical method was also found to be useful for a feed-forward excitatory network and a network of excitatory and inhibitory neurons.
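The abstract does not spell out the LIF parameters or the exact definition of S(q), so the following is only a minimal sketch of the kind of simulation described: a fully connected excitatory LIF network driven by external Poisson input, with a crude synchrony proxy (the largest fraction of the population firing within any short window). All parameter values and the proxy measure are our own illustrative choices.

```python
import numpy as np

def simulate_e_network(n=100, t_max=1.0, dt=1e-4, tau=0.02, v_th=1.0,
                       ext_rate=800.0, ext_epsp=0.1, int_epsp=0.02, seed=0):
    """Fully connected excitatory LIF network; EPSPs are instantaneous voltage jumps."""
    rng = np.random.default_rng(seed)
    v = rng.uniform(0.0, v_th, n)               # random initial membrane potentials
    spikes = []                                 # list of (time, neuron index) pairs
    for step in range(int(t_max / dt)):
        t = step * dt
        v += dt * (-v / tau)                    # leak
        v += ext_epsp * rng.poisson(ext_rate * dt, n)   # external Poisson drive
        fired = np.flatnonzero(v >= v_th)
        if fired.size:
            spikes.extend((t, i) for i in fired)
            v[fired] = 0.0                      # reset
            kick = int_epsp * fired.size        # recurrent EPSPs to all other neurons
            v += kick
            v[fired] -= kick                    # no self-coupling
    return spikes

def synchrony_proxy(spikes, n, window=0.005):
    """Illustrative stand-in for S(q): the largest percentage of the population
    firing within any window of the given length."""
    if not spikes:
        return 0.0
    times = np.sort([t for t, _ in spikes])
    best, j = 0, 0
    for i in range(len(times)):
        while times[i] - times[j] > window:
            j += 1
        best = max(best, i - j + 1)
    return 100.0 * min(best, n) / n

spikes = simulate_e_network()
print("mean rate per neuron (Hz):", len(spikes) / 100 / 1.0)
print("synchrony proxy (%):", synchrony_proxy(spikes, 100))
```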

3.
Spike-timing-dependent plasticity (STDP) is believed to structure neuronal networks by slowly changing the strengths (or weights) of the synaptic connections between neurons depending upon their spiking activity, which in turn modifies the neuronal firing dynamics. In this paper, we investigate the change in synaptic weights induced by STDP in a recurrently connected network in which the input weights are plastic but the recurrent weights are fixed. The inputs are divided into two pools with identical constant firing rates and equal within-pool spike-time correlations, but with no between-pool correlations. Our analysis uses the Poisson neuron model in order to predict the evolution of the input synaptic weights and focuses on the asymptotic weight distribution that emerges due to STDP. The learning dynamics induces a symmetry breaking for the individual neurons, namely for sufficiently strong within-pool spike-time correlation each neuron specializes to one of the input pools. We show that the presence of fixed excitatory recurrent connections between neurons induces a group symmetry-breaking effect, in which neurons tend to specialize to the same input pool. Consequently STDP generates a functional structure on the input connections of the network.
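A minimal single-neuron sketch of the kind of setup described (the paper's recurrent interactions between neurons are omitted): the input weights of a Poisson-like output neuron, fed by two pools of within-pool-correlated Poisson spike trains, are updated by additive pair-based STDP. The correlation construction, the STDP kernel and all constants are illustrative stand-ins, not those used in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# --- illustrative parameters, not those of the paper ---
n_per_pool, dt, t_max = 20, 1e-3, 100.0        # two pools of 20 inputs, 100 s of activity
rate, c = 10.0, 0.3                            # input rate (Hz) and within-pool correlation
a_plus, a_minus = 0.005, 0.00525               # slightly depression-dominated additive STDP
tau_stdp, w_max, gain = 0.02, 1.0, 0.05        # trace time constant, weight bound, output gain

w = rng.uniform(0.3, 0.7, 2 * n_per_pool)      # plastic input weights
x_pre = np.zeros(2 * n_per_pool)               # presynaptic traces
x_post = 0.0                                   # postsynaptic trace
decay = np.exp(-dt / tau_stdp)

for _ in range(int(t_max / dt)):
    spikes = np.zeros(2 * n_per_pool, dtype=bool)
    for p in range(2):                         # correlated pools via a shared 'mother' train
        mother = rng.random() < rate * dt
        copy = rng.random(n_per_pool) < np.sqrt(c)
        indep = rng.random(n_per_pool) < rate * dt * (1 - np.sqrt(c))
        spikes[p * n_per_pool:(p + 1) * n_per_pool] = (mother & copy) | indep
    post = rng.random() < gain * np.dot(w, spikes)   # Poisson-like output neuron
    x_pre = x_pre * decay + spikes
    x_post = x_post * decay + post
    if post:
        w += a_plus * x_pre                    # potentiation: pre shortly before post
    w -= a_minus * x_post * spikes             # depression: post shortly before pre
    np.clip(w, 0.0, w_max, out=w)

print("mean weight, pool 0:", round(float(w[:n_per_pool].mean()), 3))
print("mean weight, pool 1:", round(float(w[n_per_pool:].mean()), 3))
```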

4.
In a feedforward network of integrate-and-fire neurons, where the firing of each layer is synchronous (synfire chain), the final firing state of the network converges to one of two attractor states: either full activation or complete fading of the tailing layers. In this article, we analyze various modes of pattern propagation in a synfire chain with random connection weights and delta-type postsynaptic currents. We predict analytically that when the input is fully synchronized and the network is noise free, varying the characteristics of the weights distribution would result in modes of behavior that are different from those described in the literature. These are convergence to fixed points, limit cycles, multiple periodic, and possibly chaotic dynamics. We checked our analytic results by computer simulation of the network, and showed that the above results can be generalized when the input is asynchronous and neurons are spontaneously active at low rates. Received: 27 July 2001 / Accepted in revised form: 23 October 2001
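For the noise-free, fully synchronized case, the layer-to-layer map can be sketched very simply: with delta-shaped postsynaptic currents, the only quantity that propagates is the number of neurons firing synchronously in each layer, and with independently drawn Gaussian weights that number is obtained by thresholding the summed input weights. The weight statistics and threshold below are illustrative, chosen so that weak volleys fade and strong ones saturate.

```python
import numpy as np

def propagate(m0, layers=50, n=100, mu=0.015, sigma=0.02, theta=1.0, seed=0):
    """Number of synchronously firing neurons per layer of a synfire chain with
    independent Gaussian connection weights and delta-shaped PSCs: a neuron in the
    next layer fires iff the summed weight from the m currently firing neurons
    reaches the threshold theta (noise-free, fully synchronized input)."""
    rng = np.random.default_rng(seed)
    activity, m = [m0], m0
    for _ in range(layers):
        if m == 0:
            activity.append(0)
            continue
        drive = rng.normal(mu, sigma, (n, m)).sum(axis=1)   # summed input weight per neuron
        m = int(np.count_nonzero(drive >= theta))
        activity.append(m)
    return activity

for m0 in (20, 60, 100):
    print(f"initial volley {m0:3d} -> last layers {propagate(m0)[-5:]}")
```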

5.
The neuronal mechanisms underlying the emergence of orientation selectivity in the primary visual cortex of mammals are still elusive. In rodents, visual neurons show highly selective responses to oriented stimuli, but neighboring neurons do not necessarily have similar preferences. Instead of a smooth map, one observes a salt-and-pepper organization of orientation selectivity. Modeling studies have recently confirmed that balanced random networks are indeed capable of amplifying weakly tuned inputs and generating highly selective output responses, even in the absence of feature-selective recurrent connectivity. Here we seek to elucidate the neuronal mechanisms underlying this phenomenon by resorting to networks of integrate-and-fire neurons, which are amenable to analytic treatment. Specifically, in networks of perfect integrate-and-fire neurons, we observe that highly selective and contrast-invariant output responses emerge, very similar to networks of leaky integrate-and-fire neurons. We then demonstrate that a theory based on mean firing rates and the detailed network topology predicts the output responses, and explains the mechanisms underlying the suppression of the common mode, amplification of the modulation, and contrast invariance. Increasing inhibition dominance in our networks makes the rectifying nonlinearity more prominent, which in turn adds some distortions to the otherwise essentially linear prediction. An extension of the linear theory can account for all the distortions, enabling us to compute the exact shape of every individual tuning curve in our networks. We show that this simple form of nonlinearity adds two important properties to orientation selectivity in the network, namely sharpening of tuning curves and extra suppression of the modulation. The theory can be further extended to account for the nonlinearity of the leaky model by replacing the rectifier by the appropriate smooth input-output transfer function. These results are robust, do not depend on the state of network dynamics, and hold equally well for mean-driven and fluctuation-driven regimes of activity.
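The suppression-of-common-mode, amplification-of-modulation picture can be illustrated with a rectified-linear caricature of the network's transfer: the untuned part of the weakly tuned feedforward input is attenuated, the orientation-tuned modulation is amplified, and the output rate is the rectified sum. The gains below are arbitrary placeholders for the linear theory, but they are enough to show sharpening and a contrast-invariant tuning width.

```python
import numpy as np

theta = np.linspace(-np.pi / 2, np.pi / 2, 181)          # stimulus orientations (rad)
theta_pref = 0.0                                          # neuron's preferred orientation

def output_tuning(contrast, baseline=20.0, modulation_frac=0.1,
                  gain_common=0.1, gain_mod=5.0):
    """Rectified-linear sketch of tuning in an inhibition-dominated network:
    the untuned common mode is strongly suppressed (gain_common < 1), the tuned
    modulation is amplified (gain_mod > 1), and rates are rectified at zero.
    All gains are illustrative stand-ins for the linear network theory."""
    ff_common = contrast * baseline                        # untuned feedforward drive
    ff_mod = contrast * baseline * modulation_frac         # weakly tuned component
    drive = gain_common * ff_common + gain_mod * ff_mod * np.cos(2 * (theta - theta_pref))
    return np.maximum(drive, 0.0)                          # rectification

for c in (0.25, 0.5, 1.0):
    r = output_tuning(c)
    halfwidth = 0.5 * np.ptp(theta[r > 0.5 * r.max()])     # half-width at half-maximum (rad)
    print(f"contrast {c:.2f}: peak {r.max():5.1f} Hz, HWHM {np.degrees(halfwidth):4.1f} deg")
```

With these placeholder gains the tuning width printed for each contrast is identical, because contrast scales the whole rectified drive rather than shifting it.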

6.
Neurons transform time-varying inputs into action potentials emitted stochastically at a time-dependent rate. The mapping from current input to output firing rate is often represented with the help of phenomenological models such as the linear-nonlinear (LN) cascade, in which the output firing rate is estimated by applying to the input successively a linear temporal filter and a static non-linear transformation. These simplified models leave out the biophysical details of action potential generation. It is not a priori clear to what extent the input-output mapping of biophysically more realistic, spiking neuron models can be reduced to a simple linear-nonlinear cascade. Here we investigate this question for the leaky integrate-and-fire (LIF), exponential integrate-and-fire (EIF) and conductance-based Wang-Buzsáki models in the presence of background synaptic activity. We exploit available analytic results for these models to determine the corresponding linear filter and static non-linearity in a parameter-free form. We show that the obtained functions are identical to the linear filter and static non-linearity determined using standard reverse-correlation analysis. We then quantitatively compare the output of the corresponding linear-nonlinear cascade with numerical simulations of spiking neurons, systematically varying the parameters of the input signal and background noise. We find that the LN cascade provides accurate estimates of the firing rates of spiking neurons in most of parameter space. For the EIF and Wang-Buzsáki models, we show that the LN cascade can be reduced to a firing rate model, the timescale of which we determine analytically. Finally, we introduce an adaptive-timescale rate model in which the timescale of the linear filter depends on the instantaneous firing rate. This model leads to highly accurate estimates of instantaneous firing rates.
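A bare-bones LN cascade of the kind compared against the spiking models: the input signal is passed through a linear temporal filter and then through a static nonlinearity to produce an instantaneous firing rate. In the paper both stages are derived analytically from the neuron model and the background noise; here an exponential filter and a threshold-quadratic rate function serve as illustrative placeholders.

```python
import numpy as np

dt, t_max = 1e-3, 2.0
t = np.arange(0.0, t_max, dt)

# time-varying input signal (arbitrary units)
signal = 0.5 * np.sin(2 * np.pi * 3.0 * t) + 0.2 * np.sin(2 * np.pi * 11.0 * t)

# --- linear stage: exponential filter (illustrative; the paper derives its timescale
#     analytically from the neuron model and the background noise) ---
tau_filter = 0.015
kernel_t = np.arange(0.0, 10 * tau_filter, dt)
kernel = np.exp(-kernel_t / tau_filter)
kernel /= kernel.sum() * dt                       # normalize to unit area
filtered = np.convolve(signal, kernel, mode="full")[:len(t)] * dt

# --- static nonlinearity: threshold-quadratic rate function (illustrative) ---
def static_nl(x, r0=8.0, gain=40.0):
    return r0 + gain * np.maximum(x, 0.0) ** 2

rate = static_nl(filtered)                        # estimated instantaneous firing rate (Hz)
print(f"mean rate: {rate.mean():.2f} Hz, peak rate: {rate.max():.2f} Hz")
```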

7.
Correlations in spike-train ensembles can seriously impair the encoding of information by their spatio-temporal structure. An inevitable source of correlation in finite neural networks is common presynaptic input to pairs of neurons. Recent studies demonstrate that spike correlations in recurrent neural networks are considerably smaller than expected based on the amount of shared presynaptic input. Here, we explain this observation by means of a linear network model and simulations of networks of leaky integrate-and-fire neurons. We show that inhibitory feedback efficiently suppresses pairwise correlations and, hence, population-rate fluctuations, thereby assigning inhibitory neurons the new role of active decorrelation. We quantify this decorrelation by comparing the responses of the intact recurrent network (feedback system) and systems where the statistics of the feedback channel are perturbed (feedforward system). Manipulations of the feedback statistics can lead to a significant increase in the power and coherence of the population response. In particular, neglecting correlations within the ensemble of feedback channels or between the external stimulus and the feedback amplifies population-rate fluctuations by orders of magnitude. The fluctuation suppression in homogeneous inhibitory networks is explained by a negative feedback loop in the one-dimensional dynamics of the compound activity. Similarly, a change of coordinates exposes an effective negative feedback loop in the compound dynamics of stable excitatory-inhibitory networks. The suppression of input correlations in finite networks is explained by the population-averaged correlations in the linear network model: in purely inhibitory networks, shared-input correlations are canceled by negative spike-train correlations. In excitatory-inhibitory networks, spike-train correlations are typically positive. Here, the suppression of input correlations is not a result of the mere existence of correlations between excitatory (E) and inhibitory (I) neurons, but a consequence of a particular structure of correlations among the three possible pairings (EE, EI, II).
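The one-dimensional negative-feedback argument for the compound (population) activity can be sketched with a linear toy model: in the intact network the population activity is fed back onto itself with net inhibitory gain, while in the "feedforward" control the feedback channel is replaced by an independent signal with the same statistics, which removes the suppression and inflates the population-rate fluctuations. The gains and noise level below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
steps, lam, w, sigma = 100_000, 0.1, -0.5, 1.0   # leak, net inhibitory feedback gain, noise

def run(feedback_source):
    """Linear toy dynamics of the compound (population) activity a."""
    a = np.zeros(steps)
    noise = rng.normal(0.0, sigma, steps)
    for k in range(1, steps):
        a[k] = (1 - lam) * a[k - 1] + w * feedback_source(k, a) + noise[k]
    return a

a_fb = run(lambda k, a: a[k - 1])       # intact network: closed negative feedback loop
b = run(lambda k, a: a[k - 1])          # an independent realization with the same statistics
a_ff = run(lambda k, a: b[k - 1])       # "feedforward" control: feedback replaced by b

print("variance of compound activity, intact feedback :", round(float(a_fb.var()), 2))
print("variance of compound activity, feedforward     :", round(float(a_ff.var()), 2))
```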

8.
We investigated the response of a pacemaker neuron model to trains of inhibitory stochastic impulsive perturbations. The model captures the essential aspects of the dynamics of pacemaker neurons. In particular, the model reproduces linearization by stochastic pulse trains, that is, the disappearance of the paradoxical segments in which the output firing rate of pacemaker neurons increases with inhibition rate, as the coefficient of variation of the input pulse train increases. To study the response of the model to stochastic pulse trains, we use a Markov operator governing the phase transition. We show how linearization occurs based on the spectral analysis of the Markov operator. Moreover, using Lyapunov exponents, we show that variable inputs evoke reliable firing, even in situations where periodic stimulation with the same mean rate does not. Received: 30 April 2001 / Accepted in revised form: 19 September 2001

9.
10.
Excitatory and inhibitory synaptic coupling can have counter-intuitive effects on the synchronization of neuronal firing. While it might appear that excitatory coupling would lead to synchronization, we show that inhibition rather than excitation frequently synchronizes firing. We study two identical neurons described by integrate-and-fire models, general phase-coupled models or the Hodgkin-Huxley model with mutual, non-instantaneous excitatory or inhibitory synapses between them. We find that if the rise time of the synapse is longer than the duration of an action potential, inhibition, not excitation, leads to synchronized firing.
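The effect can be probed in a toy simulation of two mutually coupled identical LIF neurons with slow (alpha-function) synapses: with a synaptic rise time longer than the (instantaneous) LIF action potential, inhibitory coupling is expected to pull the spike times together while excitatory coupling pushes them toward anti-phase, as the abstract states. Coupling strengths, time constants and the phase-difference readout below are illustrative choices, not those of the paper.

```python
import numpy as np

def two_coupled_lif(g, tau_syn=0.005, t_max=20.0, dt=1e-4,
                    tau_m=0.02, i_ext=1.5, v_th=1.0):
    """Two identical LIF neurons with mutual alpha-function synaptic currents.
    g > 0: excitatory coupling, g < 0: inhibitory. Returns the final spike-time
    difference as a fraction of the period (0 ~ in-phase, 0.5 ~ anti-phase)."""
    v = np.array([0.0, 0.3])                   # start clearly out of phase
    s = np.zeros(2)                            # synaptic current (alpha function)
    x = np.zeros(2)                            # auxiliary synaptic variable
    spikes = [[], []]
    for step in range(int(t_max / dt)):
        t = step * dt
        dv = (i_ext - v) / tau_m + g * s[::-1] # each cell receives the other's synapse
        v += dt * dv
        x += dt * (-x / tau_syn)
        s += dt * ((x - s) / tau_syn)
        fired = v >= v_th
        if fired.any():
            for i in np.flatnonzero(fired):
                spikes[i].append(t)
            v[fired] = 0.0
            x[fired] += 1.0 / tau_syn          # presynaptic variable jumps on a spike
    period = spikes[0][-1] - spikes[0][-2]
    d = abs(spikes[0][-1] - spikes[1][-1]) % period / period
    return min(d, 1.0 - d)

print("excitatory, slow synapse ->", round(two_coupled_lif(+0.05), 3))
print("inhibitory, slow synapse ->", round(two_coupled_lif(-0.05), 3))
```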

11.
The organization of computations in networks of spiking neurons in the brain is still largely unknown, in particular in view of the inherently stochastic features of their firing activity and the experimentally observed trial-to-trial variability of neural systems in the brain. In principle there exists a powerful computational framework for stochastic computations, probabilistic inference by sampling, which can explain a large number of macroscopic experimental data in neuroscience and cognitive science. But it has turned out to be surprisingly difficult to create a link between these abstract models for stochastic computations and more detailed models of the dynamics of networks of spiking neurons. Here we create such a link and show that under some conditions the stochastic firing activity of networks of spiking neurons can be interpreted as probabilistic inference via Markov chain Monte Carlo (MCMC) sampling. Since common methods for MCMC sampling in distributed systems, such as Gibbs sampling, are inconsistent with the dynamics of spiking neurons, we introduce a different approach based on non-reversible Markov chains that is able to reflect inherent temporal processes of spiking neuronal activity through a suitable choice of random variables. We propose a neural network model and show by a rigorous theoretical analysis that its neural activity implements MCMC sampling of a given distribution, both for the case of discrete and continuous time. This provides a step towards closing the gap between abstract functional models of cortical computation and more detailed models of networks of spiking neurons.

12.
We explore a computationally efficient method of simulating realistic networks of neurons introduced by Knight, Manin, and Sirovich (1996) in which integrate-and-fire neurons are grouped into large populations of similar neurons. For each population, we form a probability density that represents the distribution of neurons over all possible states. The populations are coupled via stochastic synapses in which the conductance of a neuron is modulated according to the firing rates of its presynaptic populations. The evolution equation for each of these probability densities is a partial differential-integral equation, which we solve numerically. Results obtained for several example networks are tested against conventional computations for groups of individual neurons. We apply this approach to modeling orientation tuning in the visual cortex. Our population density model is based on the recurrent feedback model of a hypercolumn in cat visual cortex of Somers et al. (1995). We simulate the response to oriented flashed bars. As in the Somers model, a weak orientation bias provided by feed-forward lateral geniculate input is transformed by intracortical circuitry into sharper orientation tuning that is independent of stimulus contrast. The population density approach appears to be a viable method for simulating large neural networks. Its computational efficiency overcomes some of the restrictions imposed by computation time in individual neuron simulations, allowing one to build more complex networks and to explore parameter space more easily. The method produces smooth rate functions with one pass of the stimulus and does not require signal averaging. At the same time, this model captures the dynamics of single-neuron activity that are missed in simple firing-rate models.
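A stripped-down version of the population density idea for a single uncoupled population: the membrane-potential distribution of LIF neurons receiving Poisson input is discretized into bins and evolved by applying, at each time step, the deterministic leak plus a probabilistic EPSP jump; mass crossing threshold is re-inserted at reset and read out as the population firing rate. The discretization scheme and all parameters are our own and far cruder than the numerical method used in the paper.

```python
import numpy as np

# Crude sketch of the population density method for one population of LIF neurons
# receiving Poisson excitatory input (scheme and parameters are illustrative).
tau, v_th, v_reset = 0.02, 1.0, 0.0          # membrane time constant, threshold, reset
epsp, rate_in = 0.05, 1200.0                 # EPSP jump size and input rate (spikes/s)
dt, n_bins, t_max = 1e-4, 200, 0.4
bin_w = (v_th - v_reset) / n_bins
centers = v_reset + (np.arange(n_bins) + 0.5) * bin_w
lam = rate_in * dt                           # probability of an input spike per step

def deposit(density, v, mass):
    """Place `mass` at voltage v by linear interpolation between neighbouring bins;
    mass landing at or above threshold is returned as firing flux."""
    if v >= v_th:
        return mass
    x = (v - v_reset) / bin_w - 0.5
    i = int(np.floor(x))
    if i < 0:
        density[0] += mass
    elif i >= n_bins - 1:
        density[-1] += mass
    else:
        frac = x - i
        density[i] += (1.0 - frac) * mass
        density[i + 1] += frac * mass
    return 0.0

p = np.zeros(n_bins)
p[0] = 1.0                                   # all probability mass starts at reset
rates = []
for step in range(int(t_max / dt)):
    new_p = np.zeros(n_bins)
    fired = 0.0
    for i in range(n_bins):
        if p[i] == 0.0:
            continue
        v_leak = centers[i] * (1.0 - dt / tau)                  # leak over one step
        fired += deposit(new_p, v_leak, p[i] * (1.0 - lam))     # no input spike
        fired += deposit(new_p, v_leak + epsp, p[i] * lam)      # input spike: EPSP jump
    new_p[0] += fired                        # fired mass re-enters at reset
    p = new_p
    rates.append(fired / dt)                 # instantaneous population rate (Hz)

print("population rate averaged over the last 0.1 s: %.1f Hz"
      % np.mean(rates[-int(0.1 / dt):]))
```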

13.
We investigate the performance of sparsely connected networks of integrate-and-fire neurons for ultra-short-term information processing. We exploit the fact that the population activity of networks with balanced excitation and inhibition can switch from an oscillatory firing regime to a state of asynchronous irregular firing or quiescence depending on the rate of external background spikes. We find that in terms of information buffering the network performs best for a moderate, non-zero, amount of noise. Analogous to the phenomenon of stochastic resonance, the performance decreases for higher and lower noise levels. The optimal amount of noise corresponds to the transition zone between a quiescent state and a regime of stochastic dynamics. This provides a potential explanation of the role of non-oscillatory population activity in a simplified model of cortical micro-circuits.

14.
Networks of inhibitory interneurons are found in many distinct classes of biological systems. Inhibitory interneurons govern the dynamics of principal cells and are likely to be critically involved in the coding of information. In this theoretical study, we describe the dynamics of a generic inhibitory network in terms of low-dimensional, simplified rate models. We study the relationship between the structure of external input applied to the network and the patterns of activity arising in response to that stimulation. We found that even a minimal inhibitory network can generate a great diversity of spatio-temporal patterning including complex bursting regimes with non-trivial ratios of burst firing. Despite the complexity of these dynamics, the network’s response patterns can be predicted from the rankings of the magnitudes of external inputs to the inhibitory neurons. This type of invariant dynamics is robust to noise and stable in densely connected networks with strong inhibitory coupling. Our study predicts that the response dynamics generated by an inhibitory network may provide critical insights about the temporal structure of the sensory input it receives.
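A toy version of such a simplified rate model: a small all-to-all inhibitory rate network driven by static external inputs plus noise. In this weak-coupling illustration all units stay active, so the rank order of the steady-state rates simply follows the rank order of the inputs; the bursting regimes described in the paper require its stronger coupling and richer dynamics. Parameters and the network equation are illustrative.

```python
import numpy as np

def inhibitory_rate_network(inputs, g=0.2, tau=0.01, dt=1e-4, t_max=0.5,
                            noise=0.05, seed=0):
    """Minimal all-to-all inhibitory rate model (far simpler than the paper's models):
    tau * dr_i/dt = -r_i + [I_i - g * sum_{j != i} r_j]_+  (plus weak noise)."""
    rng = np.random.default_rng(seed)
    inputs = np.asarray(inputs, dtype=float)
    r = np.zeros(len(inputs))
    for _ in range(int(t_max / dt)):
        inhibition = g * (r.sum() - r)               # inhibition from all other units
        drive = np.maximum(inputs - inhibition, 0.0)
        r += dt / tau * (-r + drive) + np.sqrt(dt) * noise * rng.normal(size=len(r))
        r = np.maximum(r, 0.0)
    return r

inputs = [1.0, 0.6, 0.8, 0.5]
r = inhibitory_rate_network(inputs)
print("external input ranking :", np.argsort(inputs)[::-1].tolist())
print("response-rate ranking  :", np.argsort(r)[::-1].tolist())
```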

15.
RV Florian, PLoS ONE 2012, 7(8): e40233
In many cases, neurons process information carried by the precise timings of spikes. Here we show how neurons can learn to generate specific temporally precise output spikes in response to input patterns of spikes having precise timings, thus processing and memorizing information that is entirely temporally coded, both as input and as output. We introduce two new supervised learning rules for spiking neurons with temporal coding of information (chronotrons), one that provides high memory capacity (E-learning), and one that has a higher biological plausibility (I-learning). With I-learning, the neuron learns to fire the target spike trains through synaptic changes that are proportional to the synaptic currents at the timings of real and target output spikes. We study these learning rules in computer simulations where we train integrate-and-fire neurons. Both learning rules allow neurons to fire at the desired timings, with sub-millisecond precision. We show how chronotrons can learn to classify their inputs, by firing identical, temporally precise spike trains for different inputs belonging to the same class. When the input is noisy, the classification also leads to noise reduction. We compute lower bounds for the memory capacity of chronotrons and explore the influence of various parameters on chronotrons' performance. The chronotrons can model neurons that encode information in the time of the first spike relative to the onset of salient stimuli or neurons in oscillatory networks that encode information in the phases of spikes relative to the background oscillation. Our results show that firing one spike per cycle optimizes memory capacity in neurons encoding information in the phase of firing relative to a background rhythm.
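To make the I-learning idea concrete, here is a toy single-neuron sketch: a LIF neuron with exponential synaptic currents is trained on a fixed input pattern so that it fires at a target time, with each weight changed in proportion to its synaptic current at the target spike times (potentiation) and at the actual output spike times (depression). The kernel shapes, constants and convergence behaviour are illustrative; the paper's exact E-learning and I-learning rules and their capacity results are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(3)

# --- toy chronotron-style setup (illustrative parameters and kernels) ---
n_in, t_max, dt = 100, 0.2, 1e-4
tau_m, tau_s, v_th = 0.010, 0.005, 1.0
in_times = rng.uniform(0.0, t_max, n_in)       # one input spike per synapse (fixed pattern)
targets = [0.100]                              # desired output spike time(s), in seconds
w = rng.uniform(0.0, 0.02, n_in)               # initial synaptic weights
gamma = 0.4                                    # learning rate
t_grid = np.arange(0.0, t_max, dt)

def syn_current(weights):
    """Per-synapse current I_i(t) = w_i * exp(-(t - t_i)/tau_s) for t >= t_i."""
    lag = t_grid[None, :] - in_times[:, None]
    kernel = np.where(lag >= 0.0, np.exp(-np.maximum(lag, 0.0) / tau_s), 0.0)
    return weights[:, None] * kernel           # shape (n_in, n_steps)

def run(weights):
    """Simulate the LIF neuron once; return output spike times and the currents."""
    cur = syn_current(weights)
    total = cur.sum(axis=0)
    v, out = 0.0, []
    for k, t in enumerate(t_grid):
        v += dt * (-v / tau_m + total[k])
        if v >= v_th:
            out.append(t)
            v = 0.0
    return out, cur

for epoch in range(300):
    out, cur = run(w)
    dw = np.zeros(n_in)
    for t_tgt in targets:                      # potentiate by the current at target times
        dw += cur[:, int(round(t_tgt / dt))]
    for t_out in out:                          # depress by the current at actual spike times
        dw -= cur[:, int(round(t_out / dt))]
    w = np.clip(w + gamma * dw, 0.0, None)
    if epoch in (0, 149, 299):
        print(f"epoch {epoch:3d}: output spikes {np.round(out, 4)}  (target {targets})")
```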

16.
17.
Kim D, Biosystems 2004, 76(1-3): 7-20
Certain species of fireflies show a group behavior of synchronous flashing. Their synchronized, rhythmic flashing has received much attention from researchers, and biological models of its entrainment have been studied. The synchronous behavior of fireflies resembles the firing synchrony of integrate-and-fire neurons with excitatory or inhibitory connections. This paper presents an analysis of spiking neurons specialized to a firefly flashing model, and provides simulation results for multiple neurons with various transmission delays and coupling strengths. It also explains flashing patterns of some firefly species and examines the synchrony conditions depending on transmission delays and coupling strengths.
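A Mirollo-Strogatz-style sketch of pulse-coupled ("firefly") oscillators with all-to-all excitatory coupling and a common transmission delay lets one explore how delay and coupling strength affect the synchrony of flashing. The phase model, the self-coupling simplification and all constants below are illustrative, not the model analyzed in the paper.

```python
import numpy as np
from collections import deque

def firefly_sync(n=50, t_max=60.0, dt=1e-3, period=1.0, eps=0.01, delay=0.0, seed=0):
    """Pulse-coupled integrate-and-fire ('firefly') oscillators with all-to-all
    excitatory coupling and a common transmission delay. A flash advances every
    oscillator's phase by eps once the delay has elapsed (self-coupling is not
    excluded, a simplification). Returns a vector-strength synchrony index
    (1 = all flashing together, ~0 = phases spread out)."""
    rng = np.random.default_rng(seed)
    phase = rng.uniform(0.0, 1.0, n)
    pending = deque()                          # (delivery time, number of flashes)
    for step in range(int(t_max / dt)):
        t = step * dt
        phase += dt / period
        while pending and pending[0][0] <= t:  # deliver delayed flashes
            _, k = pending.popleft()
            phase += eps * k
        fired = phase >= 1.0
        k = int(np.count_nonzero(fired))
        if k:
            phase[fired] = 0.0
            pending.append((t + delay, k))     # broadcast the flashes after the delay
    return abs(np.mean(np.exp(2j * np.pi * phase)))

for delay in (0.0, 0.1, 0.5):                  # delay as a fraction of the flash period
    print(f"delay = {delay:.1f} * period -> synchrony index {firefly_sync(delay=delay):.2f}")
```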

18.
The dynamics of local cortical networks are irregular, but correlated. Dynamic excitatory-inhibitory balance is a plausible mechanism that generates such irregular activity, but it remains unclear how balance is achieved and maintained in plastic neural networks. In particular, it is not fully understood how plasticity-induced changes in the network affect balance, and in turn, how correlated, balanced activity impacts learning. How do the dynamics of balanced networks change under different plasticity rules? How does correlated spiking activity in recurrent networks change the evolution of weights, their eventual magnitude, and structure across the network? To address these questions, we develop a theory of spike-timing-dependent plasticity in balanced networks. We show that balance can be attained and maintained under plasticity-induced weight changes. We find that correlations in the input mildly affect the evolution of synaptic weights. Under certain plasticity rules, we find an emergence of correlations between firing rates and synaptic weights. Under these rules, synaptic weights converge to a stable manifold in weight space with their final configuration dependent on the initial state of the network. Lastly, we show that our framework can also describe the dynamics of plastic balanced networks when subsets of neurons receive targeted optogenetic input.

19.
Turova TS, Villa AE, Biosystems 2007, 89(1-3): 280-286
This paper presents an original mathematical framework based on graph theory which is a first attempt to investigate the dynamics of a model of neural networks with embedded spike-timing-dependent plasticity. The neurons correspond to integrate-and-fire units located at the vertices of a finite subset of a 2D lattice. There are two types of vertices, corresponding to the inhibitory and the excitatory neurons. The edges are directed and labelled by the discrete values of the synaptic strength. We assume that there is an initial firing pattern corresponding to a subset of units that generate a spike. The number of externally activated vertices is a small fraction of the entire network. The model presented here describes how such a pattern propagates throughout the network as a random walk on a graph. Several results are compared with computational simulations, and new data are presented for identifying critical parameters of the model.

20.
Recordings from area V4 of monkeys have revealed that when the focus of attention is on a visual stimulus within the receptive field of a cortical neuron, two distinct changes can occur: the firing rate of the neuron can change, and there can be an increase in the coherence between spikes and the local field potential (LFP) in the gamma-frequency range (30-50 Hz). The hypothesis explored here is that these observed effects of attention could be a consequence of changes in the synchrony of local interneuron networks. We performed computer simulations of a Hodgkin-Huxley type neuron driven by a constant depolarizing current, I, representing visual stimulation, and a modulatory inhibitory input representing the effects of attention via local interneuron networks. We observed that the neuron's firing rate and the coherence of its output spike train with the synaptic inputs were modulated by the degree of synchrony of the inhibitory inputs. When inhibitory synchrony increased, the coherence of spiking model neurons with the synaptic input increased, but the firing rate either increased or remained the same. The mean number of synchronous inhibitory inputs was a key determinant of the shape of the firing rate versus current (f-I) curves. For a large number of inhibitory inputs (approximately 50), the f-I curve saturated for large I, and an increase in input synchrony resulted in a shift of sensitivity: the model neuron responded to weaker inputs I. For a small number (approximately 10), the f-I curves were non-saturating, and an increase in input synchrony led to an increase in the gain of the response: the firing rate in response to the same input was multiplied by an approximately constant factor. The firing rate modulation with inhibitory synchrony was highest when the input network oscillated in the gamma frequency range. Thus, the observed changes in firing rate and coherence of neurons in the visual cortex could be controlled by top-down inputs that regulate the coherence in the activity of a local inhibitory network discharging at gamma frequencies.
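The gain-versus-shift effect of input synchrony on the f-I curve can be probed with a much simpler stand-in for the Hodgkin-Huxley cell: a LIF neuron driven by a constant current plus inhibitory inputs delivered either as synchronous gamma-band volleys or as asynchronous Poisson trains with the same mean rate. The neuron model, input numbers and strengths are illustrative, so only the qualitative dependence of the f-I curve on inhibitory synchrony is meant to carry over.

```python
import numpy as np

def lif_rate(i_ext, inh_synchronous, n_inh=50, inh_rate=40.0, g_inh=0.02,
             tau=0.01, v_th=1.0, t_max=5.0, dt=1e-4, seed=0):
    """Firing rate of a LIF neuron (stand-in for the Hodgkin-Huxley cell of the paper)
    driven by constant current i_ext plus n_inh inhibitory inputs at inh_rate Hz each,
    delivered either as synchronous volleys (one per cycle, gamma-band) or as
    independent Poisson trains with the same mean rate."""
    rng = np.random.default_rng(seed)
    v, spikes = 0.0, 0
    volley_period = 1.0 / inh_rate
    for k in range(int(t_max / dt)):
        t = k * dt
        if inh_synchronous:
            new_cycle = int(t / volley_period) != int((t - dt) / volley_period)
            inh = n_inh if new_cycle else 0    # all inhibitory inputs fire together
        else:
            inh = rng.poisson(n_inh * inh_rate * dt)   # same mean rate, asynchronous
        v += dt * (i_ext - v) / tau - g_inh * inh
        v = max(v, -2.0)                       # crude bound on hyperpolarization
        if v >= v_th:
            spikes += 1
            v = 0.0
    return spikes / t_max

print(" I    async (Hz)  sync (Hz)")
for i_ext in (1.1, 1.5, 2.0, 3.0):
    print(f"{i_ext:3.1f}   {lif_rate(i_ext, False):8.1f}   {lif_rate(i_ext, True):8.1f}")
```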
