Similar Articles
20 similar articles found (search time: 31 ms)
1.
Correlated neuronal activity is a natural consequence of network connectivity and shared inputs to pairs of neurons, but the task-dependent modulation of correlations in relation to behavior also hints at a functional role. Correlations influence the gain of postsynaptic neurons, the amount of information encoded in the population activity and decoded by readout neurons, and synaptic plasticity. Further, they affect the power and spatial reach of extracellular signals like the local-field potential. A theory of correlated neuronal activity accounting for recurrent connectivity as well as fluctuating external sources is currently lacking. In particular, it is unclear how the recently found mechanism of active decorrelation by negative feedback on the population level affects the network response to externally applied correlated stimuli. Here, we present such an extension of the theory of correlations in stochastic binary networks. We show that (1) for homogeneous external input, the structure of correlations is mainly determined by the local recurrent connectivity, (2) homogeneous external inputs provide an additive, unspecific contribution to the correlations, (3) inhibitory feedback effectively decorrelates neuronal activity, even if neurons receive identical external inputs, and (4) identical synaptic input statistics to excitatory and to inhibitory cells increases intrinsically generated fluctuations and pairwise correlations. We further demonstrate how the accuracy of mean-field predictions can be improved by self-consistently including correlations. As a byproduct, we show that the cancellation of correlations between the summed inputs to pairs of neurons does not originate from the fast tracking of external input, but from the suppression of fluctuations on the population level by the local network.
This suppression is a necessary constraint, but not sufficient to determine the structure of correlations; specifically, the structure observed at finite network size differs from the prediction based on perfect tracking, even though perfect tracking implies suppression of population fluctuations.
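The decorrelation effect summarized above can be illustrated with a toy simulation. The following is a minimal sketch, not the authors' model: asynchronously updated binary neurons all receive the same slowly fluctuating external drive, and a hypothetical uniform inhibitory feedback term `w_inh` can be switched off to compare pairwise correlations with and without the feedback channel. All parameter values are illustrative assumptions.

```python
import math
import random

def mean_pairwise_corr(w_inh, n=60, steps=6000, seed=7):
    """Asynchronous Glauber-style updates of n binary neurons that all receive
    the same fluctuating external drive; w_inh < 0 adds uniform inhibitory
    feedback from the population rate. Returns the mean pairwise correlation
    coefficient over a fixed sample of neuron pairs."""
    rng = random.Random(seed)
    s = [rng.randint(0, 1) for _ in range(n)]
    ext = 0.5                                   # common external drive
    pairs = [(i, (i + 1) % n) for i in range(0, n, 3)]
    acc = [0.0] * n
    co = [0.0] * len(pairs)
    for _ in range(steps):
        # slow Ornstein-Uhlenbeck-like fluctuation of the common drive
        ext += 0.05 * (0.5 - ext) + 0.05 * rng.gauss(0, 1)
        i = rng.randrange(n)                    # update one unit at a time
        h = ext + w_inh * sum(s) / n            # field: drive + feedback
        p_on = 1.0 / (1.0 + math.exp(-8.0 * (h - 0.5)))
        s[i] = 1 if rng.random() < p_on else 0
        for k in range(n):
            acc[k] += s[k]
        for p, (a, b) in enumerate(pairs):
            co[p] += s[a] * s[b]
    m = [a / steps for a in acc]
    corrs = []
    for p, (a, b) in enumerate(pairs):
        va, vb = m[a] * (1 - m[a]), m[b] * (1 - m[b])
        if va > 0 and vb > 0:
            corrs.append((co[p] / steps - m[a] * m[b]) / math.sqrt(va * vb))
    return sum(corrs) / len(corrs)

c_feedback = mean_pairwise_corr(w_inh=-1.0)   # with inhibitory feedback
c_open = mean_pairwise_corr(w_inh=0.0)        # feedback channel removed
```

With the feedback removed, the common drive alone induces correlations; the inhibitory term counteracts population-rate fluctuations, the mechanism the abstract identifies.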

2.
In this paper, we study the combined dynamics of the neural activity and the synaptic efficiency changes in a fully connected network of biologically realistic neurons with simple synaptic plasticity dynamics including both potentiation and depression. Using a mean-field technique, we analyzed the equilibrium states of neural networks with dynamic synaptic connections and found a class of bistable networks. For this class of networks, one of the stable equilibrium states shows strong connectivity and coherent responses to external input. In the other stable equilibrium, the network is loosely connected and responds non-coherently to external input. Transitions between the two states can be achieved by positively or negatively correlated external inputs. Such networks can therefore switch between their phases according to the statistical properties of the external input. Non-coherent input can only "read" the state of the network, while a correlated one can change its state. We speculate that this property, specific to plastic neural networks, can provide a clue to understanding fully unsupervised learning models. Received: 8 August 1999 / Accepted in revised form: 16 March 2000

3.
This report continues our research into the effectiveness of adaptive synaptogenesis in constructing feed-forward networks which perform good transformations on their inputs. Good transformations are characterized by the maintenance of input information and the removal of statistical dependence. Adaptive synaptogenesis stochastically builds and sculpts a synaptic connectivity in initially unconnected networks using two mechanisms. The first, synaptogenesis, creates new, excitatory, feed-forward connections. The second, associative modification, adjusts the strength of existing synapses. Our previous implementations of synaptogenesis only incorporated a postsynaptic regulatory process, receptivity to new innervation (Adelsberger-Mangan and Levy 1993a, b). In the present study, a presynaptic regulatory process, presynaptic avidity, which regulates the tendency of a presynaptic neuron to participate in a new synaptic connection as a function of its total synaptic weight, is incorporated into the synaptogenesis process. In addition, we investigate a third mechanism, selective synapse removal. This process removes synapses between neurons whose firing is poorly correlated. Networks that are constructed with the presynaptic regulatory process maintain more information and remove more statistical dependence than networks constructed with postsynaptic receptivity and associative modification alone. Selective synapse removal also improves network performance, but only when implemented in conjunction with the presynaptic regulatory process. Received: 20 August 1993 / Accepted in revised form: 16 April 1994

4.
Bastian J, Chacron MJ, Maler L. Neuron 2004, 41(5): 767-779
Pyramidal cells show marked variation in their morphology, including dendritic structure, which is correlated with physiological diversity; however, it is not known how this variation is related to a cell's role within neural networks. In this report, we describe correlations among electrosensory lateral line lobe (ELL) pyramidal cells' highly variable dendritic morphology and their ability to adaptively cancel redundant inputs via an anti-Hebbian form of synaptic plasticity. A subset of cells, those with the largest apical dendrites, are plastic, but those with the smallest dendrites are not. A model of the network's connectivity predicts that efficient redundancy reduction requires that nonplastic cells provide feedback input to those that are plastic. Anatomical results confirm the model's prediction of optimal network architecture. These results provide a demonstration of different roles for morphological/physiological variants of a single cell type within a neural network performing a well-defined function.

5.
We study the spatiotemporal dynamics of neuronal networks with spike frequency adaptation. In particular, we compare the effects of adaptation being either a linear or nonlinear function of neural activity. We find that altering parameters controlling the strength of synaptic connections in the network can lead to spatially structured activity suggestive of symptoms of hallucinogen persisting perception disorder (HPPD). First, we study how both networks track spatially homogeneous flickering stimuli, and find input is encoded as continuous at lower flicker frequencies when the network's synapses exhibit more net excitation. Mainly, we study instabilities of stimulus-driven traveling pulse solutions, representative of visual trailing phenomena common to HPPD patients. Visual trails are reported as discrete afterimages in the wake of a moving input. Thus, we analyze several solutions arising in response to moving inputs in both networks: an ON state, stimulus-locked pulses, and traveling breathers. We find traveling breathers can arise in both networks when an input moves beyond a critical speed. These possible neural substrates of visual trails occur at slower speeds when the modulation of synaptic connectivity is increased.

6.
Novel experimental techniques reveal the simultaneous activity of larger and larger numbers of neurons. As a result, there is increasing interest in the structure of cooperative (or correlated) activity in neural populations, and in the possible impact of such correlations on the neural code. A fundamental theoretical challenge is to understand how the architecture of network connectivity along with the dynamical properties of single cells shape the magnitude and timescale of correlations. We provide a general approach to this problem by extending prior techniques based on linear response theory. We consider networks of general integrate-and-fire cells with arbitrary architecture, and provide explicit expressions for the approximate cross-correlation between constituent cells. These correlations depend strongly on the operating point (input mean and variance) of the neurons, even when connectivity is fixed. Moreover, the approximations admit an expansion in powers of the matrices that describe the network architecture. This expansion can be readily interpreted in terms of paths between different cells. We apply our results to large excitatory-inhibitory networks, and demonstrate first how precise balance, or lack thereof, between the strengths and timescales of excitatory and inhibitory synapses is reflected in the overall correlation structure of the network. We then derive explicit expressions for the average correlation structure in randomly connected networks. These expressions help to identify the important factors that shape coordinated neural activity in such networks.
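The path expansion mentioned above can be sketched in a simplified linear setting (an assumption, not the paper's full linear-response theory). For a stable linear rate model x = Wx + xi with unit-variance independent noise, the covariance is C = P P^T with propagator P = (I - W)^(-1) = I + W + W^2 + ..., where the k-th term collects all paths of length k between cells. The 3-cell weight matrix below is hypothetical:

```python
def mat_mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def mat_add(a, b):
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]

def transpose(a):
    return [list(col) for col in zip(*a)]

def identity(n):
    return [[float(i == j) for j in range(n)] for i in range(n)]

def propagator(w, order):
    """Truncated series I + W + W^2 + ... + W^order; the k-th term
    accumulates the contribution of all paths of length k."""
    term, total = identity(len(w)), identity(len(w))
    for _ in range(order):
        term = mat_mul(term, w)
        total = mat_add(total, term)
    return total

# Hypothetical 3-cell weight matrix with spectral radius < 1,
# so the path series converges.
W = [[0.0, 0.2, -0.1],
     [0.1, 0.0, 0.2],
     [-0.2, 0.1, 0.0]]
P = propagator(W, 40)
C = mat_mul(P, transpose(P))   # covariance for unit independent noise
```

Truncating the series at low order shows directly how short paths (direct connections, shared inputs) dominate the correlation structure when weights are weak.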

7.
8.
The functional significance of correlations between action potentials of neurons is still a matter of vivid debate. In particular, it is presently unclear how much synchrony is caused by afferent synchronized events and how much is intrinsic due to the connectivity structure of cortex. The available analytical approaches based on the diffusion approximation do not allow one to model spike synchrony, preventing a thorough analysis. Here we theoretically investigate to what extent common synaptic afferents and synchronized inputs each contribute to correlated spiking on a fine temporal scale between pairs of neurons. We employ direct simulation and extend earlier analytical methods based on the diffusion approximation to pulse-coupling, allowing us to introduce precisely timed correlations in the spiking activity of the synaptic afferents. We investigate the transmission of correlated synaptic input currents by pairs of integrate-and-fire model neurons, so that the same input covariance can be realized by common inputs or by spiking synchrony. We identify two distinct regimes: In the limit of low correlation, linear perturbation theory accurately determines the correlation transmission coefficient, which is typically smaller than unity, but increases sensitively even for weakly synchronous inputs. In the limit of high input correlation, in the presence of synchrony, a qualitatively new picture arises. As the non-linear neuronal response becomes dominant, the output correlation becomes higher than the total correlation in the input. This transmission coefficient larger than unity is a direct consequence of non-linear neural processing in the presence of noise, elucidating how synchrony-coded signals benefit from these generic properties present in cortical networks.
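The basic object of study here, the correlation transmission of a neuron pair, can be measured in a toy simulation. This is a minimal sketch under assumed parameters, not the paper's pulse-coupled analysis: two leaky integrate-and-fire neurons receive Gaussian input currents whose correlation `c_in` is realized through a shared noise source, and the output correlation is read off from trial-to-trial spike counts.

```python
import math
import random

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    vx = sum((a - mx) ** 2 for a in x) / n
    vy = sum((b - my) ** 2 for b in y) / n
    return cov / math.sqrt(vx * vy) if vx > 0 and vy > 0 else 0.0

def output_count_corr(c_in, trials=500, steps=400, seed=11):
    """Two LIF neurons driven by Gaussian currents with input correlation
    c_in (via a shared noise term); returns the Pearson correlation of
    their spike counts across trials. Parameters are illustrative."""
    rng = random.Random(seed)
    tau, dt, v_th, mu, sigma = 20.0, 1.0, 1.0, 0.8, 1.0
    a, b = math.sqrt(c_in), math.sqrt(1.0 - c_in)
    n1_list, n2_list = [], []
    for _ in range(trials):
        v1 = v2 = 0.0
        n1 = n2 = 0
        for _ in range(steps):
            shared = rng.gauss(0, 1)           # common input component
            i1 = mu + sigma * (a * shared + b * rng.gauss(0, 1))
            i2 = mu + sigma * (a * shared + b * rng.gauss(0, 1))
            v1 += (dt / tau) * (i1 - v1)       # leaky integration
            v2 += (dt / tau) * (i2 - v2)
            if v1 >= v_th:
                v1, n1 = 0.0, n1 + 1           # fire and reset
            if v2 >= v_th:
                v2, n2 = 0.0, n2 + 1
        n1_list.append(n1)
        n2_list.append(n2)
    return pearson(n1_list, n2_list)

rho_weak = output_count_corr(0.1)
rho_strong = output_count_corr(0.9)
```

The ratio of output to input correlation is the transmission coefficient the abstract discusses; the sketch only probes the count-correlation regime, not fine-timescale synchrony.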

9.
During development, biological neural networks produce more synapses and neurons than needed. Many of these synapses and neurons are later removed in a process known as neural pruning. Why networks should initially be over-populated, and the processes that determine which synapses and neurons are ultimately pruned, remains unclear. We study the mechanisms and significance of neural pruning in model neural networks. In a deep Boltzmann machine model of sensory encoding, we find that (1) synaptic pruning is necessary to learn efficient network architectures that retain computationally-relevant connections, (2) pruning by synaptic weight alone does not optimize network size and (3) pruning based on a locally-available measure of importance based on Fisher information allows the network to identify structurally important vs. unimportant connections and neurons. This locally-available measure of importance has a biological interpretation in terms of the correlations between presynaptic and postsynaptic neurons, and implies an efficient activity-driven pruning rule. Overall, we show how local activity-dependent synaptic pruning can solve the global problem of optimizing a network architecture. We relate these findings to biology as follows: (I) Synaptic over-production is necessary for activity-dependent connectivity optimization. (II) In networks that have more neurons than needed, cells compete for activity, and only the most important and selective neurons are retained. (III) Cells may also be pruned due to a loss of synapses on their axons. This occurs when the information they convey is not relevant to the target population.
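The local pruning rule described above can be sketched as follows. Note the assumption: the Fisher-information importance measure is replaced here by a simple stand-in, weight squared times the squared pre/post activity correlation, which has the same locally available ingredients; the function name and all data are hypothetical.

```python
import math
import random

def corr(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    vx = sum((a - mx) ** 2 for a in x) / n
    vy = sum((b - my) ** 2 for b in y) / n
    return cov / math.sqrt(vx * vy) if vx > 0 and vy > 0 else 0.0

def prune_by_local_importance(weights, pre, post, keep_frac=0.5):
    """Score each synapse (i, j) by w_ij^2 times the squared correlation of
    the pre- and postsynaptic activity traces (a locally available proxy for
    the Fisher-information importance), then keep the top keep_frac."""
    scores = {s: (w * w) * corr(pre[s[0]], post[s[1]]) ** 2
              for s, w in weights.items()}
    k = max(1, int(len(weights) * keep_frac))
    kept = sorted(scores, key=scores.get, reverse=True)[:k]
    return {s: weights[s] for s in kept}, scores

# Toy demo with hypothetical random activity traces and weights.
rng = random.Random(5)
pre = {i: [rng.gauss(0, 1) for _ in range(200)] for i in range(4)}
post = {j: [rng.gauss(0, 1) for _ in range(200)] for j in range(4)}
weights = {(i, j): rng.uniform(-1, 1) for i in range(4) for j in range(4)}
kept, scores = prune_by_local_importance(weights, pre, post, keep_frac=0.25)
```

Pruning by weight alone would drop the correlation factor; the abstract's finding (2) is precisely that this simpler score fails to optimize network size.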

10.
We present an approach for using kinetic theory to capture first and second order statistics of neuronal activity. We coarse grain neuronal networks into populations of neurons and calculate the population average firing rate and output cross-correlation in response to time varying correlated input. We derive coupling equations for the populations based on first and second order statistics of the network connectivity. This coupling scheme is based on the hypothesis that second order statistics of the network connectivity are sufficient to determine second order statistics of neuronal activity. We implement a kinetic theory representation of a simple feed-forward network and demonstrate that the kinetic theory model captures key aspects of the emergence and propagation of correlations in the network, as long as the correlations do not become too strong. By analyzing the correlated activity of feed-forward networks with a variety of connectivity patterns, we provide evidence supporting our hypothesis of the sufficiency of second order connectivity statistics. Action Editor: Carson C. Chow

11.
Networks are becoming a ubiquitous metaphor for the understanding of complex biological systems, spanning the range between molecular signalling pathways, neural networks in the brain, and interacting species in a food web. In many models, we face an intricate interplay between the topology of the network and the dynamics of the system, which is generally very hard to disentangle. A dynamical feature that has been the subject of intense research in various fields is the correlation between the noisy activity of nodes in a network. We consider a class of systems, where discrete signals are sent along the links of the network. Such systems are of particular relevance in neuroscience, because they provide models for networks of neurons that use action potentials for communication. We study correlations in dynamic networks with arbitrary topology, assuming linear pulse coupling. With our novel approach, we are able to understand in detail how specific structural motifs affect pairwise correlations. Based on a power series decomposition of the covariance matrix, we describe the conditions under which very indirect interactions will have a pronounced effect on correlations and population dynamics. In random networks, we find that indirect interactions may lead to a broad distribution of activation levels with low average but highly variable correlations. This phenomenon is even more pronounced in networks with distance dependent connectivity. In contrast, networks with highly connected hubs or patchy connections often exhibit strong average correlations. Our results are particularly relevant in view of new experimental techniques that enable the parallel recording of spiking activity from a large number of neurons, an appropriate interpretation of which is hampered by the currently limited understanding of structure-dynamics relations in complex networks.

12.
The state in the vicinity of a critical point, the boundary between ordered and disordered states, has been proposed to underlie and facilitate many aspects of the brain's information processing. In this research, we numerically study the influence of criticality on one aspect of brain information processing, i.e., the community structure, which is an important characteristic of complex networks. We examine community structure of the functional connectivity in simulated brain spontaneous activity, which is based on dynamical correlations between neural activity patterns at different positions. The brain spontaneous activity is simulated by a neural field model whose parameter covers subcritical, critical, and supercritical regions. Then, the corresponding dynamical correlation patterns and community structure are compared. In the critical region, we found some distinctive properties, namely high correlation and correlation switching, high modularity and a low number of modules, high stability of the dynamical functional connectivity, and moderate flexibility of the community structure across temporal scales. We also discuss how these characteristics might improve information processing of the brain.

13.
Understanding the direction and quantity of information flowing in neuronal networks is a fundamental problem in neuroscience. Brains and neuronal networks must at the same time store information about the world and react to information in the world. We sought to measure how the activity of the network alters information flow from inputs to output patterns. Using neocortical column neuronal network simulations, we demonstrated that networks with greater internal connectivity reduced input/output correlations from excitatory synapses and decreased negative correlations from inhibitory synapses, measured by Kendall’s τ correlation. Both of these changes were associated with reduction in information flow, measured by normalized transfer entropy (nTE). Information handling by the network reflected the degree of internal connectivity. With no internal connectivity, the feedforward network transformed inputs through nonlinear summation and thresholding. With greater connectivity strength, the recurrent network translated activity and information due to contribution of activity from intrinsic network dynamics. This dynamic contribution amounts to added information drawn from that stored in the network. At still higher internal synaptic strength, the network corrupted the external information, producing a state where little external information came through. The association of increased information retrieved from the network with increased gamma power supports the notion of gamma oscillations playing a role in information processing.
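The normalized transfer entropy (nTE) used above can be computed with a plug-in estimator. The following is a minimal sketch for binary sequences, not the authors' simulation pipeline; the driven/independent test series are synthetic assumptions. nTE divides TE(X -> Y) by H(Y_next | Y_prev), so a perfect one-step copy yields 1 and an unrelated series yields approximately 0.

```python
import math
import random
from collections import Counter

def normalized_transfer_entropy(x, y):
    """Plug-in estimate of TE(X -> Y) for binary sequences, normalized by
    the conditional entropy H(Y_next | Y_prev)."""
    trip = list(zip(y[1:], y[:-1], x[:-1]))      # (y_next, y_prev, x_prev)
    n = len(trip)
    c3 = Counter(trip)                            # joint counts
    c_px = Counter((yp, xp) for _, yp, xp in trip)
    c_ny = Counter((yn, yp) for yn, yp, _ in trip)
    c_p = Counter(yp for _, yp, _ in trip)
    # TE = sum p(yn,yp,xp) * log2[ p(yn|yp,xp) / p(yn|yp) ]
    te = sum((c / n) * math.log2(c * c_p[yp] / (c_px[(yp, xp)] * c_ny[(yn, yp)]))
             for (yn, yp, xp), c in c3.items())
    # H(Y_next | Y_prev) as the normalizer
    h = -sum((c / n) * math.log2(c / c_p[yp]) for (yn, yp), c in c_ny.items())
    return te / h if h > 0 else 0.0

rng = random.Random(3)
x = [rng.randint(0, 1) for _ in range(5000)]
y_driven = [0] + x[:-1]                  # y copies x with a one-step lag
y_indep = [rng.randint(0, 1) for _ in range(5000)]
nte_driven = normalized_transfer_entropy(x, y_driven)
nte_indep = normalized_transfer_entropy(x, y_indep)
```

The deterministic copy gives nTE of 1 because all uncertainty about `y_next` is resolved by `x_prev`; the independent series shows only the small positive bias of the plug-in estimator.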

14.
We have previously formulated an abstract dynamical system for networks of spiking neurons and derived a formal result that identifies the criterion for its dynamics, without inputs, to be “sensitive to initial conditions”. Since formal results are applicable only to the extent to which their assumptions are valid, we begin this article by demonstrating that the assumptions are indeed reasonable for a wide range of networks, particularly those that lack overarching structure. A notable aspect of the criterion is the finding that sensitivity does not necessarily arise from randomness of connectivity or of connection strengths, in networks. The criterion guides us to cases that decouple these aspects: we present two instructive examples of networks, one with random connectivity and connection strengths, yet whose dynamics is insensitive, and another with structured connectivity and connection strengths, yet whose dynamics is sensitive. We then argue based on the criterion and the gross electrophysiology of the cortex that the dynamics of cortical networks ought to be almost surely sensitive under conditions typically found there. We supplement this with two examples of networks modeling cortical columns with widely differing qualitative dynamics, yet with both exhibiting sensitive dependence. Next, we use the criterion to construct a network that undergoes bifurcation from sensitive dynamics to insensitive dynamics when the value of a control parameter is varied. Finally, we extend the formal result to networks driven by stationary input spike trains, deriving a criterion superior to the one previously reported. Action Editor: John Rinzel

15.
Network models are routinely downscaled compared to nature in terms of numbers of nodes or edges because of a lack of computational resources, often without explicit mention of the limitations this entails. While reliable methods have long existed to adjust parameters such that the first-order statistics of network dynamics are conserved, here we show that limitations already arise if also second-order statistics are to be maintained. The temporal structure of pairwise averaged correlations in the activity of recurrent networks is determined by the effective population-level connectivity. We first show that in general the converse is also true and explicitly mention degenerate cases when this one-to-one relationship does not hold. The one-to-one correspondence between effective connectivity and the temporal structure of pairwise averaged correlations implies that network scalings should preserve the effective connectivity if pairwise averaged correlations are to be held constant. Changes in effective connectivity can even push a network from a linearly stable to an unstable, oscillatory regime and vice versa. On this basis, we derive conditions for the preservation of both mean population-averaged activities and pairwise averaged correlations under a change in numbers of neurons or synapses in the asynchronous regime typical of cortical networks. We find that mean activities and correlation structure can be maintained by an appropriate scaling of the synaptic weights, but only over a range of numbers of synapses that is limited by the variance of external inputs to the network. Our results therefore show that the reducibility of asynchronous networks is fundamentally limited.
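The first-order logic behind such weight scaling can be worked through directly. Assuming Poisson-like inputs with in-degree K, weight J, and rate nu, the summed input has mean K*J*nu and variance K*J^2*nu; keeping the variance fixed when K shrinks forces J' = J*sqrt(K/K'), and the resulting shift in mean input must be absorbed by an extra DC drive. This sketch captures only those input moments, not the paper's full effective-connectivity condition; all numbers are hypothetical.

```python
import math

def downscale_weights(K, J, nu, K_new):
    """Rescale the synaptic weight so that the input variance K * J^2 * nu
    is preserved when the in-degree drops from K to K_new, and return the
    DC external drive needed to also preserve the mean input K * J * nu."""
    J_new = J * math.sqrt(K / K_new)
    dc = K * J * nu - K_new * J_new * nu   # compensating external input
    return J_new, dc

# Hypothetical operating point: in-degree, weight (mV), input rate (Hz).
K, J, nu = 10000, 0.1, 8.0
K_new = 2500
J_new, dc = downscale_weights(K, J, nu, K_new)

var_old = K * J ** 2 * nu
var_new = K_new * J_new ** 2 * nu
mean_old = K * J * nu
mean_new = K_new * J_new * nu + dc
```

The abstract's point is that this simple recipe eventually breaks down: the admissible range of K_new is bounded by the variance of the external inputs, so downscaling is not unlimited.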

16.
Recent experimental results imply that inhibitory postsynaptic potentials can play a functional role in realizing synchronization of neuronal firing in the brain. In order to examine the relation between inhibition and synchronous firing of neurons theoretically, we analyze possible effects of synchronization and sensitivity enhancement caused by inhibitory inputs to neurons with a biologically realistic model of the Hodgkin-Huxley equations. The result shows that, after an inhibitory spike, the firing probability of a single postsynaptic neuron exposed to random excitatory background activity oscillates with time. The oscillation of the firing probability can be related to synchronous firing of neurons receiving an inhibitory spike simultaneously. Further, we show that when an inhibitory spike input precedes an excitatory spike input, the presence of such preceding inhibition raises the firing probability peak of the neuron after the excitatory input. The result indicates that an inhibitory spike input can enhance the sensitivity of the postsynaptic neuron to the following excitatory spike input. Two neural network models based on these effects on postsynaptic neurons caused by inhibitory inputs are proposed to demonstrate possible mechanisms of detecting particular spatiotemporal spike patterns. Received: 15 April 1999 / Accepted in revised form: 25 November 1999

17.
Correlations in spike-train ensembles can seriously impair the encoding of information by their spatio-temporal structure. An inevitable source of correlation in finite neural networks is common presynaptic input to pairs of neurons. Recent studies demonstrate that spike correlations in recurrent neural networks are considerably smaller than expected based on the amount of shared presynaptic input. Here, we explain this observation by means of a linear network model and simulations of networks of leaky integrate-and-fire neurons. We show that inhibitory feedback efficiently suppresses pairwise correlations and, hence, population-rate fluctuations, thereby assigning inhibitory neurons the new role of active decorrelation. We quantify this decorrelation by comparing the responses of the intact recurrent network (feedback system) and systems where the statistics of the feedback channel is perturbed (feedforward system). Manipulations of the feedback statistics can lead to a significant increase in the power and coherence of the population response. In particular, neglecting correlations within the ensemble of feedback channels or between the external stimulus and the feedback amplifies population-rate fluctuations by orders of magnitude. The fluctuation suppression in homogeneous inhibitory networks is explained by a negative feedback loop in the one-dimensional dynamics of the compound activity. Similarly, a change of coordinates exposes an effective negative feedback loop in the compound dynamics of stable excitatory-inhibitory networks. The suppression of input correlations in finite networks is explained by the population averaged correlations in the linear network model: In purely inhibitory networks, shared-input correlations are canceled by negative spike-train correlations. In excitatory-inhibitory networks, spike-train correlations are typically positive. 
Here, the suppression of input correlations is not a result of the mere existence of correlations between excitatory (E) and inhibitory (I) neurons, but a consequence of a particular structure of correlations among the three possible pairings (EE, EI, II).

18.
Stimulus properties, attention, and behavioral context influence correlations between the spike times produced by a pair of neurons. However, the biophysical mechanisms that modulate these correlations are poorly understood. With a combined theoretical and experimental approach, we show that the rate of balanced excitatory and inhibitory synaptic input modulates the magnitude and timescale of pairwise spike train correlation. High rate synaptic inputs promote spike time synchrony rather than long timescale spike rate correlations, while low rate synaptic inputs produce opposite results. This correlation shaping is due to a combination of enhanced high frequency input transfer and reduced firing rate gain in the high input rate state compared to the low state. Our study extends neural modulation from single neuron responses to population activity, a necessary step in understanding how the dynamics and processing of neural activity change across distinct brain states.

19.
As the nervous system develops, there is an inherent variability in the connections formed between differentiating neurons. Despite this variability, neural circuits form that are functional and remarkably robust. One way in which neurons deal with variability in their inputs is through compensatory, homeostatic changes in their electrical properties. Here, we show that neurons also make compensatory adjustments to their structure. We analysed the development of dendrites on an identified central neuron (aCC) in the late Drosophila embryo at the stage when it receives its first connections and first becomes electrically active. At the same time, we charted the distribution of presynaptic sites on the developing postsynaptic arbor. Genetic manipulations of the presynaptic partners demonstrate that the postsynaptic dendritic arbor adjusts its growth to compensate for changes in the activity and density of synaptic sites. Blocking the synthesis or evoked release of presynaptic neurotransmitter results in greater dendritic extension. Conversely, an increase in the density of presynaptic release sites induces a reduction in the extent of the dendritic arbor. These growth adjustments occur locally in the arbor and are the result of the promotion or inhibition of growth of neurites in the proximity of presynaptic sites. We provide evidence that suggests a role for the postsynaptic activity state of protein kinase A in mediating this structural adjustment, which modifies dendritic growth in response to synaptic activity. These findings suggest that the dendritic arbor, at least during early stages of connectivity, behaves as a homeostatic device that adjusts its size and geometry to the level and the distribution of input received. The growing arbor thus counterbalances naturally occurring variations in synaptic density and activity so as to ensure that an appropriate level of input is achieved.

20.
Gaseous neurotransmitters such as nitric oxide (NO) provide a unique and often overlooked mechanism for neurons to communicate through diffusion within a network, independent of synaptic connectivity. NO provides homeostatic control of intrinsic excitability. Here we conduct a theoretical investigation of the distinguishing roles of NO-mediated diffusive homeostasis in comparison with canonical non-diffusive homeostasis in cortical networks. We find that both forms of homeostasis provide a robust mechanism for maintaining stable activity following perturbations. However, the resulting networks differ, with diffusive homeostasis maintaining substantial heterogeneity in activity levels of individual neurons, a feature disrupted in networks with non-diffusive homeostasis. This results in networks capable of representing input heterogeneity, and linearly responding over a broader range of inputs than those undergoing non-diffusive homeostasis. We further show that these properties are preserved when homeostatic and Hebbian plasticity are combined. These results suggest a mechanism for dynamically maintaining neural heterogeneity, and expose computational advantages of non-local homeostatic processes.

