Similar Documents
20 similar documents found (search time: 15 ms)
1.
A discrete model of a biological regulatory network can be represented by a discrete function that contains all available information on interactions between network components and the rules governing the evolution of the network in a finite state space. Since the state space size grows exponentially with the number of network components, analysis of large networks is a complex problem. In this paper, we introduce the notion of a symbolic steady state, which allows us to identify subnetworks that govern the dynamics of the original network in some region of state space. We state rules for explicitly constructing attractors of the system from subnetwork attractors. Using these results, we formulate sufficient conditions for the existence of multiple attractors (respectively, a cyclic attractor) based on the existence of positive (respectively, negative) feedback circuits in the graph representing the structure of the system. In addition, we discuss approaches to finding symbolic steady states. We consider dynamics derived via both synchronous and asynchronous update rules. Lastly, we illustrate the results by analyzing a model of T helper cell differentiation.
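As a minimal illustration of the objects involved (a toy example, not the paper's method or network): the steady states of a synchronous Boolean network are exactly the fixed points of its update function, and the mutual-repression circuit below — a positive feedback circuit — indeed yields multiple attractors, consistent with the sufficient condition stated above.

```python
from itertools import product

def update(state):
    """Synchronous update f(x) of a toy 3-gene network (rules invented)."""
    a, b, c = state
    return (
        int(not b),        # A and B repress each other: a positive circuit
        int(not a),
        int(a and not b),  # C reads out the state of the switch
    )

# A state x is a steady state iff f(x) == x.
steady = [s for s in product((0, 1), repeat=3) if update(s) == s]
print(steady)  # two attractors: (0, 1, 0) and (1, 0, 1)
```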

2.
Directed random graph models are frequently and successfully used to model the population dynamics of networks of cortical neurons connected by chemical synapses. Experimental results consistently reveal, however, that neuronal network topology is complex, in the sense that it differs statistically from a random network and differs across physiologically distinct classes of neurons. This suggests that complex network models whose subnetworks have distinct topological structure may be a useful, and more biologically realistic, alternative to random networks. Here we demonstrate that the balance of excitation and inhibition frequently observed in small cortical regions can transiently disappear in otherwise standard neuronal-scale models of fluctuation-driven dynamics, solely because the random network topology was replaced by a complex clustered one, while the in-degree of every neuron was left unchanged. In this network, a small subset of cells whose inhibition comes only from outside their local cluster causes bistable population dynamics, in which different clusters of these cells irregularly switch back and forth between a sparsely firing state and a highly active state. Transitions to the highly active state occur when a cluster of these cells spikes often enough to deliver strong, unbalanced positive feedback to each other. Transitions back to the sparsely firing state rely on occasional large fluctuations in the amount of non-local inhibition received. Neurons in the model are homogeneous in their intrinsic dynamics and in-degrees, but differ in the abundance of the directed feedback motifs in which they participate. Our findings suggest that (i) models and simulations should take into account complex structure that varies across neuron and synapse classes; (ii) differences in the dynamics of neurons with similar intrinsic properties may be caused by their membership in distinctive local networks; and (iii) it is important to identify neurons that share physiological properties and location but differ in their connectivity.
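A hedged sketch of how such a clustered topology can be built while holding every neuron's in-degree fixed (cluster count, in-degree K, and the locality fraction p_local are invented parameters, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(0)
N, n_clusters, K, p_local = 1000, 10, 50, 0.8
cluster = np.repeat(np.arange(n_clusters), N // n_clusters)

pre = np.empty((N, K), dtype=int)   # pre[i] lists the K presynaptic partners of i
for i in range(N):
    same = np.flatnonzero(cluster == cluster[i])
    same = same[same != i]                     # no self-connections
    other = np.flatnonzero(cluster != cluster[i])
    k_local = rng.binomial(K, p_local)         # afferents drawn from own cluster
    pre[i, :k_local] = rng.choice(same, k_local, replace=False)
    pre[i, k_local:] = rng.choice(other, K - k_local, replace=False)

# Every neuron has in-degree exactly K, yet local motif structure now
# differs systematically between neurons, as in the clustered model above.
```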

3.
A specific memory might be stored in a subnetwork consisting of a small population of neurons. To select the neurons involved in memory formation, neural competition may be essential. In this paper, we show that excitable neurons are competitive and organize into two assemblies in a recurrent network with spike-timing-dependent synaptic plasticity (STDP) and axonal conduction delays. Neural competition is established by the cooperation of spontaneously induced neural oscillation, axonal conduction delays, and STDP. We also suggest that the competition mechanism described here is one of the basic functions required to organize memory-storing subnetworks into fine-scale cortical networks.
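A minimal sketch of the pairwise STDP rule with an axonal conduction delay folded in (amplitudes and time constants are illustrative assumptions, not the paper's values); the delay shifts the effective arrival time of the presynaptic spike, which is what lets delays and STDP cooperate:

```python
import math

A_PLUS, A_MINUS = 0.01, 0.012   # potentiation / depression amplitudes
TAU_PLUS = TAU_MINUS = 20.0     # STDP time constants (ms)

def stdp_dw(t_pre, t_post, delay):
    """Weight change for one spike pair, with the presynaptic spike
    arriving at the synapse only after the axonal conduction delay."""
    dt = t_post - (t_pre + delay)
    if dt >= 0:                 # arrival before post-spike -> potentiation
        return A_PLUS * math.exp(-dt / TAU_PLUS)
    return -A_MINUS * math.exp(dt / TAU_MINUS)

print(stdp_dw(t_pre=10.0, t_post=18.0, delay=5.0))  # dt = +3 ms -> LTP
```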

4.
Does the brain recruit an entirely new cognitive network for each cognitive task? Recent data suggest instead that pre-existing repertoires of a much smaller number of canonical network components are selectively and dynamically combined to compute new cognitive tasks. To this end, we propose a novel method (graph-ICA) that seeks to extract these canonical network components from a limited number of resting-state spontaneous networks. Graph-ICA decomposes a weighted mixture of source edge-sharing subnetworks with differently weighted edges by applying independent component analysis to cross-sectional brain networks represented as graphs. We evaluated the method's plausibility in a simulation study and identified 49 intrinsic subnetworks by applying it to resting-state fMRI data. Using the derived subnetwork repertoires, we decomposed brain networks recorded during specific tasks, including motor activity, working memory exercises, and verb generation, and identified subnetworks associated with performance on these tasks. We also analyzed sex differences in the utilization of subnetworks, which proved useful in characterizing group networks. These results suggest that the method can be used to identify task-specific as well as sex-specific functional subnetworks. Moreover, graph-ICA can provide more direct information on the edge weights among brain regions working together as a network, which cannot be obtained directly through voxel-level spatial ICA.
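A conceptual sketch of the decomposition step (synthetic data; the dimensions, component count, and use of scikit-learn's FastICA are assumptions for illustration, not the authors' implementation): each subject's weighted connectivity matrix is vectorized into an edge vector, ICA across subjects recovers edge-weighted components, and each component can be folded back into a subnetwork adjacency matrix.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
n_subjects, n_rois = 120, 90
iu = np.triu_indices(n_rois, k=1)            # upper-triangle edge indices

# Stand-in for real data: one vectorized connectivity matrix per subject.
edges = rng.normal(size=(n_subjects, iu[0].size))

ica = FastICA(n_components=20, random_state=0)
mixing = ica.fit_transform(edges)            # per-subject usage of each component
components = ica.components_                 # each row = one edge-weighted subnetwork

# Fold one component back into a symmetric adjacency matrix.
subnet = np.zeros((n_rois, n_rois))
subnet[iu] = components[0]
subnet += subnet.T
```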

5.
Intrinsic neuronal and circuit properties control the responses of large ensembles of neurons by creating spatiotemporal patterns of activity that are used for sensory processing, memory formation, and other cognitive tasks. The modeling of such systems requires computationally efficient single-neuron models capable of displaying realistic response properties. We developed a set of reduced models based on difference equations (map-based models) to simulate the intrinsic dynamics of biological neurons. These phenomenological models were designed to capture the main response properties of specific types of neurons while ensuring realistic model behavior across a sufficient dynamic range of inputs. This approach allows for fast simulations and efficient parameter space analysis of networks containing hundreds of thousands of neurons of different types using a conventional workstation. Drawing on results obtained using large-scale networks of map-based neurons, we discuss spatiotemporal cortical network dynamics as a function of parameters that affect synaptic interactions and intrinsic states of the neurons.
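One widely cited example of such a map-based model (given here as a general illustration, not necessarily one of the exact models developed in the paper) is the two-dimensional Rulkov map, in which a single difference-equation iteration replaces many small ODE integration steps — the source of the computational savings described above. Parameter values below are assumptions chosen to produce bursting:

```python
alpha, sigma, mu = 4.5, 0.14, 0.001   # fast-map gain, drive, slow timescale
x, y = -1.0, -2.9                     # fast (voltage-like) and slow variables
trace = []
for _ in range(5000):
    # Both updates use the current x_n, per the map's definition.
    x, y = alpha / (1.0 + x * x) + y, y - mu * (x - sigma)
    trace.append(x)

spikes = sum(1 for v in trace if v > 0.0)   # crude spike count on the fast variable
print(spikes)
```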

6.
Chaos and synchrony in a model of a hypercolumn in visual cortex
Neurons in cortical slices emit spikes or bursts of spikes regularly in response to a suprathreshold current injection. This behavior is in marked contrast to the behavior of cortical neurons in vivo, whose response to electrical or sensory input displays a strong degree of irregularity. Correlation measurements show a significant degree of synchrony in the temporal fluctuations of neuronal activities in cortex. We explore the hypothesis that these phenomena are the result of the synchronized chaos generated by the deterministic dynamics of local cortical networks. A model of a hypercolumn in the visual cortex is studied. It consists of two populations of neurons, one inhibitory and one excitatory. The dynamics of the neurons is based on a Hodgkin-Huxley-type model of excitable voltage-clamped cells with several cellular and synaptic conductances. A slow potassium current is included in the dynamics of the excitatory population to reproduce the observed adaptation of the spike trains emitted by these neurons. The pattern of connectivity has a spatial structure which is correlated with the internal organization of hypercolumns in orientation columns. Numerical simulations of the model show that in an appropriate parameter range, the network settles in a synchronous chaotic state, characterized by a strong temporal variability of the neural activity which is correlated across the hypercolumn. Strong inhibitory feedback is essential for the stabilization of this state. These results show that the cooperative dynamics of large neuronal networks are capable of generating variability and synchrony similar to those observed in cortex. Auto-correlation and cross-correlation functions of neuronal spike trains are computed, and their temporal and spatial features are analyzed. In other parameter regimes, the network exhibits two additional states: synchronized oscillations and an asynchronous state. We use our model to study cortical mechanisms for orientation selectivity. It is shown that in a suitable parameter regime, when the input is not oriented, the network has a continuum of states, each representing an inhomogeneous population activity which is peaked at one of the orientation columns. As a result, when a weakly oriented input stimulates the network, it yields a sharp orientation tuning. The properties of the network in this regime, including the appearance of virtual rotations and broad stimulus-dependent cross-correlations, are investigated. The results agree with the predictions of the mean field theory which was previously derived for a simplified model of stochastic, two-state neurons. The relation between the results of the model and experiments in visual cortex is discussed.
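As a side note on the correlation measures mentioned above (synthetic spike trains, not the paper's code): a normalized cross-correlogram of two binned spike trains can be computed along these lines.

```python
import numpy as np

rng = np.random.default_rng(1)
t_max, bin_ms = 2000.0, 1.0                 # 2 s of spikes, 1 ms bins
bins = np.arange(0.0, t_max + bin_ms, bin_ms)
train_a = np.histogram(rng.uniform(0, t_max, 200), bins)[0].astype(float)
train_b = np.histogram(rng.uniform(0, t_max, 200), bins)[0].astype(float)

a = train_a - train_a.mean()                # mean-subtract before correlating
b = train_b - train_b.mean()
cc = np.correlate(a, b, mode="full") / (a.std() * b.std() * len(a))
lags = np.arange(-len(a) + 1, len(a))       # lag axis in bins (here, ms)
print(lags[np.argmax(cc)])                  # lag of the correlation peak
```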

7.
We explore and analyze the nonlinear switching dynamics of neuronal networks with non-homogeneous connectivity. The general significance of such transient dynamics for brain function is unclear, although decision-making processes in perception and cognition, for example, have been linked to it. The network under study here is comprised of three subnetworks of either excitatory or inhibitory leaky integrate-and-fire neurons, of which two are of the same type. The synaptic weights are arranged to establish and maintain a balance between excitation and inhibition in case of a constant external drive. Each subnetwork is randomly connected, where all neurons belonging to a particular population have the same in-degree and the same out-degree. Neurons in different subnetworks are also randomly connected with the same probability; however, depending on the type of the pre-synaptic neuron, the synaptic weight is scaled by a factor. We observed that for a certain range of the “within” versus “between” connection weights (the bifurcation parameter), the network activation spontaneously switches between the two subnetworks of the same type. This kind of dynamics has been termed “winnerless competition”, which here also has a random component. In our model, this phenomenon is well described by a set of coupled stochastic differential equations of Lotka-Volterra type that imply a competition between the subnetworks. The associated mean-field model shows the same dynamical behavior as observed in simulations of large networks comprising thousands of spiking neurons. The deterministic phase portrait is characterized by two attractors and a saddle node; the stochastic component is essentially the system's inherent multiplicative noise. We find that the dwell time distribution of the active states is exponential, indicating that the noise drives the system randomly from one attractor to the other. A similar model for a larger number of populations might suggest a general approach to study the dynamics of interacting populations of spiking networks.
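A hedged Euler-Maruyama sketch of the Lotka-Volterra picture described above (two competing populations with multiplicative noise; all coefficients are invented, chosen so that cross-inhibition exceeds self-inhibition and the system sits in a bistable winner-take-all regime where noise drives switching):

```python
import numpy as np

rng = np.random.default_rng(2)
r = np.array([1.0, 1.0])                  # growth rates
A = np.array([[1.0, 1.4],                 # cross-inhibition > self-inhibition
              [1.4, 1.0]])                # -> two attractors and a saddle
sigma, dt, steps = 0.08, 1e-3, 200_000

x = np.array([0.6, 0.4])
active = []
for _ in range(steps):
    drift = x * (r - A @ x)               # deterministic Lotka-Volterra part
    x = x + drift * dt + sigma * x * np.sqrt(dt) * rng.standard_normal(2)
    x = np.clip(x, 1e-9, None)            # keep populations non-negative
    active.append(int(x[0] > x[1]))       # which subnetwork currently "wins"

switches = int(np.abs(np.diff(active)).sum())
print(switches)                           # noise-driven switching between attractors
```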

8.
A novel neural network model is presented that learns by trial-and-error to reproduce complex sensory-motor sequences. One subnetwork, corresponding to the prefrontal cortex (PFC), is responsible for generating unique patterns of activity that represent the continuous state of sequence execution. A second subnetwork, corresponding to the striatum, associates these state-encoding patterns with the correct response at each point in the sequence execution. From a neuroscience perspective, the model is based on the known cortical and subcortical anatomy of the primate oculomotor system. From a theoretical perspective, the architecture is similar to that of a finite automaton in which outputs and state transitions are generated as a function of inputs and the current state. Simulation results for complex sequence reproduction and sequence discrimination are presented. Received: 21 July 1994/Accepted in revised form: 21 March 1995

9.
10.
Boolean networks have been widely used to model biological processes for which detailed kinetic information is lacking. Despite their simplicity, Boolean network dynamics can still capture important features of biological systems, such as stable cell phenotypes represented by steady states. For small models, steady states can be determined through exhaustive enumeration of all state transitions. As the number of nodes increases, however, the state space grows exponentially, making it difficult to find steady states. Over the last several decades, many studies have addressed how to handle this state-space explosion. Recently, increasing attention has been paid to satisfiability-solving algorithms because of their potential scalability to large networks. For large models with high maximum node connectivity, however, satisfiability solving is known to be computationally intractable. To address this problem, this paper presents a new partitioning-based method that breaks a given network down into smaller subnetworks. The steady states of each subnetwork are identified by independently applying the satisfiability-solving algorithm, and they are then combined to construct the steady states of the overall network. To apply the satisfiability-solving algorithm efficiently to each subnetwork, it is crucial to find the best partition of the network. We propose a method that makes each subnetwork as small as possible and as low as possible in maximum node connectivity, which minimizes the total cost of finding all steady states across the subnetworks. The proposed algorithm is compared with other steady-state identification methods through a number of simulations on both published small models and randomly generated large models with differing maximum node connectivities. The simulation results show that our method can scale up to several hundred nodes, even for Boolean networks with high maximum node connectivity. The algorithm is implemented and available at http://cps.kaist.ac.kr/∼ckhong/tools/download/PAD.tar.gz.
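A much-simplified sketch of the partition-and-combine idea (toy 4-node network and partition invented; the paper's method uses a SAT solver where this sketch enumerates): each block keeps only the local states that are fixed for some assignment of its external inputs — a necessary condition for a global steady state — and the surviving combinations are then verified against the full network.

```python
from itertools import product

# Toy 4-node network, partitioned into blocks B1 = {0, 1} and B2 = {2, 3}.
def f(x):
    return (
        int(not x[1]),          # node 0
        int(not x[0]),          # node 1
        int(x[0] or x[3]),      # node 2 (reads block B1)
        int(x[2]),              # node 3
    )

N, blocks = 4, [(0, 1), (2, 3)]

# Step 1: per block, keep local states fixed under SOME external assignment.
def candidates(block):
    others = [i for i in range(N) if i not in block]
    keep = []
    for local in product((0, 1), repeat=len(block)):
        for ext in product((0, 1), repeat=len(others)):
            x = [0] * N
            for i, v in zip(block, local):
                x[i] = v
            for i, v in zip(others, ext):
                x[i] = v
            fx = f(x)
            if all(fx[i] == x[i] for i in block):
                keep.append(local)
                break
    return keep

# Step 2: combine block candidates and verify against the full network.
steady = []
for combo in product(*(candidates(b) for b in blocks)):
    x = [0] * N
    for block, local in zip(blocks, combo):
        for i, v in zip(block, local):
            x[i] = v
    if list(f(x)) == x:
        steady.append(tuple(x))
print(steady)   # [(0, 1, 0, 0), (0, 1, 1, 1), (1, 0, 1, 1)]
```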

11.
Suppression of excessively synchronous beta-band oscillatory activity in the brain is believed to alleviate hypokinetic motor symptoms of Parkinson’s disease. Recently, considerable interest has been devoted to desynchronizing delayed feedback deep brain stimulation (DBS). This type of synchrony control was shown to destabilize the synchronized state in networks of simple model oscillators as well as in networks of coupled model neurons. However, the dynamics of the neural activity in Parkinson’s disease exhibits complex intermittent synchronous patterns, far from the idealized synchronous dynamics used to study delayed feedback stimulation. This study explores the action of delayed feedback stimulation on partially synchronized oscillatory dynamics, similar to what one observes experimentally in parkinsonian patients. We employ a computational model of the basal ganglia networks that reproduces the experimentally observed fine temporal structure of the synchronous dynamics. When the parameters of our model are such that the synchrony is unphysiologically strong, the feedback exerts a desynchronizing action. However, when the network is tuned to reproduce the highly variable temporal patterns observed experimentally, the same kind of delayed feedback may actually increase the synchrony. As network parameters are changed from the range which produces complete synchrony to those favoring less synchronous dynamics, desynchronizing delayed feedback may gradually turn into synchronizing stimulation. This suggests that delayed feedback DBS in Parkinson’s disease may boost rather than suppress synchronization and is unlikely to be clinically successful. The study also indicates that delayed feedback stimulation may not necessarily exhibit a desynchronization effect when acting on physiologically realistic, partially synchronous dynamics, and it provides an example of how to estimate the stimulation effect.
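A toy illustration of delayed mean-field feedback acting on a partially synchronized population (a Kuramoto-type stand-in for intuition only, not the basal ganglia model used in the study; all parameters invented):

```python
import numpy as np

rng = np.random.default_rng(3)
N, K, k_fb, dt = 200, 1.2, 2.0, 0.01
tau_steps = int(10.0 / dt)                   # feedback delay tau = 10 time units
omega = rng.normal(1.0, 0.1, N)
theta = rng.uniform(0, 2 * np.pi, N)

buffer = [0.0] * tau_steps                   # ring buffer of past mean fields
sync = []
for step in range(50_000):
    z = np.exp(1j * theta).mean()            # Kuramoto order parameter
    stim = -k_fb * buffer[step % tau_steps]  # feedback from tau time units ago
    buffer[step % tau_steps] = z.real        # record the current mean field
    theta += dt * (omega + K * np.abs(z) * np.sin(np.angle(z) - theta) + stim)
    sync.append(np.abs(z))

print(np.mean(sync[-5000:]))                 # residual synchrony under feedback
```

Depending on the delay, gain, and how synchronized the unstimulated dynamics already are, this kind of feedback can lower or raise the order parameter — the qualitative ambiguity the study is about.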

12.
Structural inhomogeneities in synaptic efficacies have a strong impact on the population response dynamics of cortical networks and are believed to play an important role in their functioning. However, little is known about how such inhomogeneities could evolve by means of synaptic plasticity. Here we present an adaptive model of a balanced neuronal network that combines two different types of plasticity, STDP and synaptic scaling. The plasticity rules yield long-tailed distributions of both synaptic weights and firing rates. Simultaneously, a highly connected subnetwork of driver neurons with strong synapses emerges. Coincident spiking of several driver cells can evoke population bursts, and driver cells have dynamical properties similar to those of experimentally observed leader neurons. Our model allows us to observe the delicate interplay between the structural and dynamical properties of the emergent inhomogeneities. It is simple, robust to parameter changes, and able to explain a multitude of experimental findings within one basic network.
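A schematic sketch of how the two rules can be combined in simulation (learning rates, targets, and the stand-in STDP term are all invented; the paper's actual rules are spike-based): STDP shapes individual weights, while a slow multiplicative scaling renormalizes each neuron's total input.

```python
import numpy as np

def stdp_step(W, dw):
    """Apply precomputed pairwise STDP updates, keeping weights positive."""
    return np.clip(W + dw, 0.0, None)

def synaptic_scaling(W, target_sum=1.0, rate=0.01):
    """Nudge each row (one neuron's incoming weights) toward a fixed sum."""
    row_sums = W.sum(axis=1, keepdims=True) + 1e-12
    return W * (1.0 + rate * (target_sum / row_sums - 1.0))

rng = np.random.default_rng(4)
W = rng.exponential(0.01, size=(100, 100))
for _ in range(1000):
    dw = rng.normal(0.0, 1e-4, W.shape)     # stand-in for spike-pair STDP terms
    W = synaptic_scaling(stdp_step(W, dw))
print(W.sum(axis=1).mean())                 # input sums settle near target_sum
```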

13.
14.
Although recent reports have suggested that synchronous neuronal UP states are mediated by astrocytic activity, the mechanism responsible for this remains unknown. Astrocytic glutamate release synchronously depolarizes adjacent neurons even when synaptic transmission is blocked. The purpose of this study was to confirm that astrocytic depolarization, propagated through synaptic connections, can lead to synchronous neuronal UP states. We applied astrocytic currents to local neurons in a neural network consisting of model cortical neurons. Our results show that astrocytic depolarization may generate synchronous UP states lasting hundreds of milliseconds, even in neurons that do not directly receive glutamate release from the activated astrocyte.

15.
16.
Short-term synaptic depression (STD) and spike-frequency adaptation (SFA) are two basic physiological cortical mechanisms for reducing the system's excitability under repetitive stimulation. The computational implications of each of these mechanisms for information processing have been studied in detail, but the dynamics arising from their combination in a realistic biological scenario have not. We show here, both experimentally with intracellular recordings from cortical slices of the ferret and computationally using a biologically realistic model of a feedforward cortical network, that STD combined with presynaptic SFA results in the resensitization of cortical synaptic efficacies in the course of sustained stimulation. This fundamental effect is then shown in the computational model to have important implications for the network response to time-varying inputs. The main findings are: (1) the addition of SFA to the model endowed with STD improves the network's sensitivity to the degree of synchrony in the incoming inputs; (2) presynaptic SFA, whether slow or fast, combined with STD results in postsynaptic neurons responding briskly to abrupt changes in the presynaptic input current and ignoring sustained stimulation, much more effectively than with either SFA or STD alone; (3) for slow presynaptic SFA, postsynaptic responses to strong inputs decrease in inverse proportion to the input, whereas transient postsynaptic responses to weak presynaptic input currents are strongly facilitated, thus enhancing the system's sensitivity to subtle changes in weak presynaptic inputs. Taken together, these results suggest that in systems designed to respond to temporal aspects of the input, SFA and STD might constitute two necessary, linked elements whose simultaneous interplay is important for the performance of the system.
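A phenomenological sketch of the resensitization effect (Tsodyks-Markram-style depression driven by an adaptively reduced presynaptic rate; all constants are invented): as adaptation lowers the presynaptic rate, the depression variable recovers, so the per-spike synaptic efficacy dips and then climbs back during sustained stimulation.

```python
dt, T = 1.0, 2000.0                       # ms
tau_rec, U = 400.0, 0.5                   # depression recovery, release fraction
tau_a, g_a = 300.0, 0.8                   # adaptation time constant and gain

r0 = 40.0 / 1000.0                        # sustained presynaptic drive (spikes/ms)
x, a = 1.0, 0.0                           # available resources, adaptation level
efficacy = []
for _ in range(int(T / dt)):
    r = r0 / (1.0 + g_a * a)              # SFA: adaptation divides the rate
    a += dt * (r - a / tau_a)             # adaptation builds with activity
    x += dt * ((1.0 - x) / tau_rec - U * x * r)   # STD: deplete and recover
    efficacy.append(U * x)                # per-spike synaptic efficacy

print(min(efficacy), efficacy[-1])        # transient depression, then recovery
```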

17.
It has been shown that dynamic recurrent neural networks are successful in identifying the complex mapping relationship between full-wave-rectified electromyographic (EMG) signals and limb trajectories during complex movements. These connectionist models include two types of adaptive parameters: the interconnection weights between the units and the time constants associated with each neuron-like unit; they are governed by continuous-time equations. Due to their internal structure, these models are particularly appropriate for solving dynamical tasks (with time-varying input and output signals). We show in this paper that the introduction of a modular organization, dedicated to different aspects of the dynamical mapping and including privileged communication channels, can refine the architecture of these recurrent networks. We first divide the initial individual network into two communicating subnetworks. These two modules receive the same EMG signals as input but are involved in different identification tasks related to position and acceleration. We then show that the introduction of an artificial distance in the model (using a Gaussian modulation factor of weights) induces a reduced modular architecture based on the self-elimination of null synaptic weights. Moreover, this self-selected reduced model based on two subnetworks performs the identification task better than the original single network while using fewer free parameters (better learning curve and better identification quality). We also show that this modular network exhibits several features that can be considered biologically plausible after the learning process: self-selection of a specific inhibitory communicating path between both subnetworks, appearance of tonic and phasic neurons, and a coherent distribution of the values of the time constants within each subnetwork. Received: 17 September 2001 / Accepted in revised form: 15 January 2002

18.
19.
A new structure and training method for multilayer neural networks is presented. The proposed method is based on cascade training of subnetworks, optimizing the weights layer by layer. The training procedure is completed in two steps. First, a subnetwork with m inputs and n outputs, matching the format of the training samples, is trained on the training samples. Second, another subnetwork with n inputs and n outputs is trained, taking the first subnetwork's outputs as its inputs and the training samples' outputs as the desired outputs. Finally, the two trained subnetworks are connected, creating a trained multilayer neural network. Numerical simulation results based on both the linear least squares back-propagation (LSB) and traditional back-propagation (BP) algorithms demonstrate the efficiency of the proposed method.
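A linear toy version of the two-step cascade (synthetic data; the paper's subnetworks are nonlinear and trained with LSB or BP, whereas this sketch uses plain least squares for each stage):

```python
import numpy as np

rng = np.random.default_rng(5)
m, n, samples = 8, 3, 500
X = rng.normal(size=(samples, m))                                   # inputs
Y = X @ rng.normal(size=(m, n)) + 0.1 * rng.normal(size=(samples, n))  # targets

# Step 1: train subnetwork 1 (m -> n) directly on the samples.
W1, *_ = np.linalg.lstsq(X, Y, rcond=None)
H = X @ W1                                  # subnetwork 1 outputs

# Step 2: train subnetwork 2 (n -> n) on subnetwork 1's outputs vs the targets.
W2, *_ = np.linalg.lstsq(H, Y, rcond=None)

# Cascade the two trained subnetworks and evaluate.
Y_hat = H @ W2
print(np.mean((Y - Y_hat) ** 2))
```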

20.
Randomly-connected networks of integrate-and-fire (IF) neurons are known to display asynchronous irregular (AI) activity states, which resemble the discharge activity recorded in the cerebral cortex of awake animals. However, it is not clear whether such activity states are specific to simple IF models, or whether they also exist in networks where neurons are endowed with complex intrinsic properties similar to those seen in electrophysiological measurements. Here, we investigate the occurrence of AI states in networks of nonlinear IF neurons, such as the adaptive exponential IF (Brette-Gerstner-Izhikevich) model. This model can display intrinsic properties such as low-threshold spiking (LTS), regular spiking (RS) or fast spiking (FS). We successively investigate the oscillatory and AI dynamics of thalamic, cortical and thalamocortical networks using such models. AI states can be found in each case, sometimes with surprisingly small network sizes of the order of a few tens of neurons. We show that the presence of LTS neurons in cortex or in thalamus explains the robust emergence of AI states for relatively small network sizes. Finally, we investigate the role of spike-frequency adaptation (SFA). In cortical networks with strong SFA in RS cells, the AI state is transient, but when SFA is reduced, AI states can be self-sustained for long times. In thalamocortical networks, AI states are found when the cortex is itself in an AI state, but with strong SFA, the thalamocortical network displays Up and Down state transitions, similar to intracellular recordings during slow-wave sleep or anesthesia. Self-sustained Up and Down states could also be generated by two-layer cortical networks with LTS cells. These models suggest that intrinsic properties such as adaptation and low-threshold bursting activity are crucial for the genesis and control of AI states in thalamocortical networks.
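For reference, a minimal Euler integration of the adaptive exponential IF equations named above (textbook-style parameter values, used here as illustrative assumptions rather than the paper's settings):

```python
import math

C, gL, EL = 281.0, 30.0, -70.6            # pF, nS, mV
VT, DT, Vr, Vpeak = -50.4, 2.0, -70.6, 0.0
a, b, tau_w = 4.0, 80.5, 144.0            # nS, pA, ms
I, dt = 800.0, 0.1                        # pA, ms

V, w, spikes = EL, 0.0, []
for step in range(int(1000.0 / dt)):      # 1 s of simulated time
    exp_arg = min((V - VT) / DT, 20.0)    # clip to avoid numeric overflow
    dV = (-gL * (V - EL) + gL * DT * math.exp(exp_arg) - w + I) / C
    dw = (a * (V - EL) - w) / tau_w
    V += dt * dV
    w += dt * dw
    if V >= Vpeak:                        # spike: reset V, increment adaptation
        V = Vr
        w += b
        spikes.append(step * dt)

print(len(spikes))                        # an adapting (SFA) spike train
```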

