Similar Documents
20 similar documents retrieved.
1.
《Bio Systems》2007,87(1-3):53-62
The dynamics of activity in interacting neural populations is simulated with networks of Wilson–Cowan oscillators. Two extreme cases of connection architecture are considered: (1) 1D and 2D regular, homogeneous grids with local connections and (2) sparse random coupling. Propagating waves appear in the network under stationary external input, and a regime of partial synchronization is obtained for periodic input. In the case of random coupling, about 60% of the neural populations show oscillatory activity, and some of these oscillations are synchronous. The role of different types of dynamics in information processing is discussed; in particular, we discuss the regime of partial synchronization in the context of cortical microcircuits.
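A minimal sketch (Python, with illustrative parameter values not taken from the paper) of two locally coupled Wilson–Cowan excitatory/inhibitory units, the building block of the grids described above:

```python
import numpy as np

def sigmoid(x, a=1.2, theta=2.8):
    """Wilson-Cowan sigmoid response function (single illustrative parameter set)."""
    return 1.0 / (1.0 + np.exp(-a * (x - theta)))

def wilson_cowan_step(E, I, P, dt=0.01, tau=1.0,
                      c_ee=16.0, c_ei=12.0, c_ie=15.0, c_ii=3.0, Q=0.0):
    """One Euler step for a single excitatory/inhibitory Wilson-Cowan pair."""
    dE = (-E + sigmoid(c_ee * E - c_ei * I + P)) / tau
    dI = (-I + sigmoid(c_ie * E - c_ii * I + Q)) / tau
    return E + dt * dE, I + dt * dI

# Two oscillators on a tiny "grid", coupled through their excitatory rates.
n, steps, k = 2, 20000, 0.5          # k: illustrative coupling strength
E = np.array([0.1, 0.4]); I = np.array([0.2, 0.1])
P_ext = 1.25                          # stationary external input
trace = np.zeros((steps, n))
for t in range(steps):
    coupling = k * (E[::-1] - E)      # input from the neighbouring population
    for j in range(n):
        E[j], I[j] = wilson_cowan_step(E[j], I[j], P_ext + coupling[j])
    trace[t] = E
print("final excitatory rates:", trace[-1])
```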

2.
The synchronization frequency of neural networks and its dynamics play important roles in deciphering the working mechanisms of the brain. It is widely recognized that the properties of functional network synchronization and its dynamics are jointly determined by network topology, network connection strength, i.e., the connection strength of different edges in the network, and external input signals, among other factors. However, a mathematical and computational characterization of the relationships between network synchronization frequency and these three important factors is still lacking. This paper presents a novel computational simulation framework to quantitatively characterize the relationships between neural network synchronization frequency, network attributes, and input signals. Specifically, we constructed a series of neural networks, including simulated small-world networks, a real functional working-memory network derived from functional magnetic resonance imaging, and real large-scale structural brain networks derived from diffusion tensor imaging, and performed synchronization simulations on these networks via the Izhikevich neuron spiking model. Our experiments demonstrate that both the network synchronization strength and the synchronization frequency change with the combination of the input signal frequency and the network's self-synchronization frequency. In particular, our extensive experiments show that the network synchronization frequency can be represented as a linear combination of the network self-synchronization frequency and the input signal frequency. This finding may reflect an intrinsically preserved principle across different types of neural systems, offering novel insights into the working mechanisms of neural systems.
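A hedged sketch of the kind of simulation described: an Izhikevich spiking network (standard regular-spiking parameters) on an assumed sparse random coupling matrix, driven by a sinusoidal input, with the network synchronization frequency read off the population mean potential. Network size, coupling values and the input frequency are illustrative assumptions, not those of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
N, T, dt = 200, 2000, 1.0                 # neurons, time steps (ms), step size (ms)
a, b, c, d = 0.02, 0.2, -65.0, 8.0        # regular-spiking Izhikevich parameters
W = (rng.random((N, N)) < 0.1) * 0.5      # sparse random coupling (illustrative)
np.fill_diagonal(W, 0.0)

v = np.full(N, -65.0); u = b * v
f_in = 5.0                                 # assumed input frequency (Hz)
mean_v = np.zeros(T)
for t in range(T):
    I_ext = 10.0 * np.sin(2 * np.pi * f_in * t * dt / 1000.0) + rng.normal(0, 2, N)
    fired = v >= 30.0
    v[fired] = c; u[fired] += d
    I_syn = W @ fired.astype(float)        # pulse coupling from neurons that just fired
    # Izhikevich (2003) membrane update, split into two half-steps for stability
    for _ in range(2):
        v += 0.5 * dt * (0.04 * v**2 + 5 * v + 140 - u + I_ext + I_syn)
    u += dt * a * (b * v - u)
    mean_v[t] = v.mean()

# Dominant frequency of the population mean potential ~ network synchronization frequency
spec = np.abs(np.fft.rfft(mean_v - mean_v.mean()))
freqs = np.fft.rfftfreq(T, d=dt / 1000.0)
print("dominant network frequency (Hz):", freqs[spec.argmax()])
```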

3.
We investigate the phenomenon of epileptiform activity using a discrete model of cortical neural networks. Our model is reduced to the elementary features of neurons and assumes simplified dynamics of action potentials and postsynaptic potentials. The discrete model provides a comparatively high simulation speed, which allows phase diagrams to be rendered and large neural networks to be simulated in reasonable time. Furthermore, the reduction to the basic features of neurons provides insight into the essentials of a possible mechanism of epilepsy. Our computer simulations suggest that the detailed dynamics of postsynaptic and action potentials are not indispensable for obtaining epileptiform behavior at the system level. Simulations of autonomously evolving networks exhibit a regime in which the network dynamics spontaneously switch between fluctuating and oscillating behavior and produce isolated network spikes without external stimulation. Inhibitory neurons were found to play an important part in the synchronization of neural firing: an increased number of synapses established by inhibitory neurons onto other neurons induces a transition to the spiking regime. A decreased frequency accompanying the hypersynchronous population activity occurred only with slow inhibitory postsynaptic potentials.

4.
Borisyuk R 《Bio Systems》2002,67(1-3):3-16
We study the dynamics of activity in neural networks of enhanced integrate-and-fire elements (with random noise, refractory periods, signal propagation delay, decay of postsynaptic potentials, etc.). We consider networks composed of two interacting populations of excitatory and inhibitory neurons with all-to-all or random sparse connections. Computer simulations show that the regime of regular oscillations is very stable over a broad range of parameter values. In particular, oscillations are possible even in the case of very sparse, randomly distributed inhibitory connections and high background activity. We describe two scenarios for the onset of oscillations, analogous to the Andronov-Hopf and saddle-node-on-limit-cycle bifurcations of dynamical systems. The role of oscillatory dynamics in information encoding and processing is discussed.
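A minimal sketch of an "enhanced" leaky integrate-and-fire network with random noise, refractory periods and a fixed propagation delay, along the lines described above. All parameter values and connectivity statistics are illustrative assumptions, not those of the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
N_E, N_I, T, dt = 80, 20, 5000, 0.1          # illustrative sizes, 0.1 ms steps
N = N_E + N_I
tau_m, V_rest, V_thr, V_reset, t_ref = 20.0, 0.0, 20.0, 0.0, 2.0   # (ms, mV)
delay_steps = int(1.5 / dt)                   # 1.5 ms propagation delay

# Sparse random connectivity: excitatory PSPs positive, inhibitory PSPs negative
J = np.where(rng.random((N, N)) < 0.2, 0.4, 0.0)
J[:, N_E:] *= -5.0                            # inhibition stronger per synapse
np.fill_diagonal(J, 0.0)

V = rng.uniform(V_rest, V_thr, N)
refrac = np.zeros(N)                          # remaining refractory time (ms)
spike_buffer = np.zeros((delay_steps, N), bool)
spike_count = np.zeros(N)
for t in range(T):
    delayed = spike_buffer[t % delay_steps]   # spikes emitted delay_steps ago
    I_syn = J @ delayed.astype(float)         # instantaneous PSP jumps (mV)
    noise = rng.normal(1.2, 0.5, N)           # noisy background drive (mV/ms)
    leak = dt * (-(V - V_rest) / tau_m + noise)
    active = refrac <= 0.0
    V[active] += leak[active] + I_syn[active]
    refrac[~active] -= dt
    fired = V >= V_thr
    V[fired] = V_reset
    refrac[fired] = t_ref
    spike_buffer[t % delay_steps] = fired
    spike_count += fired
print("mean firing rate (Hz):", spike_count.mean() / (T * dt / 1000.0))
```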

5.
Currently, large-scale networks derived from dissociated neurons growing and developing in vitro on extracellular micro-transducer devices are the gold-standard experimental model for studying basic neurophysiological mechanisms involved in the formation and maintenance of neuronal cell assemblies. However, in vitro studies have been limited to recording the electrophysiological activity generated by two-dimensional (2D) neural networks. Given the intricate relationship between structure and dynamics, a significant improvement is needed to investigate the formation and developmental dynamics of three-dimensional (3D) networks. In this work, a novel experimental platform is presented in which 3D hippocampal or cortical networks are coupled to planar Micro-Electrode Arrays (MEAs). The 3D networks are realized by seeding neurons in a scaffold of glass microbeads (30-40 µm in diameter) on which the neurons grow and form complex interconnected 3D assemblies. In this way, it is possible to design engineered 3D networks made up of 5-8 layers with an expected final cell density. The increased complexity of the morphological organization of the 3D assembly enhances the electrophysiological patterns displayed by this type of network. Compared with standard 2D networks, where highly stereotyped bursting activity emerges, the 3D structure alters the bursting activity in duration and frequency and allows more random spiking activity to be observed. In this sense, the developed 3D model more closely resembles in vivo neural networks.

6.
7.
We study spike–burst neural activity and investigate its transitions to synchronized states under electrical coupling. Our reported results include the following: (1) synchronization of spike–burst activity is a multi-time-scale phenomenon, and burst synchrony is easier to achieve than spike synchrony; (2) synchrony in networks with time-delayed connections can be achieved at lower coupling strengths than in the same network with instantaneous couplings; (3) the introduction of parameter dispersion into the network destroys synchrony in the strict sense, but the network dynamics in major regimes of the parameter space can still be effectively captured by a mean-field approach if the couplings are excitatory. Our results on the synchronization of spiking networks are general in nature and will aid the development of minimal models of neuronal populations, which are the building blocks of large-scale brain networks relevant to cognitive processing.
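The abstract does not name the neuron model; a common choice for spike-burst dynamics is the Hindmarsh-Rose burster. Below is a hedged sketch of two such bursters with electrical (gap-junction) coupling g_c(x_j - x_i); comparing the slow variables illustrates why burst synchrony is typically reached before spike synchrony. Parameters are standard textbook values, not the paper's.

```python
import numpy as np

def hr_deriv(state, I_app, g_c, partner_x):
    """Hindmarsh-Rose burster with an electrical (gap-junction) coupling term."""
    x, y, z = state
    dx = y - x**3 + 3.0 * x**2 - z + I_app + g_c * (partner_x - x)
    dy = 1.0 - 5.0 * x**2 - y
    dz = 0.006 * (4.0 * (x + 1.6) - z)
    return np.array([dx, dy, dz])

dt, steps, g_c = 0.01, 300000, 0.2          # g_c: illustrative coupling strength
s1 = np.array([-1.0, 0.0, 2.0])             # slightly different initial conditions
s2 = np.array([-1.2, 0.5, 2.1])
burst_diff = []
for t in range(steps):
    d1 = hr_deriv(s1, 3.0, g_c, s2[0])
    d2 = hr_deriv(s2, 3.0, g_c, s1[0])
    s1, s2 = s1 + dt * d1, s2 + dt * d2
    burst_diff.append(abs(s1[2] - s2[2]))    # z is the slow (burst) variable

# Small late-time |z1 - z2| indicates burst synchrony even if individual
# spikes within bursts are not yet aligned.
print("late-time mean |z1 - z2|:", np.mean(burst_diff[-50000:]))
```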

8.
The circuitry of cortical networks involves interacting populations of excitatory (E) and inhibitory (I) neurons whose relationships are now known to a large extent. Inputs to E- and I-cells may originate in remote or local cortical areas. We consider a rudimentary model involving E- and I-cells. One of our goals is to test an analytic approach to finding firing rates in neural networks without using a diffusion approximation; to this end we consider in detail networks of excitatory neurons with leaky integrate-and-fire (LIF) dynamics. A simple measure of synchronization, denoted S(q), where q is between 0 and 100, is introduced. Fully connected E-networks have a strong tendency to become dominated by synchronously firing groups of cells, except when inputs are relatively weak. We observed random or asynchronous firing in such networks for diverse sets of parameter values, and when such firing patterns were found, the analytical approach was often able to predict average neuronal firing rates accurately. We also considered several properties of E-E networks, distinguishing several kinds of firing pattern, including those with silences before or after periods of intense activity and those with periodic synchronization. We investigated the occurrence of synchronized firing with respect to changes in the internal excitatory postsynaptic potential (EPSP) magnitude in a network of 100 neurons with the remaining parameters fixed. When the internal EPSP size was below a certain value, synchronization was absent; the amount of synchronization then increased slowly with EPSP amplitude until, at a particular EPSP size, it increased abruptly, with S(5) attaining the maximum value of 100%. We also obtained network frequency-transfer characteristics for various network sizes and found a linear dependence of firing frequency on the external afferent frequency over wide ranges, with non-linear effects at lower input frequencies. The theory may also be applied to sparsely connected networks, whose firing behaviour was found to change abruptly as the probability of a connection passed through a critical value. The analytical method was also found to be useful for a feed-forward excitatory network and for a network of excitatory and inhibitory neurons.
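As an illustration only (the abstract does not give the definition of S(q), and the index below is not it), a simple binning-based population-synchrony measure can be computed from pooled spike times; synchronous firing concentrates spikes in few bins and inflates the variance of the binned population rate relative to its mean.

```python
import numpy as np

def synchrony_index(spike_times, n_neurons, t_max, bin_ms=5.0):
    """Illustrative synchrony index: variance of the binned population rate
    normalised by its mean (Fano-factor-like). NOT the paper's S(q)."""
    bins = np.arange(0.0, t_max + bin_ms, bin_ms)
    counts, _ = np.histogram(spike_times, bins)
    rate = counts / n_neurons
    return rate.var() / max(rate.mean(), 1e-12)

rng = np.random.default_rng(2)
# Toy data: 100 neurons over 1 s, asynchronous (Poisson-like) vs. fully synchronous
async_spikes = rng.uniform(0, 1000.0, 5000)
sync_spikes = np.repeat(np.arange(0, 1000.0, 20.0), 100)   # all cells fire together
print("asynchronous:", synchrony_index(async_spikes, 100, 1000.0))
print("synchronous: ", synchrony_index(sync_spikes, 100, 1000.0))   # much larger
```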

9.
Turova TS 《Bio Systems》2002,67(1-3):281-286
The dynamical random graphs associated with a certain class of biological neural networks are introduced and studied. We describe the phase diagram revealing the single-neuron parameters and synaptic strengths that allow the formation of stable, strongly connected, large groups of neurons. It is shown that cycles are the most stable structures when the Hebb rule is incorporated into the dynamics of a network of excitatory neurons. We discuss the role of cycles in the synchronization of neuronal activity.

10.
We investigate information processing in randomly connected recurrent neural networks. It has been shown previously that the computational capabilities of these networks are maximized when the recurrent layer is close to the border between stable and unstable dynamics, the so-called edge of chaos. The reasons for this maximized performance, however, are not completely understood. We adopt an information-theoretic framework and are, for the first time, able to quantify the computational capabilities between elements of these networks directly as they undergo the phase transition to chaos. Specifically, we present evidence that both information transfer and storage in the recurrent layer are maximized close to this phase transition, providing an explanation for why guiding the recurrent layer toward the edge of chaos is computationally useful. As a consequence, our study suggests self-organized, input-driven ways of improving performance in recurrent neural networks. Moreover, the networks we study share important features with biological systems, such as feedback connections and online computation on input streams. A key example is the cerebral cortex, which has also been shown to operate close to the edge of chaos. Consequently, the behavior of model systems as studied here is likely to shed light on why biological systems are tuned to this specific regime.
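A minimal echo-state-network sketch of the edge-of-chaos idea: rescaling a random recurrent weight matrix to a spectral radius near 1 places the reservoir near the stability boundary, where a linear readout best reconstructs past inputs. Sizes, sparsity and the 10-step memory probe are illustrative assumptions, not the study's information-theoretic measures.

```python
import numpy as np

rng = np.random.default_rng(3)
N, T = 300, 2000

def make_reservoir(spectral_radius):
    """Sparse random recurrent matrix rescaled to a target spectral radius;
    values near 1.0 put the autonomous dynamics near the edge of chaos."""
    W = rng.normal(0, 1, (N, N)) * (rng.random((N, N)) < 0.1)
    eig = np.max(np.abs(np.linalg.eigvals(W)))
    return W * (spectral_radius / eig)

def run_reservoir(W, u):
    """Drive a tanh reservoir with the scalar input stream u; return state trajectory."""
    W_in = rng.normal(0, 0.5, N)
    x = np.zeros(N)
    states = np.zeros((len(u), N))
    for t, u_t in enumerate(u):
        x = np.tanh(W @ x + W_in * u_t)
        states[t] = x
    return states

u = rng.uniform(-0.5, 0.5, T)                      # random input stream
for rho in (0.5, 0.95, 1.4):                       # stable, near-critical, chaotic
    states = run_reservoir(make_reservoir(rho), u)
    # Memory proxy: how well a linear readout reconstructs the input 10 steps back
    X, y = states[20:], u[10:-10]
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    err = np.mean((X @ w - y) ** 2) / np.var(y)
    print(f"spectral radius {rho}: normalised reconstruction error {err:.3f}")
```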

11.
Rhythmic activity of the brain often depends on the synchronized spiking of interneuronal networks interacting with principal neurons. The quest for physiological mechanisms regulating network synchronization has therefore been firmly focused on synaptic circuits. However, it has recently emerged that synaptic efficacy can be influenced by astrocytes, which release signalling molecules into their macroscopic vicinity. To understand how this volume-limited synaptic regulation can affect oscillations in neural populations, we explore an established artificial neural network mimicking hippocampal basket cells receiving inputs from pyramidal cells. We find that network oscillation frequencies and average cell firing rates are resilient to changes in excitatory input even when such changes occur in a significant proportion of the participating interneurons, be they randomly distributed or clustered in space. The astroglia-like, volume-limited regulation of excitatory synaptic input appears to preserve network synchronization better than a similar action spread evenly across the network, while leading to a structural segmentation of the network into cell subgroups with distinct firing patterns. These observations provide new insights into the basic principles of neural network control by astroglia.

12.
The influence of topology on the asymptotic states of a network of interacting chemical species has been studied by simulating its time evolution. Random and scale-free networks were designed to capture relevant features of activation-deactivation reaction networks (mapping signal transduction networks), and the system of ordinary differential equations associated with the dynamics was solved numerically. We analysed the stationary states of the dynamics as a function of the network's connectivity and of the distribution of chemical species on the network, and found important differences between the two topologies in the regime of low connectivity. In particular, only in weakly connected scale-free networks is it possible to find zero-activity patterns as stationary states of the dynamics, which act as signal off-states. The asymptotic features of random and scale-free networks become similar as the connectivity increases.

13.
14.
Deriving tractable reduced equations of biological neural networks that capture the macroscopic dynamics of sub-populations of neurons has been a longstanding problem in computational neuroscience. In this paper, we propose a reduction of large-scale multi-population stochastic networks based on mean-field theory. For a wide class of spiking neuron models, we derive a system of differential equations of the usual Wilson-Cowan type describing the macroscopic activity of populations, under the assumption that synaptic integration is linear with random coefficients. The reduction involves one unknown function, the effective non-linearity of the network of populations, which can be determined analytically in simple cases and computed numerically in general. This function depends on the underlying properties of the cells, in particular the noise level. Appropriate parameters and functions involved in the reduction are given for different neuron models: the McKean, FitzHugh-Nagumo and Hodgkin-Huxley models. Simulations of the reduced model show precise agreement with the macroscopic dynamics of the networks for the first two models.
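A purely illustrative sketch of the reduction idea (not the paper's derivation): an "effective non-linearity" is tabulated numerically as the firing rate of a single noisy FitzHugh-Nagumo neuron versus constant input, and then used inside a Wilson-Cowan-type rate equation. All parameters below are assumed.

```python
import numpy as np

rng = np.random.default_rng(4)

def fhn_firing_rate(I, sigma=0.04, dt=0.01, T=50000):
    """Empirical firing rate of a noisy FitzHugh-Nagumo neuron at constant input I.
    Used here as an illustrative stand-in for the effective non-linearity F."""
    v, w, spikes, above = -1.0, -0.5, 0, False
    for _ in range(T):
        dv = v - v**3 / 3.0 - w + I
        dw = 0.08 * (v + 0.7 - 0.8 * w)
        v += dt * dv + sigma * np.sqrt(dt) * rng.normal()
        w += dt * dw
        if v > 1.0 and not above:        # count upward threshold crossings
            spikes += 1
            above = True
        elif v < 0.0:
            above = False
    return spikes / (T * dt)

# Tabulate F(I) on a grid, then iterate the Wilson-Cowan-type rate equation
#   tau * dr/dt = -r + F(J * r + I_ext)
I_grid = np.linspace(0.0, 1.0, 7)
F_tab = np.array([fhn_firing_rate(I) for I in I_grid])
F = lambda I: np.interp(I, I_grid, F_tab)

r, tau, J, I_ext, dt = 0.0, 10.0, 0.8, 0.3, 0.1
for _ in range(5000):
    r += dt * (-r + F(J * r + I_ext)) / tau
print("stationary population rate:", r)
```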

15.
MOTIVATION: Although there have been significant advances in elucidating the collective behaviors of biological organisms in recent years, the essential mechanisms by which collective rhythms arise remain to be fully understood, and how to synchronize multicellular networks by an artificial control strategy has not yet been well explored. RESULTS: A control strategy is developed to synchronize gene regulatory networks in a multicellular system when spontaneous synchronization cannot be achieved. We first construct an impulsive control system to model the process of periodically injecting coupling substances, with constant or random impulsive control amounts, into the common extracellular medium, and we study its effects on the dynamics of individual cells. We derive the threshold of synchronization induced by the periodic substance input; the multicellular network can therefore be synchronized to a specific collective behavior by changing the frequency and amplitude of the periodic stimuli. Moreover, a two-stage scheme is proposed to facilitate synchronization. We show that the presence of the external input may also initiate different dynamics. A multicellular network of coupled repressilators is used to show the effectiveness of the proposed method. The results not only provide a perspective for understanding the interactions between external stimuli and intrinsic physiological rhythms, but may also lead to the development of realistic artificial control strategies and medical therapies. CONTACT: aihara@sat.t.u-tokyo.ac.jp
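A hedged sketch of the impulsive-input idea on the standard (dimensionless) Elowitz-Leibler repressilator: a fixed amount of one protein is injected at a fixed period, mimicking periodic pulses of coupling substance added to the medium. The quorum-sensing coupling and the synchronization-threshold analysis of the paper are omitted, and all control parameters are illustrative.

```python
import numpy as np

def repressilator_step(m, p, dt=0.01, alpha=216.0, alpha0=0.2, beta=5.0, n=2.0):
    """One Euler step of the Elowitz-Leibler repressilator (dimensionless form)."""
    rep = np.roll(p, 1)                       # protein that represses each gene
    dm = -m + alpha / (1.0 + rep**n) + alpha0
    dp = -beta * (p - m)
    return m + dt * dm, p + dt * dp

dt, steps = 0.01, 60000
impulse_period, impulse_amount = 5.0, 20.0    # illustrative control parameters
m = np.array([1.0, 2.0, 3.0]); p = np.array([2.0, 1.0, 3.0])
trace = np.zeros(steps)
for t in range(steps):
    m, p = repressilator_step(m, p)
    # Impulsive control: periodically inject a fixed amount of the first protein,
    # mimicking a pulse of coupling substance added to the extracellular medium.
    if t > 0 and (t * dt) % impulse_period < dt:
        p[0] += impulse_amount
    trace[t] = p[0]
print("late-time range of p[0]:", trace[-20000:].min(), trace[-20000:].max())
```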

16.
Recent experimental measurements have demonstrated that spontaneous neural activity in the absence of explicit external stimuli has remarkable spatiotemporal structure. This spontaneous activity has also been shown to play a key role in the response to external stimuli. To better understand this role, we proposed a viewpoint, "memories-as-bifurcations," that differs from the traditional "memories-as-attractors" viewpoint. In the memories-as-bifurcations view, memory recall occurs when the spontaneous neural activity is changed to an appropriate output activity upon application of an input, known as a bifurcation in dynamical systems theory, wherein the input modifies the flow structure of the neural dynamics. Learning, then, is a process that shapes neural dynamical systems such that a target output pattern is generated as an attractor upon a given input. Based on this viewpoint, we introduce an associative memory model with a sequential learning process. Using simple Hebbian-type learning, the model is able to memorize a large number of input/output mappings. The neural dynamics shaped through learning exhibit different bifurcations that make the requested targets stable as the input is increased, and the neural activity in the absence of input shows chaotic dynamics with occasional approaches to the memorized target patterns. These results suggest that such dynamics facilitate the bifurcations to each target attractor upon application of the corresponding input, thus increasing the capacity for learning. This theoretical finding about the behavior of spontaneous neural activity is consistent with recent experimental observations in which neural activity without stimuli wanders among patterns evoked by previously applied signals. In addition, the neural networks shaped by learning properly reflect the correlations of input and target-output patterns, in a manner similar to those designed in our previous study.
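For contrast with the bifurcation-based view, here is a classic Hopfield-style illustration of simple Hebbian (outer-product) learning and attractor recall; this is the traditional "memories-as-attractors" baseline, not the paper's sequential-learning, bifurcation-based model.

```python
import numpy as np

rng = np.random.default_rng(5)
N, P = 100, 10                                 # neurons, stored patterns

# Classic Hebbian (outer-product) storage of random +/-1 patterns
patterns = rng.choice([-1.0, 1.0], size=(P, N))
W = (patterns.T @ patterns) / N
np.fill_diagonal(W, 0.0)

def recall(cue, steps=20):
    """Synchronous sign-threshold updates starting from a noisy cue."""
    s = cue.copy()
    for _ in range(steps):
        s = np.sign(W @ s)
        s[s == 0] = 1.0
    return s

# Corrupt a stored pattern in 15% of its entries and let the dynamics clean it up
target = patterns[0]
cue = target.copy()
flip = rng.choice(N, size=15, replace=False)
cue[flip] *= -1.0
print("overlap with target after recall:", (recall(cue) @ target) / N)
```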

17.
Chaotic dynamics generated in a chaotic neural network model are applied to two-dimensional (2-D) motion control. The change in position of a moving object at each control time step is determined by a motion function calculated from the firing activity of the chaotic neural network. Prototype attractors corresponding to simple motions of the object in four directions in 2-D space are embedded in the neural network model by designing the synaptic connection strengths. Chaotic dynamics, introduced by changing system parameters, sample intermediate points in the high-dimensional state space between the embedded attractors, resulting in motion in various directions. By adaptively switching the system parameters between a chaotic regime and an attractor regime, the object is able to reach a target in a 2-D maze. In computer experiments, the success rate of this method over many trials is not only better than that of stochastic random pattern generators, but also shows that chaotic dynamics can be useful for realizing robust, adaptive and complex control functions with simple rules.

18.
A dynamic and recurrent artificial neural network was used to investigate the functional properties of firing patterns observed in the primary motor (M1) and primary somatosensory (S1) cortex of the behaving monkey during control of precision grip force. In the behaving monkey, neurons in M1 and S1 were found to increase their firing activity with increasing grip force, as do the intrinsic and extrinsic hand muscles implicated in the task. However, some neurons also decreased their activity as a function of increasing force. The functional implication of these latter neurons is not clear and has not been elucidated so far. To explore it, we simulated patterns of neural activity in artificial neural networks representing cortical, spinal and afferent neural populations and tested whether particular activity profiles would emerge as a function of the input and connectivity of these networks. The functional implication of units with emergent or imposed decreasing activity was then explored. Decreasing patterns of activity in M1 units did not emerge from the networks; however, the same networks generated decreasing activity when it was imposed as a target pattern. As indicated by the emerging weight space, M1 projection units with decreasing patterns are functionally less involved in driving alpha motoneurons than units with increasing profiles. Furthermore, these units did not provide significant fusimotor drive, whereas those with increasing profiles did. Fusimotor drive was a function of the (imposed) form of muscle spindle afferent activity: with gamma (fusimotor) drive, muscle spindle afferents provided signals other than muscle length (as observed experimentally). The network solutions thus predict a functional dichotomy between increasing and decreasing M1 neurons: the former primarily drive alpha and gamma motoneurons, while the latter drive alpha motoneurons only weakly.

19.
Learning-induced synchronization of a neural network at various developmental stages is studied by computer simulation using a pulse-coupled neural network model in which neuronal activity is simulated by a one-dimensional map. Two types of Hebbian plasticity rules are investigated and compared. For both models, our simulations show a logarithmic increase in the synchronous firing frequency of the network with the culturing time of the neural network, consistent with recent experimental observations. To investigate how to control the synchronization behavior of a neural network after learning, we compare the occurrence of synchronization for four networks with different designed patterns under the influence of an external signal. The effect of such a signal on the network activity depends strongly on the number of connections between neurons. We discuss the synaptic plasticity and enhancement effects for a random network after learning at various developmental stages.

20.
Primates display a remarkable ability to adapt to novel situations. Determining what is most pertinent in such situations is not always possible based only on the current sensory inputs; it often also depends on recent inputs and behavioral outputs that contribute to internal states. Thus, one can ask how cortical dynamics generate representations of these complex situations. It has been observed that mixed selectivity in cortical neurons contributes to representing diverse situations defined by combinations of the current stimuli, and that mixed selectivity is readily obtained in randomly connected recurrent networks. In this context, reservoir networks reproduce the highly recurrent nature of local cortical connectivity. By recombining present and past inputs, random recurrent networks from the reservoir computing framework generate mixed selectivity, which provides pre-coded representations of an essentially universal set of contexts. These representations can then be selectively amplified through learning to solve the task at hand. We thus explored their representational power and dynamical properties after training a reservoir to perform a complex cognitive task initially developed for monkeys. The reservoir model inherently displayed a dynamic form of mixed selectivity, key to the representation of behavioral context over time. The pre-coded representation of context was amplified by training a feedback neuron to explicitly represent this context, thereby reproducing the effect of learning and allowing the model to perform more robustly. This second version of the model demonstrates how a hybrid dynamical regime, combining the spatio-temporal processing of reservoirs with the input-driven attracting dynamics generated by the feedback neuron, can be used to solve a complex cognitive task. We compared reservoir activity to neural activity in the dorsal anterior cingulate cortex of monkeys, which revealed similar network dynamics. We argue that reservoir computing is a pertinent framework for modeling local cortical dynamics and their contribution to higher cognitive function.

