Similar Literature
 20 similar documents found (search time: 15 ms)
1.
In this paper, we present a modelling framework for cellular evolution based on the notion that a cell’s behaviour is driven by interactions with other cells and its immediate environment. We equip each cell with a phenotype that determines its behaviour and implement a decision mechanism to allow evolution of this phenotype. This decision mechanism is modelled using feed-forward neural networks, which have been suggested as suitable models of cell signalling pathways. The environmental variables are presented as inputs to the network and result in a response that corresponds to the phenotype of the cell. The response of the network is determined by the network parameters, which are subject to mutations when the cells divide. This approach is versatile, as there are no restrictions on what the input or output nodes represent; they can be chosen to represent any environmental variables and behaviours that are of importance to the cell population under consideration. This framework was implemented in an individual-based model of solid tumour growth in order to investigate the impact of the tissue oxygen concentration on the growth and evolutionary dynamics of the tumour. Our results show that the oxygen concentration affects the tumour at the morphological level but, more importantly, has a direct impact on the evolutionary dynamics. When the supply of oxygen is limited, we observe a faster divergence away from the initial genotype, a higher population diversity and faster evolution towards aggressive phenotypes. The implementation of this framework suggests that this approach is well suited for modelling systems where evolution plays an important role and where a changing environment exerts selection pressure on the evolving population.
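As a concrete illustration of the decision mechanism described above, the following is a minimal Python/numpy sketch of a feed-forward "cell network"; the choice of inputs (oxygen level, crowding), outputs (phenotypes), layer sizes and mutation scheme are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

class CellNetwork:
    """Feed-forward network mapping a cell's local environment to a phenotype.

    The inputs and outputs used here (oxygen level, neighbour count ->
    proliferate / quiescent / die) are illustrative; the framework places
    no restriction on what the nodes represent.
    """

    def __init__(self, n_in=2, n_hidden=4, n_out=3, rng=None):
        self.rng = rng or np.random.default_rng()
        self.W1 = self.rng.normal(0, 1, (n_hidden, n_in))
        self.W2 = self.rng.normal(0, 1, (n_out, n_hidden))

    def respond(self, env):
        """Return the index of the phenotype with the strongest response."""
        h = np.tanh(self.W1 @ env)
        return int(np.argmax(self.W2 @ h))

    def divide(self, mutation_rate=0.01, sigma=0.1):
        """Return a daughter network whose weights carry small mutations."""
        daughter = CellNetwork(self.W1.shape[1], self.W1.shape[0],
                               self.W2.shape[0], self.rng)
        for parent_w, child_w in ((self.W1, daughter.W1), (self.W2, daughter.W2)):
            child_w[:] = parent_w
            mask = self.rng.random(child_w.shape) < mutation_rate
            child_w[mask] += self.rng.normal(0, sigma, mask.sum())
        return daughter

# Example: a cell reading low oxygen and a crowded neighbourhood
cell = CellNetwork()
phenotype = cell.respond(np.array([0.1, 0.9]))  # e.g. 0=proliferate, 1=quiescent, 2=die
daughter = cell.divide()
```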

2.
The circuitry of cortical networks involves interacting populations of excitatory (E) and inhibitory (I) neurons whose relationships are now known to a large extent. Inputs to E- and I-cells may have their origins in remote or local cortical areas. We consider a rudimentary model involving E- and I-cells. One of our goals is to test an analytic approach to finding firing rates in neural networks without using a diffusion approximation and to this end we consider in detail networks of excitatory neurons with leaky integrate-and-fire (LIF) dynamics. A simple measure of synchronization, denoted by S(q), where q is between 0 and 100 is introduced. Fully connected E-networks have a large tendency to become dominated by synchronously firing groups of cells, except when inputs are relatively weak. We observed random or asynchronous firing in such networks with diverse sets of parameter values. When such firing patterns were found, the analytical approach was often able to accurately predict average neuronal firing rates. We also considered several properties of E-E networks, distinguishing several kinds of firing pattern. Included were those with silences before or after periods of intense activity or with periodic synchronization. We investigated the occurrence of synchronized firing with respect to changes in the internal excitatory postsynaptic potential (EPSP) magnitude in a network of 100 neurons with fixed values of the remaining parameters. When the internal EPSP size was less than a certain value, synchronization was absent. The amount of synchronization then increased slowly as the EPSP amplitude increased until at a particular EPSP size the amount of synchronization abruptly increased, with S(5) attaining the maximum value of 100%. We also found network frequency transfer characteristics for various network sizes and found a linear dependence of firing frequency over wide ranges of the external afferent frequency, with non-linear effects at lower input frequencies. The theory may also be applied to sparsely connected networks, whose firing behaviour was found to change abruptly as the probability of a connection passed through a critical value. The analytical method was also found to be useful for a feed-forward excitatory network and a network of excitatory and inhibitory neurons.  相似文献   
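The sketch below illustrates the kind of fully connected excitatory LIF network discussed above, together with a crude bin-count proxy for the synchronization measure S(q); the parameter values and the definition of the proxy are assumptions, not the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fully connected excitatory LIF network with Poisson afferent input
# (illustrative parameter values).
N, T, dt = 100, 1.0, 1e-4                    # neurons, simulated seconds, time step
tau, v_th, v_reset = 0.02, 1.0, 0.0          # membrane time constant, threshold, reset
w_int, w_ext, rate_ext = 0.02, 0.05, 800.0   # internal EPSP, external EPSP, afferent rate (Hz)

v = np.zeros(N)
spike_times = []

for step in range(int(T / dt)):
    ext = w_ext * rng.poisson(rate_ext * dt, N)   # external Poisson drive
    v += dt / tau * (-v) + ext                    # leaky integration
    fired = v >= v_th
    if fired.any():
        v[fired] = v_reset
        v[~fired] += w_int * fired.sum()          # instantaneous E-E coupling
        spike_times += [step * dt] * int(fired.sum())

def synchrony(spike_times, T, q=5, bins=100):
    """Crude S(q) proxy: percentage of all spikes in the busiest q% of time bins."""
    if not spike_times:
        return 0.0
    counts, _ = np.histogram(spike_times, bins=bins, range=(0.0, T))
    k = max(1, int(q / 100 * bins))
    return 100.0 * np.sort(counts)[-k:].sum() / counts.sum()

print("mean firing rate (Hz):", len(spike_times) / (N * T))
print("S(5) proxy (%):", synchrony(spike_times, T))
```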

3.
The dynamics of local cortical networks are irregular, but correlated. Dynamic excitatory–inhibitory balance is a plausible mechanism that generates such irregular activity, but it remains unclear how balance is achieved and maintained in plastic neural networks. In particular, it is not fully understood how plasticity induced changes in the network affect balance, and in turn, how correlated, balanced activity impacts learning. How do the dynamics of balanced networks change under different plasticity rules? How does correlated spiking activity in recurrent networks change the evolution of weights, their eventual magnitude, and structure across the network? To address these questions, we develop a theory of spike–timing dependent plasticity in balanced networks. We show that balance can be attained and maintained under plasticity–induced weight changes. We find that correlations in the input mildly affect the evolution of synaptic weights. Under certain plasticity rules, we find an emergence of correlations between firing rates and synaptic weights. Under these rules, synaptic weights converge to a stable manifold in weight space with their final configuration dependent on the initial state of the network. Lastly, we show that our framework can also describe the dynamics of plastic balanced networks when subsets of neurons receive targeted optogenetic input.  相似文献   
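A minimal sketch of the pair-based STDP rule that theories of this kind typically build on, assuming exponential pre- and postsynaptic traces; the time constants, amplitudes and the slight bias towards depression are illustrative choices, and the balanced-network simulation itself is omitted.

```python
import numpy as np

def stdp_update(W, pre_spikes, post_spikes, dt=1e-3, tau_plus=20e-3, tau_minus=20e-3,
                A_plus=1e-3, A_minus=1.05e-3, w_max=1.0):
    """Apply pair-based STDP to weight matrix W (post x pre) over one trial.

    pre_spikes, post_spikes: boolean arrays of shape (steps, n_pre) / (steps, n_post).
    """
    n_post, n_pre = W.shape
    x_pre = np.zeros(n_pre)     # presynaptic trace
    x_post = np.zeros(n_post)   # postsynaptic trace
    for pre, post in zip(pre_spikes, post_spikes):
        x_pre += -dt / tau_plus * x_pre
        x_post += -dt / tau_minus * x_post
        x_pre[pre] += 1.0
        x_post[post] += 1.0
        # potentiation: a post spike following recent pre spikes
        W[post, :] += A_plus * x_pre
        # depression: a pre spike following recent post spikes
        W[:, pre] -= A_minus * x_post[:, None]
        np.clip(W, 0.0, w_max, out=W)
    return W

# Example: random spike trains for 200 pre and 200 post neurons over 1000 steps
rng = np.random.default_rng(0)
W = rng.uniform(0, 0.5, (200, 200))
pre = rng.random((1000, 200)) < 0.02
post = rng.random((1000, 200)) < 0.02
W = stdp_update(W, pre, post)
```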

4.
Neural population dynamics are often highly coordinated, allowing task-related computations to be understood as neural trajectories through low-dimensional subspaces. How the network connectivity and input structure give rise to such activity can be investigated with the aid of low-rank recurrent neural networks, a recently-developed class of computational models which offer a rich theoretical framework linking the underlying connectivity structure to emergent low-dimensional dynamics. This framework has so far relied on the assumption of all-to-all connectivity, yet cortical networks are known to be highly sparse. Here we investigate the dynamics of low-rank recurrent networks in which the connections are randomly sparsified, which makes the network connectivity formally full-rank. We first analyse the impact of sparsity on the eigenvalue spectrum of low-rank connectivity matrices, and use this to examine the implications for the dynamics. We find that in the presence of sparsity, the eigenspectra in the complex plane consist of a continuous bulk and isolated outliers, a form analogous to the eigenspectra of connectivity matrices composed of a low-rank and a full-rank random component. This analogy allows us to characterise distinct dynamical regimes of the sparsified low-rank network as a function of key network parameters. Altogether, we find that the low-dimensional dynamics induced by low-rank connectivity structure are preserved even at high levels of sparsity, and can therefore support rich and robust computations even in networks sparsified to a biologically-realistic extent.  相似文献   
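A small numerical sketch of the eigenspectrum analysis, assuming a unit-rank connectivity built from two Gaussian vectors, an independent Bernoulli sparsification mask, and a 1/p rescaling to preserve the mean connection strength; these conventions are assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
N, p = 1000, 0.2                             # network size, connection probability

# Rank-one connectivity built from random Gaussian vectors m and n
m, n = rng.normal(1.0, 1.0, N), rng.normal(1.2, 1.0, N)
J_lowrank = np.outer(m, n) / N

# Random sparsification: keep each entry with probability p, rescale by 1/p
mask = rng.random((N, N)) < p
J_sparse = J_lowrank * mask / p

eig_full = np.linalg.eigvals(J_lowrank)      # one outlier near m.n/N, rest at zero
eig_sparse = np.linalg.eigvals(J_sparse)     # continuous bulk plus isolated outlier

print("low-rank outlier:", np.max(eig_full.real))
print("sparse outlier:  ", np.max(eig_sparse.real))
print("sparse bulk radius ~", np.percentile(np.abs(eig_sparse), 95))
```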

5.
The study of correlations in neural circuits of different size, from the small size of cortical microcolumns to the large-scale organization of distributed networks studied with functional imaging, is a topic of central importance to systems neuroscience. However, a theory that explains how the parameters of mesoscopic networks composed of a few tens of neurons affect the underlying correlation structure is still missing. Here we consider a theory that can be applied to networks of arbitrary size with multiple populations of homogeneous fully-connected neurons, and we focus its analysis to a case of two populations of small size. We combine the analysis of local bifurcations of the dynamics of these networks with the analytical calculation of their cross-correlations. We study the correlation structure in different regimes, showing that a variation of the external stimuli causes the network to switch from asynchronous states, characterized by weak correlation and low variability, to synchronous states characterized by strong correlations and wide temporal fluctuations. We show that asynchronous states are generated by strong stimuli, while synchronous states occur through critical slowing down when the stimulus moves the network close to a local bifurcation. In particular, strongly positive correlations occur at the saddle-node and Andronov-Hopf bifurcations of the network, while strongly negative correlations occur when the network undergoes a spontaneous symmetry-breaking at the branching-point bifurcations. These results show how the correlation structure of firing-rate network models is strongly modulated by the external stimuli, even keeping the anatomical connections fixed. These results also suggest an effective mechanism through which biological networks may dynamically modulate the encoding and integration of sensory information.  相似文献   
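A minimal sketch of a stochastic two-population firing-rate model of the general kind analysed here, with a sigmoidal rate function and additive noise; the coupling matrix, noise model and stimulus values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate(I_ext, T=200.0, dt=0.01, tau=1.0, sigma=0.05):
    """Two homogeneous, fully connected populations with additive noise."""
    W = np.array([[2.0, -1.5],                  # within/between population couplings
                  [-1.5, 2.0]])
    f = lambda x: 1.0 / (1.0 + np.exp(-x))      # sigmoidal rate function
    steps = int(T / dt)
    r = np.zeros((steps, 2))
    for t in range(1, steps):
        drift = (-r[t - 1] + f(W @ r[t - 1] + I_ext)) / tau
        r[t] = r[t - 1] + dt * drift + sigma * np.sqrt(dt) * rng.normal(size=2)
    return r[steps // 2:]                       # discard the transient

for I in (0.5, 2.5):                            # weak versus strong external stimulus
    r = simulate(np.array([I, I]))
    c = np.corrcoef(r[:, 0], r[:, 1])[0, 1]
    print(f"I_ext={I}: cross-correlation = {c:+.2f}, fluctuation std = {r.std():.3f}")
```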

6.
The paper presents a methodology for using computational neurogenetic modelling (CNGM) to bring new, original insights into how genes influence the dynamics of brain neural networks. CNGM is a novel computational approach to brain neural network modelling that integrates dynamic gene networks with an artificial neural network (ANN) model. The interaction of genes in neurons affects the dynamics of the whole ANN model through neuronal parameters, which are no longer constant but change as a function of gene expression. Through optimization of the interactions within the internal gene regulatory network (GRN), the initial gene/protein expression values and the ANN parameters, particular target states of the neural network behaviour can be achieved, and statistics about gene interactions can be extracted. In this way, we have obtained an abstract GRN that contains predictions about particular gene interactions in neurons for subunit genes of the AMPA, GABAA and NMDA neuro-receptors. The extent of sequence conservation for 20 subunit proteins of all these receptors was analysed using standard bioinformatics multiple-alignment procedures. We observed an abundance of conserved residues, but the most interesting observation was the consistent conservation of phenylalanine (F at position 269) and leucine (L at position 353) in all 20 proteins, with no mutations. We hypothesise that these regions can be the basis for mutual interactions. Existing knowledge of the evolutionary linkage of these protein families, together with analysis at the molecular level, indicates that the expression of these individual subunits should be coordinated, which provides the biological justification for our optimized GRN.

7.
Neural networks are modelling tools that are, in principle, able to capture the input-output behaviour of arbitrary systems that may include the dynamics of animal populations or brain circuits. While a neural network model is useful if it captures phenomenologically the behaviour of the target system in this way, its utility is amplified if key mechanisms of the model can be discovered, and identified with those of the underlying system. In this review, we first describe, at a fairly high level with minimal mathematics, some of the tools used in constructing neural network models. We then go on to discuss the implications of network models for our understanding of the system they are supposed to describe, paying special attention to those models that deal with neural circuits and brain systems. We propose that neural nets are useful for brain modelling if they are viewed in a wider computational framework originally devised by Marr. Here, neural networks are viewed as an intermediate mechanistic abstraction between 'algorithm' and 'implementation', which can provide insights into biological neural representations and their putative supporting architectures.  相似文献   

8.
We present a neural field model of binocular rivalry waves in visual cortex. For each eye we consider a one-dimensional network of neurons that respond maximally to a particular feature of the corresponding image such as the orientation of a grating stimulus. Recurrent connections within each one-dimensional network are assumed to be excitatory, whereas connections between the two networks are inhibitory (cross-inhibition). Slow adaptation is incorporated into the model by taking the network connections to exhibit synaptic depression. We derive an analytical expression for the speed of a binocular rivalry wave as a function of various neurophysiological parameters, and show how properties of the wave are consistent with the wave-like propagation of perceptual dominance observed in recent psychophysical experiments. In addition to providing an analytical framework for studying binocular rivalry waves, we show how neural field methods provide insights into the mechanisms underlying the generation of the waves. In particular, we highlight the important role of slow adaptation in providing a “symmetry breaking mechanism” that allows waves to propagate.  相似文献   

9.
The influence of topology on the asymptotic states of a network of interacting chemical species has been studied by simulating its time evolution. Random and scale-free networks were designed to support relevant features of activation-deactivation reaction networks (mapping signal transduction networks), and the system of ordinary differential equations associated with the dynamics was solved numerically. We analysed the stationary states of the dynamics as a function of the network's connectivity and of the distribution of the chemical species on the network; we found important differences between the two topologies in the regime of low connectivity. In particular, only in scale-free networks with low connectivity is it possible to find zero-activity patterns as stationary states of the dynamics, which act as signal off-states. The asymptotic features of random and scale-free networks become similar as the connectivity increases.
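A sketch of this kind of simulation using networkx graph generators and scipy's ODE solver; the activation-deactivation rate law below is an illustrative choice, not the kinetics used in the paper.

```python
import numpy as np
import networkx as nx
from scipy.integrate import solve_ivp

def stationary_activity(G, deact=1.0, t_end=50.0, seed=0):
    """Integrate illustrative activation-deactivation kinetics on graph G."""
    A = nx.to_numpy_array(G)
    rng = np.random.default_rng(seed)
    x0 = rng.random(len(G))                        # random initial activation

    def rhs(t, x):
        # neighbours activate (saturating in x), each species deactivates linearly
        return A @ x * (1.0 - x) - deact * x

    sol = solve_ivp(rhs, (0.0, t_end), x0, rtol=1e-6)
    return sol.y[:, -1]                            # asymptotic state

N, k = 200, 2                                      # nodes, mean-degree parameter
random_net = nx.gnm_random_graph(N, k * N, seed=1)
scale_free = nx.barabasi_albert_graph(N, k, seed=1)

for name, G in (("random", random_net), ("scale-free", scale_free)):
    x_inf = stationary_activity(G)
    print(f"{name:10s}: mean activity {x_inf.mean():.3f}, "
          f"inactive nodes {(x_inf < 1e-3).sum()}")
```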

10.
11.
Modeling brain dynamics using computational neurogenetic approach   (total citations: 1; self-citations: 1; citations by others: 0)
The paper introduces a novel computational approach to brain dynamics modeling that integrates dynamic gene–protein regulatory networks with a neural network model. The interaction of genes and proteins in neurons affects the dynamics of the whole neural network. By tuning the gene–protein interaction network and the initial gene/protein expression values, different states of the neural network dynamics can be achieved. A generic computational neurogenetic model that implements this approach is introduced and illustrated by means of a simple neurogenetic model of a spiking neural network that generates a local field potential. Our approach allows for investigation of how deleted or mutated genes can alter the dynamics of a model neural network. We conclude with a proposal for extending this approach to model cognitive neurodynamics.
Nikola Kasabov
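A minimal sketch of the central coupling idea in this entry, in which a discrete-time gene regulatory network sets a neuronal parameter that is no longer constant; the interaction matrix, the readout onto the firing threshold and the timescale separation are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

n_genes = 5
W_grn = rng.normal(0, 0.5, (n_genes, n_genes))   # gene-gene interaction weights
g = rng.random(n_genes)                          # initial gene/protein expression

def grn_step(g):
    """One discrete update of the gene regulatory network."""
    return 1.0 / (1.0 + np.exp(-W_grn @ g))

# Neuronal parameter (here, the firing threshold) as a weighted sum of
# gene expression levels; the mapping is illustrative.
readout = rng.normal(0, 0.3, n_genes)
v, spikes = 0.0, 0
for t in range(10_000):
    if t % 100 == 0:                 # genes evolve on a slower timescale
        g = grn_step(g)
    threshold = 1.0 + readout @ g    # the parameter is no longer constant
    v += 0.05 * (1.2 - v)            # leaky integration of a constant input
    if v >= threshold:
        v, spikes = 0.0, spikes + 1
print("spikes emitted:", spikes)
```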

12.
Division of labor has been studied separately from a proximate (self-organization) perspective and an ultimate (evolutionary) perspective. We aim to bring these two perspectives together. So far this has been done by choosing a behavioral mechanism a priori and considering the evolution of the properties of this mechanism. Here we use artificial neural networks to allow for a more open architecture. We study whether emergent division of labor can evolve in two different network architectures: a simple feedforward network, and a more complex network that includes the possibility of self-feedback from previous experiences. We focus on two aspects of division of labor: worker specialization and the ratio of work performed for each task. Colony fitness is maximized both by reducing idleness and by achieving a predefined optimal work ratio. Our results indicate that architectural constraints play an important role in the outcome of evolution. With the simplest network, only genetically determined specialization is possible. This imposes several limitations on worker specialization. Moreover, in order to minimize idleness, networks evolve a biased work ratio, even when an unbiased work ratio would be optimal. By adding self-feedback to the network we increase its flexibility, and worker specialization evolves under a wider range of parameters. Optimal work ratios are more easily achieved with the self-feedback network, but still present a challenge when combined with worker specialization.

13.
Reconstructing cellular signaling networks and understanding how they work are major endeavors in cell biology. The scale and complexity of these networks, however, render their analysis using experimental biology approaches alone very challenging. As a result, computational methods have been developed and combined with experimental biology approaches, producing powerful tools for the analysis of these networks. These computational methods mostly fall on either end of a spectrum of model parameterization. On one end is a class of structural network analysis methods; these typically use the network connectivity alone to generate hypotheses about global properties. On the other end is a class of dynamic network analysis methods; these use, in addition to the connectivity, kinetic parameters of the biochemical reactions to predict the network's dynamic behavior. These predictions provide detailed insights into the properties that determine aspects of the network's structure and behavior. However, the difficulty of obtaining numerical values of kinetic parameters is widely recognized to limit the applicability of this latter class of methods. Several researchers have observed that the connectivity of a network alone can provide significant insights into its dynamics. Motivated by this fundamental observation, we present the signaling Petri net, a non-parametric model of cellular signaling networks, and the signaling Petri net-based simulator, a Petri net execution strategy for characterizing the dynamics of signal flow through a signaling network using token distribution and sampling. The result is a very fast method, which can analyze large-scale networks, and provide insights into the trends of molecules' activity-levels in response to an external stimulus, based solely on the network's connectivity. We have implemented the signaling Petri net-based simulator in the PathwayOracle toolkit, which is publicly available at http://bioinfo.cs.rice.edu/pathwayoracle. Using this method, we studied a MAPK1,2 and AKT signaling network downstream from EGFR in two breast tumor cell lines. We analyzed, both experimentally and computationally, the activity level of several molecules in response to a targeted manipulation of TSC2 and mTOR-Raptor. The results from our method agreed with experimental results in greater than 90% of the cases considered, and in those where they did not agree, our approach provided valuable insights into discrepancies between known network connectivities and experimental observations.  相似文献   
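A minimal sketch of token-based signal-flow simulation in the spirit described above, assuming a signed edge list and a simple firing rule in which an active node adds a token to activated targets and removes one from inhibited targets; the toy wiring and the rule itself are assumptions, and this is not the PathwayOracle implementation.

```python
import random
from collections import defaultdict

# Signed signalling network: (source, target, +1 activation / -1 inhibition).
# The wiring below is a toy illustration, not a curated EGFR pathway.
edges = [("EGFR", "RAS", +1), ("RAS", "MAPK", +1),
         ("EGFR", "PI3K", +1), ("PI3K", "AKT", +1),
         ("AKT", "TSC2", -1), ("TSC2", "mTOR", -1), ("mTOR", "AKT", -1)]

def simulate(edges, stimulus, n_runs=500, n_steps=20, seed=0):
    """Average token counts per node over many random execution orders."""
    rng = random.Random(seed)
    out = defaultdict(list)
    for s, t, sign in edges:
        out[s].append((t, sign))
    totals = defaultdict(float)
    for _ in range(n_runs):
        tokens = defaultdict(int)
        tokens[stimulus] = 5                      # external stimulus places tokens
        for _ in range(n_steps):
            node = rng.choice(list(tokens))       # sample a node to fire
            if tokens[node] <= 0:
                continue
            for target, sign in out.get(node, []):
                tokens[target] = max(0, tokens[target] + sign)
        for node, count in tokens.items():
            totals[node] += count / n_runs
    return dict(totals)

print(simulate(edges, stimulus="EGFR"))
```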

14.
15.
16.
17.
An evolutionary model of genetic regulatory networks is developed, based on a model of network encoding and dynamics called the Artificial Genome (AG). This model derives a number of specific genes and their interactions from a string of (initially random) bases in an idealized manner analogous to that employed by natural DNA. The gene expression dynamics are determined by updating the gene network as if it were a simple Boolean network. The generic behaviour of the AG model is investigated in detail. In particular, we explore the characteristic network topologies generated by the model, their dynamical behaviours, and the typical variance of network connectivities and network structures. These properties are demonstrated to agree with a probabilistic analysis of the model, and the typical network structures generated by the model are shown to lie between those of random networks and scale-free networks in terms of their degree distribution. Evolutionary processes are simulated using a genetic algorithm, with selection acting on a range of properties from gene number and degree of connectivity through periodic behaviour to specific patterns of gene expression. The evolvability of increasingly complex patterns of gene expression is examined in detail. When a degree of redundancy is introduced, the average number of generations required to evolve given targets is reduced, but limits on evolution of complex gene expression patterns remain. In addition, cyclic gene expression patterns with periods that are multiples of shorter expression patterns are shown to be inherently easier to evolve than others. Constraints imposed by the template-matching nature of the AG model generate similar biases towards such expression patterns in networks in initial populations, in addition to the somewhat scale-free nature of these networks. The significance of these results on current understanding of biological evolution is discussed.  相似文献   
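A minimal sketch of the Artificial Genome idea, assuming a base-4 digit string, a fixed promoter motif, fixed-length genes, gene products obtained by shifting each digit, and regulation by matching a product against the region upstream of a promoter; the motif, lengths and Boolean update rule are illustrative choices rather than the paper's exact encoding.

```python
import random

random.seed(4)
BASES, PROMOTER, GENE_LEN = "0123", "0101", 5

genome = "".join(random.choice(BASES) for _ in range(30_000))

# A gene is the fixed-length stretch following each promoter occurrence;
# the stretch upstream of the promoter serves as its regulatory site.
genes, sites = [], []
i = genome.find(PROMOTER)
while i != -1 and i + len(PROMOTER) + GENE_LEN <= len(genome):
    genes.append(genome[i + len(PROMOTER): i + len(PROMOTER) + GENE_LEN])
    sites.append(genome[max(0, i - 2 * GENE_LEN): i])
    i = genome.find(PROMOTER, i + 1)

# A gene's product is its sequence with each digit shifted by one (mod 4);
# gene j regulates gene k if j's product appears in k's regulatory site.
products = ["".join(str((int(b) + 1) % 4) for b in g) for g in genes]
regulators = [[j for j, p in enumerate(products) if p in sites[k]]
              for k in range(len(genes))]

# Update gene expression as a simple Boolean network: on if any regulator is on.
state = [random.random() < 0.5 for _ in genes]
for _ in range(50):
    state = [any(state[j] for j in regulators[k]) for k in range(len(genes))]

print(f"{len(genes)} genes, {sum(state)} expressed after 50 updates")
```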

18.
We investigate the phenomenon of epileptiform activity using a discrete model of cortical neural networks. Our model is reduced to the elementary features of neurons and assumes simplified dynamics of action potentials and postsynaptic potentials. The discrete model provides a comparatively high simulation speed, which allows the rendering of phase diagrams and simulations of large neural networks in reasonable time. Further, the reduction to the basic features of neurons provides insight into the essentials of a possible mechanism of epilepsy. Our computer simulations suggest that the detailed dynamics of postsynaptic and action potentials are not indispensable for obtaining epileptiform behavior at the system level. The simulation results of autonomously evolving networks exhibit a regime in which the network dynamics spontaneously switch between fluctuating and oscillating behavior and produce isolated network spikes without external stimulation. Inhibitory neurons were found to play an important part in the synchronization of neural firing: an increased number of synapses established by inhibitory neurons onto other neurons induces a transition to the spiking regime. A decreased frequency accompanying the hypersynchronous population activity occurred only with slow inhibitory postsynaptic potentials.

19.
20.
The synchronization frequency of neural networks and its dynamics have important roles in deciphering the working mechanisms of the brain. It has been widely recognized that the properties of functional network synchronization and its dynamics are jointly determined by network topology, network connection strength, i.e., the connection strength of different edges in the network, and external input signals, among other factors. However, mathematical and computational characterization of the relationships between network synchronization frequency and these three important factors are still lacking. This paper presents a novel computational simulation framework to quantitatively characterize the relationships between neural network synchronization frequency and network attributes and input signals. Specifically, we constructed a series of neural networks including simulated small-world networks, real functional working memory network derived from functional magnetic resonance imaging, and real large-scale structural brain networks derived from diffusion tensor imaging, and performed synchronization simulations on these networks via the Izhikevich neuron spiking model. Our experiments demonstrate that both of the network synchronization strength and synchronization frequency change according to the combination of input signal frequency and network self-synchronization frequency. In particular, our extensive experiments show that the network synchronization frequency can be represented via a linear combination of the network self-synchronization frequency and the input signal frequency. This finding could be attributed to an intrinsically-preserved principle in different types of neural systems, offering novel insights into the working mechanism of neural systems.  相似文献   
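A minimal sketch of this kind of simulation, using the standard Izhikevich (2003) regular-spiking parameters, a toy random coupling matrix standing in for the small-world, fMRI and DTI networks, and an FFT of the population spike count as a crude estimate of the network synchronization frequency; the coupling strength and the numerical guard are assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)

def network_frequency(A, input_freq, T=2.0, dt=1e-3, I0=10.0, coupling=0.5):
    """Izhikevich network driven at input_freq (Hz); returns dominant population frequency."""
    N = A.shape[0]
    a, b, c, d = 0.02, 0.2, -65.0, 8.0              # regular-spiking parameters
    v = np.full(N, -65.0)
    u = b * v
    steps = int(T / dt)                             # one iteration per millisecond
    pop = np.zeros(steps)
    for t in range(steps):
        fired = v >= 30.0
        pop[t] = fired.sum()
        v[fired], u[fired] = c, u[fired] + d
        I = I0 * (1.0 + np.sin(2 * np.pi * input_freq * t * dt))  # oscillatory drive
        I = I + coupling * (A @ fired)              # plus spikes from coupled neurons
        for _ in range(2):                          # two 0.5 ms sub-steps (Izhikevich 2003)
            v += 0.5 * (0.04 * v**2 + 5 * v + 140 - u + I)
        u += a * (b * v - u)
        np.clip(v, -100.0, 30.0, out=v)             # numerical guard for this sketch
    spectrum = np.abs(np.fft.rfft(pop - pop.mean()))
    freqs = np.fft.rfftfreq(steps, dt)
    return freqs[spectrum.argmax()]

N = 100
A = (rng.random((N, N)) < 0.1).astype(float)        # toy random coupling matrix
for f_in in (5.0, 10.0, 20.0):
    print(f"input {f_in:5.1f} Hz -> network peak {network_frequency(A, f_in):5.1f} Hz")
```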
