Similar Documents
20 similar documents found.
1.
Rule-based models, which are typically formulated to represent cell signaling systems, can now be simulated via various network-free simulation methods. In a network-free method, reaction rates are calculated for rules that characterize molecular interactions, and these rule rates, which each correspond to the cumulative rate of all reactions implied by a rule, are used to perform a stochastic simulation of reaction kinetics. Network-free methods, which can be viewed as generalizations of Gillespie's method, are so named because these methods do not require that a list of individual reactions implied by a set of rules be explicitly generated, which is a requirement of other methods for simulating rule-based models. This requirement is impractical for rule sets that imply large reaction networks (i.e. long lists of individual reactions), as reaction network generation is expensive. Here, we compare the network-free simulation methods implemented in RuleMonkey and NFsim, general-purpose software tools for simulating rule-based models encoded in the BioNetGen language. The method implemented in NFsim uses rejection sampling to correct overestimates of rule rates, which introduces null events (i.e. time steps that do not change the state of the system being simulated). The method implemented in RuleMonkey uses iterative updates to track rule rates exactly, which avoids null events. To ensure a fair comparison of the two methods, we developed implementations of the rejection and rejection-free methods specific to a particular class of kinetic models for multivalent ligand-receptor interactions. These implementations were written with the intention of making them as much alike as possible, minimizing the contribution of irrelevant coding differences to efficiency differences. Simulation results show that performance of the rejection method is equal to or better than that of the rejection-free method over wide parameter ranges. However, when parameter values are such that ligand-induced aggregation of receptors yields a large connected receptor cluster, the rejection-free method is more efficient.
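To make the contrast concrete, the following Python sketch shows a rejection-based event step of the kind NFsim uses: a rule is chosen in proportion to an overestimated rate, and the event is accepted only with probability (exact rate / bound), otherwise it is a null event. The `Rule` interface (`exact_rate`, `rate_bound`, `fire`) is hypothetical shorthand for the bookkeeping a real network-free simulator performs, not code from either tool.

```python
import random
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    # Hypothetical rule interface: an exact cumulative rate, a cheap upper
    # bound on that rate, and an action that applies the rule to the state.
    exact_rate: Callable[[dict], float]
    rate_bound: Callable[[dict], float]
    fire: Callable[[dict], None]

def rejection_step(rules, state, t):
    """One event of a rejection-based (NFsim-style) network-free simulation."""
    bounds = [r.rate_bound(state) for r in rules]   # overestimated rule rates
    total = sum(bounds)
    if total == 0.0:
        return t, False                             # no rule can fire
    t += random.expovariate(total)                  # waiting time drawn from the bound
    x = random.uniform(0.0, total)                  # choose a rule proportionally to its bound
    for r, b in zip(rules, bounds):
        x -= b
        if x <= 0.0:
            # accept with probability exact/bound; otherwise this is a null event
            if random.random() < r.exact_rate(state) / b:
                r.fire(state)
                return t, True                      # productive event
            return t, False                         # null event: only time advances
    return t, False
```

A rejection-free scheme would instead keep every rule's exact rate up to date after each firing, so the acceptance test (and the wasted null steps) disappear at the cost of extra bookkeeping per event.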

2.
Detailed modeling and simulation of biochemical systems is complicated by the problem of combinatorial complexity, an explosion in the number of species and reactions due to myriad protein-protein interactions and post-translational modifications. Rule-based modeling overcomes this problem by representing molecules as structured objects and encoding their interactions as pattern-based rules. This greatly simplifies the process of model specification, avoiding the tedious and error-prone task of manually enumerating all species and reactions that can potentially exist in a system. From a simulation perspective, rule-based models can be expanded algorithmically into fully-enumerated reaction networks and simulated using a variety of network-based simulation methods, such as ordinary differential equations or Gillespie's algorithm, provided that the network is not exceedingly large. Alternatively, rule-based models can be simulated directly using particle-based kinetic Monte Carlo methods. This “network-free” approach produces exact stochastic trajectories with a computational cost that is independent of network size. However, memory and run time costs increase with the number of particles, limiting the size of the system that can be feasibly simulated. Here, we present a hybrid particle/population simulation method that combines the best attributes of both the network-based and network-free approaches. The method takes as input a rule-based model and a user-specified subset of species to treat as population variables rather than as particles. The model is then transformed by a process of “partial network expansion” into a dynamically equivalent form that can be simulated using a population-adapted network-free simulator. The transformation method has been implemented within the open-source rule-based modeling platform BioNetGen, and the resulting hybrid models can be simulated using the particle-based simulator NFsim. Performance tests show that significant memory savings can be achieved using the new approach, and a monetary cost analysis provides a practical measure of its utility.
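A rough sketch of the hybrid idea, in Python with a hypothetical API (not the BioNetGen/NFsim implementation): species named as population variables are stored as integer counters, while everything else remains an explicit particle that can carry internal structure for pattern matching.

```python
class HybridState:
    """Minimal hybrid particle/population state: counters for population
    species, structured objects for particle species (hypothetical API)."""

    def __init__(self, population_species):
        self.populations = {name: 0 for name in population_species}
        self.particles = []                       # structured objects for pattern matching

    def count(self, species):
        # Population species are read from a counter in O(1); particle
        # species require scanning (or indexing) the particle list.
        if species in self.populations:
            return self.populations[species]
        return sum(1 for p in self.particles if p["type"] == species)

    def add(self, species, particle=None):
        if species in self.populations:
            self.populations[species] += 1        # cheap counter update
        else:
            self.particles.append(particle or {"type": species})

# A rule whose reactants are all population species can fire by simple
# counter updates, avoiding per-particle memory and bookkeeping entirely.
state = HybridState(population_species={"ATP", "ADP"})
state.add("ATP")
state.add("Receptor", {"type": "Receptor", "site": "unbound"})
print(state.count("ATP"), state.count("Receptor"))   # 1 1
```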

3.

Background  

In rule-based modeling, graphs are used to represent molecules: a colored vertex represents a component of a molecule, a vertex attribute represents the internal state of a component, and an edge represents a bond between components. Components of a molecule share the same color. Furthermore, graph-rewriting rules are used to represent molecular interactions. A rule that specifies addition (removal) of an edge represents a class of association (dissociation) reactions, and a rule that specifies a change of a vertex attribute represents a class of reactions that affect the internal state of a molecular component. A set of rules comprises an executable model that can be used to determine, through various means, the system-level dynamics of molecular interactions in a biochemical system.
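The data structure behind this description is small; the toy Python below (illustrative only, not BioNetGen or Kappa syntax) stores colored, attributed vertices and bond edges, with the three rule classes mapped to adding an edge, removing an edge, and changing a vertex attribute.

```python
from dataclasses import dataclass, field

@dataclass
class Component:
    # A colored vertex: 'color' names the molecule type it belongs to,
    # 'state' is the vertex attribute holding the internal state.
    color: str
    state: str = "unmodified"

@dataclass
class SiteGraph:
    components: dict = field(default_factory=dict)   # component id -> Component
    bonds: set = field(default_factory=set)          # frozenset({id1, id2}) edges

    def add_component(self, cid, color, state="unmodified"):
        self.components[cid] = Component(color, state)

    def bind(self, a, b):            # an association rule adds an edge (bond)
        self.bonds.add(frozenset((a, b)))

    def unbind(self, a, b):          # a dissociation rule removes the edge
        self.bonds.discard(frozenset((a, b)))

    def set_state(self, cid, new_state):   # a modification rule changes a vertex attribute
        self.components[cid].state = new_state

# Hypothetical example: a receptor site bound to a ligand, then phosphorylated.
g = SiteGraph()
g.add_component("R.y1", color="Receptor")
g.add_component("L.s", color="Ligand")
g.bind("R.y1", "L.s")                     # instance of an association rule
g.set_state("R.y1", "phosphorylated")     # instance of a state-change rule
```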

4.

Background  

Complex biological systems are often modeled as networks of interacting units. Networks of biochemical interactions among proteins, epidemiological contacts among hosts, and trophic interactions in ecosystems, to name a few, have provided useful insights into the dynamical processes that shape and traverse these systems. The degrees of nodes (numbers of interactions) and the extent of clustering (the tendency for a set of three nodes to be interconnected) are two of many well-studied network properties that can fundamentally shape a system. Disentangling the interdependent effects of the various network properties, however, can be difficult. Simple network models can help us quantify the structure of empirical networked systems and understand the impact of various topological properties on dynamics.
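For reference, the two properties named above are cheap to compute from an adjacency list; a minimal Python sketch with a hypothetical toy network:

```python
def degree(adj, node):
    """Number of interactions (edges) attached to a node."""
    return len(adj[node])

def clustering_coefficient(adj, node):
    """Fraction of a node's neighbour pairs that are themselves connected."""
    neigh = list(adj[node])
    k = len(neigh)
    if k < 2:
        return 0.0
    links = sum(1 for i in range(k) for j in range(i + 1, k)
                if neigh[j] in adj[neigh[i]])
    return 2.0 * links / (k * (k - 1))

# Toy undirected network as an adjacency dict (hypothetical data).
adj = {"a": {"b", "c"}, "b": {"a", "c"}, "c": {"a", "b", "d"}, "d": {"c"}}
print(degree(adj, "c"), clustering_coefficient(adj, "c"))   # 3 0.333...
```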

5.

Background

Modeling in systems biology is vital for understanding the complexity of biological systems across scales and predicting system-level behaviors. To obtain high-quality pathway databases, it is essential to improve the efficiency of model validation and model update based on appropriate feedback.

Results

We have developed a new method to guide the creation of novel, high-quality biological pathways using rule-based validation. Rules are defined to correct models against biological semantics and to improve models for dynamic simulation. In this work, we have defined 40 rules that constrain event-specific participants and their related features and add missing processes based on biological events. This approach is applied to data in Cell System Ontology, a comprehensive ontology that represents complex biological pathways with dynamics and visualization. The experimental results show that these relatively simple rules can efficiently detect errors made during curation, such as the misassignment and misuse of ontology concepts and terms in curated models.
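As a toy illustration of rule-based validation of this kind (hypothetical event records and rules, not the actual checks run against Cell System Ontology), each rule is a predicate over a curated event, and violations are reported per event:

```python
# Hypothetical event records; the real method validates ontology-based models.
events = [
    {"type": "binding", "participants": ["RAF", "RAS"], "products": ["RAF:RAS"]},
    {"type": "binding", "participants": ["MEK"], "products": ["MEK*"]},    # too few participants
    {"type": "phosphorylation", "participants": ["ERK"], "products": []},  # missing product
]

def rule_binding_needs_two(ev):
    """A binding event must have at least two participants."""
    return ev["type"] != "binding" or len(ev["participants"]) >= 2

def rule_event_has_product(ev):
    """Every event must produce at least one species."""
    return len(ev["products"]) >= 1

rules = [rule_binding_needs_two, rule_event_has_product]

for i, ev in enumerate(events):
    for rule in rules:
        if not rule(ev):
            print(f"event {i} ({ev['type']}) violates {rule.__name__}")
```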

Conclusions

A new rule-based approach has been developed to facilitate model validation and model complementation. Our rule-based validation, which embeds biological semantics, enables us to provide high-quality curated biological pathways. This approach can serve as a preprocessing step for model integration, data exchange and extraction, and simulation.

6.

Background  

Setting the rules for the identification of a stable conformation of a protein is of utmost importance for the efficient generation of structures in computer simulation. For structure prediction, a considerable number of possible models are generated from which the best model has to be selected.

7.

Background  

We suggest a new type of modeling approach for the coarse-grained, particle-based spatial simulation of combinatorially complex chemical reaction systems. In our approach molecules possess a location in the reactor as well as an orientation and geometry, while the reactions are carried out according to a list of implicitly specified reaction rules. Because the reaction rules can contain patterns for molecules, a combinatorially complex or even infinitely sized reaction network can be defined.

8.

Background  

Current methods for the automated generation of genome-scale metabolic networks focus on genome annotation and preliminary biochemical reaction network assembly, but do not adequately address the process of identifying and filling gaps in the reaction network, and verifying that the network is suitable for systems level analysis. Thus, current methods are only sufficient for generating draft-quality networks, and refinement of the reaction network is still largely a manual, labor-intensive process.

9.
Background

Mathematical/computational models are needed to understand cell signaling networks, which are complex. Signaling proteins contain multiple functional components and multiple sites of post-translational modification. The multiplicity of components and sites of modification ensures that interactions among signaling proteins have the potential to generate myriad protein complexes and post-translational modification states. As a result, the number of chemical species that can be populated in a cell signaling network, and hence the number of equations in an ordinary differential equation model required to capture the dynamics of these species, is prohibitively large. To overcome this problem, the rule-based modeling approach has been developed for representing interactions within signaling networks efficiently and compactly through coarse-graining of the chemical kinetics of molecular interactions.

Results

Here, we provide a demonstration that the rule-based modeling approach can be used to specify and simulate a large model for ERBB receptor signaling that accounts for site-specific details of protein-protein interactions. The model is considered large because it corresponds to a reaction network containing more reactions than can be practically enumerated. The model encompasses activation of ERK and Akt, and it can be simulated using a network-free simulator, such as NFsim, to generate time courses of phosphorylation for 55 individual serine, threonine, and tyrosine residues. The model is annotated and visualized in the form of an extended contact map.

Conclusions

With the development of software that implements novel computational methods for calculating the dynamics of large-scale rule-based representations of cellular signaling networks, it is now possible to build and analyze models that include a significant fraction of the protein interactions that comprise a signaling network, with incorporation of the site-specific details of the interactions. Modeling at this level of detail is important for understanding cellular signaling.

10.

Background

Combinatorial complexity is a central problem when modeling biochemical reaction networks, since the association of a few components can give rise to a large variation of protein complexes. Available classical modeling approaches are often insufficient for the analysis of very large and complex networks in detail. Recently, we developed a new rule-based modeling approach that facilitates the analysis of spatial and combinatorially complex problems. Here, we explore for the first time how this approach can be applied to a specific biological system, the human kinetochore, which is a multi-protein complex involving over 100 proteins.

Results

Applying our freely available SRSim software to a large data set on kinetochore proteins in human cells, we construct a spatial rule-based simulation model of the human inner kinetochore. The model generates an estimate of the probability distribution of the inner kinetochore 3D architecture, and we show how to analyze this distribution using information theory. In our model, the formation of a bridge between CenpA and an H3-containing nucleosome occurs efficiently only at the higher protein concentrations realized during S-phase, but possibly not in G1. Above a certain nucleosome distance, the protein bridge barely formed, pointing to the importance of chromatin structure for kinetochore complex formation. We define a metric for the distance between structures that allows us to identify structural clusters. Using this modeling technique, we explore different hypothetical chromatin layouts.

Conclusions

Applying a rule-based network analysis to the spatial kinetochore complex geometry allowed us to integrate experimental data on kinetochore proteins, suggesting a 3D model of the human inner kinetochore architecture that is governed by a combinatorial algebraic reaction network. This reaction network can serve as a bridge between multiple scales of modeling. Our approach can be applied to other systems beyond kinetochores.

11.

Background

Combinatorial complexity is a challenging problem for the modeling of cellular signal transduction, since the association of a few proteins can give rise to an enormous number of feasible protein complexes. The layer-based approach is an approximate but accurate method for the mathematical modeling of signaling systems with inherent combinatorial complexity. The number of variables in the simulation equations is greatly reduced and the resulting dynamic models show a pronounced modularity. Layer-based modeling allows for the modeling of systems that were previously not accessible.

Results

ALC (Automated Layer Construction) is a computer program that greatly simplifies the building of reduced modular models according to the layer-based approach. The model is defined using a simple but powerful rule-based syntax that supports the concepts of modularity and macrostates. ALC performs consistency checks on the model definition and provides the model output in different formats (C MEX, MATLAB, Mathematica and SBML) as ready-to-run simulation files. ALC also provides additional documentation files that simplify the publication or presentation of the models. The tool can be used offline or via a form on the ALC website.

Conclusion

ALC allows for simple rule-based generation of layer-based reduced models. The model files are provided in different formats as ready-to-run simulation files.

12.

Background  

Zero-dimensional (lumped parameter) and one-dimensional models, based on simplified representations of the components of the cardiovascular system, can contribute strongly to our understanding of circulatory physiology. Zero-D models provide a concise way to evaluate the haemodynamic interactions among the cardiovascular organs, whilst one-D (distributed parameter) models add the facility to represent efficiently the effects of pulse wave transmission in the arterial network, at greatly reduced computational expense compared to higher-dimensional computational fluid dynamics studies. There is extensive literature on both types of models.
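As a generic illustration of what a zero-D lumped-parameter model looks like (a standard two-element Windkessel, not one of the specific models surveyed in this review), the Python sketch below integrates arterial pressure against a pulsatile inflow; all parameter values are illustrative only.

```python
import math

def q_in(t, period=0.8, q_max=500.0):
    """Pulsatile inflow (mL/s): half-sine ejection during the first third of each beat."""
    phase = t % period
    return q_max * math.sin(math.pi * phase / (period / 3)) if phase < period / 3 else 0.0

# Two-element Windkessel: dP/dt = (Q_in(t) - P/R) / C, forward-Euler integration.
R, C = 1.0, 1.5          # peripheral resistance (mmHg·s/mL), arterial compliance (mL/mmHg)
P, dt = 80.0, 1e-3       # initial pressure (mmHg), time step (s)

for step in range(int(5.0 / dt)):        # simulate 5 s of beats
    t = step * dt
    P += dt * (q_in(t) - P / R) / C
print(f"pressure after 5 s: {P:.1f} mmHg")
```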

13.
The dynamics of a growing tumor involving mechanical remodeling of healthy tissue and vasculature is neglected in most existing tumor models, owing to the lack of an efficient computational framework that allows for the simulation of mechanical interactions. Yet it is precisely these interactions that trigger critical changes in tumor growth dynamics and are responsible for its volumetric and directional progression. We describe here a novel 3-D model of tumor growth, which combines particle dynamics with the cellular automata concept. The particles represent both tissue cells and fragments of the vascular network. They interact with their closest neighbors via semi-harmonic central forces simulating the mechanical resistance of the cell walls. The particle dynamics is governed by both the Newtonian laws of motion and the cellular automata rules. These rules can represent the cell life-cycle and other biological interactions involving smaller spatio-temporal scales. We show that our complex-automata, particle-based model can reproduce realistic 3-D dynamics of the entire system consisting of the tumor, normal tissue cells, blood vessels and blood flow. It can explain phenomena such as the inward cell motion in avascular tumors, stabilization of tumor growth by external pressure, tumor vascularization due to the process of angiogenesis, trapping of healthy cells by the invading tumor, and the influence of external (boundary) conditions on the direction of tumor progression. We conclude that the particle model can serve as a general framework for designing advanced multiscale models of tumor dynamics and is highly competitive with the modeling approaches presented before.
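The following Python sketch shows one plausible form of such a semi-harmonic central force, with harmonic repulsion on overlap and an optional weak attraction up to a cut-off; the exact functional form and parameters used in the paper may differ.

```python
import math

def semi_harmonic_force(xi, xj, r0, k_rep, k_att=0.0, r_cut=None):
    """Central force on particle i from particle j (hypothetical form).

    Harmonic repulsion when the particles overlap (distance < r0, mimicking
    mechanical resistance of the cell walls) and an optional weaker harmonic
    attraction between r0 and r_cut (adhesion)."""
    dx = [a - b for a, b in zip(xi, xj)]
    r = math.sqrt(sum(d * d for d in dx))
    if r == 0.0:
        return [0.0] * len(dx)
    if r < r0:                               # overlap: repulsive branch
        mag = k_rep * (r0 - r)
    elif r_cut is not None and r < r_cut:    # adhesion: attractive branch
        mag = -k_att * (r - r0)
    else:
        return [0.0] * len(dx)
    return [mag * d / r for d in dx]         # directed along the center line

# Illustrative call: two slightly overlapping cells of radius sum r0 = 2.0.
print(semi_harmonic_force([0.0, 0.0, 0.0], [1.8, 0.0, 0.0], r0=2.0, k_rep=5.0))
```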

14.

Background  

Probing the complex fusion of genetic and environmental interactions, metabolic profiling (or metabolomics/metabonomics), the study of small molecules involved in metabolic reactions, is a rapidly expanding 'omics' field. A major technique for capturing metabolite data is 1H-NMR spectroscopy and this yields highly complex profiles that require sophisticated statistical analysis methods. However, experimental data is difficult to control and expensive to obtain. Thus data simulation is a productive route to aid algorithm development.

15.
We consider a model of early events in signaling by the epidermal growth factor (EGF) receptor (EGFR). The model includes EGF, EGFR, the adapter proteins Grb2 and Shc, and the guanine nucleotide exchange factor Sos, which is activated through EGF-induced formation of EGFR-Grb2-Sos and EGFR-Shc-Grb2-Sos assemblies at the plasma membrane. The protein interactions involved in signaling can potentially generate a diversity of protein complexes and phosphoforms; however, this diversity has been largely ignored in models of EGFR signaling. Here, we develop a model that accounts more fully for potential molecular diversity by specifying rules for protein interactions and then using these rules to generate a reaction network that includes all chemical species and reactions implied by the protein interactions. We obtain a model that predicts the dynamics of 356 molecular species, which are connected through 3749 unidirectional reactions. This network model is compared with a previously developed model that includes only 18 chemical species but incorporates the same scope of protein interactions. The predictions of this model are reproduced by the network model, which also yields new predictions. For example, the network model predicts distinct temporal patterns of autophosphorylation for different tyrosine residues of EGFR. A comparison of the two models suggests experiments that could lead to mechanistic insights about competition among adapter proteins for EGFR binding sites and the role of EGFR monomers in signal transduction.

16.
Numerical simulation of differential equation systems plays a major role in understanding how metabolic network models generate particular cellular functions. At the same time, although many elegant algorithms have been presented, the classical technical problems posed by stiff differential equations remain to be solved. To relax the stiffness problem, we propose new practical methods: the gradual update of differential-algebraic equations, based on the gradual application of the steady-state approximation to stiff differential equations, and the gradual update of the initial values in differential-algebraic equations. These empirical methods show high efficiency in simulating the steady-state solutions of stiff differential equations that existing solvers alone cannot solve. They are effective in extending the applicability of dynamic simulation to biochemical network models.
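The steady-state approximation these methods build on can be written, in its standard form, as the replacement of a fast differential equation by an algebraic constraint; for a system partitioned into slow variables $x$ and fast variables $y$ (a generic statement, not the paper's specific update scheme):

$$\frac{dx}{dt} = f(x,y), \qquad \varepsilon\,\frac{dy}{dt} = g(x,y) \;\;\xrightarrow{\;\varepsilon \to 0\;}\;\; \frac{dx}{dt} = f(x,y), \qquad 0 = g(x,y).$$

The result is a differential-algebraic system; as the abstract describes, the proposed methods apply this replacement, together with the corresponding updates of the initial values, gradually rather than all at once.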

17.
The flow of information within a cell is governed by a series of protein-protein interactions that can be described as a reaction network. Mathematical models of biochemical reaction networks can be constructed by repetitively applying specific rules that define how reactants interact and what new species are formed on reaction. To aid in understanding the underlying biochemistry, timescale analysis is one method that has been developed to prune the size of the reaction network. In this work, we extend the methods associated with timescale analysis to reaction rules instead of the species contained within the network. To illustrate this approach, we applied timescale analysis to a simple receptor-ligand binding model and a rule-based model of interleukin-12 (IL-12) signaling in naïve CD4+ T cells. The IL-12 signaling pathway includes multiple protein-protein interactions that collectively transmit information; however, the level of mechanistic detail sufficient to capture the observed dynamics has not been justified based on the available data. The analysis correctly predicted that reactions associated with Janus Kinase 2 and Tyrosine Kinase 2 binding to their corresponding receptors exist at a pseudo-equilibrium. By contrast, reactions associated with ligand binding and receptor turnover regulate the cellular response to IL-12. An empirical Bayesian approach was used to estimate the uncertainty in the timescales. This approach complements existing rank- and flux-based methods that can be used to interrogate complex reaction networks. Ultimately, timescale analysis of rule-based models is a computational tool that can be used to reveal the biochemical steps that regulate signaling dynamics.
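As a generic illustration of the kind of separation such an analysis identifies (not the paper's specific estimator), a reversible binding rule $R + L \rightleftharpoons RL$ relaxes toward equilibrium on a characteristic timescale

$$\tau_{\mathrm{bind}} \approx \frac{1}{k_{\mathrm{on}}\left([R]+[L]\right) + k_{\mathrm{off}}},$$

and when $\tau_{\mathrm{bind}}$ is much shorter than the timescales of ligand binding and receptor turnover, the rule can be treated as sitting at pseudo-equilibrium, $[RL] \approx k_{\mathrm{on}}[R][L]/k_{\mathrm{off}}$, which is the situation reported here for the JAK2 and TYK2 receptor-binding reactions.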

18.

Background  

Combinatorial complexity is a challenging problem in detailed and mechanistic mathematical modeling of signal transduction. This subject has been discussed intensively and much progress has been made within the last few years. A software tool (BioNetGen) was developed that allows an automatic rule-based set-up of mechanistic model equations. In many cases these models can be reduced by an exact domain-oriented lumping technique. However, the resulting models can still consist of a very large number of differential equations.

19.
20.

Background  

With increasing computer power, simulating the dynamics of complex systems in chemistry and biology is becoming increasingly routine. The modelling of individual reactions in (bio)chemical systems involves a large number of random events that can be simulated by the stochastic simulation algorithm (SSA). The key quantity is the step size, or waiting time, τ, whose value inversely depends on the size of the propensities of the different channel reactions and which needs to be re-evaluated after every firing event. Such a discrete event simulation may be extremely expensive, in particular for stiff systems where τ can be very short due to the fast kinetics of some of the channel reactions. Several alternative methods have been put forward to increase the integration step size. The so-called τ-leap approach takes a larger step size by allowing all the reactions to fire, a number of times drawn from a Poisson or binomial distribution, within that step. Although the expected value for the different species in the reactive system is maintained with respect to more precise methods, the variance at steady state can suffer from large errors as τ grows.
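A compact Python sketch of the two schemes contrasted above (generic textbook forms, not the paper's implementation): the exact SSA draws the waiting time τ from the total propensity and fires a single reaction, while the τ-leap fires every reaction a Poisson-distributed number of times within a fixed leap.

```python
import math
import random

def ssa_step(x, stoich, propensities, t):
    """One exact SSA (Gillespie) event: draw the waiting time tau from the
    total propensity a0 and fire one reaction chosen with probability a_j/a0."""
    a = [p(x) for p in propensities]
    a0 = sum(a)
    if a0 == 0.0:
        return x, t, False                        # no reaction can fire
    tau = random.expovariate(a0)                  # waiting time ~ Exp(a0)
    r, cum, j = random.uniform(0.0, a0), 0.0, len(a) - 1
    for k, ak in enumerate(a):
        cum += ak
        if r < cum:
            j = k
            break
    x = [xi + s for xi, s in zip(x, stoich[j])]   # apply stoichiometry of reaction j
    return x, t + tau, True

def tau_leap_step(x, stoich, propensities, t, tau):
    """One tau-leap step: each reaction fires Poisson(a_j * tau) times within
    the fixed leap, preserving the mean but inflating the variance as tau grows."""
    a = [p(x) for p in propensities]
    for j, aj in enumerate(a):
        n = _poisson(aj * tau)
        x = [xi + n * s for xi, s in zip(x, stoich[j])]
    return x, t + tau

def _poisson(lam):
    """Knuth's sampler; adequate for the small means used in a sketch."""
    threshold, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= threshold:
            return k
        k += 1

# Toy dimerisation A + A -> B with an illustrative rate constant.
stoich = [(-2, +1)]
props = [lambda x: 0.01 * x[0] * (x[0] - 1) / 2]
state, t = [1000, 0], 0.0
state, t, _ = ssa_step(state, stoich, props, t)
state, t = tau_leap_step(state, stoich, props, t, tau=0.01)
print(state, round(t, 4))
```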
