Similar Documents
20 similar documents found (search time: 31 ms)
1.

Background  

The system-level dynamics of many molecular interactions, particularly protein-protein interactions, can be conveniently represented using reaction rules, which can be specified using model-specification languages, such as the BioNetGen language (BNGL). A set of rules implicitly defines a (bio)chemical reaction network. The reaction network implied by a set of rules is often very large, and as a result, generation of the network implied by rules tends to be computationally expensive. Moreover, the cost of many commonly used methods for simulating network dynamics is a function of network size. Together these factors have limited application of the rule-based modeling approach. Recently, several methods for simulating rule-based models have been developed that avoid the expensive step of network generation. The cost of these "network-free" simulation methods is independent of the number of reactions implied by rules. Software implementing such methods is now needed for the simulation and analysis of rule-based models of biochemical systems.
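The combinatorial blow-up that makes explicit network generation expensive can be illustrated with a toy count (a hypothetical Python sketch, not BioNetGen itself): a receptor with n independent two-state modification sites already implies 2^n monomer states, and allowing symmetric dimers roughly squares that number.

```python
# Toy illustration (not BioNetGen): count the species a small rule set
# implies for one receptor with n independent two-state sites.
from itertools import product

def monomer_states(n_sites):
    """Each site is either unmodified (0) or modified (1): 2**n_sites states."""
    return list(product((0, 1), repeat=n_sites))

def implied_species(n_sites):
    """Monomers plus unordered symmetric dimers of those monomers."""
    m = len(monomer_states(n_sites))
    dimers = m * (m + 1) // 2  # unordered pairs, including homo-dimers
    return m + dimers

for n in (2, 6, 10):
    print(n, implied_species(n))  # grows roughly as 2**(2*n - 1)
```

For 10 sites this toy rule set already implies over half a million distinct species; enumerating them, and the reactions among them, is precisely the step that network-free methods skip.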

2.
Background: Mathematical/computational models are needed to understand cell signaling networks, which are complex. Signaling proteins contain multiple functional components and multiple sites of post-translational modification. The multiplicity of components and sites of modification ensures that interactions among signaling proteins have the potential to generate myriad protein complexes and post-translational modification states. As a result, the number of chemical species that can be populated in a cell signaling network, and hence the number of equations in an ordinary differential equation model required to capture the dynamics of these species, is prohibitively large. To overcome this problem, the rule-based modeling approach has been developed for representing interactions within signaling networks efficiently and compactly through coarse-graining of the chemical kinetics of molecular interactions. Results: Here, we provide a demonstration that the rule-based modeling approach can be used to specify and simulate a large model for ERBB receptor signaling that accounts for site-specific details of protein-protein interactions. The model is considered large because it corresponds to a reaction network containing more reactions than can be practically enumerated. The model encompasses activation of ERK and Akt, and it can be simulated using a network-free simulator, such as NFsim, to generate time courses of phosphorylation for 55 individual serine, threonine, and tyrosine residues. The model is annotated and visualized in the form of an extended contact map.
Conclusions: With the development of software that implements novel computational methods for calculating the dynamics of large-scale rule-based representations of cellular signaling networks, it is now possible to build and analyze models that include a significant fraction of the protein interactions that comprise a signaling network, with incorporation of the site-specific details of the interactions. Modeling at this level of detail is important for understanding cellular signaling.

3.
Rule-based models, which are typically formulated to represent cell signaling systems, can now be simulated via various network-free simulation methods. In a network-free method, reaction rates are calculated for rules that characterize molecular interactions, and these rule rates, which each correspond to the cumulative rate of all reactions implied by a rule, are used to perform a stochastic simulation of reaction kinetics. Network-free methods, which can be viewed as generalizations of Gillespie's method, are so named because these methods do not require that a list of individual reactions implied by a set of rules be explicitly generated, which is a requirement of other methods for simulating rule-based models. This requirement is impractical for rule sets that imply large reaction networks (i.e. long lists of individual reactions), as reaction network generation is expensive. Here, we compare the network-free simulation methods implemented in RuleMonkey and NFsim, general-purpose software tools for simulating rule-based models encoded in the BioNetGen language. The method implemented in NFsim uses rejection sampling to correct overestimates of rule rates, which introduces null events (i.e. time steps that do not change the state of the system being simulated). The method implemented in RuleMonkey uses iterative updates to track rule rates exactly, which avoids null events. To ensure a fair comparison of the two methods, we developed implementations of the rejection and rejection-free methods specific to a particular class of kinetic models for multivalent ligand-receptor interactions. These implementations were written with the intention of making them as much alike as possible, minimizing the contribution of irrelevant coding differences to efficiency differences. Simulation results show that performance of the rejection method is equal to or better than that of the rejection-free method over wide parameter ranges. 
However, when parameter values are such that ligand-induced aggregation of receptors yields a large connected receptor cluster, the rejection-free method is more efficient.
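The rejection idea described above can be sketched in a few lines (an illustrative simplification, not the actual NFsim or RuleMonkey code): waiting times are drawn from an over-estimated rate bound, and each candidate event fires with probability actual/bound, so rejected draws become null events that advance time without changing state. The key property is that the long-run event rate still matches the actual rate.

```python
import random

def rejection_simulate(actual_rate, rate_bound, n_steps, seed=0):
    """Count accepted events and elapsed time for a constant-rate process.

    Waiting times use the (over-estimated) rate_bound; each candidate
    event is accepted with probability actual_rate / rate_bound,
    otherwise it is a null event: time advances, state does not change.
    """
    assert 0 < actual_rate <= rate_bound
    rng = random.Random(seed)
    t, events = 0.0, 0
    for _ in range(n_steps):
        t += rng.expovariate(rate_bound)       # waiting time at the bound
        if rng.random() < actual_rate / rate_bound:
            events += 1                        # accepted event
        # else: null event
    return events, t

events, t = rejection_simulate(actual_rate=2.0, rate_bound=5.0, n_steps=20000)
empirical_rate = events / t  # recovers ~2.0 despite the loose bound
```

With a bound of 5.0 against an actual rate of 2.0, about 60% of steps are null events, which is exactly the overhead the exact-rate bookkeeping in RuleMonkey is designed to avoid.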

4.
The molecular complexity of genetic diseases requires novel approaches to break it down into coherent biological modules. For this purpose, many disease network models have been created and analyzed. We highlight two of them, “the human diseases networks” (HDN) and “the orphan disease networks” (ODN). However, in these models, each single node represents one disease or an ambiguous group of diseases. In these cases, the notion of diseases as unique entities reduces the usefulness of network-based methods. We hypothesize that using clinical features (pathophenotypes) to define pathophenotypic connections between disease-causing genes improves our understanding of the molecular events originating from genetic disturbances. To this end, we built a pathophenotypic similarity gene network (PSGN) and compared it with the unipartite projections (based on gene-to-gene edges) similar to those used in previous network models (HDN and ODN). Unlike these disease network models, the PSGN uses semantic similarities. This pathophenotypic similarity was calculated by comparing the pathophenotypic annotations of genes (human abnormality terms) in the Human Phenotype Ontology (HPO). The resulting network contains 1075 genes (nodes) and 26197 significant pathophenotypic similarities (edges). A global analysis of this network reveals: previously unnoticed pairs of genes showing significant pathophenotypic similarity; a biologically meaningful rearrangement of the pathological relationships between genes; correlation of biochemical interactions with higher similarity scores; and functional biases of metabolic and essential genes toward pathophenotypic specificity and pleiotropy, respectively.
Additionally, pathophenotypic similarities and metabolic interactions of genes associated with maple syrup urine disease (MSUD) have been merged into a coherent pathological module. Our results indicate that pathophenotypes help identify underlying co-dependencies among disease-causing genes that are useful for describing disease modularity.

5.
Hepatitis C virus (HCV) chronically infects over 180 million people worldwide, with over 350,000 estimated deaths attributed yearly to HCV-related liver diseases. It disproportionally affects people who inject drugs (PWID). Currently there is no preventative vaccine and interventions feature long treatment durations with severe side-effects. Upcoming treatments will improve this situation, making possible large-scale treatment interventions. How these strategies should target HCV-infected PWID remains an important unanswered question. Previous models of HCV have lacked empirically grounded contact models of PWID. Here we report results on HCV transmission and treatment using simulated contact networks generated from an empirically grounded network model using recently developed statistical approaches in social network analysis. Our HCV transmission model is a detailed, stochastic, individual-based model including spontaneously clearing nodes. On transmission we investigate the role of number of contacts and injecting frequency on time to primary infection and the role of spontaneously clearing nodes on incidence rates. On treatment we investigate the effect of nine network-based treatment strategies on chronic prevalence and incidence rates of primary infection and re-infection. Both numbers of contacts and injecting frequency play key roles in reducing time to primary infection. The change from “less-” to “more-frequent” injector is roughly similar to having one additional network contact. Nodes that spontaneously clear their HCV infection have a local effect on infection risk and the total number of such nodes (but not their locations) has a network wide effect on the incidence of both primary and re-infection with HCV. Re-infection plays a large role in the effectiveness of treatment interventions. 
Strategies that choose PWID and treat all their contacts (analogous to ring vaccination) are most effective in reducing the incidence rates of re-infection and combined infection. A strategy targeting infected PWID with the most contacts (analogous to targeted vaccination) is the least effective.

6.
We explore humans’ rule-based category learning using analytic approaches that highlight their psychological transitions during learning. These approaches confirm that humans show qualitatively sudden psychological transitions during rule learning. These transitions contribute to the theoretical literature contrasting single vs. multiple category-learning systems, because they seem to reveal a distinctive learning process of explicit rule discovery. A complete psychology of categorization must describe this learning process, too. Yet extensive formal-modeling analyses confirm that a wide range of current (gradient-descent) models cannot reproduce these transitions, including influential rule-based models (e.g., COVIS) and exemplar models (e.g., ALCOVE). It is an important theoretical conclusion that existing models cannot explain humans’ rule-based category learning. The problem these models have is the incremental algorithm by which learning is simulated. Humans descend no gradient in rule-based tasks. Very different formal-modeling systems will be required to explain humans’ psychology in these tasks. An important next step will be to build a new generation of models that can do so.

7.
8.
The tracing of potentially infectious contacts has become an important part of the control strategy for many infectious diseases, from early cases of novel infections to endemic sexually transmitted infections. Here, we make use of mathematical models to consider the case of partner notification for sexually transmitted infection; however, these models are sufficiently simple to allow more general conclusions to be drawn. We show that, when contact network structure is considered in addition to contact tracing, standard “mass action” models are generally inadequate. To consider the impact of mutual contacts (specifically clustering) we develop an improvement to existing pairwise network models, which we use to demonstrate that ceteris paribus, clustering improves the efficacy of contact tracing for a large region of parameter space. This result is sometimes reversed, however, for the case of highly effective contact tracing. We also develop stochastic simulations for comparison, using simple re-wiring methods that allow the generation of appropriate comparator networks. In this way we contribute to the general theory of network-based interventions against infectious disease.

9.
We consider the statistical analysis of population structure using genetic data. We show how the two most widely used approaches to modeling population structure, admixture-based models and principal components analysis (PCA), can be viewed within a single unifying framework of matrix factorization. Specifically, they can both be interpreted as approximating an observed genotype matrix by a product of two lower-rank matrices, but with different constraints or prior distributions on these lower-rank matrices. This opens the door to a large range of possible approaches to analyzing population structure, by considering other constraints or priors. In this paper, we introduce one such novel approach, based on sparse factor analysis (SFA). We investigate the effects of the different types of constraint in several real and simulated data sets. We find that SFA produces similar results to admixture-based models when the samples are descended from a few well-differentiated ancestral populations and can recapitulate the results of PCA when the population structure is more “continuous,” as in isolation-by-distance models.
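The unifying matrix-factorization view can be made concrete with a toy example (hypothetical numbers, plain Python): an expected genotype or allele-frequency matrix is written as the product of a low-rank ancestry-proportions matrix Q and an ancestral-allele-frequency matrix F. Admixture-style models constrain Q rows to be non-negative and sum to one, whereas PCA would instead impose orthogonality on the factors.

```python
def matmul(A, B):
    """Plain-Python matrix product of two lists-of-lists."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

# Hypothetical example: 3 individuals x 2 ancestral populations.
# Rows of Q are admixture proportions (non-negative, sum to 1).
Q = [[1.0, 0.0],
     [0.0, 1.0],
     [0.5, 0.5]]   # an evenly admixed individual
# Rows of F are ancestral allele frequencies at 4 SNPs.
F = [[0.9, 0.1, 0.8, 0.4],
     [0.2, 0.7, 0.3, 0.6]]

# Rank-2 approximation of the expected allele-frequency matrix.
expected_freq = matmul(Q, F)
print(expected_freq[2])  # the admixed row is the average of the two F rows
```

Swapping the constraints on Q and F (orthogonality for PCA, sparsity priors for SFA) changes the interpretation of the factors while keeping the same low-rank product structure.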

10.

Background

Combinatorial complexity is a challenging problem for the modeling of cellular signal transduction, since the association of a few proteins can give rise to an enormous number of feasible protein complexes. The layer-based approach is an approximate but accurate method for the mathematical modeling of signaling systems with inherent combinatorial complexity. The number of variables in the simulation equations is greatly reduced, and the resulting dynamic models show pronounced modularity. Layer-based modeling allows systems to be modeled that were previously inaccessible.

Results

ALC (Automated Layer Construction) is a computer program that highly simplifies the building of reduced modular models, according to the layer-based approach. The model is defined using a simple but powerful rule-based syntax that supports the concepts of modularity and macrostates. ALC performs consistency checks on the model definition and provides the model output in different formats (C MEX, MATLAB, Mathematica and SBML) as ready-to-run simulation files. ALC also provides additional documentation files that simplify the publication or presentation of the models. The tool can be used offline or via a form on the ALC website.

Conclusion

ALC allows for a simple rule-based generation of layer-based reduced models. The model files are given in different formats as ready-to-run simulation files.

11.
Understanding the assembly processes of symbiont communities, including viromes and microbiomes, is important for improving predictions on symbionts’ biogeography and disease ecology. Here, we use phylogenetic, functional, and geographic filters to predict the similarity between symbiont communities, using as a test case the assembly process in viral communities of Mexican bats. We construct generalized linear models to predict viral community similarity, as measured by the Jaccard index, as a function of differences in host phylogeny, host functionality, and spatial co‐occurrence, evaluating the models using the Akaike information criterion. Two model classes are constructed: a “known” model, where virus–host relationships are based only on data reported in Mexico, and a “potential” model, where viral reports of all the Americas are used, but then applied only to bat species that are distributed in Mexico. Although the “known” model shows only weak dependence on any of the filters, the “potential” model highlights the importance of all three filter types—phylogeny, functional traits, and co‐occurrence—in the assemblage of viral communities. The differences between the “known” and “potential” models highlight the utility of modeling at different “scales” so as to compare and contrast known information at one scale to another one, where, for example, virus information associated with bats is much scarcer.
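The Jaccard index used above as the response variable is straightforward to compute; a minimal sketch with made-up virus names (the actual host-virus records come from the study and are not reproduced here):

```python
def jaccard(a, b):
    """Jaccard similarity: |intersection| / |union| of two sets."""
    a, b = set(a), set(b)
    union = a | b
    return len(a & b) / len(union) if union else 0.0

# Hypothetical viromes of two bat species (illustrative names only).
bat1 = {"rabies virus", "coronavirus A", "hantavirus X"}
bat2 = {"rabies virus", "coronavirus A", "flavivirus Y"}
print(jaccard(bat1, bat2))  # 2 shared viruses / 4 total = 0.5
```

In the study's framework, one such similarity value per host pair becomes the response in a generalized linear model, with phylogenetic distance, functional-trait distance, and spatial co-occurrence as predictors.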

12.
We evaluated a neural network model for prediction of glucose in critically ill trauma and post-operative cardiothoracic surgical patients. A prospective feasibility trial evaluating a continuous glucose-monitoring device was performed. After institutional review board approval, clinical data from all consenting surgical intensive care unit patients were converted to an electronic format using novel software. These data were used to develop and train a neural network model for real-time prediction of serum glucose concentration, implementing a prediction horizon of 75 minutes. Glycemic data from 19 patients were used to “train” the neural network model. Subsequent real-time simulated testing was performed in 5 patients to whom the neural network model was naive. Performance of the model was evaluated by calculating the mean absolute difference percent (MAD%), Clarke Error Grid Analysis, and the percent of hypoglycemic (≤70 mg/dL), normoglycemic (>70 and <150 mg/dL), and hyperglycemic (≥150 mg/dL) values accurately predicted by the model; 9,405 data points were analyzed. The model successfully predicted trends in glucose in the 5 test patients. Clarke Error Grid Analysis indicated that 100.0% of predictions were clinically acceptable, with 87.3% and 12.7% of predicted values falling within regions A and B of the error grid, respectively. Overall model error (MAD%) was 9.0% with respect to actual continuous glucose monitoring data. Our model successfully predicted 96.7% and 53.6% of the normo- and hyperglycemic values, respectively. No hypoglycemic events occurred in these patients. Use of neural network models for real-time prediction of glucose in the surgical intensive care unit setting offers healthcare providers potentially useful information that could facilitate optimization of glycemic control, patient safety, and improved care.
Similar models can be implemented across a wider range of biomedical variables to offer real-time optimization, training, and adaptation that increase the predictive accuracy and performance of therapies.
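MAD% as reported above is typically the mean of per-point absolute errors expressed relative to the reference value; a minimal sketch with hypothetical readings (the paper may define it with a slight variant):

```python
def mad_percent(predicted, reference):
    """Mean absolute difference percent relative to the reference values."""
    errors = [abs(p - r) / r for p, r in zip(predicted, reference)]
    return 100.0 * sum(errors) / len(errors)

# Hypothetical glucose readings in mg/dL.
pred = [110.0, 95.0, 150.0]
ref = [100.0, 100.0, 150.0]
print(mad_percent(pred, ref))  # (10% + 5% + 0%) / 3 = 5.0
```

A MAD% of 9.0%, as reported for the test patients, would correspond to predictions deviating from the sensor readings by about 9% on average.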

13.
The notion of attractor networks is the leading hypothesis for how associative memories are stored and recalled. A defining anatomical feature of such networks is excitatory recurrent connections. These “attract” the firing pattern of the network to a stored pattern, even when the external input is incomplete (pattern completion). The CA3 region of the hippocampus has been postulated to be such an attractor network; however, the experimental evidence has been ambiguous, leading to the suggestion that CA3 is not an attractor network. In order to resolve this controversy and to better understand how CA3 functions, we simulated CA3 and its input structures. In our simulation, we could reproduce critical experimental results and establish the criteria for identifying attractor properties. Notably, under conditions in which there is continuous input, the output should be “attracted” to a stored pattern. However, contrary to previous expectations, as a pattern is gradually “morphed” from one stored pattern to another, a sharp transition between output patterns is not expected. The observed firing patterns of CA3 meet these criteria and can be quantitatively accounted for by our model. Notably, as morphing proceeds, the activity pattern in the dentate gyrus changes; in contrast, the activity pattern in the downstream CA3 network is attracted to a stored pattern and thus undergoes little change. We furthermore show that other aspects of the observed firing patterns can be explained by learning that occurs during behavioral testing. The CA3 thus displays both the learning and recall signatures of an attractor network. These observations, taken together with existing anatomical and behavioral evidence, make the strong case that CA3 constructs associative memories based on attractor dynamics.

14.
Colorectal cancer progresses through an accumulation of somatic mutations, some of which reside in so-called “driver” genes that provide a growth advantage to the tumor. To identify points of intersection between driver gene pathways, we implemented a network analysis framework using protein interactions to predict likely connections – both precedented and novel – between key driver genes in cancer. We applied the framework to find significant connections between two genes, Apc and Cdkn1a (p21), known to be synergistic in tumorigenesis in mouse models. We then assessed the functional coherence of the resulting Apc-Cdkn1a network by engineering in vivo single node perturbations of the network: mouse models mutated individually at Apc (Apc1638N+/−) or Cdkn1a (Cdkn1a−/−), followed by measurements of protein and gene expression changes in intestinal epithelial tissue. We hypothesized that if the predicted network is biologically coherent (functional), then the predicted nodes should associate more specifically with dysregulated genes and proteins than stochastically selected genes and proteins. The predicted Apc-Cdkn1a network was significantly perturbed at the mRNA-level by both single gene knockouts, and the predictions were also strongly supported based on physical proximity and mRNA coexpression of proteomic targets. These results support the functional coherence of the proposed Apc-Cdkn1a network and also demonstrate how network-based predictions can be statistically tested using high-throughput biological data.

15.
The use of computational modeling and simulation has increased in many biological fields, but despite their potential these techniques are only marginally applied in nutritional sciences. Nevertheless, recent applications of modeling have been instrumental in answering important nutritional questions from the cellular up to the physiological levels. Capturing the complexity of today's important nutritional research questions poses a challenge for modeling to become truly integrative in the consideration and interpretation of experimental data at widely differing scales of space and time. In this review, we discuss a selection of available modeling approaches and applications relevant for nutrition. We then put these models into perspective by categorizing them according to their space and time domain. Through this categorization process, we identified a dearth of models that consider processes occurring between the microscopic and macroscopic scale. We propose a “middle-out” strategy to develop the required full-scale, multilevel computational models. Exhaustive and accurate phenotyping, the use of the virtual patient concept, and the development of biomarkers from “-omics” signatures are identified as key elements of a successful systems biology modeling approach in nutrition research—one that integrates physiological mechanisms and data at multiple space and time scales.

16.
The enzyme cellulase, a multienzyme complex made up of several proteins, catalyzes the conversion of cellulose to glucose in an enzymatic hydrolysis-based biomass-to-ethanol process. Production of cellulase enzyme proteins in large quantities using the fungus Trichoderma reesei requires understanding the dynamics of growth and enzyme production. The method of neural network parameter function modeling, which combines the approximation capabilities of neural networks with fundamental process knowledge, is utilized to develop a mathematical model of this dynamic system. In addition, kinetic models are also developed. Laboratory data from bench-scale fermentations involving growth and protein production by T. reesei on lactose and xylose are used to estimate the parameters in these models. The relative performances of the various models and the results of optimizing these models on two different performance measures are presented. An approximately 33% lower root-mean-squared error (RMSE) in protein predictions and about 40% lower total RMSE is obtained with the neural network-based model as opposed to kinetic models. Using the neural network-based model, the RMSE in predicting optimal conditions for two performance indices is about 67% and 40% lower, respectively, when compared with the kinetic models. Thus, both model predictions and optimization results from the neural network-based model are found to be closer to the experimental data than the kinetic models developed in this work. It is shown that the neural network parameter function modeling method can be useful as a "macromodeling" technique to rapidly develop dynamic models of a process.
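The RMSE comparisons above follow the standard definition; a minimal sketch with made-up numbers (the fermentation data themselves are from the study and not reproduced here) of computing RMSE and the kind of relative reduction quoted:

```python
import math

def rmse(predicted, observed):
    """Root-mean-squared error between paired predictions and observations."""
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(predicted, observed))
                     / len(observed))

def relative_improvement(rmse_baseline, rmse_model):
    """Fractional RMSE reduction of a model relative to a baseline."""
    return (rmse_baseline - rmse_model) / rmse_baseline

# Hypothetical protein-concentration measurements and predictions (g/L).
obs = [1.0, 2.0, 3.0, 4.0]
kinetic = [1.2, 2.5, 2.6, 4.4]   # baseline kinetic-model predictions
neural = [1.1, 2.2, 2.8, 4.2]    # neural network-based predictions

r_kin = rmse(kinetic, obs)
r_nn = rmse(neural, obs)
print(round(relative_improvement(r_kin, r_nn), 3))
```

A value of 0.33 from such a comparison would correspond to the roughly 33% lower protein-prediction RMSE reported for the neural network-based model.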

17.

Background

Networks of single interaction types, such as plant-pollinator mutualisms, are biodiversity’s “building blocks”. Yet, the structure of mutualistic and antagonistic networks differs, leaving no unified modeling framework across biodiversity’s component pieces.

Methods/Principal Findings

We use a one-dimensional “niche model” to predict antagonistic and mutualistic species interactions, finding that accuracy decreases with the size of the network. We show that properties of the modeled network structure closely approximate empirical properties even where individual interactions are poorly predicted. Further, some aspects of the structure of the niche space were consistently different between network classes.

Conclusions/Significance

These novel results reveal fundamental differences between the ability to predict ecologically important features of the overall structure of a network and the ability to predict pair-wise species interactions.

18.

Background

Combinatorial complexity is a central problem when modeling biochemical reaction networks, since the association of a few components can give rise to a large variation of protein complexes. Available classical modeling approaches are often insufficient for the analysis of very large and complex networks in detail. Recently, we developed a new rule-based modeling approach that facilitates the analysis of spatial and combinatorially complex problems. Here, we explore for the first time how this approach can be applied to a specific biological system, the human kinetochore, which is a multi-protein complex involving over 100 proteins.

Results

Applying our freely available SRSim software to a large data set on kinetochore proteins in human cells, we construct a spatial rule-based simulation model of the human inner kinetochore. The model generates an estimate of the probability distribution of the inner kinetochore 3D architecture, and we show how to analyze this distribution using information theory. In our model, a bridge between CenpA and an H3-containing nucleosome forms efficiently only at the higher protein concentrations realized during S-phase, but possibly not in G1. Above a certain nucleosome distance the protein bridge barely formed, pointing toward the importance of chromatin structure for kinetochore complex formation. We define a metric for the distance between structures that allows us to identify structural clusters. Using this modeling technique, we explore different hypothetical chromatin layouts.

Conclusions

Applying a rule-based network analysis to the spatial kinetochore complex geometry allowed us to integrate experimental data on kinetochore proteins, suggesting a 3D model of the human inner kinetochore architecture that is governed by a combinatorial algebraic reaction network. This reaction network can serve as a bridge between multiple scales of modeling. Our approach can be applied to other systems beyond kinetochores.

19.

Background  

We suggest a new type of modeling approach for the coarse grained, particle-based spatial simulation of combinatorially complex chemical reaction systems. In our approach molecules possess a location in the reactor as well as an orientation and geometry, while the reactions are carried out according to a list of implicitly specified reaction rules. Because the reaction rules can contain patterns for molecules, a combinatorially complex or even infinitely sized reaction network can be defined.

20.
Continuous colonization and re-colonization is critical for survival of insect species living in temporary habitats. When insect populations in temporary habitats are depleted, some species may escape extinction by surviving in permanent but less suitable habitats, in which long-term population survival can be maintained only by immigration from other populations. Such a situation has been repeatedly described in nature, but when and how this occurs and how important this phenomenon is for insect metapopulation survival are still poorly known, mainly because it is difficult to study experimentally. Therefore, we used a simulation model to investigate how environmental stochasticity, growth rate, and the incidence of dispersal affect the positive effect of permanent but poor (“sink”) habitats on the likelihood of metapopulation persistence in a network of high-quality but temporary (“source”) habitats. This model revealed that permanent habitats substantially increase the probability of metapopulation persistence of insect species with poor dispersal ability if the availability of temporary habitats is spatio-temporally synchronized. Addition of permanent habitats to a system sometimes enabled metapopulation persistence even in cases in which the metapopulation would otherwise go extinct, especially for species with high growth rates. For insect species with low growth rates, the probability of metapopulation persistence strongly depended on the relative proportions of “source”-to-“source” and “sink”-to-“source” dispersal rates.
