Similar literature
 Found 20 similar documents (search time: 15 ms)
1.

Background  

Quantifying the robustness of biochemical models is important both for assessing the validity of a model of a natural system and for designing reliable and robust synthetic biochemical networks. Several tools have been proposed in the literature. Unfortunately, multiparameter robustness analysis suffers from computational limitations.

2.

Background  

The Hill function and the related Hill model are used frequently to study processes in the living cell. There are very few studies investigating the situations in which the model can be safely used. For example, it has been shown, at the mean field level, that the dose response curve obtained from a Hill model agrees well with the dose response curves obtained from a more complicated Adair-Klotz model, provided that the parameters of the Adair-Klotz model describe strongly cooperative binding. However, it has not been established whether such findings can be extended to other properties and non-mean field (stochastic) versions of the same, or other, models.
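As a reminder of the model's form, here is a minimal sketch of the Hill dose-response function (the function name and parameter values are illustrative, not taken from the study):

```python
import numpy as np

def hill(x, vmax, K, n):
    """Hill dose-response: response at ligand concentration x, with
    half-maximal response at x = K and cooperativity (Hill) coefficient n."""
    x = np.asarray(x, dtype=float)
    return vmax * x**n / (K**n + x**n)

# A larger n gives a steeper, more switch-like curve around x = K,
# which is the regime where Hill and Adair-Klotz models tend to agree.
doses = np.array([0.1, 1.0, 10.0])
weak = hill(doses, vmax=1.0, K=1.0, n=1.0)
strong = hill(doses, vmax=1.0, K=1.0, n=4.0)
```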

3.

Background  

Fitting four-parameter sigmoidal models is one of the established methods for analysing quantitative real-time PCR (qPCR) data. We observed that these models yield suboptimal fits because of their inherent constraint of symmetry around the point of inflection. We therefore employed a mathematical algorithm that circumvents this problem by introducing an additional parameter to accommodate asymmetric structures in sigmoidal qPCR data.
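One way to see the symmetry constraint and its remedy is to compare a four-parameter logistic with a five-parameter generalization that adds an asymmetry exponent (a Richards-type curve; the exact parameterization used by the authors may differ):

```python
import numpy as np

def sigmoid4(x, b, c, d, e):
    """Four-parameter logistic: asymptotes c and d, slope parameter b,
    inflection fixed at cycle e -- the curve is symmetric around it."""
    return c + (d - c) / (1.0 + np.exp((e - x) / b))

def sigmoid5(x, b, c, d, e, f):
    """Adds an exponent f; with f != 1 the inflection point shifts,
    accommodating asymmetric structures in sigmoidal qPCR curves."""
    return c + (d - c) / (1.0 + np.exp((e - x) / b)) ** f

cycles = np.linspace(1, 40, 40)
sym = sigmoid4(cycles, b=2.0, c=0.0, d=1.0, e=20.0)
asym = sigmoid5(cycles, b=2.0, c=0.0, d=1.0, e=20.0, f=0.5)
```

With f = 1 the five-parameter model reduces exactly to the symmetric four-parameter one, so the extra parameter only pays for itself when the data are genuinely asymmetric.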

4.

Background  

When creating mechanistic mathematical models of biological signaling processes, it is tempting to include as many known biochemical interactions as possible in one large model. For the JAK-STAT, MAP kinase, and NF-κB pathways, considerable biological insight is available, and as a consequence large mathematical models have emerged. For large models the question arises whether the unknown model parameters can be uniquely determined by parameter estimation from measured data. Systematic approaches to answering this question are indispensable, since the uniqueness of model parameter values is essential for predictive mechanistic modeling.
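A toy illustration of why uniqueness can fail (a hypothetical one-equation model, not one of the cited pathways): when two parameters enter the observable only as a product, no amount of data can pin down their individual values.

```python
import numpy as np

def output(x, a, b):
    # a and b appear only as the product a*b in the observable,
    # so only that product is constrained by measurements
    return a * b * x

x = np.array([1.0, 2.0, 3.0])
data = output(x, a=2.0, b=3.0)

# Two different parameter combinations fit the data equally well,
# i.e. a and b are structurally non-identifiable from this experiment.
rss1 = np.sum((data - output(x, a=2.0, b=3.0)) ** 2)
rss2 = np.sum((data - output(x, a=6.0, b=1.0)) ** 2)
```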

5.

Background  

The success of molecular systems biology hinges on the ability to use computational models to design predictive experiments and, ultimately, to unravel underlying biological mechanisms. A problem commonly encountered in the computational modelling of biological networks is that alternative, structurally different models of similar complexity fit a set of experimental data equally well. In this case, more than one molecular mechanism can explain the available data, and the incorrect mechanisms can only be ruled out by invalidating the incorrect models. At this point, new experiments that maximize the difference between the predicted outputs of the alternative models should be proposed and conducted. Such experiments should be optimally designed to produce data that are most likely to invalidate incorrect model structures.
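A minimal sketch of this idea, using two hypothetical competing dose-response mechanisms: the most informative experiment probes the input at which the models' predictions diverge most.

```python
import numpy as np

def model_a(u):
    return u / (1.0 + u)            # hyperbolic (Michaelis-Menten-like)

def model_b(u):
    return u**2 / (1.0 + u**2)      # sigmoidal (cooperative)

# Scan candidate input levels and pick the one maximizing model
# disagreement; measuring there is most likely to invalidate the
# structurally incorrect mechanism.
inputs = np.linspace(0.01, 10.0, 1000)
gap = np.abs(model_a(inputs) - model_b(inputs))
u_best = inputs[np.argmax(gap)]
```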

6.

Background  

Developing methods for understanding the connectivity of signalling pathways is a major challenge in biological research. For this purpose, mathematical models are routinely developed based on experimental observations, which also allow the prediction of the system behaviour under different experimental conditions. Often, however, the same experimental data can be represented by several competing network models.

7.

Background  

Constraint-based models are a powerful analytical tool: they allow calculation of the metabolic flux states that cells can exhibit, but they do not determine which of these states is likely to occur under given circumstances. Typical methods for making such predictions are (a) flux balance analysis, which assumes that cell behaviour is optimal, and (b) metabolic flux analysis, which combines the model with experimental measurements.
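A minimal flux balance analysis sketch on a hypothetical three-reaction toy network (not a model from the article), using linear programming to find the optimal flux distribution under the steady-state constraint:

```python
import numpy as np
from scipy.optimize import linprog

# Toy network: v1 (uptake of A), v2 (A -> B), v3 (export of B as "biomass").
# Rows of S are metabolites A and B; steady state requires S @ v = 0.
S = np.array([
    [1.0, -1.0,  0.0],   # A: produced by v1, consumed by v2
    [0.0,  1.0, -1.0],   # B: produced by v2, consumed by v3
])
bounds = [(0.0, 10.0), (0.0, None), (0.0, None)]  # uptake capped at 10

# FBA's optimality assumption: maximize the biomass flux v3
# (linprog minimizes, hence the objective coefficient -1 on v3).
res = linprog(c=[0.0, 0.0, -1.0], A_eq=S, b_eq=np.zeros(2), bounds=bounds)
optimal_flux = -res.fun  # the whole uptake capacity is funneled to biomass
```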

8.

Background  

The endopeptidase encoded by Phex (phosphate-regulating gene with homologies to endopeptidases linked to the X chromosome) is critical for the regulation of bone matrix mineralization and phosphate homeostasis. PHEX was identified through analyses of human X-linked hypophosphatemic rickets and Hyp mutant mouse models. Here we demonstrate that a newly established dwarfism-like Kbus/Idr mouse line is a novel Hyp model.

9.

Background  

Mathematical models that reveal the dynamics and interaction properties of biological systems play an important role in computational systems biology. Inferring model parameter values from time-course data can be considered a "reverse engineering" process and remains one of the most challenging tasks. Many parameter estimation methods have been developed, but none is effective in all cases or dominates all other approaches; each has its own advantages and disadvantages. It is therefore worthwhile to develop parameter estimation methods that are robust to noise, computationally efficient, and flexible enough to meet different constraints.
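The reverse-engineering setting can be sketched in a few lines: generate noisy time-course data from a hypothetical one-species decay model, then recover the parameters by least squares (model, rates, and noise level are all illustrative):

```python
import numpy as np
from scipy.optimize import curve_fit

def decay(t, k, a):
    """Hypothetical one-species model: first-order decay, x(t) = a*exp(-k*t)."""
    return a * np.exp(-k * t)

# Synthetic noisy time-course data with known "true" parameters.
t = np.linspace(0.0, 5.0, 25)
rng = np.random.default_rng(0)
y = decay(t, 0.8, 2.0) + rng.normal(0.0, 0.02, t.size)

# Least-squares estimation recovers the parameters from the data;
# real biochemical models are nonlinear in many more parameters,
# which is what makes the general problem hard.
(k_hat, a_hat), _ = curve_fit(decay, t, y, p0=[1.0, 1.0])
```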

10.

Purpose

A scalable life cycle inventory (LCI) model of a permanent magnet electrical machine, containing both design and production data, has been established. The purpose is to contribute new, easy-to-use data for LCA of electric vehicles by providing a scalable mass estimation and manufacturing inventory for a typical electrical automotive traction machine. The aim of this article (part I of two publications) is to present the machine design, the model structure, and an evaluation of the model's mass estimations.

Methods

Data for the design and production of electrical machines were compiled from books, scientific papers, benchmarking literature, expert interviews, various specifications, factory records, and a factory site visit. For the design part, one small and one large reference machine were constructed in a software tool, which linked each machine's maximum ability to deliver torque to the mass of its electromagnetically active parts. Additional data for the remaining parts were then gathered separately to complete the design. The two datasets were combined into one model, which calculates the mass of all motor subparts from an input of maximum power and torque. The range of the model is 20–200 kW and 48–477 Nm. The validity of the model was evaluated through comparison with seven permanent magnet electrical traction machines from established brands.

Results and discussion

The LCI model was successfully implemented to calculate the mass content of 20 different materials in the motor. The model's mass estimates deviate by up to 21% from the real motor examples, which still falls within expectations for a good result, considering the noticeable variability in design even for the same machine type and similar requirements. The model results form a rough but reasonable median of the pattern created by all data points. The reference motors were also assessed for performance, showing that the electromagnetic efficiency reaches 96–97%.

Conclusions

The LCI model relies on thorough design data collection and fundamental electromagnetic theory. The selected design has high efficiency, and the motor is suitable for electric propulsion of vehicles. Furthermore, the LCI model generates representative mass estimates when compared with recently published data for electrical traction machines. Hence, for permanent magnet-type machines, the LCI model may be used as a generic component estimate for LCA of electric vehicles when specific data are lacking.

11.

Background  

Gene promoters can be in various epigenetic states and undergo interactions with many molecules in a highly transient, probabilistic and combinatorial way, resulting in the complex global dynamics observed experimentally. However, models of stochastic gene expression commonly treat promoter activity as a two-state on/off system. Here we consider a model of single-gene stochastic expression that can represent arbitrary prokaryotic or eukaryotic promoters, based on the combinatorial interplay between molecules and epigenetic factors, including energy-dependent remodeling and enzymatic activities.
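For reference, a minimal stochastic simulation of the two-state on/off promoter that serves as the baseline here (a standard Gillespie algorithm; all rate values are illustrative):

```python
import random

def telegraph_ssa(k_on, k_off, k_tx, k_deg, t_end, seed=1):
    """Gillespie simulation of the two-state ("telegraph") promoter:
    transcription at rate k_tx only while the promoter is on, with
    first-order mRNA decay.  Returns the mRNA copy number at t_end."""
    rng = random.Random(seed)
    t, on, mrna = 0.0, 0, 0
    while True:
        rates = [k_off if on else k_on,   # promoter toggles on/off
                 k_tx if on else 0.0,     # transcription (on state only)
                 k_deg * mrna]            # mRNA degradation
        total = sum(rates)                # always > 0: promoter can switch
        t += rng.expovariate(total)
        if t >= t_end:
            return mrna
        r = rng.uniform(0.0, total)
        if r < rates[0]:
            on = 1 - on
        elif r < rates[0] + rates[1]:
            mrna += 1
        else:
            mrna -= 1

copies = telegraph_ssa(k_on=0.5, k_off=0.5, k_tx=10.0, k_deg=1.0, t_end=100.0)
```

The model discussed in the abstract generalizes this picture from two promoter states to an arbitrary combinatorial state space.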

12.

Background  

The study of biological systems demands computational support. When targeting a biological problem, reusing existing computational models can save time and effort. Deciding which models are potentially suitable, however, becomes more challenging as the number of available computational models increases, and even more so given the models' growing complexity. First, among a set of candidate models it is difficult to choose the one that best suits one's needs. Second, it is hard to grasp the nature of an unknown model listed in a search result and to judge how well it fits the particular problem one has in mind.

13.

Background  

Combinatorial complexity is a challenging problem in detailed, mechanistic mathematical modeling of signal transduction. This subject has been discussed intensively and considerable progress has been made within the last few years. A software tool (BioNetGen) was developed that allows automatic, rule-based setup of mechanistic model equations. In many cases these models can be reduced by an exact domain-oriented lumping technique; however, the resulting models can still consist of a very large number of differential equations.

14.
Ma L, Bradu A, Podoleanu AG, Bloor JW. PLoS ONE 2010, 5(12): e14348

Background

Dilated cardiomyopathy (DCM) is a severe cardiac condition that causes high mortality. Many genes have been confirmed to be involved in this disease. An ideal system with which to uncover disease mechanisms would be one that can measure the changes in a wide range of cardiac activities associated with mutations in specific, diversely functional cardiac genes. Such a system needs a genetically manipulable model organism that allows in vivo measurement of cardiac phenotypes and a detecting instrument capable of recording multiple phenotype parameters.

Methodology and Principal Findings

Given its simple heart, transparent body surface at larval stages, and available genetic tools, we chose Drosophila melanogaster as our model organism and developed for it a dual en-face/Doppler optical coherence tomography (OCT) instrument capable of recording multiple aspects of heart activity, including heart contraction cycle dynamics, ostia dynamics, heartbeat rate and rhythm, speed of heart wall movement, and light reflectivity of cardiomyocytes in situ. We applied this OCT instrument to a model of Tropomyosin-associated DCM established in adult Drosophila. We show that DCM pre-exists at the larval stage and is accompanied by an arrhythmia previously unidentified in this model. We also detect reduced mobility and light reflectivity of cardiomyocytes in mutants.

Conclusion

These results demonstrate the capability of our OCT instrument to characterize in detail cardiac activity in genetic models for heart disease in Drosophila.

15.

Background  

Selecting the most accurate protein model from a set of alternatives is a crucial step in protein structure prediction, in both template-based and ab initio approaches. Scoring functions have been developed that either return a quality estimate for a single model or derive a score from the information contained in the ensemble of models for a given sequence, on the premise that local structural features occurring more frequently in the ensemble have a greater probability of being correct. Within the context of the CASP experiment, these so-called consensus methods have been shown to perform considerably better at selecting good candidate models, but they tend to fail if the best models are far from the dominant structural cluster. In this paper we show that model selection can be improved by combining both approaches: pre-filtering the models used during the calculation of the structural consensus.
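The consensus idea can be sketched as mean pairwise agreement over the ensemble. Here, toy secondary-structure strings stand in for real structural features; the scoring used by actual CASP consensus methods is considerably more elaborate:

```python
def match_fraction(a, b):
    """Fraction of positions at which two equal-length feature strings agree."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def consensus_scores(models):
    """Score each model by its mean similarity to all other models; features
    shared by many ensemble members are taken as more likely correct."""
    n = len(models)
    return [sum(match_fraction(models[i], models[j])
                for j in range(n) if j != i) / (n - 1)
            for i in range(n)]

models = ["HHEEC", "HHEEH", "CCEEC", "HHEEC"]
scores = consensus_scores(models)
best = models[scores.index(max(scores))]   # the model closest to the ensemble
```

Note the failure mode mentioned above: if the single most native-like model is an outlier relative to a dominant (but wrong) cluster, its consensus score is low, which motivates pre-filtering the ensemble.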

16.

Background  

Short and valid measures of the impact of a stroke on integration are required in health and social settings. The Subjective Index of Physical and Social Outcome (SIPSO) is one such measure. However, questions remain as to whether its scores can be summed into a total score or whether subscale scores should be calculated. This paper aims to clarify the internal construct validity of the subscales and the total scale.

17.

Background  

Protein tertiary structure prediction is a fundamental problem in computational biology and identifying the most native-like model from a set of predicted models is a key sub-problem. Consensus methods work well when the redundant models in the set are the most native-like, but fail when the most native-like model is unique. In contrast, structure-based methods score models independently and can be applied to model sets of any size and redundancy level. Additionally, structure-based methods have a variety of important applications including analogous fold recognition, refinement of sequence-structure alignments, and de novo prediction. The purpose of this work was to develop a structure-based model selection method based on predicted structural features that could be applied successfully to any set of models.

18.

Background  

Stable transgenesis is key to understanding any genetic system. Retrovirus-based insertional strategies pose several technical challenges: they are often limited to one particular species, and sometimes even to a particular cell type, because infection depends on specific cellular receptors. A universal-like system, allowing both stable transgene expression independent of cell type and efficient sorting of transfected cells, is required when handling cellular models that are incompatible with retroviral strategies.

19.

Background

Models of biochemical systems are typically complex, which may complicate the discovery of cardinal biochemical principles. It is therefore important to single out the parts of a model that are essential for the function of the system, so that the remaining non-essential parts can be eliminated. However, each component of a mechanistic model has a clear biochemical interpretation, and it is desirable to conserve as much of this interpretability as possible in the reduction process. Furthermore, it is of great advantage if we can translate predictions from the reduced model to the original model.

Results

In this paper we present a novel method for model reduction that generates reduced models with a clear biochemical interpretation. Unlike conventional model reduction methods, our method enables predictions of the reduced model to be mapped to the corresponding detailed predictions of the original model. The method is based on proper lumping of state variables interacting on short time scales and on the computation of fraction parameters, which serve as the link between the reduced and the original model. We illustrate the advantages of the proposed method by applying it to two biochemical models: the first is of modest size and commonly occurs as part of larger models; the second describes glucose transport across the cell membrane in baker's yeast. Both models can be significantly reduced with the proposed method while their interpretability is conserved.
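The fraction-parameter idea can be illustrated on the simplest possible case, two states interconverting on a fast time scale (a generic sketch, not the article's yeast model):

```python
def fraction_parameters(k12, k21):
    """Two states x1 <-> x2 in rapid equilibrium (x1 -> x2 at rate k12,
    x2 -> x1 at rate k21) are lumped into a single state X = x1 + x2.
    The fraction parameters map reduced-model predictions back to the
    original states: x1 = f1 * X and x2 = f2 * X."""
    f1 = k21 / (k12 + k21)   # quasi-equilibrium: k12 * x1 = k21 * x2
    f2 = k12 / (k12 + k21)
    return f1, f2

# Fast equilibrium favours x2 when the forward rate dominates (k12 > k21).
f1, f2 = fraction_parameters(k12=4.0, k21=1.0)   # f1 = 0.2, f2 = 0.8
```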

Conclusions

We introduce a novel method for the reduction of biochemical models that is compatible with the concept of zooming. Zooming allows the modeler to work at different levels of model granularity and enables a direct interpretation of how modifications on one level affect the model on other levels of the hierarchy. The method extends to nonlinear models an approach previously developed for zooming of linear biochemical models.

20.

Background  

Constructing complex spatial simulation models, such as those used in network epidemiology, is a daunting task because of the large amount of data involved in their parameterization. Such data, which frequently reside in large geo-referenced databases, have to be processed and assigned to the various components of the model. All of this is required just to construct the model, which then still has to be simulated and analyzed under different epidemiological scenarios. This workflow can only be carried out efficiently with computational tools that automate most, if not all, of these time-consuming tasks. In this paper we present Epigrass, a simulation package designed to help build and simulate network-epidemic models with any kind of node behavior.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号