Similar Articles
 20 similar articles found (search time: 15 ms)
1.

Background

The Hill coefficient characterizes the extent to which an enzyme exhibits positive or negative cooperativity, but it provides no information about the mechanism of cooperativity. In contrast, models based on the equilibrium concept of mass action can suggest mechanisms of cooperativity, but many such models typically exist, and many of them have too many parameters.

Results

Mass action models of tetrameric human thymidine kinase 1 (TK1) activity data were formed as pairs of plausible hypotheses that per-site activities and binary dissociation constants are equal within contiguous stretches of the number of substrates bound. Of these, six 3-parameter models were fitted to 5 different datasets. Akaike's Information Criterion was then used to form model-probability-weighted averages. The literature average of the 5 model averages was K = (0.85, 0.69, 0.65, 0.51) μM and k = (3.3, 3.9, 4.1, 4.1) sec⁻¹, where K and k are per-site binary dissociation constants and activities indexed by the number of substrates bound to the tetrameric enzyme.
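The model-averaging step can be illustrated with a short sketch. The AIC scores and parameter estimates below are hypothetical, not the paper's values; only the standard Akaike-weight formula w_i ∝ exp(−Δ_i/2) is assumed.

```python
import math

def akaike_weights(aics):
    """Convert a list of AIC scores into model probability weights."""
    best = min(aics)
    rel = [math.exp(-0.5 * (a - best)) for a in aics]  # relative likelihoods
    total = sum(rel)
    return [r / total for r in rel]

def model_average(weights, estimates):
    """Probability-weighted average of a parameter across models."""
    return sum(w * e for w, e in zip(weights, estimates))

# Hypothetical AIC scores for three competing 3-parameter models
aics = [100.0, 102.0, 110.0]
w = akaike_weights(aics)

# Hypothetical per-site dissociation-constant estimates (in μM) per model
K_avg = model_average(w, [0.80, 0.90, 1.20])
print([round(x, 3) for x in w], round(K_avg, 3))
```

The weighted average is dominated by the lowest-AIC model but still reflects the plausible alternatives.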

Conclusion

The TK1 model presented supports positive cooperativity in both K and k. Three-parameter mass action models can and should replace the 3-parameter Hill model.

Reviewers

This article was reviewed by Philip Hahnfeldt, Fangping Mu (nominated by William Hlavacek) and Rainer Sachs.

2.

Background  

Feedback regulation plays a crucial role in the robust control and maintenance of many cellular systems. Negative feedbacks are found to underlie both stable and unstable, often oscillatory, behaviours. We explore the dynamical characteristics of systems with single as well as coupled negative feedback loops, using a combined approach of analytical and numerical techniques. In particular, we emphasise how the loops' characterising factors (strength and cooperativity levels) affect system dynamics and how individual loops interact in coupled-loop systems.
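As an illustration of the kind of single-loop analysis described, here is a minimal Euler-integration sketch of one negative feedback loop (x activates y; y represses x through a Hill term whose exponent n sets the cooperativity level). The rate constants and Hill coefficients are arbitrary choices for demonstration, not values from the study.

```python
def simulate_negative_feedback(n, steps=20000, dt=0.01):
    """Euler integration of a minimal single negative feedback loop:
    x activates y, y represses x; n is the Hill cooperativity of repression."""
    x, y = 0.0, 0.0
    for _ in range(steps):
        dx = 1.0 / (1.0 + y ** n) - 0.5 * x   # production of x repressed by y
        dy = x - 0.5 * y                      # y produced by x, degraded
        x += dt * dx
        y += dt * dy
    return x, y

x_lo, y_lo = simulate_negative_feedback(n=1)   # no cooperativity
x_hi, y_hi = simulate_negative_feedback(n=4)   # cooperative repression
print(round(x_lo, 3), round(x_hi, 3))
```

With these (hypothetical) rates the loop is a damped spiral to steady state; raising the cooperativity n sharpens the repression and lowers the steady-state level of x.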

3.
4.

Background  

Flux variability analysis is often used to determine the robustness of metabolic models under various simulation conditions. However, its use has been somewhat limited by its long computation time compared to other constraint-based modeling methods.

5.

Background  

Gene Regulatory Networks (GRNs) control the differentiation, specification and function of cells at the genomic level. The levels of interactions within large GRNs are of enormous depth and complexity. Details about many GRNs are emerging, but in most cases it is unknown to what extent they control a given process, i.e. their grade of completeness is uncertain. This uncertainty stems from limited experimental data, which is the main bottleneck for creating detailed dynamical models of cellular processes. Parameter estimation for each node is often infeasible for very large GRNs. We propose a method, based on random parameter estimation through Monte Carlo simulations, to measure the completeness grade of GRNs.
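A minimal sketch of the Monte Carlo idea: draw random parameters for a model and score the fraction of draws that reproduce an observed behaviour. The one-gene model, parameter ranges, target value and tolerance below are deliberately trivial, hypothetical stand-ins; the actual method's scoring is more involved, and this shows only the sampling skeleton.

```python
import random

def grn_steady_state(k_act, k_deg):
    """Steady-state level of a gene produced at rate k_act, degraded at k_deg."""
    return k_act / k_deg

def completeness_grade(target=2.0, tol=0.5, trials=5000, seed=1):
    """Fraction of random parameter draws for which the model output
    falls within tolerance of the observed target behaviour."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        k_act = rng.uniform(0.1, 5.0)   # hypothetical prior ranges
        k_deg = rng.uniform(0.1, 5.0)
        if abs(grn_steady_state(k_act, k_deg) - target) < tol:
            hits += 1
    return hits / trials

grade = completeness_grade()
print(round(grade, 3))
```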

6.

Background  

The estimation of demographic parameters from genetic data often requires the computation of likelihoods. However, the likelihood function is computationally intractable for many realistic evolutionary models, and the use of Bayesian inference has therefore been limited to very simple models. The situation changed recently with the advent of Approximate Bayesian Computation (ABC) algorithms allowing one to obtain parameter posterior distributions based on simulations not requiring likelihood computations.
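The rejection-ABC algorithm mentioned here can be sketched in a few lines: sample a parameter from the prior, simulate data, and accept the parameter when a summary statistic of the simulation is close to the observed one. The Gaussian model, prior bounds and tolerance below are illustrative assumptions, not a demographic model.

```python
import random

def rejection_abc(observed_mean, n_obs=50, prior=(-5.0, 5.0),
                  eps=0.1, draws=20000, seed=7):
    """Rejection ABC: sample mu from a uniform prior, simulate a dataset,
    and keep mu when the simulated summary statistic (the sample mean)
    lies within eps of the observed one. No likelihood is ever evaluated."""
    rng = random.Random(seed)
    accepted = []
    for _ in range(draws):
        mu = rng.uniform(*prior)
        sim = [rng.gauss(mu, 1.0) for _ in range(n_obs)]
        if abs(sum(sim) / n_obs - observed_mean) < eps:
            accepted.append(mu)
    return accepted

post = rejection_abc(observed_mean=1.0)
est = sum(post) / len(post)   # posterior mean estimate
print(len(post), round(est, 2))
```

The accepted draws approximate the posterior; tightening eps trades acceptance rate for accuracy.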

7.

Background  

Systems biology models tend to become large, since biological systems often consist of complex networks of interacting components and since the models are usually developed to reflect various mechanistic assumptions about those networks. Nevertheless, not all aspects of a model are equally interesting in a given setting, and normally there are parts that can be reduced without affecting the relevant model performance. There are many methods for model reduction, but few, if any, allow the details of the original model to be restored after the simplified model has been simulated.

8.

Motivation  

In recent years more than 20 vertebrate genomes have been sequenced, and the rate at which genomic DNA information becomes available is rapidly accelerating. Gene duplication and gene loss events inherently limit the accuracy of orthology detection based on sequence similarity alone. Fully automated methods for orthology annotation do exist, but they often fail to identify individual members of large gene families, or to distinguish missing data from traceable gene losses. In many cases this situation can be improved by including conserved synteny information.

9.
10.

Background  

Modellers using the MWC allosteric framework have often found it difficult to validate their models. Indeed, many experiments are not conducted with the notion of alternative conformations in mind and therefore do not (or cannot) measure the relevant microscopic constants and parameters. Instead, experimentalists widely use the Adair-Klotz approach to describe their experimental data.
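For reference, a sketch of the MWC fractional-saturation function for an n-site protein, written in terms of the microscopic quantities (K_R, c, L) that experiments often do not measure directly. The parameter values below are arbitrary illustrations, not fitted constants.

```python
def mwc_saturation(s, K_R=1.0, c=0.1, L=1000.0, n=4):
    """Fractional saturation of an n-site MWC protein at ligand concentration s.
    K_R: dissociation constant of the relaxed (R) state, c = K_R / K_T,
    L: allosteric constant [T]/[R] with no ligand bound."""
    a = s / K_R
    num = a * (1 + a) ** (n - 1) + L * c * a * (1 + c * a) ** (n - 1)
    den = (1 + a) ** n + L * (1 + c * a) ** n
    return num / den

# A sigmoidal (cooperative) binding curve over four concentrations
ys = [mwc_saturation(s) for s in (0.1, 1.0, 10.0, 100.0)]
print([round(y, 3) for y in ys])
```

Setting L = 0 removes the T state and recovers a simple hyperbolic (non-cooperative) curve, which is one way to see where the cooperativity comes from in this framework.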

11.

Background  

Good statistical models for analyzing and simulating multilocus recombination data exist but are not accessible to many biologists, because their use requires reasonably sophisticated mathematical and computational implementation. While some labs have direct access to statisticians or programmers competent to carry out such analyses, many labs do not. We have created a platform-independent application with an easy-to-use graphical user interface that will carry out such analyses, including the simulations needed to bootstrap confidence intervals for the parameters of interest. This software should make multilocus techniques accessible to labs that previously relied on less powerful and potentially statistically confounded single-interval or double-interval techniques.
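The bootstrapped confidence intervals mentioned can be sketched generically. The percentile bootstrap below is applied to a hypothetical single-interval recombination fraction (1 = recombinant offspring, 0 = parental), not to the multilocus statistics the software actually computes.

```python
import random

def bootstrap_ci(data, stat, n_boot=2000, alpha=0.05, seed=3):
    """Percentile bootstrap confidence interval for an arbitrary statistic."""
    rng = random.Random(seed)
    n = len(data)
    reps = []
    for _ in range(n_boot):
        resample = [data[rng.randrange(n)] for _ in range(n)]  # sample with replacement
        reps.append(stat(resample))
    reps.sort()
    lo = reps[int((alpha / 2) * n_boot)]
    hi = reps[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

# Hypothetical cross: 18 recombinants among 100 offspring
offspring = [1] * 18 + [0] * 82
rec_frac = lambda xs: sum(xs) / len(xs)
lo, hi = bootstrap_ci(offspring, rec_frac)
print(round(rec_frac(offspring), 3), (round(lo, 3), round(hi, 3)))
```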

12.
Statistical analysis of real-time PCR data   (Cited: 1; self-citations: 0; citations by others: 1)

Background  

Even though real-time PCR has been broadly applied in the biomedical sciences, data processing procedures for the analysis of quantitative real-time PCR are still lacking, specifically with respect to appropriate statistical treatment. Considerations of confidence intervals and statistical significance are not explicit in many current data analysis approaches. Based on the standard curve method and other useful data analysis methods, we present and compare four statistical approaches and models for the analysis of real-time PCR data.
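A minimal sketch of the standard-curve step that such statistical approaches build on: fit Ct against log10 quantity for a dilution series, derive the amplification efficiency from the slope (E = 10^(−1/slope) − 1), and invert the line for an unknown sample. The dilution series and Ct values below are made up for illustration.

```python
def fit_standard_curve(log10_qty, ct):
    """Least-squares line ct = slope * log10(quantity) + intercept."""
    n = len(ct)
    mx = sum(log10_qty) / n
    my = sum(ct) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(log10_qty, ct))
    sxx = sum((x - mx) ** 2 for x in log10_qty)
    slope = sxy / sxx
    return slope, my - slope * mx

# Hypothetical 10-fold dilution series (log10 copies) and measured Ct values
dilutions = [6.0, 5.0, 4.0, 3.0, 2.0]
cts = [15.1, 18.4, 21.8, 25.1, 28.5]

slope, intercept = fit_standard_curve(dilutions, cts)
efficiency = 10 ** (-1.0 / slope) - 1.0        # ~1.0 means 100% efficient
unknown_log_qty = (24.0 - intercept) / slope   # invert the line for Ct = 24
print(round(slope, 2), round(efficiency, 2), round(unknown_log_qty, 2))
```

A slope near −3.32 corresponds to perfect doubling per cycle; confidence intervals on the fitted line are what the statistical treatments discussed above add on top of this point estimate.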

13.

Background  

New mathematical models of complex biological structures and computer simulation software allow modelers to simulate and analyze biochemical systems in silico and to form mathematical predictions. Because of this potential predictive ability, these models and software can complement laboratory investigations and help refine, or even develop, new hypotheses. However, existing mathematical modeling techniques and simulation tools are often difficult to use for laboratory biologists without training in higher mathematics, limiting their use to trained modelers.

14.

Background  

Parametric sensitivity analysis (PSA) has become one of the most commonly used tools in computational systems biology: sensitivity coefficients are used to study the parametric dependence of biological models. As many of these models describe the dynamical behaviour of biological systems, PSA has subsequently been used to elucidate important cellular processes that regulate these dynamics. However, in this paper we show that PSA coefficients are not suitable for inferring the mechanisms by which dynamical behaviour arises, and can in fact even lead to incorrect conclusions.
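For concreteness, a sketch of how normalised sensitivity coefficients are typically computed, here by central finite differences on a deliberately trivial one-variable model (the abstract's point is that such coefficients, however computed, can mislead about mechanism). The model and parameter values are illustrative only.

```python
def simulate(k_prod, k_deg, t_end=50.0, dt=0.01):
    """Euler integration of dx/dt = k_prod - k_deg * x from x(0) = 0."""
    x = 0.0
    for _ in range(int(t_end / dt)):
        x += dt * (k_prod - k_deg * x)
    return x

def sensitivity(p, i, h=1e-4):
    """Normalised sensitivity coefficient d ln(x) / d ln(p_i),
    estimated by a central finite difference in the parameter."""
    up, down = list(p), list(p)
    up[i] *= 1 + h
    down[i] *= 1 - h
    x_up, x_dn = simulate(*up), simulate(*down)
    x0 = simulate(*p)
    return (x_up - x_dn) / (2 * h * x0)

params = (2.0, 0.5)           # production and degradation rates
s_prod = sensitivity(params, 0)
s_deg = sensitivity(params, 1)
print(round(s_prod, 2), round(s_deg, 2))
```

For this model the steady state is k_prod/k_deg, so the normalised sensitivities are +1 and −1 exactly, which the finite-difference estimate recovers.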

15.
16.
17.

Background  

The mapping of diverse genomes in recent years has generated huge amounts of biological data, currently dispersed across many databases. Integrating the information available in these databases is required to unveil possible associations among already known data. Biological data are often imprecise and noisy. Fuzzy set theory is especially suitable for modelling imprecise data, while association rules are very appropriate for integrating heterogeneous data.
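A small sketch of the two ingredients named here: a triangular fuzzy membership function, and the fuzzy support and confidence of an association rule computed from it. The expression values and the fuzzy set "high" are hypothetical.

```python
def tri(x, a, b, c):
    """Triangular fuzzy membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Hypothetical expression values for two genes across five samples
gene1 = [2.0, 7.5, 8.0, 9.0, 3.0]
gene2 = [1.0, 6.5, 7.0, 8.5, 2.0]
high = lambda x: tri(x, 5.0, 10.0, 15.0)   # fuzzy set "high expression"

# Fuzzy support/confidence of the rule: gene1 high -> gene2 high,
# using min as the t-norm for the conjunction
ante = [high(x) for x in gene1]
both = [min(high(x), high(y)) for x, y in zip(gene1, gene2)]
support = sum(both) / len(both)
confidence = sum(both) / sum(ante)
print(round(support, 2), round(confidence, 2))
```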

18.

Background  

Concerns are often raised about the accuracy of microarray technologies and the degree of cross-platform agreement, but there are as yet no methods that can unambiguously evaluate precision and sensitivity for these technologies on a whole-array basis.

19.

Background  

Secondary structure prediction is a useful first step toward 3D structure prediction. A number of successful secondary structure prediction methods use neural networks, but unfortunately, neural networks are not intuitively interpretable. In contrast, hidden Markov models are interpretable graphical models, and they have been successfully used in many bioinformatic applications. Because they offer a strong statistical background and allow model interpretation, we propose a method based on hidden Markov models.
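A compact sketch of Viterbi decoding for a toy two-state structure HMM (H = helix, C = coil; observations are hydrophobic "h" versus polar "p" residue classes). The transition and emission probabilities are invented for illustration and bear no relation to the fitted models in the paper.

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Most likely hidden state path for an observation sequence."""
    # V[t][s] = (best probability of any path ending in s at t, predecessor)
    V = [{s: (start_p[s] * emit_p[s][obs[0]], None) for s in states}]
    for o in obs[1:]:
        row = {}
        for s in states:
            prob, prev = max(
                (V[-1][p][0] * trans_p[p][s] * emit_p[s][o], p)
                for p in states)
            row[s] = (prob, prev)
        V.append(row)
    # Trace back the best path from the most probable final state
    state = max(states, key=lambda s: V[-1][s][0])
    path = [state]
    for row in reversed(V[1:]):
        state = row[state][1]
        path.append(state)
    return path[::-1]

states = ("H", "C")
start = {"H": 0.5, "C": 0.5}
trans = {"H": {"H": 0.9, "C": 0.1}, "C": {"H": 0.1, "C": 0.9}}
emit = {"H": {"h": 0.8, "p": 0.2}, "C": {"h": 0.3, "p": 0.7}}
path = viterbi("hhhppp", states, start, trans, emit)
print("".join(path))
```

The sticky transitions (0.9 self-loops) make the decoder segment the sequence into contiguous runs rather than flipping state on every residue, which is the interpretability the abstract alludes to.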

20.

Background  

Molecular phylogenetic methods are based on alignments of nucleotide or peptide sequences. The tremendous increase in molecular data permits phylogenetic analyses of very long sequences and of many species, but it also requires methods to help manage large datasets.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号