Similar Literature
20 similar documents found.
1.
Mathematical modeling is an indispensable tool for research and development in biotechnology and bioengineering. The formulation of kinetic models of biochemical networks depends on knowledge of the kinetic properties of the enzymes of the individual reactions. However, kinetic data acquired from experimental observations bring along uncertainties due to various experimental conditions and measurement methods. In this contribution, we propose a novel way to model the uncertainty in the enzyme kinetics and to predict quantitatively the responses of metabolic reactions to the changes in enzyme activities under uncertainty. The proposed methodology accounts explicitly for mechanistic properties of enzymes and physico-chemical and thermodynamic constraints, and is based on formalism from systems theory and metabolic control analysis. We achieve this by observing that kinetic responses of metabolic reactions depend: (i) on the distribution of the enzymes among their free form and all reactive states; (ii) on the equilibrium displacements of the overall reaction and that of the individual enzymatic steps; and (iii) on the net fluxes through the enzyme. Relying on this observation, we develop a novel, efficient Monte Carlo sampling procedure to generate all states within a metabolic reaction that satisfy imposed constraints. Thus, we derive the statistics of the expected responses of the metabolic reactions to changes in enzyme levels and activities, in the levels of metabolites, and in the values of the kinetic parameters. We present aspects of the proposed framework through an example of the fundamental three-step reversible enzymatic reaction mechanism. We demonstrate that the equilibrium displacements of the individual enzymatic steps have an important influence on kinetic responses of the enzyme. Furthermore, we derive the conditions that must be satisfied by a reversible three-step enzymatic reaction operating far away from the equilibrium in order to respond to changes in metabolite levels according to the irreversible Michaelis–Menten kinetics. The efficient sampling procedure allows easy, scalable implementation of this methodology to modeling of large-scale biochemical networks. Biotechnol. Bioeng. 2011;108: 413–423. © 2010 Wiley Periodicals, Inc.
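A minimal illustrative sketch (not the authors' sampling procedure) of the general idea of randomizing enzyme kinetics under a thermodynamic constraint and computing responses to metabolite changes: parameters of a reversible Michaelis–Menten rate law are sampled, the reverse limiting rate is fixed by the Haldane relationship, and the scaled response (elasticity) to the substrate level is estimated by finite differences. All parameter ranges and the operating point are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def reversible_mm_rate(S, P, Vf, Km_S, Km_P, Keq):
    """Reversible Michaelis-Menten rate; Vr is fixed by the Haldane relationship."""
    Vr = Vf * Km_P / (Km_S * Keq)          # Haldane: Keq = Vf*Km_P / (Vr*Km_S)
    return (Vf * S / Km_S - Vr * P / Km_P) / (1.0 + S / Km_S + P / Km_P)

def scaled_elasticity(S, P, params, dx=1e-4):
    """Scaled response (d ln v / d ln S) estimated by central finite differences."""
    v_plus = reversible_mm_rate(S * (1 + dx), P, *params)
    v_minus = reversible_mm_rate(S * (1 - dx), P, *params)
    v = reversible_mm_rate(S, P, *params)
    return (v_plus - v_minus) / (2 * dx * v)

S, P, Keq = 1.0, 0.5, 10.0                  # operating point (assumed)
samples = []
for _ in range(10_000):                     # Monte Carlo over uncertain kinetics
    Vf = rng.uniform(0.5, 5.0)
    Km_S = 10 ** rng.uniform(-2, 1)
    Km_P = 10 ** rng.uniform(-2, 1)
    samples.append(scaled_elasticity(S, P, (Vf, Km_S, Km_P, Keq)))

print(f"elasticity w.r.t. S: median = {np.median(samples):.2f}, "
      f"5-95% range = ({np.percentile(samples, 5):.2f}, {np.percentile(samples, 95):.2f})")
```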

2.
3.
This work studies the impact of systematic uncertainties associated with interaction cross sections on depth dose curves determined by Monte Carlo simulations. The corresponding sensitivity factors are quantified by changing cross sections by a given amount and determining the variation in the dose. The influence of total and partial photon cross sections is addressed. Partial cross sections for Compton and Rayleigh scattering, the photoelectric effect, and pair production have been accounted for. The PENELOPE code was used in all simulations. It was found that photon cross section sensitivity factors depend on depth. In addition, they are positive and negative for depths below and above an equilibrium depth, respectively. At this depth, sensitivity factors are null. The equilibrium depths found in this work agree very well with the mean free path of the corresponding incident photon energy. Using the sensitivity factors reported here, it is possible to estimate the impact of photon cross section uncertainties on the uncertainty of Monte Carlo-determined depth dose curves.
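As a toy illustration of the reported sign change (not the PENELOPE simulation itself), the sketch below uses a simple exponential-attenuation depth-dose model D(z) ∝ μ·exp(−μz) and estimates the sensitivity factor (ΔD/D)/(Δμ/μ) by perturbing the attenuation coefficient; the factor is positive below the mean free path 1/μ, negative above it, and close to zero at that depth. The dose model and the value of μ are assumptions.

```python
import numpy as np

def depth_dose(z, mu):
    """Toy depth-dose curve: primary-photon energy deposition ~ mu * exp(-mu * z)."""
    return mu * np.exp(-mu * z)

mu = 0.05          # cm^-1, assumed attenuation coefficient (mean free path = 20 cm)
rel_change = 0.01  # perturb the cross section (here: mu) by +1%

z = np.linspace(0.0, 60.0, 7)
d_ref = depth_dose(z, mu)
d_pert = depth_dose(z, mu * (1 + rel_change))

# Sensitivity factor: relative dose change per relative cross-section change
sensitivity = (d_pert - d_ref) / d_ref / rel_change

for zi, si in zip(z, sensitivity):
    print(f"depth = {zi:5.1f} cm   sensitivity factor = {si:+.3f}")
# Positive below ~1/mu = 20 cm, negative above it, ~0 near the mean free path.
```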

4.
The results of quantitative risk assessments are key factors in a risk manager's decision of whether actions to reduce risk are necessary. The extent of the uncertainty in the assessment will play a large part in the degree of confidence a risk manager has in the reported significance and probability of a given risk. The two main sources of uncertainty in such risk assessments are variability and incertitude. In this paper we use two methods, a second-order two-dimensional Monte Carlo analysis and probability bounds analysis, to investigate the impact of both types of uncertainty on the results of a food-web exposure model. We demonstrate how the full extent of uncertainty in a risk estimate can be portrayed in a way that is useful to risk managers. We show that probability bounds analysis is a useful tool for identifying the parameters that contribute the most to uncertainty in a risk estimate and how it can be used to complement established practices in risk assessment. We conclude by promoting the use of probability bounds analysis in conjunction with Monte Carlo analyses as a method for checking how plausible Monte Carlo results are in the full context of uncertainty.
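A minimal sketch of the second-order (two-dimensional) Monte Carlo idea, using a made-up exposure expression (dose = concentration × intake rate / body weight) rather than the authors' food-web model: the outer loop samples incertitude about distribution parameters, the inner loop samples inter-individual variability. All distributions and values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
N_OUTER, N_INNER = 200, 2000   # uncertainty loop, variability loop

percentile_95 = []
for _ in range(N_OUTER):
    # Outer loop: incertitude about the exposure-model parameters (assumed ranges)
    mean_conc = rng.uniform(0.5, 2.0)      # mg/kg, uncertain mean prey concentration
    sd_conc = rng.uniform(0.1, 0.5)

    # Inner loop: variability between individual receptors
    conc = rng.lognormal(np.log(mean_conc), sd_conc, N_INNER)   # mg/kg
    intake = rng.normal(0.15, 0.03, N_INNER).clip(min=1e-6)     # kg food/day
    body_weight = rng.lognormal(np.log(1.2), 0.2, N_INNER)      # kg

    dose = conc * intake / body_weight                           # mg/kg-bw/day
    percentile_95.append(np.percentile(dose, 95))

# Spread of the 95th-percentile dose across the outer loop reflects incertitude
print(f"95th-percentile dose: median = {np.median(percentile_95):.3f}, "
      f"90% uncertainty interval = ({np.percentile(percentile_95, 5):.3f}, "
      f"{np.percentile(percentile_95, 95):.3f}) mg/kg-bw/day")
```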

5.
The traditional q1* methodology for constructing upper confidence limits (UCLs) for the low-dose slopes of quantal dose-response functions has two limitations: (i) it is based on an asymptotic statistical result that has been shown via Monte Carlo simulation not to hold in practice for small, real bioassay experiments (Portier and Hoel, 1983); and (ii) it assumes that the multistage model (which represents cumulative hazard as a polynomial function of dose) is correct. This paper presents an uncertainty analysis approach for fitting dose-response functions to data that does not require specific parametric assumptions or depend on asymptotic results. It has the advantage that the resulting estimates of the dose-response function (and uncertainties about it) no longer depend on the validity of an assumed parametric family nor on the accuracy of the asymptotic approximation. The method derives posterior densities for the true response rates in the dose groups, rather than deriving posterior densities for model parameters, as in other Bayesian approaches (Sielken, 1991), or resampling the observed data points, as in the bootstrap and other resampling methods. It does so by conditioning constrained maximum-entropy priors on the observed data. Monte Carlo sampling of the posterior (constrained, conditioned) probability distributions generates values of response probabilities that might be observed if the experiment were repeated with very large sample sizes. A dose-response curve is fit to each such simulated dataset. If no parametric model has been specified, then a generalized representation (e.g., a power-series or orthonormal polynomial expansion) of the unknown dose-response function is fit to each simulated dataset using “model-free” methods. The simulation-based frequency distribution of all the dose-response curves fit to the simulated datasets yields a posterior distribution function for the low-dose slope of the dose-response curve. An upper confidence limit on the low-dose slope is obtained directly from this posterior distribution. This “Data Cube” procedure is illustrated with a real dataset for benzene, and is seen to produce more policy-relevant insights than does the traditional q1* methodology. For example, it shows how far apart the 90%, 95%, and 99% limits are and reveals how uncertainty about total and incremental risk varies with dose level (typically being dominated at low doses by uncertainty about the response of the control group, and at high doses by sampling variability). Strengths and limitations of the Data Cube approach are summarized, and potential decision-analytic applications to making better informed risk management decisions are briefly discussed.

6.
We examine bias in Markov models of diseases, including both chronic and infectious diseases. We consider two common types of Markov disease models: ones where disease progression changes by severity of disease, and ones where progression of disease changes in time or by age. We find sufficient conditions for bias to exist in models with aggregated transition probabilities when compared to models with state/time-dependent transition probabilities. We also find that when aggregating data to compute transition probabilities, bias increases with the degree of data aggregation. We illustrate by examining bias in Markov models of hepatitis C, Alzheimer's disease, and lung cancer using medical data, and find that the bias can be significant depending on the method used to aggregate the data. A key implication is that by not incorporating state/time-dependent transition probabilities, studies that use Markov models of diseases may significantly overestimate or underestimate disease progression. This could potentially result in incorrect recommendations from cost-effectiveness studies and incorrect disease burden forecasts.
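A small sketch of the kind of aggregation bias discussed above, assuming hypothetical year-dependent annual progression probabilities: the multi-year progression risk computed with the year-specific probabilities differs from the risk computed with a single aggregated (averaged) probability.

```python
import numpy as np

# Hypothetical annual probabilities of progressing to the next disease state,
# increasing with time since diagnosis (assumed values, not from the paper).
p_by_year = np.array([0.02, 0.04, 0.08, 0.15, 0.25])

# Time-dependent model: probability of progressing at least once over 5 years
p_progress_time_dep = 1.0 - np.prod(1.0 - p_by_year)

# Aggregated model: one constant probability equal to the simple average
p_aggregated = p_by_year.mean()
p_progress_aggregated = 1.0 - (1.0 - p_aggregated) ** len(p_by_year)

print(f"time-dependent 5-year progression risk : {p_progress_time_dep:.4f}")
print(f"aggregated 5-year progression risk     : {p_progress_aggregated:.4f}")
print(f"bias from aggregation                  : "
      f"{p_progress_aggregated - p_progress_time_dep:+.4f}")
```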

7.
Chen Q, Ibrahim JG. Biometrics 2006, 62(1): 177-184.
We consider a class of semiparametric models for the covariate distribution and missing data mechanism for missing covariate and/or response data for general classes of regression models including generalized linear models and generalized linear mixed models. Ignorable and nonignorable missing covariate and/or response data are considered. The proposed semiparametric model can be viewed as a sensitivity analysis for model misspecification of the missing covariate distribution and/or missing data mechanism. The semiparametric model consists of a generalized additive model (GAM) for the covariate distribution and/or missing data mechanism. Penalized regression splines are used to express the GAMs as a generalized linear mixed effects model, in which the variance of the corresponding random effects provides an intuitive index for choosing between the semiparametric and parametric model. Maximum likelihood estimates are then obtained via the EM algorithm. Simulations are given to demonstrate the methodology, and a real data set from a melanoma cancer clinical trial is analyzed using the proposed methods.

8.
The purpose of this note is to illustrate the feasibility of simulating kinetic systems, such as those commonly encountered in photosynthesis research, using the Monte Carlo (MC) method. In this approach, chemical events are considered at the molecular level, where they occur randomly, and the macroscopic kinetic evolution results from averaging a large number of such events. Their repeated simulation is easily accomplished using digital computing. It is shown that the MC approach is well suited to the capabilities and resources of modern microcomputers. A software package is briefly described and discussed, allowing simple programming of any kinetic model system and its resolution. The execution is reasonably fast and accurate; it is not subject to such instabilities as found with the conventional analytical approach. Abbreviations: MC, Monte Carlo; RN, random number; PSU, photosynthetic unit. Dedicated to Prof. L.N.M. Duysens on the occasion of his retirement.
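A minimal sketch of the molecular-level Monte Carlo idea for the simplest possible kinetic system, an irreversible first-order conversion A → B (not the photosynthesis models treated in the note): each molecule reacts at random in each time step, and averaging many realizations reproduces the macroscopic exponential decay. The rate constant, time step, and population size are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def mc_first_order_decay(n0=500, k=0.5, dt=0.01, t_max=5.0):
    """Simulate A -> B by letting each A molecule react with probability k*dt per step."""
    steps = int(t_max / dt)
    n_a = n0
    trajectory = [n_a]
    for _ in range(steps):
        reacted = rng.random(n_a) < k * dt   # one random event per remaining molecule
        n_a -= int(reacted.sum())
        trajectory.append(n_a)
    return np.array(trajectory)

n0, k, dt, t_max = 500, 0.5, 0.01, 5.0
runs = np.array([mc_first_order_decay(n0, k, dt, t_max) for _ in range(200)])
t = np.arange(runs.shape[1]) * dt

mc_mean = runs.mean(axis=0) / n0             # average over many stochastic realizations
analytic = np.exp(-k * t)                    # macroscopic rate law
print(f"max |MC mean - exp(-kt)| over the run: {np.abs(mc_mean - analytic).max():.4f}")
```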

9.
Purpose: To quantify the influence of different skin models on mammographic breast dosimetry, based on dosimetric protocols and recent breast skin thickness findings. Methods: Using an adapted PENELOPE (v. 2014) + PenEasy (v. 2015) Monte Carlo (MC) code, simulations were performed to obtain the mean glandular dose (MGD), the MGD normalized by incident air kerma (DgN), and the glandular depth dose (GDD(z)). The geometry was based on a cranio-caudal mammographic examination. Monoenergetic and polyenergetic beams were implemented, for breast thicknesses from 2 cm to 9 cm with different compositions. Seven skin models were used: a 5 mm adipose layer; skin layers ranging from 5 mm to 1.45 mm; and a 1.45 mm skin thickness with a subcutaneous adipose layer of 2 mm and of 3.55 mm. Results: The differences for monoenergetic beams are higher (up to 200%) for lower energies (8 keV) and for thicker breasts with low glandular content, decreasing to less than 5% at 40 keV. Without a skin layer, the differences reach a maximum of 1240%. The relative difference in DgN values between the 1.45 mm skin and 5 mm adipose layers for polyenergetic beams varies from −14% to 12%. Conclusions: The implemented MC code is suitable for mammography dosimetry calculations. The skin models have major impacts on MGD values, and the results complement previous literature findings. Current protocols should be updated to include a more realistic skin model, which provides a reliable breast dose estimation.

10.
Ding J, Wang JL. Biometrics 2008, 64(2): 546-556.
Summary. In clinical studies, longitudinal biomarkers are often used to monitor disease progression and failure time. Joint modeling of longitudinal and survival data has certain advantages and has emerged as an effective way to mutually enhance information. Typically, a parametric longitudinal model is assumed to facilitate the likelihood approach. However, the choice of a proper parametric model turns out to be more elusive than for models in standard longitudinal studies in which no survival endpoint occurs. In this article, we propose a nonparametric multiplicative random effects model for the longitudinal process, which has many applications and leads to a flexible yet parsimonious nonparametric random effects model. A proportional hazards model is then used to link the biomarkers and event time. We use B-splines to represent the nonparametric longitudinal process, and select the number of knots and degrees based on a version of the Akaike information criterion (AIC). Unknown model parameters are estimated by maximizing the observed joint likelihood, which is iteratively maximized by the Monte Carlo Expectation Maximization (MCEM) algorithm. Due to the simplicity of the model structure, the proposed approach has good numerical stability and compares well with the competing parametric longitudinal approaches. The new approach is illustrated with primary biliary cirrhosis (PBC) data, aiming to capture nonlinear patterns of serum bilirubin time courses and their relationship with the survival time of PBC patients.

11.
Let categorical data from a control group and (r - 1) treated groups be given in an r × c contingency table. A simultaneous test procedure is derived for the (r - 1) hypotheses that the probabilities of all c categories do not differ between the i-th treated group and the control. For small tables and small cell frequencies the test is performed exactly by generating all tables having the given marginal sums. If only 2 categories or 2 groups are given, the asymptotic distribution of the test statistic is known; otherwise its distribution may be simulated if the computational expenditure of performing an exact test is too large. By means of a Monte Carlo study it is shown that this method meets its nominal level more reliably and has better power than others.
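A minimal sketch of the Monte Carlo alternative mentioned above, not the authors' simultaneous procedure: tables with the observed marginal sums are simulated by permuting category labels (which fixes both margins), and a chi-square-type statistic yields a conditional p-value. The example counts are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(3)

def chi2_stat(table):
    """Pearson chi-square statistic for an r x c contingency table."""
    expected = np.outer(table.sum(axis=1), table.sum(axis=0)) / table.sum()
    return ((table - expected) ** 2 / expected).sum()

def mc_conditional_pvalue(table, n_sim=20_000):
    """Monte Carlo p-value conditional on the observed marginal sums."""
    r, c = table.shape
    rows = np.repeat(np.arange(r), table.sum(axis=1))   # group label per observation
    cols = np.repeat(np.arange(c), table.sum(axis=0))   # category label per observation
    observed = chi2_stat(table)
    hits = 0
    for _ in range(n_sim):
        perm = rng.permutation(cols)                     # shuffle categories: margins fixed
        sim_table = np.zeros((r, c), dtype=int)
        np.add.at(sim_table, (rows, perm), 1)
        hits += chi2_stat(sim_table) >= observed
    return (hits + 1) / (n_sim + 1)

table = np.array([[12, 5, 3],     # control group      (hypothetical counts)
                  [ 6, 9, 7],     # treated group 1
                  [ 4, 6, 11]])   # treated group 2
print(f"Monte Carlo conditional p-value: {mc_conditional_pvalue(table):.4f}")
```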

12.
Effects of pixel-scale uncertainty on simulations of a spatially explicit landscape model
LANDIS is a spatially explicit landscape model that simulates forest landscape change under natural and anthropogenic disturbances. The model conceptualizes the landscape as a grid of equal-sized pixels or sites. For each pixel, the model requires species and age-cohort information as input. However, because study areas typically comprise millions of pixels, it is impossible to obtain species and age-cohort information for every pixel from field surveys. A stand-based random assignment method was therefore used to derive the species and age-cohort information for each pixel from forest inventory data. Because this method is probability-based, it introduces uncertainty into the species and age-cohort inputs of LANDIS simulations. To evaluate how the pixel-scale uncertainty introduced by the stand-based random assignment method affects model results, an uncertainty analysis was performed using Monte Carlo simulation. For each species simulated by LANDIS, the occurrence frequency of the modal age cohort was used to quantify the uncertainty of age-cohort information at an individual pixel, and the mean occurrence frequency of the modal age cohort over all pixels was used to quantify the overall pixel-scale uncertainty of age-cohort information; the higher the mean occurrence frequency, the lower the uncertainty. To evaluate the effects of the stand-based random assignment method on model results at the landscape scale, the percent area and an aggregation index were calculated for each species across the whole study area; the larger the coefficient of variation, the higher the uncertainty. For all species, the uncertainty of age-cohort information was relatively low early in the simulation (mean occurrence frequency greater than 10). Seed dispersal, establishment, mortality, and fire disturbance caused the uncertainty of model results to increase with simulation time. Eventually the uncertainty reached a steady state, and the time needed to reach this equilibrium was close to the species' longevity; at that point, the initial species and age-cohort information no longer affected model results. At the landscape scale, the percent area of species distributions and the spatial pattern quantified by the aggregation index were not affected by the increase in pixel-scale uncertainty. Because LANDIS simulation studies aim to predict overall changes in landscape pattern rather than single events, the stand-based random assignment method can be used to parameterize the LANDIS model.
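A small sketch of the pixel-scale uncertainty measure described above, assuming hypothetical Monte Carlo replicates of the stand-based random assignment: for each pixel the frequency of the modal age class across replicates is computed, and its average over all pixels serves as the overall uncertainty index (a higher mean frequency means lower uncertainty).

```python
import numpy as np

rng = np.random.default_rng(4)

def modal_frequency_per_pixel(age_classes):
    """age_classes: (n_replicates, n_pixels) array of simulated age-class labels.
    Returns, per pixel, the frequency of the most common (modal) age class."""
    n_rep, n_pix = age_classes.shape
    freq = np.empty(n_pix)
    for j in range(n_pix):
        counts = np.bincount(age_classes[:, j])
        freq[j] = counts.max() / n_rep
    return freq

# Hypothetical example: 30 Monte Carlo replicates of the stand-based random
# assignment over 1,000 pixels, with 5 possible age classes per pixel.
n_rep, n_pix, n_classes = 30, 1000, 5
replicates = rng.integers(0, n_classes, size=(n_rep, n_pix))

freq = modal_frequency_per_pixel(replicates)
print(f"mean modal-age-class frequency over all pixels: {freq.mean():.3f}")
# Higher mean frequency -> replicates agree more often -> lower pixel-scale uncertainty.
```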

13.
Regional application of the site-scale CERES-Rice model: performance and error sources
Xiong W. Acta Ecologica Sinica (生态学报) 2009, 29(4): 2003-2009.
Regional crop simulation aims to use limited spatial data to capture, as far as possible, the spatial and temporal variation of crop traits such as phenology and yield. Because most current crop models are field-scale, site-based models, little is known about how well they perform when applied at the regional level. This study used the CERES-Rice model to analyze the regional performance of a crop model in China. First, the model was calibrated in detail at individual experimental sites using field observations to verify its simulation ability in China. The model was then calibrated and validated regionally, by rice ecological zone (down to sub-zone), using the root-mean-square error (RMSE) method. Finally, the regionally calibrated CERES-Rice model was used to simulate gridded (50 km × 50 km) rice yields for 1980-2000, which were statistically compared with survey yields from rural survey teams for the same period, in order to evaluate the regional application and to provide a reference for extending regional simulation. The results show that, after spatial calibration, the mean yields simulated by CERES-Rice in the main rice-producing regions 1-4 (95% of the planted area) had relative RMSE values within 22% of the survey yields, with good agreement between the two, while a few regions (5 and 6) had RMSE% between 24% and 30%. The temporal trends of simulated and surveyed mean yields for 1980-2000 were also broadly consistent across the producing regions. Of the 1,896 grid cells nationwide, most (71.01%) had RMSE% within 30% between the simulated 21-year annual rice yields and the survey yields, and these cells were mostly located in the main rice-producing regions. After weighting by rice planting area, it is concluded that regional rice simulation with the regionally calibrated and validated CERES-Rice model can capture the spatial and temporal distribution of yield and can provide information for macro-level decision making. Some errors remain in the current regional simulations, however, and further research is needed.
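A minimal sketch of the relative root-mean-square error (RMSE%) criterion used above for regional calibration and validation, with made-up yield values; RMSE is expressed as a percentage of the mean survey (observed) yield.

```python
import numpy as np

def rmse_percent(simulated, observed):
    """Relative RMSE between simulated and survey yields, in percent of mean observed yield."""
    simulated, observed = np.asarray(simulated, float), np.asarray(observed, float)
    rmse = np.sqrt(np.mean((simulated - observed) ** 2))
    return 100.0 * rmse / observed.mean()

# Hypothetical grid-cell rice yields (t/ha), 1980-2000 means for one sub-region
simulated = [6.1, 5.4, 7.0, 6.6, 5.9, 6.8]
observed  = [5.8, 5.9, 6.5, 7.1, 5.5, 6.4]
print(f"RMSE% = {rmse_percent(simulated, observed):.1f}%")   # <= 22% would match regions 1-4
```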

14.
15.
We propose Metropolis-Hastings sampling methods for estimating the exact conditional p-value for tests of goodness of fit of log-linear models for mortality rates and standardized mortality ratios. We focus on two-way tables, where the required conditional distribution is a multivariate noncentral hypergeometric distribution with known noncentrality parameter. Two examples are presented: a 2 × 3 table, where the exact results, obtained by enumeration, are available for comparison, and a 9 × 7 table, where Monte Carlo methods provide the only feasible approach for exact inference.
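A minimal sketch of a Metropolis sampler on two-way tables with fixed margins, simplified to the central (multiple) hypergeometric case rather than the noncentral distribution used in the paper: the proposal adds ±1 in a 2 × 2 checkerboard pattern, which preserves all marginal sums, and the acceptance ratio follows from a target density proportional to 1/∏ nij!. The example table is hypothetical.

```python
import numpy as np
from math import lgamma

rng = np.random.default_rng(5)

def log_target(table):
    """Log of the (central) multiple hypergeometric density, up to a constant:
    proportional to 1 / prod(n_ij!)."""
    return -sum(lgamma(n + 1) for n in table.ravel())

def mh_step(table):
    """One Metropolis-Hastings move that keeps all row and column sums fixed."""
    r, c = table.shape
    i1, i2 = rng.choice(r, size=2, replace=False)
    j1, j2 = rng.choice(c, size=2, replace=False)
    d = rng.choice([-1, 1])
    proposal = table.copy()
    proposal[i1, j1] += d; proposal[i2, j2] += d
    proposal[i1, j2] -= d; proposal[i2, j1] -= d
    if proposal.min() < 0:                       # invalid table: reject
        return table
    if np.log(rng.random()) < log_target(proposal) - log_target(table):
        return proposal
    return table

table = np.array([[10, 4, 6],
                  [ 3, 8, 5]])                   # hypothetical 2 x 3 table
samples = []
for step in range(20_000):
    table = mh_step(table)
    if step >= 2_000:                            # discard burn-in
        samples.append(table.copy())
print("mean sampled table under fixed margins:\n", np.mean(samples, axis=0).round(2))
```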

16.
Inventory data and characterization factors in life cycle assessment (LCA) contain considerable uncertainty. The most common method of parameter uncertainty propagation to the impact scores is Monte Carlo simulation, which remains a resource-intensive option (probably one of the reasons why uncertainty assessment is not a regular step in LCA). An analytical approach based on Taylor series expansion constitutes an effective means to overcome the drawbacks of the Monte Carlo method. This project aimed to test the approach on a real case study, and the resulting analytical uncertainty was compared with Monte Carlo results. The sensitivity and contribution of input parameters to output uncertainty were also analytically calculated. This article outlines an uncertainty analysis of the comparison between two case study scenarios. We conclude that the analytical method provides a good approximation of the output uncertainty. Moreover, the sensitivity analysis reveals that the uncertainty of the most sensitive input parameters was not initially considered in the case study. The uncertainty analysis of the comparison of two scenarios is a useful means of highlighting the effects of correlation on uncertainty calculation. This article shows the importance of the analytical method in uncertainty calculation, which could lead to a more complete uncertainty analysis in LCA practice.
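A small sketch contrasting the two propagation routes on a toy impact score (sum of inventory amounts times characterization factors, with independent normal inputs): first-order Taylor expansion gives the output variance and per-parameter contributions analytically, and a Monte Carlo run provides the comparison. All means, standard deviations, and the model form are assumptions, not the case-study data.

```python
import numpy as np

rng = np.random.default_rng(6)

def impact_score(q, cf):
    """Toy LCA impact score: sum of inventory amounts times characterization factors."""
    return np.sum(q * cf, axis=-1)

# Assumed means and standard deviations (independent, normally distributed inputs)
q_mean, q_sd = np.array([2.0, 5.0, 1.0]), np.array([0.2, 0.8, 0.1])
cf_mean, cf_sd = np.array([1.5, 0.3, 4.0]), np.array([0.15, 0.05, 0.6])

# --- Analytical (first-order Taylor) propagation --------------------------------
# d(score)/dq_i = cf_i and d(score)/dcf_i = q_i, evaluated at the means
var_taylor = np.sum((cf_mean * q_sd) ** 2) + np.sum((q_mean * cf_sd) ** 2)
print(f"Taylor    : mean = {impact_score(q_mean, cf_mean):.3f}, "
      f"sd = {np.sqrt(var_taylor):.3f}")

# --- Monte Carlo propagation -----------------------------------------------------
n = 100_000
q = rng.normal(q_mean, q_sd, size=(n, 3))
cf = rng.normal(cf_mean, cf_sd, size=(n, 3))
scores = impact_score(q, cf)
print(f"MonteCarlo: mean = {scores.mean():.3f}, sd = {scores.std():.3f}")

# Per-parameter contribution to variance (analytical sensitivity analysis)
contrib = np.concatenate([(cf_mean * q_sd) ** 2, (q_mean * cf_sd) ** 2]) / var_taylor
print("variance contributions (q1..q3, cf1..cf3):", np.round(contrib, 3))
```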

17.
Efficient Bayesian inference for Gaussian copula regression models

18.
We present a method for estimating the parameters in random effects models for survival data when covariates are subject to missingness. Our method is more general than the usual frailty model as it accommodates a wide range of distributions for the random effects, which are included as an offset in the linear predictor in a manner analogous to that used in generalized linear mixed models. We propose using a Monte Carlo EM algorithm along with the Gibbs sampler to obtain parameter estimates. This method is useful in reducing the bias that may be incurred using complete-case methods in this setting. The methodology is applied to data from Eastern Cooperative Oncology Group melanoma clinical trials in which observations were believed to be clustered and several tumor characteristics were not always observed.

19.
Flexible parametric measurement error models
Inferences in measurement error models can be sensitive to modeling assumptions. Specifically, if the model is incorrect, the estimates can be inconsistent. To reduce sensitivity to modeling assumptions and yet still retain the efficiency of parametric inference, we propose using flexible parametric models that can accommodate departures from standard parametric models. We use mixtures of normals for this purpose. We study two cases in detail: a linear errors-in-variables model and a change-point Berkson model.

20.
Quantitative uncertainty analysis has become a common component of risk assessments. In risk assessment models, the most robust method for propagating uncertainty is Monte Carlo simulation. Many software packages available today offer Monte Carlo capabilities while requiring minimal learning time, computational time, and/or computer memory. This paper presents an evaluation of six software packages in the context of risk assessment: Crystal Ball, @Risk, Analytica, Stella II, PRISM, and Susa-PC. Crystal Ball and @Risk are spreadsheet-based programs; Analytica and Stella II are multi-level, influence-diagram-based programs designed for the construction of complex models; PRISM and Susa-PC are both public-domain programs designed for incorporating uncertainty and sensitivity into any model written in Fortran. Each software package was evaluated on the basis of five criteria, with each criterion having several sub-criteria. A ‘User Preferences Table’ was also developed for an additional comparison of the software packages. The evaluations were based on nine weeks of experimentation with the software packages, including use of the associated user manuals and tests of the software through example problems. The results of these evaluations indicate that Stella II has the most extensive modeling capabilities and can handle linear differential equations. Crystal Ball has the best input scheme for entering uncertain parameters and the best reference materials. @Risk offers a slightly better standard output scheme and requires a little less learning time. Susa-PC has the most options for detailed statistical analysis of the results, such as multiple options for a sensitivity analysis and sophisticated options for inputting correlations. Analytica is a versatile, menu- and graphics-driven package, while PRISM is a more specialized and less user-friendly program. When choosing between software packages for uncertainty and sensitivity analysis, the choice largely depends on the specifics of the problem being modeled. However, for risk assessment problems that can be implemented on a spreadsheet, Crystal Ball is recommended because it offers the best input options, a good output scheme, adequate uncertainty and sensitivity analysis, superior reference materials, and an intuitive spreadsheet basis while requiring very little memory.
