Similar documents
20 similar documents retrieved (search time: 15 ms)
1.
MOTIVATION: Metabolic flux analysis via a 13C tracer experiment has been achieved using a Monte Carlo method under the assumption that the system noise is Gaussian. However, an unbiased flux analysis requires fluxes and metabolites to be estimated jointly, without being restricted to the Gaussian-noise assumption. Within such a framework, flux distributions can be obtained freely under various system-noise and uncertainty models. RESULTS: In this paper, a stochastic generative model of the metabolic system is developed. The Markov chain Monte Carlo (MCMC) approach is then applied to flux distribution analysis. The disturbances and uncertainties in the system are simplified as truncated Gaussian multiplicative models. Performance on a real metabolic system is illustrated by application to the central metabolism of Corynebacterium glutamicum. The flux distributions are presented and analyzed in order to understand the underlying flux activities in the system. AVAILABILITY: Algorithms are available upon request.
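To make the sampling idea concrete, the sketch below runs a toy random-walk Metropolis sampler for two free fluxes at a single branch point (v0 = v1 + v2) with multiplicative truncated-Gaussian measurement noise. The network, measurements, and noise level are invented for illustration; this is not the model or code used in the paper.

```python
# Toy random-walk Metropolis sampler for two free fluxes (v1, v2) with v0 = v1 + v2
# and multiplicative truncated-Gaussian measurement noise (all values hypothetical).
import numpy as np
from scipy.stats import truncnorm

rng = np.random.default_rng(0)

y = np.array([10.2, 6.1, 4.3])   # "observed" fluxes for v0, v1, v2 (invented data)
sigma = 0.05                      # relative (multiplicative) noise level

def log_likelihood(v):
    v1, v2 = v
    if v1 <= 0 or v2 <= 0:
        return -np.inf
    pred = np.array([v1 + v2, v1, v2])      # stoichiometry: v0 = v1 + v2
    ratio = y / pred                         # multiplicative noise factor
    a, b = (0 - 1) / sigma, np.inf           # truncate the noise factor at zero
    return truncnorm.logpdf(ratio, a, b, loc=1, scale=sigma).sum()

v = np.array([6.0, 4.0])
samples = []
for it in range(20000):
    prop = v + rng.normal(scale=0.1, size=2)
    if np.log(rng.random()) < log_likelihood(prop) - log_likelihood(v):
        v = prop
    samples.append(v.copy())
samples = np.array(samples[5000:])           # discard burn-in
print("posterior mean fluxes:", samples.mean(axis=0))
print("95% intervals:", np.percentile(samples, [2.5, 97.5], axis=0))
```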

2.
We use bootstrap simulation to characterize uncertainty in parametric distributions, including Normal, Lognormal, Gamma, Weibull, and Beta, commonly used to represent variability in probabilistic assessments. Bootstrap simulation enables one to estimate sampling distributions for sample statistics, such as distribution parameters, even when analytical solutions are not available. Using a two-dimensional framework for both uncertainty and variability, uncertainties in cumulative distribution functions were simulated. The mathematical properties of uncertain frequency distributions were evaluated in a series of case studies during which the parameters of each type of distribution were varied for sample sizes of 5, 10, and 20. For positively skewed distributions such as Lognormal, Weibull, and Gamma, the range of uncertainty is widest at the upper tail of the distribution. For symmetric unbounded distributions, such as Normal, the uncertainties are widest at both tails of the distribution. For bounded distributions, such as Beta, the uncertainties are typically widest in the central portions of the distribution. Bootstrap simulation enables complex dependencies between sampling distributions to be captured. The effects of uncertainty, variability, and parameter dependencies were studied for several generic functional forms of models, including models in which two-dimensional random variables are added, multiplied, and divided, to show the sensitivity of model results to different assumptions regarding model input distributions, ranges of variability, and ranges of uncertainty and to show the types of errors that may be obtained from mis-specification of parameter dependence. A total of 1,098 case studies were simulated. In some cases, counter-intuitive results were obtained. For example, the point value of the 95th percentile of uncertainty for the 95th percentile of variability of the product of four Gamma or Weibull distributions decreases as the coefficient of variation of each model input increases and, therefore, may not provide a conservative estimate. Failure to properly characterize parameter uncertainties and their dependencies can lead to orders-of-magnitude mis-estimates of both variability and uncertainty. In many cases, the numerical stability of two-dimensional simulation results was found to decrease as the coefficient of variation of the inputs increases. We discuss the strengths and limitations of bootstrap simulation as a method for quantifying uncertainty due to random sampling error.
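The core two-dimensional idea (an outer loop of uncertainty about distribution parameters wrapped around an inner frequency distribution of variability) can be sketched with a parametric bootstrap. The sample size, number of bootstrap replicates, and Lognormal example below are illustrative choices, not those of the study.

```python
# Minimal parametric-bootstrap sketch: uncertainty about percentiles of a fitted
# Lognormal variability distribution, estimated from a small sample (invented data).
import numpy as np

rng = np.random.default_rng(1)
data = rng.lognormal(mean=1.0, sigma=0.5, size=10)    # small "observed" sample

def fit_lognormal(x):
    logs = np.log(x)
    return logs.mean(), logs.std(ddof=1)               # parameters on the log scale

mu_hat, s_hat = fit_lognormal(data)

B = 2000
pcts = [5, 50, 95]                                      # percentiles of variability
zs = (-1.645, 0.0, 1.645)
boot = np.empty((B, len(pcts)))
for b in range(B):
    resample = rng.lognormal(mu_hat, s_hat, size=data.size)   # parametric bootstrap
    mu_b, s_b = fit_lognormal(resample)
    boot[b] = [np.exp(mu_b + s_b * z) for z in zs]

# Uncertainty (90% interval) about each percentile of variability
for p, col in zip(pcts, boot.T):
    lo, hi = np.percentile(col, [5, 95])
    print(f"P{p} of variability: 90% uncertainty interval [{lo:.2f}, {hi:.2f}]")
```

Consistent with the abstract, the uncertainty band from such a run is widest at the upper tail of the positively skewed Lognormal.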

3.
Doubling time has been widely used to represent the growth pattern of cells. A traditional method for finding the doubling time is to apply gray-scaled cells, where the logarithmic transformed scale is used. As an alternative statistical method, the log-linear model was recently proposed, for which actual cell numbers are used instead of the transformed gray-scaled cells. In this paper, I extend the log-linear model and propose the extended log-linear model. This model is designed for extra-Poisson variation, where the log-linear model produces a less appropriate estimate of the doubling time. Moreover, I compare the statistical properties of the gray-scaled method, the log-linear model, and the extended log-linear model. For this purpose, I perform a Monte Carlo simulation study with three data-generating models: the additive error model, the multiplicative error model, and the overdispersed Poisson model. From the simulation study, I found that the gray-scaled method depends heavily on the normality assumption of the gray-scaled cells; hence, this method is appropriate when the error model is multiplicative with log-normally distributed errors. However, it is less efficient for other types of error distributions, especially when the error model is additive or the errors follow the Poisson distribution. The estimated standard error for the doubling time is not accurate in this case. The log-linear model was found to be efficient when the errors follow the Poisson distribution or a nearly Poisson distribution. The efficiency of the log-linear model decreased as the overdispersion increased, compared to the extended log-linear model. When the error model is additive or multiplicative with Gamma-distributed errors, the log-linear model is more efficient than the gray-scaled method. The extended log-linear model performs well overall for all three data-generating models. A loss of efficiency of the extended log-linear model is observed only when the error model is multiplicative with log-normally distributed errors, where the gray-scaled method is appropriate. However, the extended log-linear model is more efficient than the log-linear model in this case.
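A minimal illustration of the log-linear approach (not the author's code): with counts assumed to follow E[N_t] = exp(b0 + b1*t), the doubling time is log(2)/b1, and the Poisson likelihood can be maximized directly on the raw counts rather than on log-transformed values.

```python
# Estimating a doubling time from simulated cell counts with a Poisson log-linear model,
# compared with the log-transform ("gray-scale"-style) least-squares estimate.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
t = np.arange(0, 6)                                   # days
true_rate = 0.5                                       # per day (invented)
counts = rng.poisson(50 * np.exp(true_rate * t))      # simulated cell counts

def neg_loglik(beta):
    b0, b1 = beta
    eta = b0 + b1 * t
    return np.sum(np.exp(eta) - counts * eta)         # Poisson NLL up to a constant

fit = minimize(neg_loglik, x0=[np.log(counts[0]), 0.1], method="Nelder-Mead")
b1_hat = fit.x[1]
print("log-linear doubling time:", np.log(2) / b1_hat)

# Log-transform estimate for comparison
slope = np.polyfit(t, np.log(counts), 1)[0]
print("log-transform doubling time:", np.log(2) / slope)
```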

4.
Analytica is an easy-to-learn, easy-to-use modeling tool that allows modelers to represent what they know through influence diagrams. These diagrams show which model quantities are derived from which others and indicate by shape and color the roles that different nodes play in the model, e.g., decision variables, chance variables, outcome variables, deterministic functions, or abstractions of sub-models. A wide variety of built-in probability distributions allow uncertainties about input values to be painlessly specified and propagated through the model via a fast, professional Monte-Carlo simulation engine. Resulting uncertainties and sensitivities about any quantity in the model can be viewed with admirable ease and flexibility by selecting among probability density, cumulative distribution, confidence band, sensitivity analysis, and other displays. Analytica features clever hierarchical model management and navigation features that serious model-builders will appreciate and that novice modelers will learn from as they are led to develop well-structured, well-documented models. Simple continuous (compartmental-flow) and Markov chain dynamic simulation models can be built by paying some detailed attention to arrays and indices, although Analytica does not support true discrete-event simulation. Within its chosen domain—uncertainty propagation through influence diagram models—Analytica is by far the easiest and best tool that we have seen.

5.
Motivated by molecular data on female premutation carriers of the fragile X mental retardation 1 (FMR1) gene, we present a new method of covariate adjusted correlation analysis to examine the association between messenger RNA (mRNA) and the number of CGG repeats in the FMR1 gene. The association between the molecular variables in female carriers needs to adjust for the activation ratio (ActRatio), a measure which accounts for the protective effects of one normal X chromosome in female carriers. However, there are inherent uncertainties in the exact effects of ActRatio on the molecular measures of interest. To account for these uncertainties, we develop a flexible adjustment that accommodates both additive and multiplicative effects of ActRatio nonparametrically. The proposed adjusted correlation uses local conditional correlations, which are local method of moments estimators, to estimate the Pearson correlation between two variables adjusted for a third observable covariate. The local method of moments estimators are averaged to arrive at the final covariate adjusted correlation estimator, which is shown to be consistent. We also develop a test to check the nonparametric joint additive and multiplicative adjustment form. Simulation studies illustrate the efficacy of the proposed method. Application to FMR1 premutation data on 165 female carriers indicates that the association between mRNA and CGG repeat count is stronger after adjusting for ActRatio. Finally, the results provide independent support for a specific jointly additive and multiplicative adjustment form for ActRatio previously proposed in the literature.
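A rough sketch of the binning idea behind local conditional correlations, under the assumption of a simple additive distortion by the covariate; the kernel weighting, joint additive-multiplicative adjustment, and consistency theory of the actual estimator are omitted.

```python
# Compare an unadjusted Pearson correlation with a simple covariate-adjusted version:
# correlations are computed within local bins of the covariate z and averaged.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(3)
n = 500
z = rng.uniform(0, 1, n)                   # confounding covariate (ActRatio-like, invented)
x = 2 * z + rng.normal(0, 0.3, n)          # both variables distorted by z
y = 3 * z + 0.5 * x + rng.normal(0, 0.3, n)

print("unadjusted correlation:", pearsonr(x, y)[0])

# Local conditional correlations on quantile bins of z, then a weighted average
bins = np.quantile(z, np.linspace(0, 1, 11))
idx = np.digitize(z, bins[1:-1])
local, weights = [], []
for k in np.unique(idx):
    mask = idx == k
    if mask.sum() > 10:
        local.append(pearsonr(x[mask], y[mask])[0])
        weights.append(mask.sum())
print("covariate-adjusted correlation:", np.average(local, weights=weights))
```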

6.
Two popular models of absence of synergism in epidemiologic cohort studies are analyzed and compared. It is shown that the statistical concept of the union of independent events, which traditionally has given rise to the “additive” model of relative risk, can also generate the “multiplicative” model of relative risk. In fact, the same set of approximating conditions can be used to generate both models, which suggests a lack of identifiability under the traditional approach. An alternate approach is proposed in this paper. The new approach does not require the assumption that background risk factors are independent of the causal agents of interest. The concept of “dose additivity” is discussed.
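A small worked example (values invented) of the two no-synergy conventions: on the additive (excess-risk) scale the joint relative risk is RR_A + RR_B - 1, whereas on the multiplicative scale it is RR_A * RR_B.

```python
# Joint relative risk of two exposures under the "additive" and "multiplicative"
# no-synergism models discussed above (toy numbers only).
rr_a = 2.0   # relative risk for exposure A alone
rr_b = 3.0   # relative risk for exposure B alone

additive_joint = rr_a + rr_b - 1        # no synergism on the excess-risk (additive) scale
multiplicative_joint = rr_a * rr_b      # no synergism on the multiplicative scale

print(f"joint RR, additive model:       {additive_joint:.1f}")       # 4.0
print(f"joint RR, multiplicative model: {multiplicative_joint:.1f}")  # 6.0
```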

7.
Kim J, Tsien RW, Alger BE. PLoS One. 2012;7(5):e37364
Homeostatic scaling of synaptic strengths is essential for maintenance of network "gain", but also poses a risk of losing the distinctions among relative synaptic weights, which are possibly cellular correlates of memory storage. Multiplicative scaling of all synapses has been proposed as a mechanism that would preserve the relative weights among them, because they would all be proportionately adjusted. It is crucial for this hypothesis that all synapses be affected identically, but whether or not this actually occurs is difficult to determine directly. Mathematical tests for multiplicative synaptic scaling are presently carried out on distributions of miniature synaptic current amplitudes, but the accuracy of the test procedure has not been fully validated. We now show that the existence of an amplitude threshold for empirical detection of miniature synaptic currents limits the use of the most common method for detecting multiplicative changes. Our new method circumvents the problem by discarding the potentially distorting subthreshold values after computational scaling. This new method should be useful in assessing the underlying neurophysiological nature of a homeostatic synaptic scaling transformation, and therefore in evaluating its functional significance.
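A schematic version of the kind of test described above: a candidate scaling factor is applied to the "treated" amplitudes, values that fall below the detection threshold after scaling are discarded, and the rescaled distribution is compared with the control distribution. The amplitude distribution, threshold, and goodness-of-fit statistic below are assumptions for illustration only.

```python
# Recover a multiplicative scaling factor between two thresholded amplitude distributions
# by scaling the treated set down, discarding post-scaling subthreshold values, and
# minimizing a two-sample KS statistic against the control set (all data simulated).
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(4)
threshold = 5.0                                        # detection threshold (pA), assumed
true_scale = 1.5

base = rng.lognormal(mean=2.0, sigma=0.4, size=5000)   # "true" amplitude population
control = base[base > threshold]                       # detected control events
treated = true_scale * base
treated = treated[treated > threshold]                 # detected treated events

def mismatch(factor):
    down = treated / factor
    down = down[down > threshold]                      # discard subthreshold values after scaling
    return ks_2samp(down, control).statistic

factors = np.linspace(1.0, 2.0, 101)
best = factors[int(np.argmin([mismatch(f) for f in factors]))]
print("estimated multiplicative scaling factor:", best)   # should be near 1.5
```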

8.
Climate change vulnerability assessment is a complex form of risk assessment which accounts for both geophysical and socio-economic components of risk. In indicator-based vulnerability assessment (IBVA), indicators are used to rank the vulnerabilities of socio-ecological systems (SESs). The predominant aggregation approach in the literature, sometimes based on multi-attribute utility theory (MAUT), typically builds a global-scale utility function based on weighted summation to generate rankings. However, the corresponding requirements of additive independence and complete knowledge of system interactions by the analyst are rarely if ever satisfied in IBVA. We build an analogy between the structures of Multi-Criteria Decision Analysis (MCDA) and IBVA problems and show that a set of techniques called Outranking Methods, developed in MCDA to deal with criteria incommensurability, data uncertainty and preference imprecision, offers IBVA a sound alternative to additive or multiplicative aggregation. We reformulate IBVA problems within an outranking framework, define thresholds of difference and use an outranking method, ELECTRE III, to assess the relative vulnerability to heat stress of 15 local government areas in metropolitan Sydney. We find that the ranking outcomes are robust and argue that an outranking approach is better suited for assessments characterized by a mix of qualitative, semi-quantitative and quantitative indicators, threshold effects and uncertainties about the exact relationships between indicators and vulnerability.

9.
We consider population genetics models where selection acts at a set of unlinked loci. It is known that if the fitness of an individual is multiplicative across loci, then these loci are independent. We consider general selection models, but assume parent-independent mutation at each locus. For such a model, the joint stationary distribution of allele frequencies is proportional to the stationary distribution under neutrality multiplied by a known function of the mean fitness of the population. We further show how knowledge of this stationary distribution enables direct simulation of the genealogy of a sample at a single locus. For a specific selection model appropriate for complex disease genes, we use simulation to determine what features of the genealogy differ between our general selection model and a multiplicative model.
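A toy deterministic-plus-drift simulation (all parameters invented) that contrasts multiplicative fitness, under which the two unlinked loci evolve independently, with a simple frequency-dependent (epistatic) alternative that induces dependence between allele frequencies.

```python
# Two unlinked loci under selection and drift: compare the across-replicate correlation
# of allele frequencies for a multiplicative fitness model versus a toy epistatic model.
import numpy as np

rng = np.random.default_rng(5)
N = 1000              # haploid population size
gens = 200
s1, s2 = 0.02, 0.03   # selective advantages at the two loci

def simulate(multiplicative=True):
    p, q = 0.2, 0.2                        # initial allele frequencies
    for _ in range(gens):
        if multiplicative:
            # fitness multiplies across loci -> independent marginal selection
            p = p * (1 + s1) / (1 + s1 * p)
            q = q * (1 + s2) / (1 + s2 * q)
        else:
            # toy epistasis: the advantage at locus 1 depends on the frequency at locus 2
            p = p * (1 + s1 * q) / (1 + s1 * p * q)
            q = q * (1 + s2) / (1 + s2 * q)
        p = rng.binomial(N, p) / N         # genetic drift
        q = rng.binomial(N, q) / N
    return p, q

for label, flag in [("multiplicative", True), ("epistatic", False)]:
    reps = np.array([simulate(flag) for _ in range(500)])
    r = np.corrcoef(reps[:, 0], reps[:, 1])[0, 1]
    print(f"correlation of final allele frequencies ({label}): {r:.3f}")
```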

10.
BIOMOD is a computer platform for ensemble forecasting of species distributions, enabling the treatment of a range of methodological uncertainties in models and the examination of species-environment relationships. BIOMOD includes the ability to model species distributions with several techniques, test models with a wide range of approaches, project species distributions into different environmental conditions (e.g. climate or land use change scenarios), and incorporate dispersal functions. It allows assessing species temporal turnover, plotting species response curves, and testing the strength of species interactions with predictor variables. BIOMOD is implemented in R as a free, open-source package.

11.
The traditional q1* methodology for constructing upper confidence limits (UCLs) for the low-dose slopes of quantal dose-response functions has two limitations: (i) it is based on an asymptotic statistical result that has been shown via Monte Carlo simulation not to hold in practice for small, real bioassay experiments (Portier and Hoel, 1983); and (ii) it assumes that the multistage model (which represents cumulative hazard as a polynomial function of dose) is correct. This paper presents an uncertainty analysis approach for fitting dose-response functions to data that does not require specific parametric assumptions or depend on asymptotic results. It has the advantage that the resulting estimates of the dose-response function (and uncertainties about it) no longer depend on the validity of an assumed parametric family nor on the accuracy of the asymptotic approximation. The method derives posterior densities for the true response rates in the dose groups, rather than deriving posterior densities for model parameters, as in other Bayesian approaches (Sielken, 1991), or resampling the observed data points, as in the bootstrap and other resampling methods. It does so by conditioning constrained maximum-entropy priors on the observed data. Monte Carlo sampling of the posterior (constrained, conditioned) probability distributions generates values of response probabilities that might be observed if the experiment were repeated with very large sample sizes. A dose-response curve is fit to each such simulated dataset. If no parametric model has been specified, then a generalized representation (e.g., a power-series or orthonormal polynomial expansion) of the unknown dose-response function is fit to each simulated dataset using “model-free” methods. The simulation-based frequency distribution of all the dose-response curves fit to the simulated datasets yields a posterior distribution function for the low-dose slope of the dose-response curve. An upper confidence limit on the low-dose slope is obtained directly from this posterior distribution. This “Data Cube” procedure is illustrated with a real dataset for benzene, and is seen to produce more policy-relevant insights than does the traditional q1* methodology. For example, it shows how far apart the 90%, 95%, and 99% limits are and reveals how uncertainty about total and incremental risk varies with dose level (typically being dominated at low doses by uncertainty about the response of the control group, and being dominated at high doses by sampling variability). Strengths and limitations of the Data Cube approach are summarized, and potential decision-analytic applications to making better informed risk management decisions are briefly discussed.
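The resampling logic can be caricatured in a few lines: draw posterior response probabilities for each dose group, fit a curve to every draw, and read an upper confidence limit for the low-dose slope from the resulting distribution. The flat Beta priors, quadratic curve, and bioassay counts below are placeholders, not the constrained maximum-entropy priors or the "Data Cube" procedure itself.

```python
# Posterior distribution of a low-dose slope obtained by sampling dose-group response
# probabilities and refitting a simple curve to each draw (all data hypothetical).
import numpy as np

rng = np.random.default_rng(6)
doses = np.array([0.0, 1.0, 2.0, 4.0])
responders = np.array([1, 3, 7, 14])      # hypothetical bioassay counts
n_per_group = 50

slopes = []
for _ in range(10000):
    # Posterior draw of the true response probabilities, Beta(1, 1) prior per group
    p = rng.beta(1 + responders, 1 + n_per_group - responders)
    # Fit a quadratic dose-response to this draw (control response subtracted);
    # the low-dose slope is the coefficient of the linear term.
    coeffs = np.polyfit(doses, p - p[0], 2)
    slopes.append(coeffs[1])

slopes = np.array(slopes)
print("median low-dose slope:", np.median(slopes))
print("95% upper confidence limit:", np.percentile(slopes, 95))
```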

12.
It is well known that ignoring measurement error may result in substantially biased estimates in many contexts including linear and nonlinear regressions. For survival data with measurement error in covariates, there has been extensive discussion in the literature with the focus on proportional hazards (PH) models. Recently, research interest has extended to accelerated failure time (AFT) and additive hazards (AH) models. However, the impact of measurement error on other models, such as the proportional odds model, has received relatively little attention, although these models are important alternatives when PH, AFT, or AH models are not appropriate to fit data. In this paper, we investigate this important problem and study the bias induced by the naive approach of ignoring covariate measurement error. To adjust for the induced bias, we describe the simulation-extrapolation method. The proposed method enjoys a number of appealing features. Its implementation is straightforward and can be accomplished with minor modifications of existing software. More importantly, the proposed method does not require modeling the covariate process, which is quite attractive in practice. As the precise values of error-prone covariates are often not observable, any modeling assumption on such covariates has the risk of model misspecification, hence yielding invalid inferences if this happens. The proposed method is carefully assessed both theoretically and empirically. Theoretically, we establish the asymptotic normality for resulting estimators. Numerically, simulation studies are carried out to evaluate the performance of the estimators as well as the impact of ignoring measurement error, along with an application to a data set arising from the Busselton Health Study. Sensitivity of the proposed method to misspecification of the error model is studied as well.
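The mechanics of simulation-extrapolation (SIMEX) are easiest to see on a plain linear regression with a noisy covariate: extra measurement error is added at increasing multiples lambda, the naive estimate is recomputed, and a quadratic in lambda is extrapolated back to lambda = -1. The survival-model setting of the paper is more involved; everything below is a simplified assumption.

```python
# SIMEX illustrated on linear regression with a covariate measured with error.
import numpy as np

rng = np.random.default_rng(7)
n, beta_true, sigma_u = 2000, 1.0, 0.8
x = rng.normal(size=n)
y = beta_true * x + rng.normal(scale=0.5, size=n)
w = x + rng.normal(scale=sigma_u, size=n)          # covariate observed with error

def slope(w, y):
    return np.cov(w, y)[0, 1] / np.var(w, ddof=1)

naive = slope(w, y)                                # attenuated estimate

# Add extra error at levels lambda, average the slope, then extrapolate to lambda = -1
lambdas = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
mean_slopes = []
for lam in lambdas:
    est = [slope(w + rng.normal(scale=np.sqrt(lam) * sigma_u, size=n), y)
           for _ in range(200)]
    mean_slopes.append(np.mean(est))

quad = np.polyfit(lambdas, mean_slopes, 2)
simex = np.polyval(quad, -1.0)                     # extrapolate back to "no error"
print(f"true {beta_true:.2f} | naive {naive:.2f} | SIMEX {simex:.2f}")
```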

13.
Aim: The study and prediction of species–environment relationships is currently mainly based on species distribution models. These purely correlative models neglect spatial population dynamics and assume that species distributions are in equilibrium with their environment. This causes biased estimates of species niches and handicaps forecasts of range dynamics under environmental change. Here we aim to develop an approach that statistically estimates process‐based models of range dynamics from data on species distributions and permits a more comprehensive quantification of forecast uncertainties. Innovation: We present an approach for the statistical estimation of process‐based dynamic range models (DRMs) that integrate Hutchinson's niche concept with spatial population dynamics. In a hierarchical Bayesian framework the environmental response of demographic rates, local population dynamics and dispersal are estimated conditional upon each other while accounting for various sources of uncertainty. The method thus: (1) jointly infers species niches and spatiotemporal population dynamics from occurrence and abundance data, and (2) provides fully probabilistic forecasts of future range dynamics under environmental change. In a simulation study, we investigate the performance of DRMs for a variety of scenarios that differ in both ecological dynamics and the data used for model estimation. Main conclusions: Our results demonstrate the importance of considering dynamic aspects in the collection and analysis of biodiversity data. In combination with informative data, the presented framework has the potential to markedly improve the quantification of ecological niches, the process‐based understanding of range dynamics and the forecasting of species responses to environmental change. It thereby strengthens links between biogeography, population biology and theoretical and applied ecology.

14.
Multi-pathway risk assessments (MPRAs) of contaminant emissions to the atmosphere consider both direct exposures, via ambient air, and indirect exposures, via deposition to land and water. MPRAs embody numerous interconnected models and parameters. Concatenation of many multiplicative and incompletely defined assumptions and inputs can result in risk estimates with considerable uncertainties, which are difficult to quantify and elucidate. Here, three MPRA case studies approach uncertainties in ways that better inform context-specific judgments of risk. In the first case, default values predicted implausibly large impacts; substitution of site-specific data within conservative methods resulted in reasonable and intuitive worst-case estimates. In the second, a simpler, clearly worst-case water quality model sufficed to demonstrate acceptable risks. In the third case, exposures were intentionally and transparently overestimated. Choices made within particular MPRAs depend on availability of data as suitable replacements for default assumptions, regulatory requirements, and thoughtful consideration of the concerns of interested stakeholders. Explicit consideration of the biases inherent in each risk assessment lends greater credibility to the assessment results, and can form the bases for evidence-based decision-making.

15.
We consider non-neutral models for unlinked loci, where the fitness of a chromosome or individual is not multiplicative across loci. Such models are suitable for many complex diseases, where there are gene-interactions. We derive a genealogical process for such models, called the complex selection graph (CSG). This coalescent-type process is related to the ancestral selection graph, and is derived from the ancestral influence graph by considering the limit as the recombination rate between loci gets large. We analyse the CSG both theoretically and via simulation. The main results are that the gene-interactions do not produce linkage disequilibrium, but do produce dependencies in allele frequencies between loci. For small selection rates, the distributions of the genealogy and the allele frequencies at a single locus are well-approximated by their distributions under a single locus model, where the fitness of each allele is the average of the true fitnesses of that allele with respect to the distribution of alleles at other loci.

16.
Habitat suitability index (HSI) models are commonly used to predict habitat quality and species distributions and are used to develop biological surveys, assess reserve and management priorities, and anticipate possible change under different management or climate change scenarios. Important management decisions may be based on model results, often without a clear understanding of the level of uncertainty associated with model outputs. We present an integrated methodology to assess the propagation of uncertainty from both inputs and structure of the HSI models on model outputs (uncertainty analysis: UA) and relative importance of uncertain model inputs and their interactions on the model output uncertainty (global sensitivity analysis: GSA). We illustrate the GSA/UA framework using simulated hydrology input data from a hydrodynamic model representing sea level changes and HSI models for two species of submerged aquatic vegetation (SAV) in southwest Everglades National Park: Vallisneria americana (tape grass) and Halodule wrightii (shoal grass). We found considerable spatial variation in uncertainty for both species, but distributions of HSI scores still allowed discrimination of sites with good versus poor conditions. Ranking of input parameter sensitivities also varied spatially for both species, with high habitat quality sites showing higher sensitivity to different parameters than low‐quality sites. HSI models may be especially useful when species distribution data are unavailable, providing means of exploiting widely available environmental datasets to model past, current, and future habitat conditions. The GSA/UA approach provides a general method for better understanding HSI model dynamics, the spatial and temporal variation in uncertainties, and the parameters that contribute most to model uncertainty. Including an uncertainty and sensitivity analysis in modeling efforts as part of the decision‐making framework will result in better‐informed, more robust decisions.
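A compact sketch of a Monte Carlo UA and a rank-correlation-based GSA for a made-up habitat suitability index; the suitability curves, input ranges, and geometric-mean aggregation are placeholders, not the Everglades SAV models or the variance-based GSA of the study.

```python
# Monte Carlo uncertainty analysis and a simple sensitivity ranking for a toy HSI model.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(8)
n = 5000

# Uncertain environmental inputs (assumed ranges)
salinity = rng.uniform(0, 40, n)        # psu
depth = rng.uniform(0.2, 3.0, n)        # m
light = rng.uniform(0.1, 1.0, n)        # fraction of surface irradiance

def suitability(x, opt, width):
    """Bell-shaped suitability component in (0, 1]."""
    return np.exp(-0.5 * ((x - opt) / width) ** 2)

# HSI as the geometric mean of the component suitabilities (a common convention)
hsi = (suitability(salinity, 15, 10) *
       suitability(depth, 1.0, 0.8) *
       suitability(light, 0.8, 0.3)) ** (1 / 3)

# UA: the distribution of the index induced by input uncertainty
print("HSI 5th/50th/95th percentiles:", np.percentile(hsi, [5, 50, 95]).round(2))

# GSA: rank correlation of each input with the output
for name, x in [("salinity", salinity), ("depth", depth), ("light", light)]:
    print(f"sensitivity to {name}: rho = {spearmanr(x, hsi)[0]:.2f}")
```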

17.
Results of multi-environment trials to evaluate new plant cultivars may be displayed in a two-way table of genotypes by environments. Different estimators are available to fill the cells of such tables. It has been shown previously that the predictive accuracy of the simple genotype by environment mean is often lower than that of other estimators, e.g. least-squares estimators based on multiplicative models, such as the additive main effects multiplicative interaction (AMMI) model, or empirical best-linear unbiased predictors (BLUPs) based on a two-way analysis-of-variance (ANOVA) model. This paper proposes a method to obtain BLUPs based on models with multiplicative terms. It is shown by cross-validation using five real data sets (oilseed rape, Brassica napus L.) that the predictive accuracy of BLUPs based on models with multiplicative terms may be better than that of least-squares estimators based on the same models and also better than BLUPs based on ANOVA models.
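For readers unfamiliar with multiplicative-term models, the sketch below fits an AMMI-style decomposition to a random genotype-by-environment table: additive main effects plus the leading terms of an SVD of the interaction residuals. It is a least-squares illustration only; the BLUP machinery proposed in the paper is not shown.

```python
# AMMI-style fit: grand mean + genotype and environment main effects + K multiplicative
# interaction terms from an SVD of the residuals (randomly generated data).
import numpy as np

rng = np.random.default_rng(9)
G, E = 8, 5
table = 5 + rng.normal(0, 1, (G, E))                 # yields (arbitrary units)

grand = table.mean()
g_eff = table.mean(axis=1) - grand                   # genotype main effects
e_eff = table.mean(axis=0) - grand                   # environment main effects
resid = table - grand - g_eff[:, None] - e_eff[None, :]

U, s, Vt = np.linalg.svd(resid, full_matrices=False)
K = 2                                                # number of multiplicative terms
ammi_fit = (grand + g_eff[:, None] + e_eff[None, :] +
            (U[:, :K] * s[:K]) @ Vt[:K, :])

print("share of interaction SS captured by", K, "terms:",
      (s[:K] ** 2).sum() / (s ** 2).sum())
print("RMSE of AMMI fit:", np.sqrt(((table - ammi_fit) ** 2).mean()))
```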

18.
The recommendation of new plant varieties for commercial use requires reliable and accurate predictions of the average yield of each variety across a range of target environments and knowledge of important interactions with the environment. This information is obtained from series of plant variety trials, also known as multi-environment trials (MET). Cullis, Gogel, Verbyla, and Thompson (1998) presented a spatial mixed model approach for the analysis of MET data. In this paper we extend the analysis to include multiplicative models for the variety effects in each environment. The multiplicative model corresponds to that used in the multivariate technique of factor analysis. It allows a separate genetic variance for each environment and provides a parsimonious and interpretable model for the genetic covariances between environments. The model can be regarded as a random effects analogue of AMMI (additive main effects and multiplicative interactions). We illustrate the method using a large set of MET data from a South Australian barley breeding program.

19.
There has been much work done in nest survival analysis using the maximum likelihood (ML) method. The ML method suffers from the instability of numerical calculations when models having a large number of unknown parameters are used. A Bayesian approach of model fitting is developed to estimate age-specific survival rates for nesting studies using a large class of prior distributions. The computation is done by Gibbs sampling. Some latent variables are introduced to simplify the full conditional distributions. The method is illustrated using both a real and a simulated data set. Results indicate that Bayesian analysis provides stable and accurate estimates of nest survival rates.
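A deliberately simplified Bayesian sketch (not the paper's Gibbs sampler with latent variables): if exposure days and failures are tabulated by nest age, each age-specific daily survival rate has a conjugate Beta posterior that can be sampled directly. The counts below are invented.

```python
# Posterior draws of age-specific daily nest survival rates from Beta posteriors,
# plus the induced posterior for overall survival across the age classes.
import numpy as np

rng = np.random.default_rng(10)

# exposure[a] = nest-days observed at age a; failures[a] = failures at age a (hypothetical)
exposure = np.array([120, 110, 95, 80, 60])
failures = np.array([6, 5, 3, 2, 1])

# Beta(1, 1) prior on each daily survival rate; "successes" = exposure - failures
draws = rng.beta(1 + exposure - failures, 1 + failures, size=(5000, len(exposure)))

for age in range(len(exposure)):
    lo, med, hi = np.percentile(draws[:, age], [2.5, 50, 97.5])
    print(f"age class {age}: daily survival {med:.3f} (95% CrI {lo:.3f}-{hi:.3f})")

# Posterior for overall survival across the age classes
overall = draws.prod(axis=1)
print("overall survival across ages:", np.percentile(overall, [2.5, 50, 97.5]).round(3))
```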

20.
Clustered interval‐censored data commonly arise in many studies of biomedical research where the failure time of interest is subject to interval censoring and subjects within the same cluster are correlated. A new semiparametric frailty probit regression model is proposed to study covariate effects on the failure time by accounting for the intracluster dependence. Under the proposed normal frailty probit model, the marginal distribution of the failure time is a semiparametric probit model, the regression parameters can be interpreted as both the conditional covariate effects given frailty and the marginal covariate effects up to a multiplicative constant, and the intracluster association can be summarized by two nonparametric measures in simple and explicit form. A fully Bayesian estimation approach is developed based on the use of monotone splines for the unknown nondecreasing function and a data augmentation using normal latent variables. The proposed Gibbs sampler is straightforward to implement since all unknowns have standard form in their full conditional distributions. The proposed method performs very well in estimating the regression parameters as well as the intracluster association, and the method is robust to frailty distribution misspecifications as shown in our simulation studies. Two real‐life data sets are analyzed for illustration.
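The data-augmentation idea at the heart of such samplers can be shown on a plain Bayesian probit regression (Albert-Chib style), where truncated-normal latent variables make every full conditional standard. The frailty, monotone splines, and interval censoring of the actual model are omitted; data and prior are illustrative.

```python
# Gibbs sampler for Bayesian probit regression via truncated-normal data augmentation.
import numpy as np
from scipy.stats import truncnorm

rng = np.random.default_rng(11)
n, beta_true = 500, np.array([-0.5, 1.0])
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = (X @ beta_true + rng.normal(size=n) > 0).astype(int)

V0_inv = np.eye(2) / 100.0        # weak N(0, 100 I) prior on beta
beta = np.zeros(2)
draws = []
for it in range(3000):
    # 1. Sample latent normals z_i truncated to (0, inf) if y=1, (-inf, 0) if y=0
    mu = X @ beta
    lower = np.where(y == 1, 0.0, -np.inf)
    upper = np.where(y == 1, np.inf, 0.0)
    z = truncnorm.rvs(lower - mu, upper - mu, loc=mu, scale=1.0, random_state=rng)
    # 2. Sample beta from its multivariate-normal full conditional given z
    V = np.linalg.inv(V0_inv + X.T @ X)
    m = V @ (X.T @ z)
    beta = rng.multivariate_normal(m, V)
    if it >= 1000:
        draws.append(beta)

print("posterior means:", np.mean(draws, axis=0))   # should be near beta_true
```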
