Similar Articles

20 similar articles found.
1.

Purpose

Identification of key inputs and their effect on results from Life Cycle Assessment (LCA) models is fundamental. Because parameter importance varies greatly between cases due to the interaction of sensitivity and uncertainty, these features should not be defined a priori. However, exhaustive parametric uncertainty analyses can be complicated and demanding with both analytical and sampling methods. We therefore propose a systematic method for selecting critical parameters, based on a simplified analytical formulation that unifies the concepts of sensitivity and uncertainty in a Global Sensitivity Analysis (GSA) framework.

Methods

The proposed analytical method based on the calculation of sensitivity coefficients (SC) is evaluated against Monte Carlo sampling on traditional uncertainty assessment procedures, both for individual parameters and for full parameter sets. Three full-scale waste management scenarios are modelled with the dedicated waste LCA model EASETECH and a full range of ILCD recommended impact categories. Common uncertainty ranges of 10 % are used for all parameters, which we assume to be normally distributed. The applicability of the concepts of additivity of variances and GSA is tested on results from both uncertainty propagation methods. Then, we examine the differences in discernibility analyses results carried out with varying numbers of sampling points and parameters.

Results and discussion

The proposed analytical method agrees with the Monte Carlo results for all scenarios and impact categories, but offers a substantially simpler mathematical formulation and shorter computation times. The coefficients of variation obtained with the analytical method and with Monte Carlo differ by only 1 %, indicating that the analytical method provides a reliable representation of uncertainties and allows determination of whether a discernibility analysis is required. The additivity of variances and the GSA approach show that the uncertainty in results is determined by a limited set of important parameters. The results of the discernibility analysis based on these critical parameters vary by only 1 % from discernibility analyses based on the full parameter set, but require significantly fewer Monte Carlo runs.

Conclusions

The proposed method and GSA framework provide a fast and valuable approximation for uncertainty quantification. Uncertainty can be represented sparsely by contextually identifying important parameters in a systematic manner. The proposed method integrates with existing step-wise approaches for uncertainty analysis by introducing a global importance analysis before uncertainty propagation.
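The contrast between the two propagation routes can be sketched numerically. The model, nominal values, and 10 % normal uncertainties below are hypothetical stand-ins (not EASETECH data); the analytical variance uses first-order sensitivity coefficients combined via additivity of variances:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical LCA-style impact model (a stand-in for an EASETECH scenario)
def model(p):
    return 2.0 * p[0] + 0.5 * p[1] * p[2]

p0 = np.array([10.0, 4.0, 3.0])   # nominal parameter values
sd = 0.10 * p0                    # 10 % uncertainty, normally distributed

# Analytical route: sensitivity coefficients dI/dp_i by finite differences,
# combined through additivity of variances (first-order GSA)
eps = 1e-6
I0 = model(p0)
sc = np.array([(model(p0 + eps * np.eye(3)[i]) - I0) / eps for i in range(3)])
var_analytical = np.sum((sc * sd) ** 2)

# Sampling route: Monte Carlo propagation of the same uncertainties
# (the model expression is repeated here in vectorised form)
samples = rng.normal(p0, sd, size=(50_000, 3))
var_mc = np.var(2.0 * samples[:, 0] + 0.5 * samples[:, 1] * samples[:, 2])

# The two variances agree closely for near-linear models
print(var_analytical, var_mc)
```

For near-linear models the two estimates coincide to within sampling noise, which is the regime in which the analytical shortcut replaces the expensive sampling step.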

2.
Application of uncertainty and variability in LCA
As yet, the application of uncertainty and variability analysis is not common practice in LCAs. A proper analysis is facilitated when it is clear which types of uncertainty and variability exist in LCAs and which tools are available to deal with them. We therefore develop a framework to classify types of uncertainty and variability in LCAs. Uncertainty is divided into (1) parameter uncertainty, (2) model uncertainty, and (3) uncertainty due to choices, while variability covers (4) spatial variability, (5) temporal variability, and (6) variability between objects and sources. Probabilistic simulation is a tool for dealing with parameter uncertainty, and with variability between objects and sources, in both the inventory and the impact assessment. Uncertainty due to choices can be addressed in a scenario analysis or reduced by standardisation and peer review. The feasibility of dealing with temporal and spatial variability is limited, implying model uncertainty in LCAs. Other model uncertainties can be partly reduced by more sophisticated modelling, such as the use of non-linear inventory models in the inventory and multimedia models in the characterisation phase.

3.
Background

A common approach to the application of epidemiological models is to determine a single (point estimate) parameterisation using the information available in the literature. However, in many cases there is considerable uncertainty about parameter values, reflecting both the incomplete nature of current knowledge and natural variation, for example between farms. Furthermore, model outcomes may be highly sensitive to different parameter values. Paratuberculosis is an infection for which many of the key parameter values are poorly understood and highly variable, and for such infections there is a need to develop and apply statistical techniques which make maximal use of available data.

Results

A technique based on Latin hypercube sampling combined with a novel reweighting method was developed which enables parameter uncertainty and variability to be incorporated into a model-based framework for estimation of prevalence. The method was evaluated by applying it to a simulation of paratuberculosis in dairy herds which combines a continuous-time stochastic algorithm with model features, such as within-herd variability in disease development and shedding, that have not been previously explored in paratuberculosis models. Generated sample parameter combinations were assigned a weight, determined by quantifying the model's resultant ability to reproduce prevalence data. Once these weights are generated, the model can be used to evaluate other scenarios such as control options. To illustrate the utility of this approach, these reweighted model outputs were used to compare standard test-and-cull control strategies both individually and in combination with simple husbandry practices that aim to reduce infection rates.

Conclusions

The technique developed has been shown to be applicable to a complex model incorporating realistic control options.
For models where parameters are not well known or subject to significant variability, the reweighting scheme allowed estimated distributions of parameter values to be combined with additional sources of information, such as that available from prevalence distributions, resulting in outputs which implicitly handle variation and uncertainty. This methodology allows for more robust predictions from modelling approaches by allowing for parameter uncertainty and combining different sources of information, and is thus expected to be useful in application to a large number of disease systems.
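The sampling-and-reweighting idea can be sketched as follows. The two-parameter prevalence model, prior ranges, and observed value below are all hypothetical stand-ins for the stochastic herd model and field data:

```python
import numpy as np

rng = np.random.default_rng(1)

def latin_hypercube(n, d):
    # Stratify [0,1) into n intervals per dimension, one draw per stratum,
    # then shuffle each column to decouple the dimensions
    u = (rng.random((n, d)) + np.arange(n)[:, None]) / n
    for j in range(d):
        rng.shuffle(u[:, j])
    return u

# Hypothetical two-parameter prevalence model (a stand-in for the
# stochastic within-herd simulation)
def simulate_prevalence(beta, gamma):
    return beta / (beta + gamma)   # toy endemic prevalence

n = 1000
u = latin_hypercube(n, 2)
beta = 0.1 + 0.9 * u[:, 0]    # assumed prior ranges
gamma = 0.1 + 0.9 * u[:, 1]

observed, sigma = 0.3, 0.05   # hypothetical observed prevalence data
pred = simulate_prevalence(beta, gamma)

# Reweight each sampled combination by its ability to reproduce the data
w = np.exp(-0.5 * ((pred - observed) / sigma) ** 2)
w /= w.sum()

# Weighted outputs now implicitly carry parameter uncertainty and variability
print(np.sum(w * pred))
```

The same weights can then score any other model output (e.g. a control scenario), which is how the reweighted ensemble is reused downstream.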

4.
Often there is substantial uncertainty in the selection of confounders when estimating the association between an exposure and health. We define this type of uncertainty as 'adjustment uncertainty'. We propose a general statistical framework for handling adjustment uncertainty in exposure effect estimation for a large number of confounders, we describe a specific implementation, and we develop associated visualization tools. Theoretical results and simulation studies show that the proposed method provides consistent estimators of the exposure effect and its variance. We also show that, when the goal is to estimate an exposure effect accounting for adjustment uncertainty, Bayesian model averaging with posterior model probabilities approximated using information criteria can fail to estimate the exposure effect and can over- or underestimate its variance. We compare our approach to Bayesian model averaging using time series data on levels of fine particulate matter and mortality.

5.
Information about the enzyme kinetics in a metabolic network will enable understanding of the function of the network and quantitative prediction of the network responses to genetic and environmental perturbations. Despite recent advances in experimental techniques, such information is limited, and existing experimental data show extensive variation and are based on in vitro experiments. In this article, we present a computational framework based on the well-established (log)linear formalism of metabolic control analysis. The framework employs a Monte Carlo sampling procedure to simulate the uncertainty in the kinetic data and applies statistical tools for the identification of the rate-limiting steps in metabolic networks. We applied the proposed framework to a branched biosynthetic pathway and the yeast glycolysis pathway. Analysis of the results allowed us to interpret and predict the responses of metabolic networks to genetic and environmental changes, and to gain insights into how uncertainty in the kinetic mechanisms and kinetic parameters propagates into the uncertainty in predicting network responses. Practical applications of the proposed approach include the identification of drug targets for metabolic diseases and guidance for design strategies in metabolic engineering for the purposeful manipulation of the metabolism of industrial organisms.
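A minimal sketch of the Monte Carlo step, assuming a toy three-step linear pathway treated as kinetic "resistances" in series (not the paper's (log)linear glycolysis model):

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy linear three-step pathway approximated as kinetic "resistances" in
# series: J = X0 / (1/k1 + 1/k2 + 1/k3). The flux control coefficients are
# then C_i = (1/k_i) / sum_j (1/k_j), which obey the summation theorem
# sum_i C_i = 1. All parameter values here are hypothetical.
k_nominal = np.array([2.0, 0.5, 5.0])

# Monte Carlo over the uncertain kinetic parameters (lognormal spread)
K = rng.lognormal(np.log(k_nominal), 0.5, size=(10_000, 3))
R = 1.0 / K
C = R / R.sum(axis=1, keepdims=True)

# Step 2 (smallest nominal k) is most often rate-limiting, but kinetic
# uncertainty turns "the" rate-limiting step into a distribution
print(C.mean(axis=0), (np.argmax(C, axis=1) == 1).mean())
```

The fraction of samples in which each step carries the largest control coefficient is exactly the kind of statistical identification of rate-limiting steps the abstract describes.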

6.
Implicit and explicit use of expert knowledge to inform ecological analyses is becoming increasingly common because it often represents the sole source of information in many circumstances. Thus, there is a need to develop statistical methods that explicitly incorporate expert knowledge, and can successfully leverage this information while properly accounting for associated uncertainty during analysis. Studies of cause‐specific mortality provide an example of implicit use of expert knowledge when causes‐of‐death are uncertain and assigned based on the observer's knowledge of the most likely cause. To explicitly incorporate this use of expert knowledge and the associated uncertainty, we developed a statistical model for estimating cause‐specific mortality using a data augmentation approach within a Bayesian hierarchical framework. Specifically, for each mortality event, we elicited the observer's belief of cause‐of‐death by having them specify the probability that the death was due to each potential cause. These probabilities were then used as prior predictive values within our framework. This hierarchical framework permitted a simple and rigorous estimation method that was easily modified to include covariate effects and regularizing terms. Although applied to survival analysis, this method can be extended to any event‐time analysis with multiple event types, for which there is uncertainty regarding the true outcome. We conducted simulations to determine how our framework compared to traditional approaches that use expert knowledge implicitly and assume that cause‐of‐death is specified accurately. Simulation results supported the inclusion of observer uncertainty in cause‐of‐death assignment in modeling of cause‐specific mortality to improve model performance and inference. Finally, we applied the statistical model we developed and a traditional method to cause‐specific survival data for white‐tailed deer, and compared results. 
We demonstrate that model selection results changed between the two approaches, and incorporating observer knowledge in cause‐of‐death increased the variability associated with parameter estimates when compared to the traditional approach. These differences between the two approaches can impact reported results, and therefore, it is critical to explicitly incorporate expert knowledge in statistical methods to ensure rigorous inference.

7.
Standard bioprocess conditions have been widely applied for the microbial conversion of raw material to essential industrial products. Successful metabolic engineering (ME) strategies require a comprehensive framework to manage the complexity embedded in cellular metabolism, to explore the impacts of bioprocess conditions on the cellular responses, and to deal with the uncertainty of the physicochemical parameters. We have recently developed a computational and statistical framework that is based on Metabolic Control Analysis and uses a Monte Carlo method to simulate the uncertainty in the values of the system parameters [Wang, L., Birol, I., Hatzimanikatis, V., 2004. Metabolic control analysis under uncertainty: framework development and case studies. Biophys. J. 87(6), 3750-3763]. In this work, we generalize this framework to incorporate the central cellular processes, such as cell growth, and different bioprocess conditions, such as different types of bioreactors. The framework provides the mathematical basis for the quantification of the interactions between intracellular metabolism and extracellular conditions, and it is readily applicable to the identification of optimal ME targets for the improvement of industrial processes [Wang, L., Hatzimanikatis, V., 2005. Metabolic engineering under uncertainty. II: analysis of yeast metabolism. Submitted].

8.
We use bootstrap simulation to characterize uncertainty in parametric distributions, including Normal, Lognormal, Gamma, Weibull, and Beta, commonly used to represent variability in probabilistic assessments. Bootstrap simulation enables one to estimate sampling distributions for sample statistics, such as distribution parameters, even when analytical solutions are not available. Using a two-dimensional framework for both uncertainty and variability, uncertainties in cumulative distribution functions were simulated. The mathematical properties of uncertain frequency distributions were evaluated in a series of case studies during which the parameters of each type of distribution were varied for sample sizes of 5, 10, and 20. For positively skewed distributions such as Lognormal, Weibull, and Gamma, the range of uncertainty is widest at the upper tail of the distribution. For symmetric unbounded distributions, such as Normal, the uncertainties are widest at both tails of the distribution. For bounded distributions, such as Beta, the uncertainties are typically widest in the central portions of the distribution. Bootstrap simulation enables complex dependencies between sampling distributions to be captured. The effects of uncertainty, variability, and parameter dependencies were studied for several generic functional forms of models, including models in which two-dimensional random variables are added, multiplied, and divided, to show the sensitivity of model results to different assumptions regarding model input distributions, ranges of variability, and ranges of uncertainty and to show the types of errors that may be obtained from mis-specification of parameter dependence. A total of 1,098 case studies were simulated. In some cases, counter-intuitive results were obtained. 
For example, the point value of the 95th percentile of uncertainty for the 95th percentile of variability of the product of four Gamma or Weibull distributions decreases as the coefficient of variation of each model input increases and, therefore, may not provide a conservative estimate. Failure to properly characterize parameter uncertainties and their dependencies can lead to orders-of-magnitude mis-estimates of both variability and uncertainty. In many cases, the numerical stability of two-dimensional simulation results was found to decrease as the coefficient of variation of the inputs increases. We discuss the strengths and limitations of bootstrap simulation as a method for quantifying uncertainty due to random sampling error.
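The core bootstrap loop can be sketched for a Lognormal fit to a small sample; the distribution parameters and sample size below are illustrative, not taken from the study:

```python
import numpy as np

rng = np.random.default_rng(2)

# Small sample (n = 10) from an assumed Lognormal variability distribution
data = rng.lognormal(mean=1.0, sigma=0.5, size=10)

# Bootstrap the fitted parameters to characterise sampling uncertainty:
# resample with replacement, refit, and record a tail statistic each time
B = 2000
p95 = np.empty(B)
for b in range(B):
    resample = rng.choice(data, size=data.size, replace=True)
    mu, sd = np.log(resample).mean(), np.log(resample).std(ddof=1)
    p95[b] = np.exp(mu + 1.645 * sd)   # 95th percentile of variability

# Uncertainty interval around the 95th variability percentile (the upper
# tail, where uncertainty is widest for positively skewed distributions)
lo, hi = np.percentile(p95, [2.5, 97.5])
print(lo, hi)
```

Sweeping the outer loop over distribution families and sample sizes (5, 10, 20) reproduces the two-dimensional uncertainty-over-variability structure the abstract describes.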

9.
Identifying biomarkers that are indicative of a phenotypic state is difficult because of the amount of natural variability which exists in any population. While there are many different algorithms to select biomarkers, previous investigation shows the sensitivity and flexibility of support vector machines (SVM) make them an attractive candidate. Here we evaluate the ability of support vector machine recursive feature elimination (SVM-RFE) to identify potential metabolic biomarkers in liquid chromatography mass spectrometry untargeted metabolite datasets. Two separate experiments are considered, a low variance (low biological noise) prokaryotic stress experiment, and a high variance (high biological noise) mammalian stress experiment. For each experiment, the phenotypic response to stress is metabolically characterized. SVM-based classification and metabolite ranking is undertaken using a systematically reduced number of biological replicates to evaluate the impact of sample size on biomarker reproducibility and robustness. Our results indicate the highest ranked 1 % of metabolites, the most predictive of the physiological state, were identified by SVM-RFE even when the number of training examples was small (≥3) and the coefficient of variation was high (>0.5). An accuracy analysis shows filtering with recursive feature elimination measurably improves SVM classification accuracy, an effect that is pronounced when the number of training examples is small. These results indicate that SVM-RFE can be successful at biomarker identification even in challenging scenarios where the training examples are noisy and the number of biological replicates is low.
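The recursive-elimination idea can be sketched on synthetic data. Note the ranking weight below is a difference-of-class-means discriminant, a lightweight stand-in for the linear SVM weight vector that SVM-RFE actually uses; the data and effect size are invented:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy "metabolite" intensity matrix: 20 samples x 50 features, with
# features 0 and 1 genuinely shifted between the two phenotypes
X = rng.normal(size=(20, 50))
y = np.repeat([0, 1], 10)
X[y == 1, :2] += 3.0

# Recursive feature elimination: rank the surviving features by the squared
# weight of a linear discriminant (difference of class means here, standing
# in for the linear SVM weight vector) and drop the lowest-ranked feature
# until only the requested number remains.
active = list(range(X.shape[1]))
while len(active) > 5:
    A = X[:, active]
    w = A[y == 1].mean(axis=0) - A[y == 0].mean(axis=0)
    active.pop(int(np.argmin(w ** 2)))

print(sorted(active)[:2])   # the truly informative features survive: [0, 1]
```

Refitting the ranking model at every elimination step, rather than ranking once, is what distinguishes RFE from simple univariate filtering.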

10.
Habitat suitability index (HSI) models are commonly used to predict habitat quality and species distributions and are used to develop biological surveys, assess reserve and management priorities, and anticipate possible change under different management or climate change scenarios. Important management decisions may be based on model results, often without a clear understanding of the level of uncertainty associated with model outputs. We present an integrated methodology to assess the propagation of uncertainty from both inputs and structure of the HSI models on model outputs (uncertainty analysis: UA) and relative importance of uncertain model inputs and their interactions on the model output uncertainty (global sensitivity analysis: GSA). We illustrate the GSA/UA framework using simulated hydrology input data from a hydrodynamic model representing sea level changes and HSI models for two species of submerged aquatic vegetation (SAV) in southwest Everglades National Park: Vallisneria americana (tape grass) and Halodule wrightii (shoal grass). We found considerable spatial variation in uncertainty for both species, but distributions of HSI scores still allowed discrimination of sites with good versus poor conditions. Ranking of input parameter sensitivities also varied spatially for both species, with high habitat quality sites showing higher sensitivity to different parameters than low‐quality sites. HSI models may be especially useful when species distribution data are unavailable, providing means of exploiting widely available environmental datasets to model past, current, and future habitat conditions. The GSA/UA approach provides a general method for better understanding HSI model dynamics, the spatial and temporal variation in uncertainties, and the parameters that contribute most to model uncertainty. 
Including an uncertainty and sensitivity analysis in modeling efforts as part of the decision‐making framework will result in better‐informed, more robust decisions.

11.
This article evaluates the current and future potential of batch and continuous cell culture technologies via a case study based on the commercial manufacture of monoclonal antibodies. The case study compares fed‐batch culture to two perfusion technologies: spin‐filter perfusion and an emerging perfusion technology utilizing alternating tangential flow (ATF) perfusion. The operational, economic, and environmental feasibility of whole bioprocesses based on these systems was evaluated using a prototype dynamic decision‐support tool built at UCL encompassing process economics, discrete‐event simulation and uncertainty analysis, and combined with a multi‐attribute decision‐making technique so as to enable a holistic assessment. The strategies were compared across a range of scales and titres so as to visualize how their ranking changes in different industry scenarios. The deterministic analysis indicated that the ATF perfusion strategy has the potential to offer cost of goods savings of 20% when compared to conventional fed‐batch manufacturing processes when a fivefold increase in maximum viable cell densities was assumed. Savings were also seen when the ATF cell density dropped to a threefold increase over the fed‐batch strategy for most combinations of titres and production scales. In contrast, the fed‐batch strategy performed better in terms of environmental sustainability with a lower water and consumable usage profile. The impact of uncertainty and failure rates on the feasibility of the strategies was explored using Monte Carlo simulation. The risk analysis results demonstrated the enhanced robustness of the fed‐batch process but also highlighted that the ATF process was still the most cost‐effective option even under uncertainty. The multi‐attribute decision‐making analysis provided insight into the limited use of spin‐filter perfusion strategies in industry. 
The resulting sensitivity spider plots enabled identification of the critical ratio of weightings of economic and operational benefits that affect the choice between ATF perfusion and fed‐batch strategies. Biotechnol. Bioeng. 2013; 110: 206–219. © 2012 Wiley Periodicals, Inc.
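The style of risk analysis described can be sketched with purely illustrative numbers; the 20 % saving, titre distribution, and failure rate below are assumptions for the sketch, not the study's figures:

```python
import numpy as np

rng = np.random.default_rng(8)

# Hypothetical cost-of-goods (COG) comparison under uncertainty: fed-batch
# vs an ATF perfusion strategy assumed to carry a 20 % deterministic COG
# saving but an extra 5 % batch-failure risk (all numbers illustrative)
n = 20_000
titre = rng.lognormal(np.log(5.0), 0.2, n)        # g/L, uncertain
cog_fb = 100.0 / titre                            # $/g scales with 1/titre
fail_atf = rng.random(n) < 0.05                   # extra failure events
cog_atf = 0.8 * cog_fb * np.where(fail_atf, 1.5, 1.0)

# Probability that ATF remains the cheaper option despite its risk profile
print((cog_atf < cog_fb).mean())
```

Running such a simulation across a grid of scales and titres is what lets the ranking of strategies be mapped out for different industry scenarios.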

12.
We consider covariate selection, and the ensuing model uncertainty, in the context of Cox regression. The perspective we take is probabilistic, and we handle it within a Bayesian framework. One of the critical elements in variable/model selection is choosing a suitable prior for model parameters. Here, we derive the so-called conventional prior approach and propose a comprehensive implementation that results in an automatic procedure. Our simulation studies and real applications show improvements over the existing literature. For the sake of reproducibility, and for its intrinsic interest to practitioners, a web application requiring minimal statistical knowledge implements the proposed approach.

13.
Natural populations often show enhanced genetic drift consistent with a strong skew in their offspring number distribution. The skew arises because the variability of family sizes is either inherently strong or amplified by population expansions. The resulting allele-frequency fluctuations are large and, therefore, challenge standard models of population genetics, which assume sufficiently narrow offspring distributions. While the neutral dynamics backward in time can be readily analyzed using coalescent approaches, we still know little about the effect of broad offspring distributions on the forward-in-time dynamics, especially with selection. Here, we employ an asymptotic analysis combined with a scaling hypothesis to demonstrate that over-dispersed frequency trajectories emerge from the competition of conventional forces, such as selection or mutations, with an emerging time-dependent sampling bias against the minor allele. The sampling bias arises from the characteristic time-dependence of the largest sampled family size within each allelic type. Using this insight, we establish simple scaling relations for allele-frequency fluctuations, fixation probabilities, extinction times, and the site frequency spectra that arise when offspring numbers are distributed according to a power law.
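The effect of broad offspring distributions on frequency fluctuations can be sketched by resampling a toy population, with family sizes drawn from a Pareto power law as a stand-in for the broad distributions discussed above (population size and exponent are illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)

N = 1000          # population size
x0 = 0.5          # initial allele frequency

def next_freq(x, alpha=None):
    # One generation of resampling. With alpha set, family sizes are drawn
    # from a heavy-tailed power law (Pareto), so a single large family can
    # swing the allele frequency; with alpha=None, offspring numbers are
    # effectively narrow (standard Wright-Fisher binomial sampling).
    if alpha is None:
        return rng.binomial(N, x) / N
    alleles = np.concatenate([np.ones(int(N * x)), np.zeros(N - int(N * x))])
    fam = rng.pareto(alpha, N) + 1.0          # heavy-tailed family sizes
    return np.average(alleles, weights=fam)

reps = 5000
wf = np.array([next_freq(x0) for _ in range(reps)])
heavy = np.array([next_freq(x0, alpha=1.5) for _ in range(reps)])

# Heavy-tailed offspring numbers give over-dispersed frequency fluctuations
print(heavy.var() / wf.var())
```

The variance ratio well above one is the over-dispersion the abstract analyzes; the largest sampled family dominating the update is exactly the sampling-bias mechanism described.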

14.
Composite indicators (CIs) are increasingly used to measure and track environmental systems. However, they have faced criticism for not accounting for uncertainties and their often arbitrary nature. This review highlights methodological challenges and uncertainties involved in creating CIs and provides advice on how to improve future CI development in practice. Linguistic and epistemic uncertainties enter CIs at different stages of development and may be amplified or reduced based on subjective decisions during construction. Lack of transparency about why decisions were made can risk impeding proper review and iterative development. Research on uncertainty in CIs currently focuses on how different construction decisions affect the overall results and is explored using sensitivity and uncertainty analysis. Much less attention is given to uncertainties arising from the theoretical framework underpinning the CI, and the sub-indicator selection process. This often lacks systematic rigour, repeatability and clarity. We recommend use of systems modelling as well as systematic elicitation and engagement during CI development in order to address these issues. Composite indicators make trends in complex environmental systems accessible to wider stakeholder groups, including policy makers. Without proper discussion and exposure of uncertainty, however, they risk misleading their users through false certainty or misleading interpretations. This review offers guidance for future environmental CI construction and users of existing CIs, hence supporting their iterative development and effective use in policy-making.

15.
MicroRNAs (miRNAs) play a crucial role in the maintenance of cellular homeostasis by regulating the expression of their target genes. As such, the dysregulation of miRNA expression has been frequently linked to cancer. With rapidly accumulating molecular data linked to patient outcomes, the identification of robust multi-omic molecular markers is critical for clinical impact. While previous bioinformatic tools have been developed to identify potential biomarkers in cancer, these methods do not allow rapid classification of oncogenes versus tumor suppressors while taking into account robust differential expression, cutoffs, p-values and non-normality of the data. Here, we propose a methodology, the Robust Selection Algorithm (RSA), that addresses these important problems in big-data omics analysis. The robustness of the survival analysis is ensured by identification of optimal cutoff values of omics expression, strengthened by p-values computed through intensive random resampling that takes into account any non-normality in the data, and by integration into multi-omic functional networks. We analyzed pan-cancer miRNA patient data to identify functional pathways involved in cancer progression that are associated with the miRNAs selected by RSA. Our approach demonstrates how existing survival analysis techniques can be integrated with a functional network analysis framework to efficiently identify promising biomarkers and novel therapeutic candidates across diseases.

16.
Climate change will modify forest pest outbreak characteristics, although there are disagreements regarding the specifics of these changes. A large part of this variability may be attributed to model specifications. As a case study, we developed a consensus model predicting spruce budworm (SBW, Choristoneura fumiferana [Clem.]) outbreak duration using two different predictor data sets and six different correlative methods. The model was used to project outbreak duration and the uncertainty associated with using different data sets and correlative methods (= model‐specification uncertainty) for 2011–2040, 2041–2070 and 2071–2100, according to three forcing scenarios (RCP 2.6, RCP 4.5 and RCP 8.5). The consensus model showed very high explanatory power and low bias. The model projected a more important northward shift and decrease in outbreak duration under the RCP 8.5 scenario. However, variation in single‐model projections increases with time, making future projections highly uncertain. Notably, the magnitude of the shifts in northward expansion, overall outbreak duration and the patterns of outbreak duration at the southern edge were highly variable according to the predictor data set and correlative method used. We also demonstrated that variation in forcing scenarios contributed only slightly to the uncertainty of model projections compared with the two sources of model‐specification uncertainty. Our approach helped to quantify model‐specification uncertainty in future forest pest outbreak characteristics. It may contribute to sounder decision‐making by acknowledging the limits of the projections and help to identify areas where model‐specification uncertainty is high. As such, we further stress that this uncertainty should be strongly considered when making forest management plans, notably by adopting adaptive management strategies so as to reduce future risks.

17.
18.

Background

The dynamics of biochemical networks can be modelled by systems of ordinary differential equations. However, these networks are typically large and contain many parameters. Therefore, model reduction procedures, such as lumping, sensitivity analysis and time-scale separation, are used to simplify models. Although there are many different model reduction procedures, the evaluation of reduced models is difficult and depends on the parameter values of the full model. There is a lack of criteria for evaluating reduced models when the model parameters are uncertain.

Results

We developed a method to compare reduced models and select the model that results in dynamics and uncertainty similar to the original model. We simulated different parameter sets from the assumed parameter distributions. Then, we compared all reduced models across all parameter sets using cluster analysis. The clusters revealed which of the reduced models were similar to the original model in dynamics and variability. This allowed us to select the smallest reduced model that best approximated the full model. Through examples, we showed that when parameter uncertainty is large, the model can be reduced further, and when parameter uncertainty is small, models should not be reduced much.

Conclusions

A method to compare different models under parameter uncertainty is developed. It can be applied to any model reduction method. We also showed that the amount of parameter uncertainty influences the choice of reduced models.
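The comparison procedure can be sketched with a hypothetical lumping example (a two-step decay chain reduced to one effective step, not the paper's models): sample parameter sets, simulate both models, and judge the reduction error relative to the full model's own ensemble spread:

```python
import numpy as np

rng = np.random.default_rng(9)

t = np.linspace(0.0, 10.0, 50)

# Full model: two-step decay A -> B -> C (analytic solution for C(t));
# reduced model lumps it into a single effective step. Both are
# hypothetical illustrations, not the paper's biochemical networks.
def full(k1, k2):
    return 1 + (k1 * np.exp(-k2 * t) - k2 * np.exp(-k1 * t)) / (k2 - k1)

def reduced(k1, k2):
    return 1 - np.exp(-k1 * t)   # valid when the first step is limiting

def mismatch(spread):
    # Sample parameter sets from the assumed uncertainty distribution and
    # compare the reduction error to the full model's ensemble variability
    ks = np.exp(rng.normal(np.log([0.5, 5.0]), spread, size=(300, 2)))
    F = np.array([full(k1, k2) for k1, k2 in ks])
    R = np.array([reduced(k1, k2) for k1, k2 in ks])
    return np.abs(F - R).mean() / (F.std(axis=0).mean() + 1e-12)

# Relative to the ensemble spread, the reduction error matters less when
# parameter uncertainty is large, consistent with the conclusion above
print(mismatch(0.05), mismatch(0.8))
```

When the parameter ensemble's own variability dwarfs the lumping error, the reduced model is indistinguishable from the full one in practice, which is why larger uncertainty justifies stronger reduction.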

19.
High throughput technologies, such as gene expression arrays and protein mass spectrometry, allow one to simultaneously evaluate thousands of potential biomarkers that could distinguish different tissue types. Of particular interest here is distinguishing between cancerous and normal organ tissues. We consider statistical methods to rank genes (or proteins) in regards to differential expression between tissues. Various statistical measures are considered, and we argue that two measures related to the Receiver Operating Characteristic Curve are particularly suitable for this purpose. We also propose that sampling variability in the gene rankings be quantified, and suggest using the "selection probability function," the probability distribution of rankings for each gene. This is estimated via the bootstrap. A real dataset, derived from gene expression arrays of 23 normal and 30 ovarian cancer tissues, is analyzed. Simulation studies are also used to assess the relative performance of different statistical gene ranking measures and our quantification of sampling variability. Our approach leads naturally to a procedure for sample-size calculations, appropriate for exploratory studies that seek to identify differentially expressed genes.
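The bootstrap estimate of the "selection probability function" can be sketched on synthetic expression data. The sample sizes mirror the 23/30 split mentioned above, but the effect size and top-10 selection criterion are assumptions of this sketch:

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy expression matrix: 23 "normal" vs 30 "cancer" samples, 100 genes,
# with gene 0 genuinely differentially expressed (hypothetical data)
n0, n1, G = 23, 30, 100
X = rng.normal(size=(n0 + n1, G))
X[n0:, 0] += 2.0
labels = np.r_[np.zeros(n0), np.ones(n1)]

def auc(x, y):
    # Empirical AUC: probability a cancer sample exceeds a normal sample
    return (x[y == 1][:, None] > x[y == 0][None, :]).mean()

# Bootstrap the ranking to estimate each gene's selection probability:
# how often does it land in the top 10 under resampling?
B, top_hits = 500, np.zeros(G)
for _ in range(B):
    idx = np.r_[rng.choice(n0, n0), n0 + rng.choice(n1, n1)]
    scores = np.array([auc(X[idx, g], labels[idx]) for g in range(G)])
    top_hits[np.argsort(-np.abs(scores - 0.5))[:10]] += 1

print(top_hits[0] / B)   # selection probability of the informative gene
```

The full distribution of bootstrap ranks per gene, rather than just this top-10 frequency, is what the selection probability function records.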

20.