Similar Articles
20 similar articles found (search time: 15 ms)
1.
2.
Bayesian inference in ecology   (Total citations: 14; self-citations: 1; citations by others: 13)
Bayesian inference is an important statistical tool that is increasingly being used by ecologists. In a Bayesian analysis, information available before a study is conducted is summarized in a quantitative model or hypothesis: the prior probability distribution. Bayes’ Theorem uses the prior probability distribution and the likelihood of the data to generate a posterior probability distribution. Posterior probability distributions are an epistemological alternative to P‐values and provide a direct measure of the degree of belief that can be placed on models, hypotheses, or parameter estimates. Moreover, Bayesian information‐theoretic methods provide robust measures of the probability of alternative models, and multiple models can be averaged into a single model that reflects uncertainty in model construction and selection. These methods are demonstrated through a simple worked example. Ecologists are using Bayesian inference in studies that range from predicting single‐species population dynamics to understanding ecosystem processes. Not all ecologists, however, appreciate the philosophical underpinnings of Bayesian inference. In particular, Bayesians and frequentists differ in their definition of probability and in their treatment of model parameters as random variables or estimates of true values. These assumptions must be addressed explicitly before deciding whether or not to use Bayesian methods to analyse ecological data.
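The prior-times-likelihood update described in this abstract can be sketched numerically. A minimal grid approximation, with a Beta(2, 2) prior and 7 detections in 10 surveys chosen purely for illustration:

```python
import numpy as np

# Grid approximation of Bayes' Theorem for a detection probability theta.
# Hypothetical numbers: Beta(2, 2) prior, 7 detections in 10 surveys.
theta = np.linspace(0.001, 0.999, 999)
prior = theta * (1 - theta)                 # Beta(2, 2) kernel (unnormalised)
likelihood = theta**7 * (1 - theta)**3      # binomial kernel for 7/10 detections
posterior = prior * likelihood
posterior /= posterior.sum()                # normalise on the grid

posterior_mean = (theta * posterior).sum()
# Conjugacy check: the posterior is Beta(9, 5), whose mean is 9/14.
```

The grid mean matches the analytic Beta(9, 5) posterior mean, which is what makes this a convenient self-check when moving to non-conjugate models.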

3.
Habitat suitability index (HSI) models are commonly used to predict habitat quality and species distributions and are used to develop biological surveys, assess reserve and management priorities, and anticipate possible change under different management or climate change scenarios. Important management decisions may be based on model results, often without a clear understanding of the level of uncertainty associated with model outputs. We present an integrated methodology to assess the propagation of uncertainty from both inputs and structure of the HSI models on model outputs (uncertainty analysis: UA) and relative importance of uncertain model inputs and their interactions on the model output uncertainty (global sensitivity analysis: GSA). We illustrate the GSA/UA framework using simulated hydrology input data from a hydrodynamic model representing sea level changes and HSI models for two species of submerged aquatic vegetation (SAV) in southwest Everglades National Park: Vallisneria americana (tape grass) and Halodule wrightii (shoal grass). We found considerable spatial variation in uncertainty for both species, but distributions of HSI scores still allowed discrimination of sites with good versus poor conditions. Ranking of input parameter sensitivities also varied spatially for both species, with high habitat quality sites showing higher sensitivity to different parameters than low‐quality sites. HSI models may be especially useful when species distribution data are unavailable, providing means of exploiting widely available environmental datasets to model past, current, and future habitat conditions. The GSA/UA approach provides a general method for better understanding HSI model dynamics, the spatial and temporal variation in uncertainties, and the parameters that contribute most to model uncertainty. 
Including an uncertainty and sensitivity analysis in modeling efforts as part of the decision‐making framework will result in better‐informed, more robust decisions.
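The variance-based idea behind a global sensitivity analysis can be illustrated on a toy model with a known answer. The model Y = X1 + 2·X2 and all sample sizes below are assumptions for illustration, not the paper's HSI models:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model with a known answer: Y = X1 + 2*X2, X1, X2 ~ Uniform(0, 1),
# so the first-order sensitivity indices are analytically S1 = 0.2, S2 = 0.8.
def model(x1, x2):
    return x1 + 2.0 * x2

var_y = model(rng.uniform(size=200_000), rng.uniform(size=200_000)).var()

def first_order_index(which, n_outer=2000, n_inner=2000):
    # Brute-force Var(E[Y | X_i]) / Var(Y) via a double Monte Carlo loop.
    cond_means = np.empty(n_outer)
    for k in range(n_outer):
        fixed = rng.uniform()
        free = rng.uniform(size=n_inner)
        y = model(fixed, free) if which == 1 else model(free, fixed)
        cond_means[k] = y.mean()
    return cond_means.var() / var_y

S1, S2 = first_order_index(1), first_order_index(2)
```

The double loop is deliberately naive; production GSA would use a more efficient estimator, but the quantity being estimated is the same.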

4.
Using models to simulate and analyze biological networks requires principled approaches to parameter estimation and model discrimination. We use Bayesian and Monte Carlo methods to recover the full probability distributions of free parameters (initial protein concentrations and rate constants) for mass‐action models of receptor‐mediated cell death. The width of the individual parameter distributions is largely determined by non‐identifiability but covariation among parameters, even those that are poorly determined, encodes essential information. Knowledge of joint parameter distributions makes it possible to compute the uncertainty of model‐based predictions whereas ignoring it (e.g., by treating parameters as a simple list of values and variances) yields nonsensical predictions. Computing the Bayes factor from joint distributions yields the odds ratio (~20‐fold) for competing ‘direct’ and ‘indirect’ apoptosis models having different numbers of parameters. Our results illustrate how Bayesian approaches to model calibration and discrimination combined with single‐cell data represent a generally useful and rigorous approach to discriminate between competing hypotheses in the face of parametric and topological uncertainty.
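A Bayes-factor comparison of competing models can be sketched in closed form for a toy case (hypothetical coin-flip data, not the apoptosis models of the paper):

```python
from math import comb

# Toy Bayes factor: M1 fixes a success probability at 0.5;
# M2 gives it a flat Beta(1, 1) prior. Data: 15 successes in 20 trials.
n, k = 20, 15
ml_m1 = comb(n, k) * 0.5**n   # marginal likelihood of M1 (no free parameter)
# For M2, integrating comb(n, k) * p**k * (1-p)**(n-k) over a flat prior
# gives exactly 1 / (n + 1).
ml_m2 = 1.0 / (n + 1)
bayes_factor = ml_m2 / ml_m1  # odds in favour of the extra-parameter model
```

The marginal likelihood automatically penalises the extra parameter, which is why the ratio, not the maximised likelihoods, is compared.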

5.
Goal, Scope and Background
Decision-makers demand information about the range of possible outcomes of their actions. Therefore, for developing Life Cycle Assessment (LCA) as a decision-making tool, Life Cycle Inventory (LCI) databases should provide uncertainty information. Approaches for incorporating uncertainty should be selected properly contingent upon the characteristics of the LCI database. For example, in industry-based LCI databases where large amounts of up-to-date process data are collected, statistical methods might be useful for quantifying the uncertainties. However, in practice, there is still a lack of knowledge as to which statistical methods are most effective for obtaining the required parameters. Another concern from the industry's perspective is the confidentiality of the process data. The aim of this paper is to propose a procedure for incorporating uncertainty information with statistical methods in industry-based LCI databases, which at the same time preserves the confidentiality of individual data.

Methods
The proposed procedure for taking uncertainty in industry-based databases into account has two components: continuous probability distributions fitted to scattering unit process data, and rank order correlation coefficients between inventory flows. The type of probability distribution is selected using statistical methods such as goodness-of-fit statistics or experience based approaches. Parameters of probability distributions are estimated using maximum likelihood estimation. Rank order correlation coefficients are calculated for inventory items in order to preserve data interdependencies. Such probability distributions and rank order correlation coefficients may be used in Monte Carlo simulations in order to quantify uncertainties in LCA results as probability distribution.

Results and Discussion
A case study is performed on the technology selection of polyethylene terephthalate (PET) chemical recycling systems. Three processes are evaluated based on CO2 reduction compared to the conventional incineration technology. To illustrate the application of the proposed procedure, assumptions were made about the uncertainty of LCI flows. The application of the probability distributions and the rank order correlation coefficient is shown, and a sensitivity analysis is performed. A potential use of the results of the hypothetical case study is discussed.

Conclusion and Outlook
The case study illustrates how the uncertainty information in LCI databases may be used in LCA. Since the actual scattering unit process data were not available for the case study, the uncertainty distribution of the LCA result is hypothetical. However, the merit of adopting the proposed procedure has been illustrated: more informed decision-making becomes possible, basing the decisions on the significance of the LCA results. With this illustration, the authors hope to encourage both database developers and data suppliers to incorporate uncertainty information in LCI databases.
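The two components of the procedure described above (distribution fitting by maximum likelihood, rank order correlation between flows) can be sketched on synthetic data; the flow names, sample sizes, and parameter values are invented:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic "unit process" data: two correlated inventory flows (names invented).
energy_in = rng.lognormal(mean=1.0, sigma=0.3, size=500)
co2_out = 0.8 * energy_in * rng.lognormal(mean=0.0, sigma=0.1, size=500)

# Maximum likelihood fit of a Lognormal is analytic: mean/std of the log-data.
mu_hat, sigma_hat = np.log(energy_in).mean(), np.log(energy_in).std()

# Rank order (Spearman) correlation between the two flows, from scratch.
def ranks(x):
    return np.argsort(np.argsort(x))

rank_corr = np.corrcoef(ranks(energy_in), ranks(co2_out))[0, 1]

# The fitted distribution and rank correlation would feed a Monte Carlo run;
# here we only redraw the fitted marginal (independence assumed for brevity).
mc_samples = rng.lognormal(mean=mu_hat, sigma=sigma_hat, size=10_000)
```

Publishing only the fitted parameters and the rank correlations, rather than raw process data, is what preserves confidentiality in this scheme.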

6.
We use bootstrap simulation to characterize uncertainty in parametric distributions, including Normal, Lognormal, Gamma, Weibull, and Beta, commonly used to represent variability in probabilistic assessments. Bootstrap simulation enables one to estimate sampling distributions for sample statistics, such as distribution parameters, even when analytical solutions are not available. Using a two-dimensional framework for both uncertainty and variability, uncertainties in cumulative distribution functions were simulated. The mathematical properties of uncertain frequency distributions were evaluated in a series of case studies during which the parameters of each type of distribution were varied for sample sizes of 5, 10, and 20. For positively skewed distributions such as Lognormal, Weibull, and Gamma, the range of uncertainty is widest at the upper tail of the distribution. For symmetric unbounded distributions, such as Normal, the uncertainties are widest at both tails of the distribution. For bounded distributions, such as Beta, the uncertainties are typically widest in the central portions of the distribution. Bootstrap simulation enables complex dependencies between sampling distributions to be captured. The effects of uncertainty, variability, and parameter dependencies were studied for several generic functional forms of models, including models in which two-dimensional random variables are added, multiplied, and divided, to show the sensitivity of model results to different assumptions regarding model input distributions, ranges of variability, and ranges of uncertainty and to show the types of errors that may be obtained from mis-specification of parameter dependence. A total of 1,098 case studies were simulated. In some cases, counter-intuitive results were obtained. 
For example, the point value of the 95th percentile of uncertainty for the 95th percentile of variability of the product of four Gamma or Weibull distributions decreases as the coefficient of variation of each model input increases and, therefore, may not provide a conservative estimate. Failure to properly characterize parameter uncertainties and their dependencies can lead to orders-of-magnitude mis-estimates of both variability and uncertainty. In many cases, the numerical stability of two-dimensional simulation results was found to decrease as the coefficient of variation of the inputs increases. We discuss the strengths and limitations of bootstrap simulation as a method for quantifying uncertainty due to random sampling error.
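A minimal parametric bootstrap of a small Lognormal sample, in the spirit of the two-dimensional (uncertainty over variability) framework above; the sample size and parameters are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(7)

# A small "observed" sample (n = 10) from a Lognormal; numbers are illustrative.
observed = rng.lognormal(mean=0.0, sigma=0.5, size=10)
mu0, s0 = np.log(observed).mean(), np.log(observed).std()

# Parametric bootstrap: resample from the fitted model, refit, and record the
# fitted 95th percentile of variability each time.
n_boot = 2000
p95 = np.empty(n_boot)
for b in range(n_boot):
    resample = rng.lognormal(mean=mu0, sigma=s0, size=observed.size)
    mu_b, s_b = np.log(resample).mean(), np.log(resample).std()
    p95[b] = np.exp(mu_b + 1.645 * s_b)

# Sampling distribution (uncertainty) of the 95th percentile of variability:
lo, hi = np.percentile(p95, [2.5, 97.5])
```

The interval (lo, hi) is the uncertainty dimension wrapped around a single point on the variability dimension, exactly the kind of quantity the abstract reports as widest in the upper tail for skewed distributions.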

7.

Background

The dynamics of biochemical networks can be modelled by systems of ordinary differential equations. However, these networks are typically large and contain many parameters. Therefore model reduction procedures, such as lumping, sensitivity analysis and time-scale separation, are used to simplify models. Although there are many different model reduction procedures, the evaluation of reduced models is difficult and depends on the parameter values of the full model. There is a lack of criteria for evaluating reduced models when the model parameters are uncertain.

Results

We developed a method to compare reduced models and select the model that results in dynamics and uncertainty similar to those of the original model. We simulated different parameter sets from the assumed parameter distributions. Then, we compared all reduced models for all parameter sets using cluster analysis. The clusters revealed which of the reduced models were similar to the original model in dynamics and variability. This allowed us to select the smallest reduced model that best approximated the full model. Through examples we showed that when parameter uncertainty was large, the model should be reduced further, and when parameter uncertainty was small, models should not be reduced much.

Conclusions

A method to compare different models under parameter uncertainty is developed. It can be applied to any model reduction method. We also showed that the amount of parameter uncertainty influences the choice of reduced models.

8.
Background
A common approach to the application of epidemiological models is to determine a single (point estimate) parameterisation using the information available in the literature. However, in many cases there is considerable uncertainty about parameter values, reflecting both the incomplete nature of current knowledge and natural variation, for example between farms. Furthermore model outcomes may be highly sensitive to different parameter values. Paratuberculosis is an infection for which many of the key parameter values are poorly understood and highly variable, and for such infections there is a need to develop and apply statistical techniques which make maximal use of available data.

Results
A technique based on Latin hypercube sampling combined with a novel reweighting method was developed which enables parameter uncertainty and variability to be incorporated into a model-based framework for estimation of prevalence. The method was evaluated by applying it to a simulation of paratuberculosis in dairy herds which combines a continuous time stochastic algorithm with model features such as within herd variability in disease development and shedding, which have not been previously explored in paratuberculosis models. Generated sample parameter combinations were assigned a weight, determined by quantifying the model's resultant ability to reproduce prevalence data. Once these weights are generated the model can be used to evaluate other scenarios such as control options. To illustrate the utility of this approach these reweighted model outputs were used to compare standard test and cull control strategies both individually and in combination with simple husbandry practices that aim to reduce infection rates.

Conclusions
The technique developed has been shown to be applicable to a complex model incorporating realistic control options. For models where parameters are not well known or subject to significant variability, the reweighting scheme allowed estimated distributions of parameter values to be combined with additional sources of information, such as that available from prevalence distributions, resulting in outputs which implicitly handle variation and uncertainty. This methodology allows for more robust predictions from modelling approaches by allowing for parameter uncertainty and combining different sources of information, and is thus expected to be useful in application to a large number of disease systems.
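Latin hypercube sampling combined with likelihood-based reweighting can be sketched as follows; the toy prevalence model, parameter ranges, and noise level are assumptions, not the paratuberculosis model:

```python
import numpy as np

rng = np.random.default_rng(1)

def latin_hypercube(n, d, rng):
    # One point per stratum in every dimension: permute the n strata per column,
    # then jitter uniformly inside each stratum.
    strata = rng.permuted(np.tile(np.arange(n), (d, 1)), axis=1).T
    return (strata + rng.uniform(size=(n, d))) / n

n = 1000
u = latin_hypercube(n, 2, rng)
beta = 0.1 + 0.4 * u[:, 0]     # hypothetical transmission rate, Uniform(0.1, 0.5)
gamma = 0.05 + 0.15 * u[:, 1]  # hypothetical recovery rate, Uniform(0.05, 0.2)

# Stand-in equilibrium prevalence, reweighted against an "observed" value 0.5:
predicted = beta / (beta + gamma)
w = np.exp(-0.5 * ((predicted - 0.5) / 0.05) ** 2)  # Gaussian match to data
w /= w.sum()

posterior_mean_beta = (w * beta).sum()  # reweighted parameter summary
```

Once the weights exist, any other model output (e.g. a control scenario) can be summarised with the same weights, which is the point of the reweighting scheme.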

9.
Analytica is an easy-to-learn, easy-to-use modeling tool that allows modelers to represent what they know through influence diagrams. These diagrams show which model quantities are derived from which others and indicate by shape and color the roles that different nodes play in the model, e.g., decision variables, chance variables, outcome variables, deterministic functions, or abstractions of sub-models. A wide variety of built-in probability distributions allow uncertainties about input values to be painlessly specified and propagated through the model via a fast, professional Monte-Carlo simulation engine. Resulting uncertainties and sensitivities about any quantity in the model can be viewed with admirable ease and flexibility by selecting among probability density, cumulative distribution, confidence band, sensitivity analysis, and other displays. Analytica features clever hierarchical model management and navigation features that serious model-builders will appreciate and that novice modelers will learn from as they are led to develop well-structured, well-documented models. Simple continuous (compartmental-flow) and Markov chain dynamic simulation models can be built by paying some detailed attention to arrays and indices, although Analytica does not support true discrete-event simulation. Within its chosen domain—uncertainty propagation through influence diagram models—Analytica is by far the easiest and best tool that we have seen.

10.
Forecasts of species range shifts under climate change are fraught with uncertainties and ensemble forecasting may provide a framework to deal with such uncertainties. Here, a novel approach to partition the variance among modeled attributes, such as richness or turnover, and map sources of uncertainty in ensembles of forecasts is presented. We model the distributions of 3837 New World birds and project them into 2080. We then quantify and map the relative contribution of different sources of uncertainty from alternative methods for niche modeling, general circulation models (AOGCM), and emission scenarios. The greatest source of uncertainty in forecasts of species range shifts arises from using alternative methods for niche modeling, followed by AOGCM, and their interaction. Our results concur with previous studies that discovered that projections from alternative models can be extremely varied, but we provide a new analytical framework to examine uncertainties in models by quantifying their importance and mapping their patterns.

11.
In a recent epidemiological study, Bayesian uncertainties on lung doses have been calculated to determine lung cancer risk from occupational exposures to plutonium. These calculations used a revised version of the Human Respiratory Tract Model (HRTM) published by the ICRP. In addition to the Bayesian analyses, which give probability distributions of doses, point estimates of doses (single estimates without uncertainty) were also provided for that study using the existing HRTM as it is described in ICRP Publication 66; these are to be used in a preliminary analysis of risk. To infer the differences between the point estimates and Bayesian uncertainty analyses, this paper applies the methodology to former workers of the United Kingdom Atomic Energy Authority (UKAEA), who constituted a subset of the study cohort. The resulting probability distributions of lung doses are compared with the point estimates obtained for each worker. It is shown that mean posterior lung doses are around two- to fourfold higher than point estimates and that uncertainties on doses vary over a wide range, greater than two orders of magnitude for some lung tissues. In addition, we demonstrate that uncertainties on the parameter values, rather than the model structure, are largely responsible for these effects. Of these it appears to be the parameters describing absorption from the lungs to blood that have the greatest impact on estimates of lung doses from urine bioassay. Therefore, accurate determination of the chemical form of inhaled plutonium and the absorption parameter values for these materials is important for obtaining reliable estimates of lung doses and hence risk from occupational exposures to plutonium.

12.
Bayesian inference allows the transparent communication and systematic updating of model uncertainty as new data become available. When applied to material flow analysis (MFA), however, Bayesian inference is undermined by the difficulty of defining proper priors for the MFA parameters and quantifying the noise in the collected data. We start to address these issues by first deriving and implementing an expert elicitation procedure suitable for generating MFA parameter priors. Second, we propose to learn the data noise concurrent with the parametric uncertainty. These methods are demonstrated using a case study on the 2012 US steel flow. Eight experts are interviewed to elicit distributions on steel flow uncertainty from raw materials to intermediate goods. The experts' distributions are combined and weighted according to the expertise demonstrated in response to seeding questions. These aggregated distributions form our model parameters' informative priors. Sensible, weakly informative priors are adopted for learning the data noise. Bayesian inference is then performed to update the parametric and data noise uncertainty given MFA data collected from the United States Geological Survey and the World Steel Association. The results show a reduction in MFA parametric uncertainty when incorporating the collected data. Only a modest reduction in data noise uncertainty was observed using 2012 data; however, greater reductions were achieved when using data from multiple years in the inference. These methods generate transparent MFA and data noise uncertainties learned from data rather than pre-assumed data noise levels, providing a more robust basis for decision-making that affects the system.

13.
Climate change impact assessments are plagued with uncertainties from many sources, such as climate projections or the inadequacies in structure and parameters of the impact model. Previous studies tried to account for the uncertainty from one or two of these. Here, we developed a triple‐ensemble probabilistic assessment using seven crop models, multiple sets of model parameters and eight contrasting climate projections together to comprehensively account for uncertainties from these three important sources. We demonstrated the approach in assessing climate change impact on barley growth and yield at Jokioinen, Finland in the Boreal climatic zone and Lleida, Spain in the Mediterranean climatic zone, for the 2050s. We further quantified and compared the contribution of crop model structure, crop model parameters and climate projections to the total variance of ensemble output using Analysis of Variance (ANOVA). Based on the triple‐ensemble probabilistic assessment, the median of simulated yield change was −4% and +16%, and the probability of decreasing yield was 63% and 31% in the 2050s, at Jokioinen and Lleida, respectively, relative to 1981–2010. The contribution of crop model structure to the total variance of ensemble output was larger than that from downscaled climate projections and model parameters. The relative contribution of crop model parameters and downscaled climate projections to the total variance of ensemble output varied greatly among the seven crop models and between the two sites. The contribution of downscaled climate projections was on average larger than that of crop model parameters. This information on the uncertainty from different sources can be quite useful for model users to decide where to put the most effort when preparing or choosing models or parameters for impact analyses. 
We concluded that the triple‐ensemble probabilistic approach that accounts for the uncertainties from multiple important sources provides more comprehensive information for quantifying uncertainties in climate change impact assessments as compared to the conventional approaches that are deterministic or only account for the uncertainties from one or two of the uncertainty sources.
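The ANOVA partition of ensemble variance can be illustrated on a small made-up factorial ensemble: 3 hypothetical model structures by 4 climate projections, with numbers constructed so the interaction term vanishes exactly:

```python
import numpy as np

# Made-up yield-change ensemble (%): 3 model structures (rows) x 4 climate
# projections (columns), one run per cell; numbers are purely illustrative.
y = np.array([[ -2.0,  1.0,  4.0,  -5.0],
              [  6.0,  9.0, 12.0,   3.0],
              [-10.0, -7.0, -4.0, -13.0]])

grand = y.mean()
ss_total = ((y - grand)**2).sum()
ss_model = y.shape[1] * ((y.mean(axis=1) - grand)**2).sum()    # structure effect
ss_climate = y.shape[0] * ((y.mean(axis=0) - grand)**2).sum()  # projection effect
ss_inter = ss_total - ss_model - ss_climate                    # interaction

frac_model = ss_model / ss_total      # share of variance from model structure
frac_climate = ss_climate / ss_total  # share from climate projection
```

In this constructed example the model-structure rows dominate the variance, mirroring (by construction, not by evidence) the abstract's ordering of sources.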

14.
Stochastic Petri nets (SPNs) have been widely used to model randomness which is an inherent feature of biological systems. However, for many biological systems, some kinetic parameters may be uncertain due to incomplete, vague or missing kinetic data (often called fuzzy uncertainty), or naturally vary, e.g., between different individuals, experimental conditions, etc. (often called variability), which has prevented a wider application of SPNs that require accurate parameters. Considering the strength of fuzzy sets to deal with uncertain information, we apply a specific type of stochastic Petri nets, fuzzy stochastic Petri nets (FSPNs), to model and analyze biological systems with uncertain kinetic parameters. FSPNs combine SPNs and fuzzy sets, thereby taking into account both randomness and fuzziness of biological systems. For a biological system, SPNs model the randomness, while fuzzy sets model kinetic parameters with fuzzy uncertainty or variability by associating each parameter with a fuzzy number instead of a crisp real value. We introduce a simulation-based analysis method for FSPNs to explore the uncertainties of outputs resulting from the uncertainties associated with input parameters, which works equally well for bounded and unbounded models. We illustrate our approach using a yeast polarization model having an infinite state space, which shows the appropriateness of FSPNs in combination with simulation-based analysis for modeling and analyzing biological systems with uncertain information.
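Alpha-cut interval propagation of a fuzzy parameter, the basic mechanism FSPNs borrow from fuzzy set theory, can be sketched for a one-parameter toy model; the triangular fuzzy number and the decay model are assumptions for illustration:

```python
import numpy as np

# Triangular fuzzy number (a, m, b) for a decay rate k; values are hypothetical.
a, m, b = 0.5, 1.0, 1.5
t = 2.0  # observation time for the toy model y(t) = exp(-k * t)

cuts = []
for alpha in np.linspace(0.0, 1.0, 11):
    k_lo = a + alpha * (m - a)   # alpha-cut [k_lo, k_hi] of the fuzzy rate
    k_hi = b - alpha * (b - m)
    # y is monotone decreasing in k, so the output cut is [y(k_hi), y(k_lo)].
    cuts.append((np.exp(-k_hi * t), np.exp(-k_lo * t)))

y_core = cuts[-1]     # alpha = 1: the crisp core, exp(-m * t)
y_support = cuts[0]   # alpha = 0: the widest (support) interval
```

Stacking the nested output cuts over alpha reconstructs the fuzzy output; in an FSPN the same construction is applied to statistics of stochastic simulation runs rather than to a closed-form y(t).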

15.
Several stochastic models of character change, when implemented in a maximum likelihood framework, are known to give a correspondence between the maximum parsimony method and the method of maximum likelihood. One such model has an independently estimated branch-length parameter for each site and each branch of the phylogenetic tree. This model--the no-common-mechanism model--has many parameters, and, in fact, the number of parameters increases as fast as the alignment is extended. We take a Bayesian approach to the no-common-mechanism model and place independent gamma prior probability distributions on the branch-length parameters. We are able to analytically integrate over the branch lengths, and this allowed us to implement an efficient Markov chain Monte Carlo method for exploring the space of phylogenetic trees. We were able to reliably estimate the posterior probabilities of clades for phylogenetic trees of up to 500 sequences. However, the Bayesian approach to the problem, at least as implemented here with an independent prior on the length of each branch, does not tame the behavior of the branch-length parameters. The integrated likelihood appears to be a simple rescaling of the parsimony score for a tree, and the marginal posterior probability distribution of the length of a branch is dependent upon how the maximum parsimony method reconstructs the characters at the interior nodes of the tree. The method we describe, however, is of potential importance in the analysis of morphological character data and also for improving the behavior of Markov chain Monte Carlo methods implemented for models in which sites share a common branch-length parameter.

16.
Physicochemical models of signaling pathways are characterized by high levels of structural and parametric uncertainty, reflecting both incomplete knowledge about signal transduction and the intrinsic variability of cellular processes. As a result, these models try to predict the dynamics of systems with tens or even hundreds of free parameters. At this level of uncertainty, model analysis should emphasize statistics of systems-level properties, rather than the detailed structure of solutions or boundaries separating different dynamic regimes. Based on the combination of random parameter search and continuation algorithms, we developed a methodology for the statistical analysis of mechanistic signaling models. In applying it to the well-studied MAPK cascade model, we discovered a large region of oscillations and explained their emergence from single-stage bistability. The surprising abundance of strongly nonlinear (oscillatory and bistable) input/output maps revealed by our analysis may be one of the reasons why the MAPK cascade in vivo is embedded in more complex regulatory structures. We argue that this type of analysis should accompany nonlinear multiparameter studies of stationary as well as transient features in network dynamics.

17.
Parametric uncertainty is a particularly challenging and relevant aspect of systems analysis in domains such as systems biology where, both for inference and for assessing prediction uncertainties, it is essential to characterize the system behavior globally in the parameter space. However, current methods based on local approximations or on Monte-Carlo sampling cope only insufficiently with high-dimensional parameter spaces associated with complex network models. Here, we propose an alternative deterministic methodology that relies on sparse polynomial approximations. We propose a deterministic computational interpolation scheme which identifies most significant expansion coefficients adaptively. We present its performance in kinetic model equations from computational systems biology with several hundred parameters and state variables, leading to numerical approximations of the parametric solution on the entire parameter space. The scheme is based on adaptive Smolyak interpolation of the parametric solution at judiciously and adaptively chosen points in parameter space. Like Monte-Carlo sampling, it is “non-intrusive” and well-suited for massively parallel implementation, but affords higher convergence rates. This opens up new avenues for large-scale dynamic network analysis by enabling scaling for many applications, including parameter estimation, uncertainty quantification, and systems design.

18.
Markovian models of ion channels have proven useful in the reconstruction of experimental data and prediction of cellular electrophysiology. We present the stochastic Galerkin method as an alternative to Monte Carlo and other stochastic methods for assessing the impact of uncertain rate coefficients on the predictions of Markovian ion channel models. We extend and study two different ion channel models: a simple model with only a single open and a closed state and a detailed model of the cardiac rapidly activating delayed rectifier potassium current. We demonstrate the efficacy of stochastic Galerkin methods for computing solutions to systems with random model parameters. Our studies illustrate the characteristic changes in distributions of state transitions and electrical currents through ion channels due to random rate coefficients. Furthermore, the studies indicate the applicability of the stochastic Galerkin technique for uncertainty and sensitivity analysis of bio-mathematical models.
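For the simple open/closed channel mentioned above, a plain Monte Carlo propagation of random rate coefficients is easy to sketch; the nominal rates and their ±20% lognormal spreads are assumptions for illustration, and the stochastic Galerkin method targets the same output distributions with far fewer deterministic solves:

```python
import numpy as np

rng = np.random.default_rng(3)

# Two-state channel C <-> O with uncertain rate coefficients:
# opening rate alpha, closing rate beta, steady state p_open = alpha/(alpha+beta).
# Nominal rates 2.0 and 1.0 (hypothetical), each with a ~20% lognormal spread.
n = 100_000
alpha = 2.0 * rng.lognormal(mean=0.0, sigma=0.2, size=n)
beta = 1.0 * rng.lognormal(mean=0.0, sigma=0.2, size=n)
p_open = alpha / (alpha + beta)

mean_p, std_p = p_open.mean(), p_open.std()  # nominal p_open would be 2/3
```

Note that the mean open probability sits slightly below the nominal 2/3 because p_open is a nonlinear (concave) function of the random rates, exactly the kind of distributional shift such analyses are meant to expose.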

19.
Subject-specific musculoskeletal modeling can be applied to study musculoskeletal disorders, allowing inclusion of personalized anatomy and properties. Independent of the tools used for model creation, there are unavoidable uncertainties associated with parameter identification, whose effect on model predictions is still not fully understood. The aim of the present study was to analyze the sensitivity of subject-specific model predictions (i.e., joint angles, joint moments, muscle and joint contact forces) during walking to the uncertainties in the identification of body landmark positions, maximum muscle tension and musculotendon geometry. To this aim, we created an MRI-based musculoskeletal model of the lower limbs, defined as a 7-segment, 10-degree-of-freedom articulated linkage, actuated by 84 musculotendon units. We then performed a Monte-Carlo probabilistic analysis perturbing model parameters according to their uncertainty, and solving a typical inverse dynamics and static optimization problem using 500 models that included the different sets of perturbed variable values. Model creation and gait simulations were performed by using freely available software that we developed to standardize the process of model creation, integrate with OpenSim and create probabilistic simulations of movement. The uncertainties in input variables had a moderate effect on model predictions, as muscle and joint contact forces showed maximum standard deviation of 0.3 times body-weight and maximum range of 2.1 times body-weight. In addition, the output variables significantly correlated with few input variables (up to 7 out of 312) across the gait cycle, including the geometry definition of larger muscles and the maximum muscle tension in limited gait portions. Although we found subject-specific models not markedly sensitive to parameter identification, researchers should be aware of the model precision in relation to the intended application. 
In fact, force predictions could be affected by an uncertainty of the same order of magnitude as their values, although this condition has low probability to occur.

20.
Purpose

Objective uncertainty quantification (UQ) of a product life-cycle assessment (LCA) is a critical step for decision-making. Environmental impacts can be measured directly or by using models. A model is described by underlying mathematical functions that approximate the environmental impacts during various LCA stages. In this study, three possible uncertainty sources of a mathematical model, i.e., input variability, model parameter (distinguished from input in this study), and model-form uncertainties, were investigated. A simple and easy to implement method is proposed to quantify each source.

Methods

Various data analytics methods were used to conduct a thorough model uncertainty analysis: (1) Interval analysis was used for input uncertainty quantification. A direct sampling using Monte Carlo (MC) simulation was used for interval analysis, and results were compared to that of indirect nonlinear optimization as an alternative approach. A machine learning surrogate model was developed to perform direct MC sampling as well as indirect nonlinear optimization. (2) A Bayesian inference was adopted to quantify parameter uncertainty. (3) A recently introduced model correction method based on orthogonal polynomial basis functions was used to evaluate the model-form uncertainty. The methods are applied to a pavement LCA to propagate uncertainties throughout an energy and global warming potential (GWP) estimation model; a case of a pavement section in Chicago metropolitan area was used.

Results and discussion

Results indicate that each uncertainty source contributes to the overall energy and GWP output of the LCA. Input uncertainty was shown to have significant impact on overall GWP output; for the example case study, GWP interval was around 50%. Parameter uncertainty results showed that an assumption of ±10% uniform variation in the model parameter priors resulted in 28% variation in the GWP output. Model-form uncertainty had the lowest impact (less than 10% variation in the GWP). This is because the original energy model is relatively accurate in estimating the energy. However, sensitivity of the model-form uncertainty showed that even up to 180% variation in the results can be achieved due to lower original model accuracies.

Conclusions

Investigating each uncertainty source of the model indicated the importance of the accurate characterization, propagation, and quantification of uncertainty. The outcome of this study proposed independent and relatively easy to implement methods that provide robust grounds for objective model uncertainty analysis for LCA applications. Assumptions on inputs, parameter distributions, and model form need to be justified. Input uncertainty plays a key role in overall pavement LCA output. The proposed model correction method as well as interval analysis were relatively easy to implement. Research is still needed to develop a more generic and simplified MCMC simulation procedure that is fast to implement.
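The interval-analysis step (direct Monte Carlo sampling of an uncertain input) can be sketched on a toy energy/GWP model; the quadratic model form, coefficients, and interval bounds are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(11)

# Toy GWP model (invented): gwp = a * fuel + b * fuel**2, with the fuel input
# known only as an interval and the coefficients treated as fixed.
a, b = 2.0, 0.1
fuel_lo, fuel_hi = 8.0, 12.0

# Direct Monte Carlo interval analysis: sample inside the input interval and
# take the extremes of the output.
fuel = rng.uniform(fuel_lo, fuel_hi, size=50_000)
gwp = a * fuel + b * fuel**2
gwp_interval = (gwp.min(), gwp.max())

# The model is monotone in fuel here, so the exact interval comes from the
# endpoints; the sampled interval should closely bracket it from inside.
exact = (a * fuel_lo + b * fuel_lo**2, a * fuel_hi + b * fuel_hi**2)
```

For non-monotone models the endpoint shortcut fails, which is why the study pairs direct sampling with indirect nonlinear optimization to locate the output extremes.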


