Similar Documents (20 results)
1.
傅煜, 雷渊才, 曾伟生. 《生态学报》 (Acta Ecologica Sinica), 2015, 35(23): 7738-7747
Using continuous observation data and biomass data from fixed plots of Chinese fir (Cunninghamia lanceolata) in Jiangxi Province collected under a systematic sampling scheme, the total aboveground biomass of Chinese fir in Jiangxi Province was estimated by repeatedly simulating, with the Monte Carlo method, the process of scaling single-tree biomass models up to regional aboveground biomass. Under designs with different modeling sample sizes n and different coefficients of determination R², the effects of single-tree biomass model parameter variability and of model residual variability on the uncertainty of the regional-scale biomass estimate were analyzed separately. The results show that the estimated aboveground biomass of Chinese fir in Jiangxi Province in 2009 was (19.84 ± 1.27) t/hm², with the uncertainty amounting to about 6.41% of the estimate. The computation time needed for the biomass estimate and its uncertainty to reach a steady state decreased as the modeling sample size and R² increased, and residual variability had a smaller effect on the uncertainty than model parameter variability.
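A minimal sketch of the kind of Monte Carlo propagation this abstract describes, in Python. The allometric form W = a·D^b, the parameter estimates and covariance, the residual SD, and the diameter data are all invented stand-ins, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical single-tree model W = a * D**b (kg); the parameter
# estimates, covariance matrix, and residual SD below are invented
# for illustration and are not taken from the paper.
theta_hat = np.array([0.08, 2.45])                # (a, b)
theta_cov = np.array([[1.0e-4, -3.0e-4],
                      [-3.0e-4, 1.2e-3]])
resid_sd = 8.0                                    # kg per tree

dbh = rng.uniform(8.0, 30.0, size=2000)           # tree diameters (cm)
area_ha = 10.0                                    # total sampled area

def regional_biomass(draw_params, draw_resid):
    # One Monte Carlo realization of the regional estimate (t/hm2).
    if draw_params:
        a, b = rng.multivariate_normal(theta_hat, theta_cov)
    else:
        a, b = theta_hat
    w = a * dbh ** b                              # per-tree biomass (kg)
    if draw_resid:
        w = w + rng.normal(0.0, resid_sd, size=w.size)
    return w.sum() / 1000.0 / area_ha

sims = np.array([regional_biomass(True, True) for _ in range(5000)])
print(f"estimate {sims.mean():.2f} t/hm2, uncertainty (SD) {sims.std():.2f}")

# Turning each source off in turn separates the two contributions the
# paper compares: parameter variability vs. residual variability.
par_only = np.array([regional_biomass(True, False) for _ in range(5000)])
res_only = np.array([regional_biomass(False, True) for _ in range(5000)])
print(f"parameter-only SD {par_only.std():.2f}, residual-only SD {res_only.std():.2f}")
```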

2.
Analysis of model uncertainty in forest biomass estimation. Total citations: 3 (self: 1, other: 2)
秦立厚, 张茂震, 钟世红, 于晓辉. 《生态学报》 (Acta Ecologica Sinica), 2017, 37(23): 7912-7919
Single-tree biomass estimation is the basis of regional forest biomass estimation. Quantifying the sources of uncertainty in single-tree biomass models and analyzing their effects on forest biomass estimates provides a theoretical basis for improving estimation accuracy. Based on measured aboveground biomass data for 52 Chinese fir (Cunninghamia lanceolata) trees, one-variable and two-variable single-tree aboveground biomass models were fitted. For both model forms, using the Chinese fir measurements from the 2009 continuous forest inventory of Lin'an City, two sources of uncertainty contained in the single-tree biomass models were analyzed: uncertainty due to model parameters and uncertainty due to model residual variability. Finally, the total uncertainty of the single-tree biomass models was computed using the law of error propagation. The results show that the mean Chinese fir biomass of Lin'an estimated with the one-variable model was 6.94 Mg/hm²; the uncertainty from residual variability was about 11.1% and that from parameter error about 14.4%, giving a combined uncertainty of 18.18%. With the two-variable model the estimated mean was 7.71 Mg/hm², with about 7.0% uncertainty from residual variability and about 8.53% from parameter error, for a combined uncertainty of 11.03%. Parameter uncertainty decreased as the modeling sample grew: as the sample increased from 30 to 40 to 52, the parameter uncertainty of the one-variable model was 20.26%, 16.19%, and 14.4%, and that of the two-variable model 13.09%, 9.4%, and 8.53%, respectively. Sample size also affected residual uncertainty: as the sample increased from 30 to 42 to 48, the residual uncertainty of the one-variable model was 15.2%, 12.3%, and 11.7%, and that of the two-variable model 13.3%, 9.4%, and 8.3%. Of the two sources, parameter uncertainty had the larger effect on the estimates, followed by residual variability. Because both are related to the modeling sample, parameter uncertainty can be reduced by enlarging the modeling sample. The total uncertainty of the two-variable model was lower than that of the one-variable model.
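The final combination step reported above follows the law of error propagation for independent sources, under which relative uncertainties combine in quadrature; a quick check against the abstract's own numbers:

```python
import math

# Relative uncertainties reported above (%), one-variable model.
u_resid, u_param = 11.1, 14.4

# Law of error propagation for independent error sources:
# relative uncertainties combine in quadrature.
u_total = math.hypot(u_resid, u_param)
print(f"one-variable model:  {u_total:.2f}%")    # ~18.18%, as in the abstract

# Two-variable model, same rule.
print(f"two-variable model: {math.hypot(7.0, 8.53):.2f}%")   # ~11.03%
```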

3.
A potato crop multimodel assessment was conducted to quantify variation among models and evaluate responses to climate change. Nine modeling groups simulated agronomic and climatic responses at low-input (Chinoli, Bolivia and Gisozi, Burundi) and high-input (Jyndevad, Denmark and Washington, United States) management sites. Two calibration stages were explored: partial (P1), in which experimental dry matter data were not provided, and full (P2). The median model ensemble response outperformed any single model in terms of replicating observed yield across all locations. Uncertainty in simulated yield decreased from 38% to 20% between P1 and P2. Model uncertainty increased with interannual variability, and predictions for all agronomic variables differed significantly from one model to another (P < 0.001). Uncertainty averaged 15% higher for low- vs. high-input sites, with larger differences observed for evapotranspiration (ET), nitrogen uptake, and water use efficiency than for dry matter. A minimum of five partially, or three fully, calibrated models was required for an ensemble approach to keep variability below that of common field variation. Model variation was not influenced by change in carbon dioxide (C), but increased by as much as 41% and 23% for yield and ET, respectively, as temperature (T) or rainfall (W) moved away from historical levels. Increases in T accounted for the largest share of uncertainty, suggesting that methods and parameters for T sensitivity represent a considerable unknown among models. Using median model ensemble values, yield increased on average 6% per 100 ppm C, declined 4.6% per °C, and declined 2% for every 10% decrease in rainfall (for nonirrigated sites). Differences in predictions due to model representation of light utilization were significant (P < 0.01). These are the first reported results quantifying uncertainty for tuber/root crops and suggest that modeling assessments of climate change impact on potato may be improved using an ensemble approach.

4.
When examining environmental levels of organic contaminants, much of our focus has been placed on fish because of their greater potential to bioaccumulate and their direct link to humans as a staple of the human diet. Contaminant levels in Great Lakes fish communities have been closely monitored over the last few decades, and the resulting information has been indispensable in guiding consumption advisories. In this study, we first conducted an analysis of temporal trends of three organochlorines (hexachlorobenzene, octachlorostyrene, and α-hexachlorocyclohexane) in five Lake Erie fish species using dynamic linear modeling, while explicitly considering fish length and lipid content as covariates. Our results indicate that the levels of the three compounds decreased steadily from the late 1970s to 2007, although there were instances in which the fish organochlorine contents fluctuated through time. The second part of our analysis focused on the development of a Bayesian framework to update fish consumption advisories. We present a methodology that incorporates the uncertainty in contaminant predictions and the natural variability in fish length and lipid content, while remaining flexible for different human weights and diet patterns. We then illustrate our Bayesian framework for two important contaminants in the Great Lakes region, mercury and PCBs. We established thresholds for each contaminant based on tolerable daily intake (TDI) values and made predictive statements about the probability of exceedance of these critical levels. Our study also discusses how failure to account for model uncertainty/error can have profound implications for the credibility of the derived predictive risk assessment statements. The proposed Bayesian approach to fish consumption advisories can serve as a valuable framework for year-specific, highly customizable risk assessment statements that also account for the inherent variability in natural systems.

5.
Quantification of the uncertainty associated with risk estimates is an important part of risk assessment. In recent years, the use of second-order distributions and two-dimensional simulations has been suggested for quantifying both variability and uncertainty. These approaches are better interpreted within the Bayesian framework. To help practitioners better use such methods and interpret the results, in this article we describe the propagation and interpretation of uncertainty in the Bayesian paradigm. We consider both the estimation problem, where some summary measures of the risk distribution (e.g., mean, variance, or selected percentiles) are to be estimated, and the prediction problem, where the risk values for some specific individuals are to be predicted. We discuss some connections and differences between uncertainties in estimation and prediction problems, and present an interpretation of a decomposition of total variability/uncertainty into variability and uncertainty in terms of the expected squared error of prediction and its reduction under perfect information. We also discuss the role of Monte Carlo methods in characterizing uncertainty. We explain the basic ideas using a simple example, and demonstrate Monte Carlo calculations using another example from the literature.
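A minimal two-dimensional Monte Carlo sketch in the spirit of this article: the outer loop draws uncertain parameters (uncertainty), the inner loop draws individuals (variability). The lognormal risk model and all numbers are assumed purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

n_outer, n_inner = 1000, 2000   # uncertainty loop, variability loop

# Outer loop: epistemic uncertainty about the population parameters
# of a hypothetical lognormal individual-risk distribution.
mu = rng.normal(0.0, 0.2, size=n_outer)       # uncertain mean of log-risk
sigma = rng.uniform(0.4, 0.6, size=n_outer)   # uncertain SD of log-risk

p95 = np.empty(n_outer)
for i in range(n_outer):
    # Inner loop: variability between individuals, given the parameters.
    risks = rng.lognormal(mu[i], sigma[i], size=n_inner)
    p95[i] = np.percentile(risks, 95)

# Each outer draw yields one value of the summary measure (here the
# 95th percentile of individual risk); the spread of these values is
# the uncertainty about that summary measure.
lo, med, hi = np.percentile(p95, [2.5, 50, 97.5])
print(f"95th-percentile risk: {med:.2f} (95% uncertainty interval {lo:.2f}-{hi:.2f})")
```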

6.
Implicit and explicit use of expert knowledge to inform ecological analyses is becoming increasingly common because it often represents the sole source of information in many circumstances. Thus, there is a need to develop statistical methods that explicitly incorporate expert knowledge, and can successfully leverage this information while properly accounting for associated uncertainty during analysis. Studies of cause-specific mortality provide an example of implicit use of expert knowledge when causes of death are uncertain and assigned based on the observer's knowledge of the most likely cause. To explicitly incorporate this use of expert knowledge and the associated uncertainty, we developed a statistical model for estimating cause-specific mortality using a data augmentation approach within a Bayesian hierarchical framework. Specifically, for each mortality event, we elicited the observer's belief of cause-of-death by having them specify the probability that the death was due to each potential cause. These probabilities were then used as prior predictive values within our framework. This hierarchical framework permitted a simple and rigorous estimation method that was easily modified to include covariate effects and regularizing terms. Although applied to survival analysis, this method can be extended to any event-time analysis with multiple event types, for which there is uncertainty regarding the true outcome. We conducted simulations to determine how our framework compared to traditional approaches that use expert knowledge implicitly and assume that cause-of-death is specified accurately. Simulation results supported the inclusion of observer uncertainty in cause-of-death assignment in modeling of cause-specific mortality to improve model performance and inference. Finally, we applied the statistical model we developed and a traditional method to cause-specific survival data for white-tailed deer, and compared results. We demonstrate that model selection results changed between the two approaches, and incorporating observer knowledge in cause-of-death increased the variability associated with parameter estimates when compared to the traditional approach. These differences between the two approaches can impact reported results, and therefore, it is critical to explicitly incorporate expert knowledge in statistical methods to ensure rigorous inference.
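A toy Gibbs-sampler sketch of the data-augmentation idea: each latent cause-of-death is sampled using the observer's elicited probabilities as prior predictive weights, and the cause-specific proportions are then updated. The two-cause setup and all numbers are invented; the paper's actual model is a richer Bayesian hierarchical survival model:

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy data: for each of 3 deaths an observer gives elicited
# probabilities over K=2 potential causes (rows sum to 1).
elicited = np.array([[0.9, 0.1],
                     [0.6, 0.4],
                     [0.2, 0.8]])
n_deaths, K = elicited.shape
alpha = np.ones(K)                      # Dirichlet prior on cause proportions

n_iter = 5000
pi = np.full(K, 1.0 / K)                # cause-specific mortality proportions
draws = np.empty((n_iter, K))
for t in range(n_iter):
    # Data augmentation: sample each latent cause-of-death, weighting
    # the observer's elicited probabilities by the current proportions.
    w = elicited * pi
    w /= w.sum(axis=1, keepdims=True)
    causes = np.array([rng.choice(K, p=w[i]) for i in range(n_deaths)])
    counts = np.bincount(causes, minlength=K)
    # Conjugate Dirichlet update for the cause proportions.
    pi = rng.dirichlet(alpha + counts)
    draws[t] = pi

print("posterior mean cause proportions:", draws[1000:].mean(axis=0))
```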

7.
Bioclimatic models are the primary tools for simulating the impact of climate change on species distributions. Part of the uncertainty in the output of these models results from uncertainty in projections of future climates. To account for this, studies often simulate species responses to climates predicted by more than one climate model and/or emission scenario. One area of uncertainty, however, has remained unexplored: internal climate model variability. By running a single climate model multiple times, but each time perturbing the initial state of the model slightly, different but equally valid realizations of climate will be produced. In this paper, we identify how ongoing improvements in climate models can be used to provide guidance for impacts studies. In doing so we provide the first assessment of the extent to which this internal climate model variability generates uncertainty in projections of future species distributions, compared with variability between climate models. We obtained data on 13 realizations from three climate models (three from CSIRO Mark2 v3.0, four from GISS AOM, and six from MIROC v3.2) for two time periods: current (1985-1995) and future (2025-2035). Initially, we compared the simulated values for each climate variable (P, Tmax, Tmin, and Tmean) for the current period to observed climate data. This showed that climates simulated by realizations from the same climate model were more similar to each other than to realizations from other models. However, when projected into the future, these realizations followed different trajectories and the values of climate variables differed considerably within and among climate models. These had pronounced effects on the projected distributions of nine Australian butterfly species when modelled using the BIOCLIM component of DIVA-GIS. Our results show that internal climate model variability can lead to substantial differences in the extent to which the future distributions of species are projected to change. These can be greater than differences resulting from between-climate-model variability. Further, different conclusions regarding the vulnerability of species to climate change can be reached due to internal model variability. Clearly, several climate models, each represented by multiple realizations, are required if we are to adequately capture the range of uncertainty associated with projecting species distributions in the future.

8.
Background: A common approach to the application of epidemiological models is to determine a single (point estimate) parameterisation using the information available in the literature. However, in many cases there is considerable uncertainty about parameter values, reflecting both the incomplete nature of current knowledge and natural variation, for example between farms. Furthermore, model outcomes may be highly sensitive to different parameter values. Paratuberculosis is an infection for which many of the key parameter values are poorly understood and highly variable, and for such infections there is a need to develop and apply statistical techniques which make maximal use of available data. Results: A technique based on Latin hypercube sampling combined with a novel reweighting method was developed which enables parameter uncertainty and variability to be incorporated into a model-based framework for estimation of prevalence. The method was evaluated by applying it to a simulation of paratuberculosis in dairy herds which combines a continuous-time stochastic algorithm with model features such as within-herd variability in disease development and shedding, which have not been previously explored in paratuberculosis models. Generated sample parameter combinations were assigned a weight, determined by quantifying the model's resultant ability to reproduce prevalence data. Once these weights are generated, the model can be used to evaluate other scenarios such as control options. To illustrate the utility of this approach, the reweighted model outputs were used to compare standard test-and-cull control strategies, both individually and in combination with simple husbandry practices that aim to reduce infection rates. Conclusions: The technique developed has been shown to be applicable to a complex model incorporating realistic control options. For models whose parameters are not well known or are subject to significant variability, the reweighting scheme allows estimated distributions of parameter values to be combined with additional sources of information, such as that available from prevalence distributions, resulting in outputs which implicitly handle variation and uncertainty. This methodology yields more robust predictions by allowing for parameter uncertainty and combining different sources of information, and is thus expected to be useful in application to a large number of disease systems.
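A compact sketch of the sampling-and-reweighting scheme, assuming SciPy's qmc module for the Latin hypercube design; the toy prevalence function and the observation model stand in for the stochastic herd simulation and the prevalence data:

```python
import numpy as np
from scipy.stats import qmc, norm

# Latin hypercube sample over two uncertain model parameters
# (transmission rate, shedding rate); the ranges are invented.
sampler = qmc.LatinHypercube(d=2, seed=3)
unit = sampler.random(n=1000)
params = qmc.scale(unit, l_bounds=[0.01, 0.1], u_bounds=[0.5, 2.0])

def simulate_prevalence(beta, shed):
    # Stand-in for the stochastic within-herd model: any function that
    # maps a parameter set to a predicted herd prevalence would do.
    return beta * shed / (1.0 + beta * shed)

pred = np.array([simulate_prevalence(b, s) for b, s in params])

# Reweighting: score each parameter set by how well its prediction
# reproduces observed prevalence (a single observed value with an
# assumed observation SD here), then normalise the weights.
obs, obs_sd = 0.25, 0.05
weights = norm.pdf(pred, loc=obs, scale=obs_sd)
weights /= weights.sum()

# The weighted outputs can then be reused to evaluate other scenarios,
# e.g. expected prevalence under a hypothetical control measure.
print("reweighted mean prevalence:", np.sum(weights * pred))
```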

9.
The selection of the most appropriate model for an ecological risk assessment depends on the application, the data and resources available, the knowledge base of the assessor, the relevant endpoints, and the extent to which the model deals with uncertainty. Since ecological systems are highly variable and our knowledge of model input parameters is uncertain, it is important that models include treatments of uncertainty and variability, and that results are reported in this light. In this paper we discuss treatments of variation and uncertainty in a variety of population models. In ecological risk assessments, the risk relates to the probability of an adverse event in the context of environmental variation. Uncertainty relates to ignorance about parameter values, e.g., measurement error and systematic error. An assessment of the full distribution of risks, under variability and parameter uncertainty, will give the most comprehensive and flexible endpoint. In this paper we present the rationale behind probabilistic risk assessment, identify the sources of uncertainty relevant for risk assessment and provide an overview of a range of population models. While all of the models reviewed have some utility in ecology, some have more comprehensive treatments of uncertainty than others. We identify the models that allow probabilistic assessments and sensitivity analyses, and we offer recommendations for further developments that aim towards more comprehensive and reliable ecological risk assessments for populations.

10.
The combined effect of mercury (HgCl2) and high temperature on the growth, nucleic acid and protein synthesis, and cell cycle of HeLa S3 cells was investigated. Subsequent growth of the cells was dose-dependently inhibited by mercury at 37.2°C and 41.2°C, and the inhibitory effect of mercury on subsequent growth was enhanced at the higher temperature. IC50 values for DNA and RNA synthesis, but not protein synthesis, were significantly lower at 41.2°C than at 37.2°C (P < 0.05 and P < 0.01, respectively). Flow cytometric analysis using synchronized cells indicated that the combined treatment may block cell cycle progression in the early part of S phase. These results suggest that the cytotoxicity of mercury to cell growth is enhanced at the higher temperature and that this enhancement is related to the increased inhibitory effect of mercury on DNA and RNA synthesis and on the cell cycle at high temperatures.

11.
In this article, we propose a Bayesian approach to dose-response assessment and the assessment of synergy between two combined agents. We consider the case of an in vitro ovarian cancer research study aimed at investigating the antiproliferative activities of four agents, alone and paired, in two human ovarian cancer cell lines. Independent dose-response experiments were repeated three times, each including replicates at the investigated dose levels and a control (no drug). We have developed a Bayesian hierarchical nonlinear regression model that accounts for variability between experiments, variability within experiments (i.e., replicates), and variability in the observed responses of the controls. We use Markov chain Monte Carlo to fit the model to the data and carry out posterior inference on quantities of interest (e.g., the median inhibitory concentration, IC50). In addition, we have developed a method, based on Loewe additivity, that allows one to assess the presence of synergy with honest accounting of uncertainty. Extensive simulation studies show that our proposed approach is more reliable in declaring synergy than current standard analyses such as the median-effect principle/combination index method (Chou and Talalay, 1984, Advances in Enzyme Regulation 22, 27-55), which ignore important sources of variability and uncertainty.
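The Loewe additivity criterion on which the synergy assessment is based can be stated as an interaction index; this is the standard textbook formulation, not notation taken from the article:

```latex
% Loewe additivity: D_1 and D_2 are the doses of each agent that alone
% produce a given effect; d_1 and d_2 are the doses in combination
% producing the same effect.
\[
  \tau \;=\; \frac{d_1}{D_1} + \frac{d_2}{D_2},
  \qquad
  \begin{cases}
    \tau = 1 & \text{additivity (Loewe)}\\
    \tau < 1 & \text{synergy}\\
    \tau > 1 & \text{antagonism}
  \end{cases}
\]
```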

12.
Background, aim, and scope: Analysis of uncertainties plays a vital role in the interpretation of life cycle assessment findings. Some of these uncertainties arise from parametric data variability in life cycle inventory analysis. For instance, the efficiencies of manufacturing processes may vary among different industrial sites or geographic regions; or, in the case of new and unproven technologies, it is possible that prospective performance levels can only be estimated. Although such data variability is usually treated using a probabilistic framework, some recent work on the use of fuzzy sets or possibility theory has appeared in the literature. The latter school of thought is based on the notion that not all data variability can be properly described in terms of frequency of occurrence. In many cases, it is necessary to model the uncertainty associated with the subjective degree of plausibility of parameter values. Fuzzy set theory is appropriate for such uncertainties. However, the computations required for handling fuzzy quantities have not been fully integrated with the formal matrix-based life cycle inventory analysis (LCI) described by Heijungs and Suh (2002). Materials and methods: This paper integrates computations with fuzzy numbers into the matrix-based LCI computational model described in the literature. The approach uses fuzzy numbers to propagate the data variability in LCI calculations, and results in fuzzy distributions of the inventory results. The approach is developed based on similarities with the fuzzy economic input-output (EIO) model proposed by Buckley (Eur J Oper Res 39:54-60, 1989). Results: The matrix-based fuzzy LCI model is illustrated using three simple case studies. The first case shows how fuzzy inventory results arise in simple systems with variability in industrial efficiency and emissions data. The second case study illustrates how the model applies to life cycle systems with co-products, which require the inclusion of displaced processes. The third case study demonstrates the use of the method in the context of comparing different carbon sequestration technologies. Discussion: These simple case studies illustrate the important features of the model, including computational issues that can arise with larger and more complex life cycle systems. Conclusions: A fuzzy matrix-based LCI model has been proposed. The model extends the conventional matrix-based LCI model to allow for computations with parametric data variability represented as fuzzy numbers. This approach is an alternative or complement to interval analysis and probabilistic or Monte Carlo techniques. Recommendations and perspectives: Potential further work in this area includes extension of the fuzzy model to EIO-LCA models and to life cycle impact assessment (LCIA); development of hybrid fuzzy-probabilistic approaches; and integration with life cycle-based optimization or decision analysis. Additional theoretical work is needed for modeling correlations of the variability of parameters using interacting or correlated fuzzy numbers, which remains an unresolved computational issue. Furthermore, integration of the fuzzy model into LCA software can also be investigated.
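A minimal sketch of fuzzy propagation via alpha-cuts and interval arithmetic, reduced here to a single-process inventory rather than the paper's full matrix model; all triangular bounds are invented:

```python
# Alpha-cut of a triangular fuzzy number (low, mode, high): at
# membership level alpha the value lies in
# [lo + alpha*(mode - lo), hi - alpha*(hi - mode)].
def alpha_cut(tfn, alpha):
    lo, m, hi = tfn
    return lo + alpha * (m - lo), hi - alpha * (hi - m)

# Toy one-process inventory: emission = factor * demand / efficiency.
# Triangular bounds stand in for data variability.
factor = (1.8, 2.0, 2.3)       # kg CO2 per unit output
eff = (0.85, 0.90, 0.93)       # process efficiency
demand = 100.0                 # functional unit

for alpha in (0.0, 0.5, 1.0):
    f_lo, f_hi = alpha_cut(factor, alpha)
    e_lo, e_hi = alpha_cut(eff, alpha)
    # Interval arithmetic: the result is widest when the numerator is
    # large and the denominator small, and vice versa.
    lo = f_lo * demand / e_hi
    hi = f_hi * demand / e_lo
    print(f"alpha={alpha:.1f}: emission in [{lo:.1f}, {hi:.1f}] kg CO2")
```

At alpha = 1 the interval collapses to the crisp (modal) result; lower alpha levels trace out the fuzzy distribution of the inventory result that the paper describes.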

13.
Data quality     
A methodology is presented that enables incorporating expert judgment regarding the variability of input data for environmental life cycle assessment (LCA) modeling. The quality of input data in the life-cycle inventory (LCI) phase is evaluated by LCA practitioners using data quality indicators developed for this application. These indicators are incorporated into the traditional LCA inventory models that produce non-varying point estimate results (i.e., deterministic models) to develop LCA inventory models that produce results in the form of random variables that can be characterized by probability distributions (i.e., stochastic models). The outputs of these probabilistic LCA models are analyzed using classical statistical methods for better decision and policy making information. This methodology is applied to real-world beverage delivery system LCA inventory models. The inventory study results for five beverage delivery system alternatives are compared using statistical methods that account for the variance in the model output values for each alternative. Sensitivity analyses are also performed, which indicate that model output variance increases as input data uncertainty increases (i.e., as input data quality degrades). Concluding remarks point out the strengths of this approach as an alternative to providing the traditional qualitative assessment of LCA inventory study input data with no efficient means of examining the combined effects on the model results. Data quality assessments can now be captured quantitatively within the LCA inventory model structure. The approach produces inventory study results that are variables reflecting the uncertainty associated with the input data. These results can be analyzed using statistical methods that make efficient quantitative comparisons of inventory study alternatives possible. Recommendations for future research are also provided, including the screening of LCA inventory model inputs for significance and the application of selection and ranking techniques to the model outputs.

14.
To estimate fossil fuel demand and greenhouse gas emissions associated with short-rotation willow (Salix spp.) crops in New York State, we constructed a life cycle assessment model capable of estimating point values and measures of variability for a number of key processes across eight management scenarios. The system used 445.0 to 1,052.4 MJ of fossil energy per oven-dry tonne (odt) of delivered willow biomass, resulting in a net energy balance of 18.3:1 to 43.4:1. The largest fraction of the energy demand across all scenarios was driven by the use of diesel fuels. The largest proportion of diesel fuel was associated with harvesting and delivery of willow chips seven times on 3-year rotations over the life of the crop. Similar patterns were found for greenhouse gas emissions across all scenarios, as fossil fuel use served as the biggest source of emissions in the system. Carbon sequestration in the belowground portion of the willow system provided a large carbon sink that more than compensated for carbon emissions across all scenarios, resulting in final greenhouse gas balances of −138.4 to −52.9 kg CO2 eq. per odt biomass. The subsequent uncertainty analyses revealed that variability associated with data on willow yield, litterfall, and belowground biomass eliminated some of the differences between the tested scenarios. Even with the inclusion of uncertainty analysis, the willow system was still a carbon sequestration system after a single crop cycle (seven 3-year rotations) in all eight scenarios. A better understanding and quantification of factors that drive the variability in the biological portions of the system is necessary to produce more precise estimates of the emissions and energy performance of short-rotation woody crops.

15.
Drugs, sex and HIV: a mathematical model for New York City. Total citations: 5 (self: 0, other: 0)
A data-based mathematical model was formulated to assess the epidemiological consequences of heterosexual, intravenous drug use (IVDU) and perinatal transmission in New York City (NYC). The model was analysed to clarify the relationship between heterosexual and IVDU transmission and to provide qualitative and quantitative insights into the HIV epidemic in NYC. The results demonstrated the significance of the dynamic interaction of heterosexual and IVDU transmission. Scenario analysis of the model was used to suggest a new explanation for the stabilization of the seroprevalence level that has been observed in the NYC IVDU community; the proposed explanation does not rely upon any IVDU or sexual behavioural changes. Gender-specific risks of heterosexual transmission in IVDUs were also explored by scenario analysis. The results showed that the effect of the heterosexual transmission risk factor on increasing the risk of HIV infection depends upon the level of IVDU. The model was used to predict future numbers of adult and pediatric AIDS cases; a sensitivity analysis of the model showed that the confidence intervals on these prediction estimates were extremely wide. This prediction variability was due to the uncertainty in estimating the values of the model's thirty variables (twenty biological-behavioural transmission parameters and the initial sizes of ten subgroups). However, the sensitivity analysis revealed that only a few key variables were significant in contributing to the AIDS case prediction variability; partial rank correlation coefficients were calculated and used to identify and to rank the importance of these key variables. The results suggest that long-term precise estimates of the future number of AIDS cases will only be possible once the values of these key variables have been evaluated accurately.
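The partial rank correlation coefficients used to rank the key variables can be computed as the correlation of rank residuals; a self-contained sketch with an invented stand-in model (not the paper's HIV model):

```python
import numpy as np
from scipy.stats import rankdata, pearsonr

rng = np.random.default_rng(11)

# Toy stand-in: the output depends on three of four sampled
# "transmission parameters"; all numbers are invented.
n = 500
X = rng.uniform(0.0, 1.0, size=(n, 4))
y = X[:, 0] ** 2 + 5.0 * X[:, 1] + 0.1 * X[:, 2] + rng.normal(0, 0.1, n)

def prcc(X, y, j):
    """Partial rank correlation of parameter j with output y."""
    R = np.column_stack([rankdata(col) for col in X.T])
    ry = rankdata(y)
    others = np.delete(R, j, axis=1)
    A = np.column_stack([np.ones(len(y)), others])
    # Residuals after removing the rank-linear effect of the other inputs.
    res_x = R[:, j] - A @ np.linalg.lstsq(A, R[:, j], rcond=None)[0]
    res_y = ry - A @ np.linalg.lstsq(A, ry, rcond=None)[0]
    return pearsonr(res_x, res_y)[0]

for j in range(4):
    print(f"parameter {j}: PRCC = {prcc(X, y, j):+.2f}")
```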

16.
Above forest canopies, eddy covariance (EC) measurements of mass (CO2, H2O vapor) and energy exchange, assumed to represent ecosystem fluxes, are commonly made at one point in the roughness sublayer (RSL). A spatial variability experiment, in which EC measurements were made from six towers within the RSL in a uniform pine plantation, quantified large and dynamic spatial variation in fluxes. The spatial coefficient of variation (CV) of the scalar fluxes decreased with increasing integration time, stabilizing at a minimum that was independent of further lengthening the averaging period (hereafter a 'stable minimum'). For all three fluxes, the stable minimum (CV=9–11%) was reached at averaging times (τp) of 6–7 h during daytime, but higher stable minima (CV=46–158%) were reached at longer τp (>12 h) during nighttime. To the extent that decreasing CV of EC fluxes reflects reduction in micrometeorological sampling errors, half of the observed variability at τp=30 min is attributed to sampling errors. The remaining half (indicated by the stable minimum CV) is attributed to underlying variability in ecosystem structural properties, as determined by leaf area index, and perhaps associated ecosystem activity attributes. We further assessed the spatial variability estimates in the context of uncertainty in annual net ecosystem exchange (NEE). First, we adjusted annual NEE values obtained at our long-term observation tower to account for the difference between this tower and the mean of all towers from this experiment; this increased NEE by up to 55 g C m−2 yr−1. Second, we combined uncertainty from gap filling and instrument error with uncertainty due to spatial variability, producing an estimate of variability in annual NEE ranging from 79 to 127 g C m−2 yr−1. This analysis demonstrated that even in such a uniform pine plantation, in some years spatial variability can contribute ~50% of the uncertainty in annual NEE estimates.

17.
Habitat suitability index (HSI) models are commonly used to predict habitat quality and species distributions and are used to develop biological surveys, assess reserve and management priorities, and anticipate possible change under different management or climate change scenarios. Important management decisions may be based on model results, often without a clear understanding of the level of uncertainty associated with model outputs. We present an integrated methodology to assess the propagation of uncertainty from both inputs and structure of the HSI models on model outputs (uncertainty analysis: UA) and the relative importance of uncertain model inputs and their interactions on the model output uncertainty (global sensitivity analysis: GSA). We illustrate the GSA/UA framework using simulated hydrology input data from a hydrodynamic model representing sea level changes and HSI models for two species of submerged aquatic vegetation (SAV) in southwest Everglades National Park: Vallisneria americana (tape grass) and Halodule wrightii (shoal grass). We found considerable spatial variation in uncertainty for both species, but distributions of HSI scores still allowed discrimination of sites with good versus poor conditions. Ranking of input parameter sensitivities also varied spatially for both species, with high habitat quality sites showing higher sensitivity to different parameters than low-quality sites. HSI models may be especially useful when species distribution data are unavailable, providing means of exploiting widely available environmental datasets to model past, current, and future habitat conditions. The GSA/UA approach provides a general method for better understanding HSI model dynamics, the spatial and temporal variation in uncertainties, and the parameters that contribute most to model uncertainty. Including an uncertainty and sensitivity analysis in modeling efforts as part of the decision-making framework will result in better-informed, more robust decisions.
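A sketch of a coupled UA/GSA workflow of the kind described above, assuming the SALib package for Saltelli sampling and Sobol indices; the habitat-suitability response surface, variable names, and input ranges are invented for illustration:

```python
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

# Hypothetical stand-in for an HSI model: suitability as a function of
# depth, salinity, and light attenuation (invented response surface).
problem = {
    "num_vars": 3,
    "names": ["depth", "salinity", "light_att"],
    "bounds": [[0.2, 3.0], [0.0, 40.0], [0.1, 2.0]],
}

def hsi(x):
    depth, sal, katt = x.T
    return np.exp(-((depth - 1.0) ** 2)) * np.exp(-((sal - 20.0) / 15.0) ** 2) / (1.0 + katt)

# Uncertainty analysis: the spread of HSI scores under input uncertainty.
X = saltelli.sample(problem, 1024)
Y = hsi(X)
print(f"HSI: mean {Y.mean():.2f}, "
      f"95% band [{np.percentile(Y, 2.5):.2f}, {np.percentile(Y, 97.5):.2f}]")

# Global sensitivity analysis: first-order (S1) and total (ST) Sobol
# indices rank which uncertain inputs drive the output uncertainty.
Si = sobol.analyze(problem, Y)
for name, s1, st in zip(problem["names"], Si["S1"], Si["ST"]):
    print(f"{name}: S1={s1:.2f}, ST={st:.2f}")
```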

18.
Application of uncertainty and variability in LCA. Total citations: 1 (self: 0, other: 1)
As yet, the application of an uncertainty and variability analysis is not common practice in LCAs. A proper analysis will be facilitated when it is clear which types of uncertainties and variabilities exist in LCAs and which tools are available to deal with them. Therefore, a framework is developed to classify types of uncertainty and variability in LCAs. Uncertainty is divided into (1) parameter uncertainty, (2) model uncertainty, and (3) uncertainty due to choices, while variability covers (4) spatial variability, (5) temporal variability, and (6) variability between objects and sources. A tool to deal with parameter uncertainty and variability between objects and sources, in both the inventory and the impact assessment, is probabilistic simulation. Uncertainty due to choices can be dealt with in a scenario analysis or reduced by standardisation and peer review. The feasibility of dealing with temporal and spatial variability is limited, implying model uncertainty in LCAs. Other model uncertainties can be partly reduced by more sophisticated modelling, such as the use of non-linear inventory models in the inventory and multimedia models in the characterisation phase.

19.
Boron, which is ubiquitous in the environment, causes developmental and reproductive effects in experimental animals. This observation has led to efforts to establish a Tolerable Intake value for boron. Although risk assessors agree on the use of fetal weight decreases observed in rats as an appropriate critical effect, consensus on the adequacy of toxicokinetic data as a basis for replacement of default uncertainty factors remains to be reached. A critical analysis of the existing data on boron toxicokinetics was conducted to clarify the appropriateness of replacing default uncertainty factors (10-fold for interspecies differences and 10-fold for intraspecies differences) with data-derived values. The default uncertainty factor of 10-fold for variability in response from animals to humans (default values of 4-fold for kinetics and 2.5-fold for dynamics) was recommended, since clearance of boron is 3- to 4-fold higher in rats than in humans and data on dynamic differences that would justify modifying the default value are unavailable. A data-derived adjustment of 6-fold (1.8 for kinetics and 3.1 for dynamics), rather than the default uncertainty factor of 10-fold, was considered appropriate for intrahuman variability, based on variability in glomerular filtration rate during pregnancy in humans and the lack of available data on dynamic differences. Additional studies investigating the toxicokinetics of boron in rats would be useful to provide a stronger basis for replacement of default uncertainty factors for interspecies variation.
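The factor arithmetic discussed above decomposes each default 10-fold uncertainty factor into kinetic and dynamic subfactors; written out with the abstract's own numbers:

```latex
% Default interspecies factor: kinetic x dynamic subfactors.
\[
  UF_{\text{inter}} = UF_{TK} \times UF_{TD} = 4 \times 2.5 = 10
\]
% Data-derived intraspecies factor proposed for boron: the kinetic
% subfactor (1.8, from GFR variability in pregnancy) combined with
% the dynamic subfactor (3.1).
\[
  UF_{\text{intra}} = 1.8 \times 3.1 \approx 5.6 \;\rightarrow\; 6
\]
```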

20.
Uncertainty calculation in life cycle assessments. Total citations: 1 (self: 0, other: 1)
Goal and Background: Uncertainty is commonly not taken into account in LCA studies, which downgrades their usability for decision support. One often-stated reason is a lack of method. The aim of this paper is to develop a method for calculating the uncertainty propagation in LCAs in a fast and reliable manner. Approach: The method is developed in a model that reflects the calculation of an LCA. For calculating the uncertainty, the model combines approximation formulas and Monte Carlo simulation. It is based on virtual data that distinguishes true values from random errors or uncertainty, and hence allows one to compare the performance of error propagation formulas against simulation results. The model is developed for a linear chain of processes, but extensions covering branched and looped product systems are also made and described. Results: The paper proposes a combined use of approximation formulas and Monte Carlo simulation for calculating uncertainty in LCAs, developed primarily for the sequential approach. During the calculation, a parameter observation controls the performance of the approximation formulas; quantitative threshold values are given in the paper. The combination thus transcends the drawbacks of simulation and approximation alone. Conclusions and Outlook: The uncertainty question is a true jigsaw puzzle for LCAs, and the method presented in this paper may serve as one piece in solving it, fostering a sound use of uncertainty assessment in LCAs. Neighbouring puzzle pieces inviting further work include proper management of input uncertainty, taking into account suitable sampling and estimation techniques; applying the approach to real case studies; implementing it in LCA software so that the proposed combined uncertainty model is applied automatically; and investigating how people do decide, and should decide, when their decision relies on explicitly uncertain LCA outcomes.
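A sketch contrasting the two ingredients the paper combines, a first-order approximation formula and Monte Carlo simulation, on a toy multiplicative process chain; all numbers are invented:

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy sequential LCA chain: result = x1 * x2 * x3 (e.g. demand,
# process output, emission factor). Means and relative SDs invented.
means = np.array([10.0, 0.8, 2.5])
rel_sd = np.array([0.10, 0.05, 0.20])

# Approximation formula: for a pure product, first-order error
# propagation says relative variances add.
print(f"approximation: relative SD = {np.sqrt(np.sum(rel_sd ** 2)):.3f}")

# Monte Carlo simulation of the same chain.
samples = rng.normal(means, means * rel_sd, size=(100_000, 3)).prod(axis=1)
print(f"Monte Carlo:   relative SD = {samples.std() / samples.mean():.3f}")

# When input uncertainties are small the two agree closely; a check of
# this kind plays the role of the 'parameter observation' the paper
# uses to decide when the fast approximation formulas are safe.
```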
