Similar Documents
 20 similar documents found (search time: 15 ms)
1.
Modelling groundwater depths in floodplains and peatlands remains a basic approach to assessing hydrological conditions of habitats. Groundwater flow models used to compute groundwater heads are known for their uncertainties, and the calibration of these models and the uncertainty assessments of parameters remain fundamental steps in providing reliable data. However, the elevation data used to determine the geometry of model domains are frequently considered deterministic and hence are seldom considered a source of uncertainty in model-based groundwater level estimations. Knowing that even cutting-edge laser-scanning-based digital elevation models have errors due to vegetation effects and scanning procedure failures, we provide an assessment of the uncertainty of water level estimations, which are basic data for wetland ecosystem assessment and management. We found that the uncertainty of the digital elevation model (DEM) significantly influenced the results of the assessment of the habitat’s hydrological conditions expressed as groundwater depths. In extreme cases, although the average habitat suitability index (HSI) assessed in a deterministic manner was defined as ‘unsuitable’, in a probabilistic approach (grid-cell-scale estimation), it reached a 40% probability of ‘optimum’ or ‘tolerant’. For the 24 habitats analysed, we revealed vast differences between HSI scores calculated for individual grid cells of the model and HSI scores computed as average values from the set of grid cells located within the habitat patches. We conclude that groundwater-modelling-based decision support approaches to wetland assessment can result in incorrect management if the quality of the DEM has not been addressed in studies referring to groundwater depths.
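A minimal sketch of the grid-cell-scale, probabilistic reading of groundwater depth described above, assuming a simple Gaussian DEM error; the elevations, heads, error magnitude and suitability thresholds are invented for illustration and are not taken from the study:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical inputs for one habitat patch (metres a.s.l.)
dem = np.array([12.40, 12.55, 12.31, 12.62])    # DEM elevation per grid cell
head = np.array([12.05, 12.10, 12.02, 12.08])   # modelled groundwater head per cell
dem_sigma = 0.15                                # assumed DEM error (1 sd, m)

def classify(depth):
    """Illustrative suitability classes based on groundwater depth (m below surface)."""
    if depth < 0.25:
        return "optimum"
    elif depth < 0.50:
        return "tolerant"
    return "unsuitable"

# Deterministic assessment: one average depth for the whole patch
det_depth = (dem - head).mean()
print("deterministic class:", classify(det_depth))

# Probabilistic assessment: perturb the DEM and record per-cell class frequencies
n_draws = 10_000
classes = {"optimum": 0, "tolerant": 0, "unsuitable": 0}
for _ in range(n_draws):
    dem_draw = dem + rng.normal(0.0, dem_sigma, size=dem.shape)
    for depth in dem_draw - head:
        classes[classify(depth)] += 1

total = n_draws * dem.size
for name, count in classes.items():
    print(f"P({name}) = {count / total:.2f}")
```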

2.
Monitoring annual change and long-term trends in population structure and abundance of white-tailed deer (Odocoileus virginianus) is an important but challenging component of their management. Many monitoring programs consist of count-based indices of relative abundance along with a variety of population structure information. Analyzed separately these data can be difficult to interpret because of observation error in the data collection process, missing data, and the lack of an explicit biological model to connect the data streams while accounting for their relative imprecision. We used a Bayesian age-structured integrated population model to integrate data from a fall spotlight survey that produced a count-based index of relative abundance and a volunteer staff and citizen classification survey that generated a fall recruitment index. Both surveys took place from 2003–2018 in the parkland ecoregion of southeast Saskatchewan, Canada. Our approach modeled demographic processes for age-specific (0.5-, 1.5-, ≥2.5-year-old classes) populations and was fit to count and recruitment data via models that allowed for error in the respective observation processes. The Bayesian framework accommodated missing data and allowed aggregation of transects to act as samples from the larger management unit population. The approach provides managers with continuous time series of estimated relative abundance, recruitment rates, and apparent survival rates with full propagation of uncertainty and sharing of information among transects. We used this model to demonstrate winter severity effects on recruitment rates via an interaction between winter snow depth and minimum temperatures. In years with colder than average temperatures and above average snow depth, recruitment was depressed, whereas the negative effect of snow depth reversed in years with above average temperatures. This and other covariate information can be incorporated into the model to test relationships and provide predictions of future population change prior to setting of hunting seasons. Likewise, post hoc analysis of model output allows other hypothesis tests, such as determining the statistical support for whether population status has crossed a management trigger threshold. © 2020 The Wildlife Society.  相似文献   
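As a rough illustration of the process such an integrated population model is fit to, the sketch below simulates an age-structured (0.5, 1.5, ≥2.5 yr) population with a winter-severity effect on recruitment and then observes it through a noisy spotlight count and a fall recruitment index. All parameter values are hypothetical, and the actual analysis fits this kind of structure in a Bayesian framework rather than simulating it forward:

```python
import numpy as np

rng = np.random.default_rng(1)

years = 16                                  # e.g. 2003-2018
n = np.array([400.0, 250.0, 600.0])         # fawns (0.5), yearlings (1.5), adults (>=2.5)
surv = np.array([0.55, 0.70, 0.80])         # hypothetical apparent survival by age class
counts, recruitment_index = [], []

for t in range(years):
    # hypothetical winter-severity effect on recruitment (fawns per adult female)
    winter = rng.normal(0.0, 1.0)                   # standardised severity covariate
    fecundity = np.exp(np.log(0.9) - 0.3 * winter)

    # process model: survival, then recruitment into the fawn class
    survivors = rng.binomial(n.astype(int), surv)
    adults = survivors[1] + survivors[2]
    fawns = rng.poisson(fecundity * adults / 2.0)   # assume roughly half of adults are female
    n = np.array([fawns, survivors[0], adults], dtype=float)

    # observation models: spotlight count index and fall age-classification survey
    counts.append(rng.poisson(0.3 * n.sum()))            # detection rate assumed ~30 %
    recruitment_index.append(fawns / max(adults, 1.0))   # fawns per adult observed

print("spotlight counts  :", counts[:5], "...")
print("recruitment index :", [round(r, 2) for r in recruitment_index[:5]], "...")
```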

3.
Fisheries assessment scientists can learn at least three lessons from the collapse of the northern cod off Newfoundland: (1) assessment errors can contribute to overfishing through optimistic long-term forecasts leading to the build-up of overcapacity, or through optimistic assessments that lead to TACs being set higher than they should be; (2) stock size overestimation is a major risk when commercial catch per effort is used as an abundance trend index, so there is a continued need to invest in survey indices of abundance trend no matter what assessment methodology is used; and (3) the risk of recruitment overfishing exists and may be high even for very fecund species like cod. This implies that harvest rate targets should be lower than has often been assumed, especially when stock size assessments are uncertain. In the end, the high cost of information for accurate stock assessment may call for an alternative approach to management, involving regulation of exploitation rate via measures such as large-scale closures (refuges) that directly restrict the proportion of fish available to harvest. Development of predictive models for such regulatory options is a major challenge for fisheries assessment science.

4.
Abundance indices are widely used to study changes in population size in wildlife management. However, a truly appropriate measure of precision is often lacking in such studies. Statistically, the two crucial issues regarding the use of an abundance index are sampling and observability, which lead one to consider two kinds of errors, namely sampling and observation errors. The purpose of this methodological paper is to relate the number of counts to the precision of an abundance index by introducing the Hansen–Hurwitz–Bershad model which takes into account both sampling and observation errors. We illustrate this statistical approach in the case of a European rabbit (Oryctolagus cuniculus) abundance index based on spotlight counts, for two fixed spatial sampling units located in different ecological contexts. We show (i) that the usual sampling variance estimator is a downward-biased estimator of the total variance of the abundance index, (ii) that the bias of the usual variance estimator does not decrease when increasing the sampling size, (iii) that correlated observation errors may have a dramatic impact on the total variance, especially when the sampling size increases. The acknowledgement that the (pure) sampling variance underestimates the total variance because of observation errors is a statistical result that is neither widely known nor appreciated by most wildlife ecologists. The magnitude of this underestimation may be important and, therefore, observation errors cannot be always considered as a priori negligible in assessing the precision of a count-based abundance index.  相似文献   
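A small simulation in the spirit of the sampling-versus-observation-error decomposition discussed above, with hypothetical counts and a shared (correlated) observation error; it shows why the usual sampling-variance estimator understates the total variance of the index mean:

```python
import numpy as np

rng = np.random.default_rng(0)

true_counts = rng.poisson(30, size=2000)   # hypothetical nightly "true" rabbit numbers
n_counts = 20                              # counts actually performed per session
n_sim = 5000

means, naive_vars = [], []
for _ in range(n_sim):
    sample = rng.choice(true_counts, size=n_counts, replace=False)
    shared = rng.normal(0.0, 0.15)                 # correlated observation error (observer, conditions)
    indiv = rng.normal(0.0, 0.10, size=n_counts)   # independent observation error
    observed = sample * (1.0 + shared + indiv)
    means.append(observed.mean())
    naive_vars.append(observed.var(ddof=1) / n_counts)   # usual sampling-variance estimator

print(f"true variance of the index mean : {np.var(means):.2f}")
print(f"mean 'sampling-only' estimate   : {np.mean(naive_vars):.2f}")
# The naive estimator cannot see the between-session correlated error, so it is biased
# low, and the bias does not vanish as n_counts grows.
```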

5.
Abstract Objective: To explore the diagnostic value of transvaginal ultrasound combined with serum alpha-fetoprotein (AFP), plasminogen activator inhibitor-1 (PAI-1) and macrophage migration inhibitory factor (MMIF) for endometriosis. Methods: A retrospective analysis was performed on 150 patients with endometriosis admitted to our hospital from August 2021 to August 2023. The imaging features of transvaginal ultrasound were analysed, and with pathological diagnosis as the gold standard, the positive detection rate of transvaginal ultrasound for endometriosis was assessed. According to endometriosis stage, patients were divided into a stage I-II group (n=77) and a stage III-IV group (n=73); 150 healthy women undergoing physical examination at our hospital during the same period served as the control group. Serum AFP, PAI-1 and MMIF levels were compared among the three groups, and Spearman correlation analysis was used to examine the relationships of AFP, PAI-1 and MMIF with endometriosis. Finally, receiver operating characteristic (ROC) curves were constructed to evaluate the diagnostic performance of transvaginal ultrasound combined with serum AFP, PAI-1 and MMIF for endometriosis. Results: All 150 patients were confirmed by pathological diagnosis; 128 (85.33%) were diagnosed with endometriosis by transvaginal ultrasound. Among them, 75 patients had the ovarian type, in which ultrasound showed large chocolate cysts with abundant fine punctate internal echoes and separating light bands; 53 patients had the uterine type, in which ultrasound showed posterior-wall adenomyomas with heterogeneous internal echoes and patchy anechoic areas. Serum AFP, PAI-1 and MMIF levels differed significantly among the three groups, with the stage III-IV group significantly higher than the stage I-II group and the control group (P<0.05). Spearman correlation analysis showed that AFP, PAI-1 and MMIF were positively correlated with endometriosis (P<0.05). Diagnostic sensitivity and specificity, from lowest to highest, were MMIF (52.58%, 64.32%), PAI-1 (60.03%, 67.53%), AFP (65.24%, 71.27%), transvaginal ultrasound (73.25%, 86.36%), and transvaginal ultrasound combined with serum AFP, PAI-1 and MMIF (84.26%, 98.63%). The sensitivity of the combined approach was significantly higher than that of any single indicator (P<0.05). Conclusion: Transvaginal ultrasound combined with serum AFP, PAI-1 and MMIF has high diagnostic value for endometriosis, with a sensitivity of 84.26% and a specificity of 98.63%. Combined diagnosis can further help reduce misdiagnosis and missed diagnosis of endometriosis and provides an important reference for its diagnosis and treatment.
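As a generic illustration of the ROC comparison reported above (not a reproduction of the study's data), the sketch below combines several synthetic markers and an ultrasound finding in a logistic model and compares AUC, sensitivity and specificity using scikit-learn:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(7)
n = 300
y = rng.integers(0, 2, size=n)                    # 1 = case, 0 = control (synthetic labels)

# Synthetic markers: cases shifted upwards, plus a binary ultrasound finding
afp  = rng.normal(5 + 2 * y, 2)
pai1 = rng.normal(20 + 6 * y, 5)
mmif = rng.normal(30 + 5 * y, 8)
tvus = rng.binomial(1, 0.3 + 0.5 * y)             # transvaginal ultrasound positive

X = np.column_stack([afp, pai1, mmif, tvus])
combined = LogisticRegression(max_iter=1000).fit(X, y).predict_proba(X)[:, 1]  # in-sample

for name, score in [("AFP", afp), ("PAI-1", pai1), ("MMIF", mmif),
                    ("TVUS", tvus), ("combined", combined)]:
    fpr, tpr, _ = roc_curve(y, score)
    youden = np.argmax(tpr - fpr)                 # cut-off maximising Youden's J
    print(f"{name:9s} AUC={roc_auc_score(y, score):.2f}  "
          f"sensitivity={tpr[youden]:.2f}  specificity={1 - fpr[youden]:.2f}")
```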

6.
Material flow analysis (MFA) is widely used to investigate flows and stocks of resources or pollutants in a defined system. Data availability to quantify material flows on a national or global level is often limited owing to data scarcity or lacking data. MFA input data are therefore considered inherently uncertain. In this work, an approach to characterize the uncertainty of MFA input data is presented and applied to a case study on plastics flows in major Austrian consumption sectors in the year 2010. The developed approach consists of data quality assessment as a basis for estimating the uncertainty of input data. Four different implementations of the approach with respect to the translation of indicator scores to uncertainty ranges (linear‐ vs. exponential‐type functions) and underlying probability distributions (normal vs. log‐normal) are examined. The case study results indicate that the way of deriving uncertainty estimates for material flows has a stronger effect on the uncertainty ranges of the resulting plastics flows than the assumptions about the underlying probability distributions. Because these uncertainty estimates originate from data quality evaluation as well as uncertainty characterization, it is crucial to use a well‐defined approach, building on several steps to ensure the consistent translation of the data quality underlying material flow calculations into their associated uncertainties. Although subjectivity is inherent in uncertainty assessment in MFA, the proposed approach is consistent and provides a comprehensive documentation of the choices underlying the uncertainty analysis, which is essential to interpret the results and use MFA as a decision support tool.  相似文献   
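A condensed sketch of the general idea, assuming a hypothetical 1-4 data-quality score per flow that is translated into a coefficient of variation either linearly or exponentially and then sampled under a normal or log-normal assumption; the score-to-CV mapping and flow values are invented:

```python
import numpy as np

rng = np.random.default_rng(3)

def cv_linear(score):       # e.g. score 1 -> 5 %, score 4 -> 35 %
    return 0.05 + 0.10 * (score - 1)

def cv_exponential(score):  # e.g. score 1 -> 5 %, score 4 -> 40 %
    return 0.05 * 2.0 ** (score - 1)

def sample_flow(mean, cv, dist, n=100_000):
    if dist == "normal":
        return rng.normal(mean, cv * mean, n)
    sigma = np.sqrt(np.log(1 + cv ** 2))          # log-normal with the same mean and CV
    mu = np.log(mean) - 0.5 * sigma ** 2
    return rng.lognormal(mu, sigma, n)

# Two hypothetical plastics flows (kt/yr) with their data-quality scores
flows = {"packaging": (300.0, 2), "construction": (180.0, 4)}

for translate in (cv_linear, cv_exponential):
    for dist in ("normal", "lognormal"):
        total = sum(sample_flow(m, translate(s), dist) for m, s in flows.values())
        lo, hi = np.percentile(total, [2.5, 97.5])
        print(f"{translate.__name__:15s} {dist:9s} "
              f"total = {total.mean():.0f} kt (95 % range {lo:.0f}-{hi:.0f})")
```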

7.
The virtual ecologist approach: simulating data and observers   (cited by 3: 0 self-citations, 3 citations by others)
Ecologists carry a well‐stocked toolbox with a great variety of sampling methods, statistical analyses and modelling tools, and new methods are constantly appearing. Evaluation and optimisation of these methods is crucial to guide methodological choices. Simulating error‐free data or taking high‐quality data to qualify methods is common practice. Here, we emphasise the methodology of the ‘virtual ecologist’ (VE) approach where simulated data and observer models are used to mimic real species and how they are ‘virtually’ observed. This virtual data is then subjected to statistical analyses and modelling, and the results are evaluated against the ‘true’ simulated data. The VE approach is an intuitive and powerful evaluation framework that allows a quality assessment of sampling protocols, analyses and modelling tools. It works under controlled conditions as well as under consideration of confounding factors such as animal movement and biased observer behaviour. In this review, we promote the approach as a rigorous research tool, and demonstrate its capabilities and practical relevance. We explore past uses of VE in different ecological research fields, where it mainly has been used to test and improve sampling regimes as well as for testing and comparing models, for example species distribution models. We discuss its benefits as well as potential limitations, and provide some practical considerations for designing VE studies. Finally, research fields are identified for which the approach could be useful in the future. We conclude that VE could foster the integration of theoretical and empirical work and stimulate work that goes far beyond sampling methods, leading to new questions, theories, and better mechanistic understanding of ecological systems.  相似文献   
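A bare-bones version of the VE loop under invented settings: simulate a 'true' occupancy pattern, observe it through an imperfect virtual observer, run a naive analysis on the virtual data, and score the estimate against the known truth:

```python
import numpy as np

rng = np.random.default_rng(11)

# 1. Virtual ecological truth: occupancy driven by one environmental gradient
n_sites = 500
env = rng.uniform(-2, 2, n_sites)
p_occ = 1 / (1 + np.exp(-(0.5 + 1.5 * env)))    # true occupancy probability
z = rng.binomial(1, p_occ)                      # true presence/absence

# 2. Virtual observer: imperfect detection over 3 visits (hypothetical p_detect)
p_detect = 0.4
detected = z * rng.binomial(1, 1 - (1 - p_detect) ** 3, size=n_sites)

# 3. Naive analysis of the virtual data: treat non-detection as absence
naive_occupancy = detected.mean()

# 4. Evaluate the analysis against the known truth
print(f"true occupancy     : {z.mean():.2f}")
print(f"naive estimate     : {naive_occupancy:.2f}")
print(f"bias from observer : {naive_occupancy - z.mean():+.2f}")
```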

8.
The study experience of ecologists plays an important role in assessing the contribution of different influencing factors to ecological vulnerability, helping policy makers to target measures for ecological restoration. However, uncertainty is unavoidable due to variation in study experience among experts. In this study, a new method that combines a Delphi survey, a geographic information system and Monte Carlo simulation was proposed to assess regional ecological vulnerability and to quantify the uncertainty of the assessment result. We illustrated the capacity of this method by using a case study in northeastern Inner Mongolia, China. An index system for 13 spatial variables was established to calculate an ecological vulnerability index (EVI) from the three aspects of ecological sensitivity (ES), ecological resilience (ER) and natural-social pressure (NSP). The assessment shows that the southwestern region of the study area, especially in the counties of Sonid Left and Right, was seriously threatened by a high ES and a low ER. Onguiud county in the Greater Hinggan Mountains had a high EVI due to an intensive NSP. Based on the assessment result and the regional road distribution, an EVI cost curve was created to facilitate the prioritization of allocating limited funds among the various counties for roadside ecological restoration.
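A compact sketch of how expert-elicited weights can be combined with Monte Carlo simulation to attach uncertainty to a composite vulnerability index; the indicator values and weight distributions below are invented and do not come from the Delphi survey itself:

```python
import numpy as np

rng = np.random.default_rng(5)

# Normalised indicator scores (0-1) for three hypothetical counties
indicators = {              #  ES    ER    NSP
    "county_A": np.array([0.80, 0.20, 0.40]),
    "county_B": np.array([0.35, 0.60, 0.75]),
    "county_C": np.array([0.50, 0.50, 0.50]),
}

# Delphi survey summarised as a mean and sd for each expert weight
weight_mean = np.array([0.45, 0.30, 0.25])
weight_sd   = np.array([0.10, 0.08, 0.08])

n_draws = 20_000
w = rng.normal(weight_mean, weight_sd, size=(n_draws, 3)).clip(min=0.01)
w /= w.sum(axis=1, keepdims=True)        # renormalise each draw to sum to 1

for name, x in indicators.items():
    # resilience lowers vulnerability, so it enters the index as (1 - ER)
    evi = w @ np.array([x[0], 1 - x[1], x[2]])
    lo, hi = np.percentile(evi, [5, 95])
    print(f"{name}: EVI = {evi.mean():.2f} (90 % interval {lo:.2f}-{hi:.2f})")
```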

9.
Risk assessment of metapopulation management: a case study of chub mackerel (Scomber japonicus)   (cited by 3: 0 self-citations, 3 citations by others)
Guan Wenjiang, Gao Feng, Li Gang, Chen Xinjun. Acta Ecologica Sinica (《生态学报》), 2014, 34(13): 3682-3692
A single, unit stock is the basic assumption of current fisheries stock assessment, but fishery resources often consist of multiple local or spawning populations with exchange among them, forming a metapopulation. Based on the metapopulation concept, chub mackerel (Scomber japonicus) in the East China Sea and Yellow Sea was taken as an example, and 12 scenarios of its population dynamics were simulated. Using the simulated data and a surplus production model, 10 assessment-and-management schemes formulated under the metapopulation, two-independent-population and single-population assumptions were analysed. The results show that: (1) the scheme based on the metapopulation assumption was consistent with the simulated population dynamics; when the observation error of catch per unit effort (CPUE) was small, it was the best scheme and could achieve the maximum sustainable yield, but as the CPUE observation error increased, the population extinction rate under this scheme increased and management performance deteriorated. (2) Schemes based on the assumption of two independent populations all led to overexploitation of the resource and were unfavourable to its sustainable use. (3) Under the single-population assumption, schemes that used different CPUE series as the abundance index and different catch-allocation methods showed both overfishing and under-exploitation, and their management performance varied with population parameters and spatial exchange rates. If the CPUE used reflected the dynamics of only part of the population, the scheme led to 100% population extinction in at least one simulated scenario; if the CPUE reflected the dynamics of the whole population and the catch was allocated according to the spatial structure of the population, management performance was similar to (1) but the maximum sustainable yield could not be obtained; if the spatial structure was ignored and the catch was allocated evenly, 100% population extinction again occurred in at least one simulated scenario. Accordingly, for metapopulation management we recommend: (A) if population-level data collection and data precision can be guaranteed, assessment and management of the resource should be based on the metapopulation assumption; (B) if collecting population-specific data is currently difficult and the CPUE data contain large errors, the single-population assumption may be adopted, but more conservative catches must be set and a total allowable catch (TAC) scheme based on the spatial structure of the population should be used; (C) when formulating fishery management policy, uncertainties in population ecology, data, model assumptions and parameter-estimation methods should be combined in a systematic management strategy evaluation of the harvest control rules to avoid risk.
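A stripped-down sketch of the kind of simulation described: two sub-populations with exchange are projected forward with Schaefer dynamics, while a single-stock manager sets catches from a CPUE index that tracks only one sub-population. All parameter values are invented, and the study's operating models and estimation procedure were far richer; varying which CPUE is used and how the catch is allocated reproduces the contrasts discussed above:

```python
import numpy as np

rng = np.random.default_rng(8)

r, K = 0.4, np.array([600.0, 400.0])    # hypothetical growth rate and sub-stock capacities (kt)
move = 0.1                              # annual exchange rate between sub-populations
q = 0.001                               # catchability linking CPUE to biomass
B = K * 0.8                             # starting biomass
harvest_rate = 0.5 * r                  # target rate applied to the *perceived* stock

for year in range(1, 31):
    # the single-stock manager only sees a CPUE index from sub-population 1
    cpue = q * B[0] * np.exp(rng.normal(0, 0.2))      # observation error
    perceived_biomass = cpue / q                      # naive inversion of the index
    tac = harvest_rate * perceived_biomass

    # catch is taken mostly from sub-population 1, where the fishery operates
    catch = np.array([0.8, 0.2]) * min(tac, B.sum())
    catch = np.minimum(catch, B)

    # Schaefer surplus production plus symmetric exchange between sub-populations
    B = B + r * B * (1 - B / K) - catch
    B = B + move * (B[::-1] - B)
    B = np.maximum(B, 0.0)

print(f"final biomass by sub-population: {B.round(1)} (unfished: {K})")
```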

10.
The European Water Framework Directive has required chemical and biological assessments in waterbodies. While studies of water chemistry uncertainties have existed for a long time, few such studies have been carried out in hydrobiology. Our aim was to study the effect of uncertainties, defined here as any action that may cause a data error, on the French index "Indice Biologique des Macrophytes de Rivières" (IBMR), which is based on the macrophyte compartment. The IBMR gives the trophic status of the river. The selected uncertainties were based on the surveyor effect both in situ and in the laboratory, such as taxa omission, species identification error and cover class error. We proposed an innovative approach close to sensitivity analysis using controlled virtual changes in taxa identification and cover classes based on two confusion matrices. The creation of new experimental floristic lists and the calculation of metrics according to random specified errors allowed us to measure the effect of these errors on the IBMR and the trophic status. The taxa identification errors and combined errors (taxa identification and cover class) always had a stronger impact than cover class errors. To limit their impact, surveyor training, cross-comparison between surveyors and a quality control approach could be applied.

11.
Species distribution models (SDMs) have become one of the major predictive tools in ecology. However, multiple methodological choices are required during the modelling process, some of which may have a large impact on forecasting results. In this context, virtual species, i.e. the use of simulations involving a fictitious species for which we have perfect knowledge of its occurrence–environment relationships and other relevant characteristics, have become increasingly popular to test SDMs. This approach provides for a simple virtual ecologist framework under which to test model properties, as well as the effects of the different methodological choices, and allows teasing out the effects of targeted factors with great certainty. This simplification is therefore very useful in setting up modelling standards and best practice principles. As a result, numerous virtual species studies have been published over the last decade. The topics covered include differences in performance between statistical models, effects of sample size, choice of threshold values, methods to generate pseudo‐absences for presence‐only data, among many others. These simulations have therefore already made a great contribution to setting best modelling practices in SDMs. Recent software developments have greatly facilitated the simulation of virtual species, with at least three different packages published to that effect. However, the simulation procedure has not been homogeneous, which introduces some subtleties in the interpretation of results, as well as differences across simulation packages. Here we 1) review the main contributions of the virtual species approach in the SDM literature; 2) compare the major virtual species simulation approaches and software packages; and 3) propose a set of recommendations for best simulation practices in future virtual species studies in the context of SDMs.  相似文献   
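The core of a virtual species simulation can be written in a few lines: define a known response to environmental gradients, convert it to an occurrence probability, and sample presences that an SDM is then asked to recover. The gradients and coefficients below are arbitrary; dedicated tools such as the R package virtualspecies offer far more control:

```python
import numpy as np

rng = np.random.default_rng(2024)

# Two synthetic environmental rasters (e.g. temperature and precipitation), 100 x 100 cells
x, y = np.meshgrid(np.linspace(0, 1, 100), np.linspace(0, 1, 100))
temperature = 10 + 15 * x + rng.normal(0, 0.5, x.shape)
precipitation = 2000 * y + rng.normal(0, 50, y.shape)

# Known (true) species-environment relationship on the logit scale
logit = -2 + 0.3 * (temperature - 18) - 0.002 * (precipitation - 1000)
p_presence = 1 / (1 + np.exp(-logit))

# True distribution and a virtual sampling campaign of 300 random cells
presence = rng.binomial(1, p_presence)
rows = rng.integers(0, 100, 300)
cols = rng.integers(0, 100, 300)
sample = {
    "temperature": temperature[rows, cols],
    "precipitation": precipitation[rows, cols],
    "observed": presence[rows, cols],
}
print("prevalence in the landscape:", p_presence.mean().round(2))
print("prevalence in the sample   :", sample["observed"].mean().round(2))
# An SDM fitted to `sample` can now be scored against the exact truth in `p_presence`.
```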

12.
Estimates of landscape connectivity are routinely used to inform decision-making by conservation biologists. Most estimates of connectivity rely on cost-surfaces: raster representations of landscapes in which cost values represent the difficulty involved in traversing an area. However, there are considerable uncertainties in the generation of cost-surfaces that have not been widely explored. We investigated the effects of four potential sources of uncertainty in the creation of cost-surfaces: 1) the number of landscape classes represented; 2) spatial resolution (grain size); 3) misclassification of edges between landscape classes; and 4) the cost values selected for each landscape class. Following a factorial design, we simulated multiple cost-surface pairs, each comprising one true surface with no errors and one surface with uncertainty introduced by some combination of the four error sources. We evaluated the relative importance of each source of uncertainty in determining the difference between the least-cost path (LCP) costs and resistance distances generated for the true and erroneous cost-surfaces, using four model evaluation metrics. Errors in the underlying geospatial layers produced larger inaccuracies in connectivity estimates than those produced by cost-value errors. Incorrect grain size had the largest overall effect on the accuracy of connectivity estimates, while both the removal and the addition of an element class had a large effect on the configuration of connectivity estimates. Our results highlight the importance of minimising and quantifying the uncertainty inherent in the geospatial data used to develop cost-surfaces.
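A minimal sketch of how a cost-surface feeds a least-cost-path estimate, using a toy raster and networkx; comparing the accumulated cost on a 'true' surface with that on a perturbed one mirrors the paired-surface design, although the grid, cost values and injected error here are arbitrary:

```python
import networkx as nx
import numpy as np

rng = np.random.default_rng(6)

def lcp_cost(cost_surface, source, target):
    """Accumulated cost of the least-cost path over a raster (4-neighbour moves)."""
    n_rows, n_cols = cost_surface.shape
    g = nx.grid_2d_graph(n_rows, n_cols)
    for u, v in g.edges:
        g[u][v]["weight"] = (cost_surface[u] + cost_surface[v]) / 2.0
    return nx.shortest_path_length(g, source, target, weight="weight")

# "True" cost surface: cheap matrix (cost 1) crossed by an expensive barrier (cost 20)
true_cost = np.ones((40, 40))
true_cost[:, 18:22] = 20.0

# Erroneous surface: some barrier cells misclassified as cheap matrix
error_cost = true_cost.copy()
noise = rng.random((40, 4)) < 0.3
error_cost[:, 18:22][noise] = 1.0

src, dst = (5, 2), (35, 37)
print("LCP cost, true surface :", round(lcp_cost(true_cost, src, dst), 1))
print("LCP cost, error surface:", round(lcp_cost(error_cost, src, dst), 1))
```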

13.
Uncertainty calculation in life cycle assessments   (cited by 1: 0 self-citations, 1 citation by others)
Goal and Background  Uncertainty is commonly not taken into account in LCA studies, which downgrades their usability for decision support. One often-stated reason is a lack of method. The aim of this paper is to develop a method for calculating the uncertainty propagation in LCAs in a fast and reliable manner. Approach  The method is developed in a model that reflects the calculation of an LCA. For calculating the uncertainty, the model combines approximation formulas and Monte Carlo simulation. It is based on virtual data that distinguishes between true values and random errors or uncertainty, and hence allows one to compare the performance of error propagation formulas and simulation results. The model is developed for a linear chain of processes, but extensions covering branched and looped product systems are also made and described. Results  The paper proposes a combined use of approximation formulas and Monte Carlo simulation for calculating uncertainty in LCAs, developed primarily for the sequential approach. During the calculation, a parameter observation controls the performance of the approximation formulas. Quantitative threshold values are given in the paper. The combination thus overcomes drawbacks of both simulation and approximation. Conclusions and Outlook  The uncertainty question is a true jigsaw puzzle for LCAs, and the method presented in this paper may serve as one piece in solving it. It may thus foster a sound use of uncertainty assessment in LCAs. Analysing the proper management of input uncertainty, taking into account suitable sampling and estimation techniques; applying the approach to real case studies; implementing it in LCA software so that the proposed combined uncertainty model is applied automatically; and, on the other hand, investigating how people do decide, and should decide, when their decision relies on explicitly uncertain LCA outcomes: these are all neighbouring puzzle pieces inviting further work.
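A toy comparison of the two ingredients the paper combines, first-order (approximation-formula) error propagation versus Monte Carlo simulation, for a simple multiplicative chain of process factors; the factor values and CVs are invented:

```python
import numpy as np

rng = np.random.default_rng(9)

# A linear chain: result = f1 * f2 * f3 (e.g. kg emission per functional unit)
means = np.array([2.0, 0.5, 10.0])
cvs   = np.array([0.10, 0.30, 0.05])      # relative standard deviations

# Approximation formula for independent multiplicative factors:
# (CV_total)^2 ~= sum_i (CV_i)^2   (first-order Gaussian propagation)
approx_mean = means.prod()
approx_cv = np.sqrt((cvs ** 2).sum())
print(f"approximation : mean = {approx_mean:.2f}, CV = {approx_cv:.3f}")

# Monte Carlo simulation with log-normal factors of the same means and CVs
n = 200_000
sigma = np.sqrt(np.log(1 + cvs ** 2))
mu = np.log(means) - 0.5 * sigma ** 2
draws = rng.lognormal(mu, sigma, size=(n, 3)).prod(axis=1)
print(f"Monte Carlo   : mean = {draws.mean():.2f}, CV = {draws.std() / draws.mean():.3f}")

# Agreement is close at small CVs; the simulation additionally yields full percentiles.
print("95 % interval :", np.percentile(draws, [2.5, 97.5]).round(2))
```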

14.
Abstract Objective: To explore the relationships of the endometriosis fertility index (EFI) score, serum macrophage migration inhibitory factor (MMIF) and chitinase-like protein 40 (YKL-40) with ovarian function and clinical stage in patients with endometriosis. Methods: Eighty patients with endometriosis admitted to our hospital from June 2020 to June 2023 formed the observation group, and 80 healthy female volunteers undergoing physical examination at our hospital during the same period served as the control group. The EFI score, serum MMIF, YKL-40 and ovarian function indicators were compared between the two groups. The 80 patients were staged according to the revised American Fertility Society (r-AFS) classification of endometriosis: 15 cases in stage I, 23 in stage II, 25 in stage III and 17 in stage IV. The EFI score and serum MMIF and YKL-40 levels were compared among patients of different r-AFS stages, and Spearman correlation analysis was used to examine the correlations of the EFI score, serum MMIF and YKL-40 with the ovarian function indicators. Results: The EFI score of the observation group was lower than that of the control group, while serum MMIF and YKL-40 levels were higher (P<0.05). Anti-Müllerian hormone (AMH), luteinizing hormone (LH) and follicle-stimulating hormone (FSH) levels in the observation group were lower than in the control group, while estradiol (E2) was higher (P<0.05). Across stages, the EFI score decreased in the order stage I, II, III, IV, whereas MMIF and YKL-40 decreased in the order stage IV, III, II, I, with significant differences among stages in the EFI score and serum MMIF and YKL-40 levels (P<0.05). Spearman analysis showed that the EFI score was positively correlated with AMH, LH and FSH and negatively correlated with E2 (P<0.05), whereas MMIF and YKL-40 were negatively correlated with AMH, LH and FSH and positively correlated with E2 (P<0.05). Conclusion: The EFI score and serum MMIF and YKL-40 levels are clearly correlated with ovarian function in patients with endometriosis and differ significantly among patients at different stages. The EFI score and serum MMIF and YKL-40 levels may therefore be used clinically to help evaluate ovarian function and disease severity in patients with endometriosis.

15.
In ecology, multi-scale analyses are commonly performed to identify the scale at which a species interacts with its environment (intrinsic scale). This is typically carried out using multi-scale species–environment models that compare the relationship between ecological attributes (e.g., species diversity) measured with point data to environmental data (e.g. vegetation cover) for the surrounding area within buffers of multiple sizes. The intrinsic scale is identified as the buffer size at which the highest correlation between environmental and ecological variables occurs. We present the first investigation of how the spatial resolution of remote sensing environmental data can influence the identification of the intrinsic scale using multi-scale species–environment models. Using the virtual ecologist approach we tested this influence using vegetation cover spatial data and a simulated species–environment relationship derived from the same spatial data. By using a simulation model there was a known truth to use as a benchmark to measure accuracy. Our findings indicate that by varying the spatial resolution of the environmental data, the intrinsic scale may be incorrectly identified. In some cases, the errors in the intrinsic scale identified were close to the maximum value possible that could be measured by this experiment. Consequently, multi-scale ecological analyses may not be suitable for distinguishing scale patterns caused by the relationship between an organism and its environment from scale patterns caused by the effect of changing spatial resolution: a phenomenon referred to as the modifiable areal unit problem (MAUP). Thus, observed scale-dependent ecological patterns may be an artefact of the observation of ecological data, not the ecological phenomenon. This study concludes with some suggestions for future work to quantify the effect of the MAUP on multi-scale studies and develop generalisations that can be used to assess when multi-scale analyses have the potential to produce spurious results.  相似文献   
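A schematic version of the multi-scale analysis discussed above: a point-based response is correlated with the mean of an environmental raster within buffers of increasing radius, and the radius giving the strongest correlation is read off as the intrinsic scale. The landscape, the true scale of effect and the noise level are all fabricated:

```python
import numpy as np

rng = np.random.default_rng(13)
size, true_radius = 200, 15

cover = rng.random((size, size))                 # synthetic vegetation-cover raster

def buffer_mean(raster, row, col, radius):
    """Mean raster value within a circular buffer around a point."""
    rr, cc = np.ogrid[:raster.shape[0], :raster.shape[1]]
    mask = (rr - row) ** 2 + (cc - col) ** 2 <= radius ** 2
    return raster[mask].mean()

# Sample points and a response generated at the (known) true scale
points = [(rng.integers(30, size - 30), rng.integers(30, size - 30)) for _ in range(150)]
response = np.array([buffer_mean(cover, r, c, true_radius) for r, c in points])
response += rng.normal(0, 0.01, response.size)   # observation noise

# Multi-scale model: correlate the response with cover summarised at several radii
for radius in (5, 10, 15, 20, 30, 50):
    env = np.array([buffer_mean(cover, r, c, radius) for r, c in points])
    r_val = np.corrcoef(env, response)[0, 1]
    print(f"radius {radius:>2} cells: r = {r_val:.2f}")
# The radius with the highest r is taken as the intrinsic scale; repeating the exercise
# after coarsening `cover` to different resolutions reproduces the paper's experiment.
```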

16.
Abstract: Obtaining reliable results from life-cycle assessment studies is often quite difficult because life-cycle inventory (LCI) data are usually erroneous, incomplete, and even physically meaningless. The real data must satisfy the laws of thermodynamics, so the quality of LCI data may be enhanced by adjusting them to satisfy these laws. This is not a new idea, but a formal thermodynamically sound and statistically rigorous approach for accomplishing this task is not yet available. This article proposes such an approach based on methods for data rectification developed in process systems engineering. This approach exploits redundancy in the available data and models and solves a constrained optimization problem to remove random errors and estimate some missing values. The quality of the results and presence of gross errors are determined by statistical tests on the constraints and measurements. The accuracy of the rectified data is strongly dependent on the accuracy and completeness of the available models, which should capture information such as the life-cycle network, stream compositions, and reactions. Such models are often not provided in LCI databases, so the proposed approach tackles many new challenges that are not encountered in process data rectification. An iterative approach is developed that relies on increasingly detailed information about the life-cycle processes from the user. A comprehensive application of the method to the chlor-alkali inventory being compiled by the National Renewable Energy Laboratory demonstrates the benefits and challenges of this approach.  相似文献   
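A minimal data-rectification sketch under an assumed linear mass-balance constraint: measured flows are adjusted by weighted least squares so that the balance holds exactly, with adjustments inversely proportional to measurement precision. The flows, uncertainties and single constraint are illustrative only and far simpler than the chlor-alkali case:

```python
import numpy as np

# Measured flows (arbitrary units): inputs f1, f2 and outputs f3, f4 of one process
y = np.array([105.0, 48.0, 95.0, 62.0])
sd = np.array([5.0, 3.0, 8.0, 4.0])          # measurement standard deviations
Sigma = np.diag(sd ** 2)

# Mass balance: f1 + f2 - f3 - f4 = 0
A = np.array([[1.0, 1.0, -1.0, -1.0]])

# Weighted least-squares reconciliation (classical closed form)
residual = A @ y                              # imbalance in the raw data
x_hat = y - Sigma @ A.T @ np.linalg.solve(A @ Sigma @ A.T, residual)

print("raw imbalance      :", residual)
print("rectified flows    :", x_hat.round(2))
print("rectified imbalance:", (A @ x_hat).round(10))

# A simple gross-error check: the constraint residual scaled by its variance
test_stat = residual @ np.linalg.solve(A @ Sigma @ A.T, residual)
print("chi-square statistic on the balance:", round(float(test_stat), 2))
```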

17.
The use of the CEN (European Committee for Standardization) standard method for sampling fish in lakes using multi-mesh gillnets allowed the collection of fish assemblages of 445 European lakes in 12 countries. The lakes were additionally characterised by environmental drivers and eutrophication proxies. Following a site-specific approach including a validation procedure, a fish index including two abundance metrics (catch per unit effort expressed as fish number and biomass) and one functional metric of composition (abundance of omnivorous fish) was developed. Correlated with the proxy of eutrophication, this index discriminates between heavily and moderately impacted lakes. Additional analyses on a subset of data from Nordic lakes revealed a stronger correlation between the new fish index and the pressure data. Despite an uneven geographical distribution of the lakes and certain shortcomings in the environmental and pressure data, the fish index proved to be useful for ecological status assessment of lakes applying standardised protocols and thus supports the development of national lake fish assessment tools in line with the European Water Framework Directive.  相似文献   

18.

Background, aim, and scope

Many studies evaluate the results of applying different life cycle impact assessment (LCIA) methods to the same life cycle inventory (LCI) data and demonstrate that the assessment results would differ with the LCIA method used. Although the importance of uncertainty is recognized, most studies focus on individual stages of LCA, such as the LCI and the normalization and weighting stages of LCIA. However, an important question has not been answered in previous studies: which part of the LCA process is the primary source of uncertainty? Understanding the uncertainty contribution of each LCA component will help improve the credibility of LCA.

Methodology

A methodology is proposed to systematically analyze the uncertainties involved in the entire procedure of LCA. The Monte Carlo simulation is used to analyze the uncertainties associated with LCI, LCIA, and the normalization and weighting processes. Five LCIA methods are considered in this study, i.e., Eco-indicator 99, EDIP, EPS, IMPACT 2002+, and LIME. The uncertainty of the environmental performance for individual impact categories (e.g., global warming, ecotoxicity, acidification, eutrophication, photochemical smog, human health) is also calculated and compared. The LCA of municipal solid waste management strategies in Taiwan is used as a case study to illustrate the proposed methodology.

Results

The primary uncertainty source in the case study is the LCI stage under a given LCIA method. In comparison with various LCIA methods, EDIP has the highest uncertainty and Eco-indicator 99 the lowest uncertainty. Setting aside the uncertainty caused by LCI, the weighting step has higher uncertainty than the normalization step when Eco-indicator 99 is used. Comparing the uncertainty of various impact categories, the lowest is global warming, followed by eutrophication. Ecotoxicity, human health, and photochemical smog have higher uncertainty.

Discussion

In this case study of municipal waste management, it is confirmed that different LCIA methods generate different assessment results. In other words, the selection of the LCIA method is an important source of uncertainty. In this study, the estimated impacts on human health, ecotoxicity, and photochemical smog can vary considerably when the uncertainties of the LCI and LCIA procedures are considered. To reduce errors in impact estimation caused by geographic differences, it is important to determine whether, and which, impact-category assessments need to be modified for local conditions.

Conclusions

This study develops a methodology for systematically evaluating the uncertainties involved in the entire LCA procedure and for identifying the contributions of the different assessment stages to the overall uncertainty. Comparing the uncertainty of the impact categories then indicates which impact-category assessments need to be modified.

Recommendations and perspectives

Such an assessment of the system uncertainty of LCA will facilitate the improvement of LCA. If the main source of uncertainty is the LCI stage, researchers should focus on the quality of the LCI data. If the primary source of uncertainty is the LCIA stage, directly applying LCIA methods developed elsewhere to nations that have not developed their own LCIA models or software should be avoided.
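A schematic of the nested Monte Carlo idea, propagating uncertainty through a toy inventory, characterisation, normalisation and weighting chain and comparing the spread of the single score when each stage is varied alone; all numbers are invented and unrelated to the case study:

```python
import numpy as np

rng = np.random.default_rng(21)
n = 50_000

# Toy system: two impact categories, each fed by one inventory flow
lci_mean  = np.array([120.0, 0.8])     # inventory results (e.g. kg CO2, kg SO2)
cf_mean   = np.array([1.0, 1.2])       # characterisation factors
norm_mean = np.array([8.0e3, 50.0])    # normalisation references
w_mean    = np.array([0.6, 0.4])       # weighting factors

def draw(mean, cv, active):
    """Log-normal draws with relative spread cv when a stage is varied, else fixed values."""
    if not active:
        return np.tile(mean, (n, 1))
    sigma = np.sqrt(np.log(1 + cv ** 2))
    return rng.lognormal(np.log(mean) - 0.5 * sigma ** 2, sigma, (n, mean.size))

def single_score(vary):
    lci  = draw(lci_mean, 0.20, "LCI" in vary)
    cf   = draw(cf_mean, 0.30, "LCIA" in vary)
    norm = draw(norm_mean, 0.10, "normalisation" in vary)
    w    = draw(w_mean, 0.25, "weighting" in vary)
    return (lci * cf / norm * w).sum(axis=1)

for stage in ("LCI", "LCIA", "normalisation", "weighting"):
    scores = single_score(vary={stage})
    print(f"varying {stage:<13s} only: single-score CV = {scores.std() / scores.mean():.3f}")
# The stage producing the widest spread is the dominant source of uncertainty;
# varying all stages together gives the overall uncertainty of the single score.
```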

19.
The effect of elevated carbon dioxide (CO2) on crop yields is one of the most uncertain and influential parameters in models used to assess climate change impacts and adaptations. A primary reason for this uncertainty is the limited availability of experimental data on CO2 responses for crops grown under typical field conditions. However, because of historical variations in CO2, each year farmers throughout the world perform uncontrolled yield ‘experiments’ under different levels of CO2. In this study, measurements of atmospheric CO2 growth rates and crop yields for individual countries since 1961 were compared to empirically determine the average effect of a 1 ppm increase of CO2 on yields of rice, wheat, and maize. Because the gradual increase in CO2 is highly correlated with major changes in technology, management, and other yield controlling factors, we focused on first differences of CO2 and yield time series. Estimates of CO2 responses obtained from this approach were highly uncertain, reflecting the relatively small importance of year‐to‐year CO2 changes for yield variability. Combining estimates from the top 20 countries for each crop resulted in estimates with substantially less uncertainty than from any individual country. The results indicate that while current datasets cannot reliably constrain estimates beyond previous experimental studies, an empirical approach supported by large amounts of data may provide a potentially valuable and independent assessment of this critical model parameter. For example, analysis of reliable yield records from hundreds of individual, independent locations (as opposed to national scale yield records with poorly defined errors) may result in empirical estimates with useful levels of uncertainty to complement estimates from experimental studies.  相似文献   
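A sketch of the first-difference estimator described above, applied to fabricated national yield series: year-to-year yield changes are regressed on year-to-year CO2 changes, and country estimates are pooled. The per-ppm effect is set to 0.1% so the noisy single-country estimates can be judged against a known truth:

```python
import numpy as np

rng = np.random.default_rng(17)

years = np.arange(1961, 2003)
co2 = 317 + 1.4 * (years - 1961) + rng.normal(0, 0.4, years.size)   # ppm, synthetic
true_effect = 0.001                      # assumed fractional yield gain per ppm

def country_series(trend):
    """Synthetic national yield series: technology trend + CO2 effect + weather noise."""
    base = 2.0 + trend * (years - 1961)
    return base * (1 + true_effect * (co2 - co2[0])) + rng.normal(0, 0.15, years.size)

def first_diff_slope(y):
    dy, dc = np.diff(np.log(y)), np.diff(co2)
    slope, _ = np.polyfit(dc, dy, 1)
    return slope                         # approximate fractional yield change per ppm

slopes = [first_diff_slope(country_series(rng.uniform(0.02, 0.06))) for _ in range(20)]
print(f"single-country estimates range: {min(slopes):+.4f} to {max(slopes):+.4f} per ppm")
print(f"pooled (mean) estimate        : {np.mean(slopes):+.4f} per ppm "
      f"(true value {true_effect:+.4f})")
```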

20.
Species occurrences inherently include positional error. Such error can be problematic for species distribution models (SDMs), especially those based on fine-resolution environmental data. It has been suggested that there could be a link between the influence of positional error and the width of the species ecological niche. Although positional errors in species occurrence data may imply serious limitations, especially for modelling species with narrow ecological niche, it has never been thoroughly explored. We used a virtual species approach to assess the effects of the positional error on fine-scale SDMs for species with environmental niches of different widths. We simulated three virtual species with varying niche breadth, from specialist to generalist. The true distribution of these virtual species was then altered by introducing different levels of positional error (from 5 to 500 m). We built generalized linear models and MaxEnt models using the distribution of the three virtual species (unaltered and altered) and a combination of environmental data at 5 m resolution. The models’ performance and niche overlap were compared to assess the effect of positional error with varying niche breadth in the geographical and environmental space. The positional error negatively impacted performance and niche overlap metrics. The amplitude of the influence of positional error depended on the species niche, with models for specialist species being more affected than those for generalist species. The positional error had the same effect on both modelling techniques. Finally, increasing sample size did not mitigate the negative influence of positional error. We showed that fine-scale SDMs are considerably affected by positional error, even when such error is low. Therefore, where new surveys are undertaken, we recommend paying attention to data collection techniques to minimize the positional error in occurrence data and thus to avoid its negative effect on SDMs, especially when studying specialist species.  相似文献   
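A reduced version of the experiment: occurrences of a narrow-niche virtual species are shifted by a given positional error before a logistic SDM is refitted, and performance is compared with the error-free fit. The landscape, niche width, grid resolution and error distances are invented:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(99)
size = 400                                        # cells of an assumed 5 m grid

# Fine-scale environmental gradient and a narrow-niche (specialist) virtual species
env = np.add.outer(np.sin(np.arange(size) / 12.0), np.cos(np.arange(size) / 9.0))
p_true = np.exp(-((env - 1.2) ** 2) / (2 * 0.15 ** 2))    # narrow Gaussian niche

rows = rng.integers(0, size, 3000)
cols = rng.integers(0, size, 3000)
y = rng.binomial(1, p_true[rows, cols])

def fit_auc(row_idx, col_idx):
    e = env[np.clip(row_idx, 0, size - 1), np.clip(col_idx, 0, size - 1)]
    X = np.column_stack([e, e ** 2])              # quadratic logit ~ unimodal niche
    model = LogisticRegression(max_iter=1000).fit(X, y)
    return roc_auc_score(y, model.predict_proba(X)[:, 1])

print(f"no positional error : AUC = {fit_auc(rows, cols):.2f}")
for error_m in (5, 25, 100):                      # shift occurrences by error_m metres
    shift = error_m // 5                          # grid resolution assumed to be 5 m
    dr = rng.integers(-shift, shift + 1, rows.size)
    dc = rng.integers(-shift, shift + 1, cols.size)
    print(f"positional error {error_m:>3} m: AUC = {fit_auc(rows + dr, cols + dc):.2f}")
```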
