Similar Documents
20 similar documents found.
1.
2.
Hybrid Framework for Managing Uncertainty in Life Cycle Inventories
Life cycle assessment (LCA) is increasingly being used to inform decisions related to environmental technologies and policies, such as carbon footprinting and labeling, national emission inventories, and appliance standards. However, LCA studies of the same product or service often yield very different results, affecting the perception of LCA as a reliable decision tool. This does not imply that LCA is intrinsically unreliable; we argue instead that future development of LCA requires that much more attention be paid to assessing and managing uncertainties. In this article we review past efforts to manage uncertainty and propose a hybrid approach combining process and economic input–output (I‐O) approaches to uncertainty analysis of life cycle inventories (LCI). Different categories of uncertainty are sometimes not tractable to analysis within a given model framework but can be estimated from another perspective. For instance, cutoff or truncation error induced by some processes not being included in a bottom‐up process model can be estimated via a top‐down approach such as the economic I‐O model. A categorization of uncertainty types is presented (data, cutoff, aggregation, temporal, geographic) with a quantitative discussion of methods for evaluation, particularly for assessing temporal uncertainty. A long‐term vision for LCI is proposed in which hybrid methods are employed to quantitatively estimate different uncertainty types, which are then reduced through an iterative refinement of the hybrid LCI method.
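
Where both bottom‐up and top‐down totals are available, the truncation estimate described in this abstract reduces to a simple difference. A minimal sketch in Python, with purely illustrative numbers (nothing here is taken from the article itself):

```python
# Hypothetical totals for the same functional unit from two model families.
process_total_co2 = 8.2   # kg CO2-eq, bottom-up process-based LCI
io_total_co2 = 11.0       # kg CO2-eq, top-down economic I-O estimate

# The I-O model covers the full upstream supply chain, so the gap between
# the two totals estimates the process model's cutoff/truncation error.
cutoff_error = io_total_co2 - process_total_co2
print(f"cutoff error: {cutoff_error:.1f} kg CO2-eq "
      f"({cutoff_error / io_total_co2:.0%} of the I-O total)")
```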

3.
We develop a hybrid‐unit energy input‐output (I/O) model with a disaggregated electricity sector for China. The model replaces primary energy rows in monetary value, namely, coal, gas, crude oil, and renewable energy, with physical flow units in order to overcome errors associated with the proportionality assumption in environmental I/O analysis models. Model development and data use are explained and compared with other approaches in the field of environmental life cycle assessment. The model is applied to evaluate the primary energy embodied in economic output to meet Chinese final consumption for the year 2007. Direct and indirect carbon dioxide emissions intensities are determined. We find that different final demand categories place distinctive requirements on the primary energy mix. Also, a considerable amount of energy is embodied in the supply chain of secondary industries. Embodied energy and emissions are crucial to consider for consumption‐based, rather than production‐based, policy development in China. Consumption‐based policies will likely play a more important role in China when per capita income levels have reached those of Western countries.
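
The core calculation behind such a model is the Leontief inverse applied to a mixed-unit A-matrix. A minimal two-sector sketch with assumed numbers (not the 2007 Chinese table used in the study):

```python
import numpy as np

# Hybrid-unit Leontief model: one physical row (coal, tonnes per unit of
# sector output) and one monetary row; total requirements are x = (I - A)^-1 y.
A = np.array([[0.05, 0.30],    # coal input (tonnes) per unit sector output
              [0.10, 0.20]])   # rest of economy (monetary) per unit output
y = np.array([10.0, 100.0])    # final demand, in each row's own unit

x = np.linalg.solve(np.eye(2) - A, y)   # total output by sector

f = np.array([2.5, 0.8])       # direct CO2 per unit output of each sector
print(x, f @ x)                # embodied (direct + indirect) emissions
```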

4.
This research provides a systematic review and harmonization of the life cycle assessment (LCA) literature of electricity generated from conventionally produced natural gas. We focus on estimates of greenhouse gases (GHGs) emitted in the life cycle of electricity generation from natural gas‐fired combustion turbine (NGCT) and combined‐cycle (NGCC) systems. The smaller set of LCAs of liquefied natural gas power systems and natural gas plants with carbon capture and storage were also collected, but analyzed to a lesser extent. A meta‐analytical process we term “harmonization” was employed to align several system boundaries and technical performance parameters to better allow for cross‐study comparisons, with the aim of clarifying central tendency and reducing variability in estimates of life cycle GHG emissions. Of over 250 references identified, 42 passed screens for technological relevance and study quality, providing a total of 69 estimates for NGCT and NGCC. Harmonization increased the median estimates in each category as a result of several factors not typically considered in previous research, including the regular clearing of liquids from a well, and consolidated the interquartile range for NGCC to 420 to 480 grams of carbon dioxide equivalent per kilowatt‐hour (g CO2‐eq/kWh) and for NGCT to 570 to 750 g CO2‐eq/kWh, with medians of 450 and 670 g CO2‐eq/kWh, respectively. Harmonization of thermal efficiency had the largest effect in reducing variability; methane leakage rate is likely similarly influential, but was left unharmonized in this assessment because of significant current uncertainties in its estimation, an area that is justifiably receiving significant research attention.
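
The thermal-efficiency harmonization step has a simple mechanism: fuel-cycle and combustion GHGs per kWh scale inversely with plant efficiency, so a published estimate can be rescaled to a common reference. A minimal sketch with assumed values (not the study's actual medians or reference efficiencies):

```python
# All values here are illustrative.
published_ghg = 500.0        # g CO2-eq/kWh reported by one study
study_efficiency = 0.47      # thermal efficiency assumed in that study
reference_efficiency = 0.51  # harmonized reference efficiency

# Fewer kWh per unit of fuel burned means more emissions per kWh, and
# vice versa, hence the inverse scaling.
harmonized = published_ghg * study_efficiency / reference_efficiency
print(f"{harmonized:.0f} g CO2-eq/kWh")  # ~461
```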

5.
Most studies dealing with home ranges consider the study areas as if they were totally flat, working only in two dimensions, when in reality they are irregular surfaces displayed in three dimensions. By disregarding the third dimension (i.e., topography), home‐range size estimates understate the surface actually occupied by the animal, potentially leading to misinterpretations of the animals' ecological needs. We explored the influence of considering the third dimension in the estimation of home‐range size by modeling the variation between the planimetric and topographic estimates at several spatial scales. Our results revealed that planimetric approaches underestimate home‐range sizes, with errors ranging from nearly zero up to 22%. The difference between planimetric and topographic estimates of home‐range sizes produced highly robust models using the average slope as the sole independent factor. Moreover, our models suggest that planimetric estimates in areas with an average slope of 16.3° (±0.4) or more will incur errors ≥5%. Alternatively, the altitudinal range can be used as an indicator of the need to include topography in home‐range estimates. Our results confirmed that home‐range estimates could be significantly biased when topography is disregarded. We suggest that study areas where home‐range studies will be performed should first be screened for their altitudinal range, which can serve as an indicator of the need for subsequent use of average slope values to model the surface area used and/or available for the studied animals.
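
The geometric intuition can be checked with a one-line calculation: for an idealized, uniformly tilted surface, true surface area equals planimetric area divided by the cosine of the slope. This is a simplification of the paper's empirical models, used only to show the order of magnitude:

```python
import math

# Under uniform tilt, surface area = planimetric area / cos(slope); the
# underestimate grows with average slope and approaches the ~5% level the
# paper's models flag near 16-18 degrees.
for slope_deg in (5, 10, 16.3, 22, 30):
    ratio = 1.0 / math.cos(math.radians(slope_deg))
    print(f"slope {slope_deg:4.1f} deg: planimetric underestimate "
          f"= {ratio - 1:.1%}")
```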

6.
Renewable energy systems are essential in coming years to ensure an efficient energy supply while maintaining environmental protection. Although such systems have low environmental impacts during operation, the other phases of the life cycle need to be accounted for. This study presents a geo‐located life cycle assessment of an emerging technology, namely, floating offshore wind farms. It is developed and applied to a pilot project in the Mediterranean Sea. The materials inventory is based on real data from suppliers and coupled to a parameterized model which exploits a geographic information system wind database to estimate electricity production. This multi‐criteria assessment identified the extraction and transformation of materials as the main contributor to environmental impacts such as climate change (70% of the total 22.3 g CO2 eq/kWh), water use (73% of 6.7 L/kWh), and air quality (76% of 25.2 mg PM2.5/kWh), mainly because of the floater's manufacture. The results corroborate the low environmental impact of this emerging technology compared to other energy sources. The electricity production estimates, based on geo‐located wind data, were found to be a critical component of the model that affects environmental performance. Sensitivity analyses highlighted the importance of the project's lifetime, which was the main parameter responsible for variations in the analyzed categories. Background uncertainties should be analyzed but may be reduced by focusing data collection on significant contributors. Geo‐located modeling proved to be an effective technique to account for the geographical variability of renewable energy technologies and contribute to decision‐making processes leading to their development.
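
The lifetime sensitivity noted above has a simple mechanism: fixed life cycle burdens are spread over the electricity produced, so per-kWh impacts scale inversely with operating life. A minimal sketch; only the 22.3 g CO2 eq/kWh figure comes from the abstract, the 20-year baseline lifetime is an assumption:

```python
base_intensity = 22.3   # g CO2 eq/kWh at the baseline lifetime (abstract)
base_lifetime = 20.0    # years (assumed baseline for illustration)

# Same total burdens divided over more (or fewer) lifetime kWh.
for lifetime in (15.0, 20.0, 25.0):
    print(f"{lifetime:.0f} y -> "
          f"{base_intensity * base_lifetime / lifetime:.1f} g CO2 eq/kWh")
```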

7.
R. G. Blanks, Cytopathology, 2011, 22(3): 146–154
Estimation of disease severity in the NHS cervical screening programme. Part I: artificial cut‐off points and semi‐quantitative solutions. Objective: Current cytology and histology classifications are based on ordered categories and have a strong emphasis on providing information that decides a woman's management rather than the best estimate of disease severity. This two‐part paper explores the use of a quantitative approach to both cytology and histology disease severity measurements. Methods: In Part I the problem of artificial cut‐off points is discussed and a simple semi‐quantitative solution to the problem is proposed. This closely relates to the revised British Society for Clinical Cytology (BSCC) terminology. The estimates of disease severity are designed as extensions of the existing methods, with an emphasis on probability rather than certainty, as a more natural way of approaching the problem. Borderline changes are treated as categorical variables, but koilocytosis, mild, moderate and severe dyskaryosis, and ?invasive as quasi‐continuous, and the disease severity is estimated as a grade number (GN) with any value between 0 and 4, with the margin of error given as a calculated grade range (CGR). Results: As an example, if the reader is unsure between moderate dyskaryosis (HSIL favouring CIN2) and mild dyskaryosis (LSIL favouring CIN1) they can register this uncertainty as a probability, such as 60%/40% moderate/mild. With 2 and 1 as the mid‐points of the grade numbers for moderate and mild dyskaryosis the GN value is ((60 × 2) + (40 × 1))/100 = 1.6. The CGR is 1.5 − 0.4 to 1.5 + 0.6 = 1.1 to 2.1. The GN (CGR) estimate of disease severity is therefore 1.6 (1.1–2.1). In a similar manner the disease severity from all slides showing koilocytosis or dyskaryosis can be estimated as a number between 0 and 4 with an associated error. Histology can be treated in a similar way. Conclusions: This semi‐quantitative approach provides a framework more suitable for research and audit of disease severity estimates. It avoids the paradox inherent in the current systems using artificial cut‐points to produce categories whereby increasing agreement can only be achieved by losing information.
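
The GN/CGR arithmetic can be reproduced in a few lines. The grade intervals used here (mild = 0.5–1.5, moderate = 1.5–2.5, with mid-points 1 and 2) are our reading of the scheme, consistent with the worked example above but not quoted from the paper:

```python
# Reader's stated uncertainty across adjacent grades.
probs = {"mild": 0.40, "moderate": 0.60}
intervals = {"mild": (0.5, 1.5), "moderate": (1.5, 2.5)}  # assumed bounds

# GN is the probability-weighted mean of the grade mid-points; the CGR is
# the same weighting applied to the interval end-points.
gn = sum(p * sum(intervals[g]) / 2 for g, p in probs.items())
lo = sum(p * intervals[g][0] for g, p in probs.items())
hi = sum(p * intervals[g][1] for g, p in probs.items())
print(f"GN (CGR) = {gn:.1f} ({lo:.1f}-{hi:.1f})")  # 1.6 (1.1-2.1)
```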

8.
Objective: Accelerometers offer considerable promise for improving estimates of physical activity (PA) and energy expenditure (EE) in free‐living subjects. Differences in calibration equations and cut‐off points have made it difficult to determine the most accurate way to process these data. The objective of this study was to compare the accuracy of various calibration equations and algorithms that are currently used with the MTI Actigraph (MTI) and the Sensewear Pro II (SP2) armband monitor. Research Methods and Procedures: College‐age participants (n = 30) wore an MTI and an SP2 while participating in normal activities of daily living. Activity patterns were simultaneously monitored with the Intelligent Device for Estimating Energy Expenditure and Activity (IDEEA) monitor to provide an accurate estimate (criterion measure) of EE and PA for this field‐based method comparison study. Results: The EE estimates from various MTI equations varied considerably, with mean differences ranging from −1.10 to 0.46 METs. The EE estimates from the two SP2 equations were within 0.10 METs of the value from the IDEEA. Estimates of time spent in PA from the MTI and SP2 ranged from 34.3 to 107.1 minutes per day, while the IDEEA yielded estimates of 52 minutes per day. Discussion: The lowest errors in estimation of time spent in PA and the highest correlations were found for the new SP2 equation and for the recently proposed MTI cut‐off point of 760 counts/min (Matthews, 2005). The study indicates that the Matthews MTI cut‐off point and the new SP2 equation provide the most accurate indicators of PA.
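
Applying a counts-per-minute cut-off to minute-level accelerometer output is straightforward. A minimal sketch using the 760 counts/min threshold cited above, on a fabricated count series:

```python
# Fabricated minute-by-minute accelerometer counts.
counts_per_min = [120, 850, 990, 300, 760, 40, 1500]
CUTOFF = 760  # counts/min; at or above is classified as physical activity

pa_minutes = sum(1 for c in counts_per_min if c >= CUTOFF)
print(f"{pa_minutes} of {len(counts_per_min)} minutes classified as PA")
```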

9.
Unbiased estimation of individual asymmetry
The importance of measurement error (ME) for the estimation of population level fluctuating asymmetry (FA) has long been recognized. At the individual level, however, this aspect has been studied in less detail. Recently, it has been shown that the random slopes of a mixed regression model can estimate individual asymmetry levels that are unbiased with respect to ME. Yet, recent studies have shown that such estimates may fail to reflect heterogeneity in these effects. In this note I show that this is not the case for the estimation of individual asymmetry. The random slopes adequately reflect between‐individual heterogeneity in the underlying developmental instability. Increased levels of ME resulted in, on average, lower estimates of individual asymmetry relative to the traditional unsigned asymmetry. This well‐known shrinkage effect in Bayesian analysis adequately corrected for ME and heterogeneity in ME resulting in unbiased estimates of individual asymmetry that were more closely correlated with the true underlying asymmetry.
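
The shrinkage behavior described here follows the standard BLUP form: individual estimates are pulled toward the population mean in proportion to the share of variance due to measurement error. A minimal sketch with assumed variance components (not estimated from data, and not the note's actual mixed model):

```python
import numpy as np

# Assumed variance components for illustration.
var_individual = 1.0   # between-individual variance in true asymmetry
var_me = 0.5           # measurement-error variance of one measurement
n_reps = 2             # repeated measurements per individual

observed = np.array([0.8, -0.3, 1.5])  # observed mean signed asymmetries

# More ME (relative to true variance) -> stronger pull toward the mean (0).
shrinkage = var_individual / (var_individual + var_me / n_reps)
print(shrinkage, shrinkage * observed)  # shrunken individual estimates
```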

10.
Industrial assets or fixed capital stocks are at the core of the transition to a low‐carbon economy. They represent substantial accumulations of capital, bulk materials, and critical metals. Their lifetime determines the potential for material recycling and how fast they can be replaced by new, more efficient facilities. Their efficiency determines the coupling between useful output and energy and material throughput. A sound understanding of the economic and physical properties of fixed capital stocks is essential to anticipating the long‐term environmental and economic consequences of the new energy future. We identify substantial overlap in the way stocks are modeled in national accounting, dynamic material flow analysis, dynamic input‐output (I/O) analysis, and life cycle assessment (LCA) and we merge these concepts into a common framework for modeling fixed capital stocks. We demonstrate the usefulness of the framework for simultaneous accounting of capital and material stocks and for consequential LCA. We apply the framework to design a demand‐driven dynamic I/O model with dynamic capital stocks, and we synthesize both the marginal and attributional matrix of technical coefficients (A‐matrix) from detailed process inventories of fixed assets of different age cohorts and technologies. The stock modeling framework allows researchers to identify and exploit synergies between different model families under the umbrella of socioeconomic metabolism.
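
A minimal sketch of the shared stock concept underlying these model families: age cohorts of assets enter with each year's inflow, retire after a lifetime, and the survivors sum to the in-use stock. A fixed lifetime is assumed here; the article's framework is more general (lifetime distributions, technologies, efficiencies):

```python
import numpy as np

years = np.arange(2000, 2011)
inflow = np.full(len(years), 10.0)   # capacity installed each year (assumed)
lifetime = 5                         # fixed asset lifetime, years (assumed)

# Stock in year i = sum of the cohorts installed in the last `lifetime` years.
stock = np.array([inflow[max(0, i - lifetime + 1):i + 1].sum()
                  for i in range(len(years))])
print(dict(zip(years.tolist(), stock.tolist())))
```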

11.
Primatologists have long focused on grooming exchanges to examine aspects of social relationships, co‐operation, and social cognition. One particular interest is the extent to which reciprocating grooming partners time match, and the time frame over which they do so. Conclusions about time matching vary across species. Generally, researchers focus on the duration of pauses between grooming episodes that involve a switch in partner roles and choose a cut‐off point to distinguish short from longer‐term reciprocation. Problematically, researchers have made inconsistent choices about cut‐offs. Such methodological variations are potentially concerning, as it is unclear whether inconsistent conclusions about short‐term time matching are attributable to species/ecological differences, or are due in part to methodological inconsistency. We ask whether various criteria for separating short versus long‐term reciprocation influence conclusions about short‐term time matching using data from free‐ranging rhesus macaques (Macaca mulatta) and captive crested macaques (Macaca nigra). We compare several commonly used cut‐offs to ones generated by the currently preferred approach, survival analysis. Crested macaques displayed a mild degree of time matching regardless of the cut‐off used. For rhesus macaques, whereas most cut‐offs yielded similar degrees of time matching as the one derived from survival analysis, very short ones significantly underestimated both the degree of time matching and the influence of rank distance on time matching. Although researchers may have some flexibility in their choice of cut‐offs, we suggest that they employ caution by using survival analysis when possible, and when not possible, by avoiding very short time windows.
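
A minimal sketch of the survival-analysis logic: treat pause durations between role-switched grooming episodes as event times, compute the empirical survival function, and place the short/long cut-off where it levels off. The durations below are fabricated, and a real analysis would use a proper estimator (e.g., Kaplan-Meier with censoring):

```python
import numpy as np

# Fabricated pause durations between reciprocated grooming episodes.
pauses = np.array([2, 3, 5, 8, 12, 15, 30, 45, 90, 300])  # seconds

t = np.sort(pauses)
survival = 1.0 - np.arange(1, t.size + 1) / t.size  # empirical S(t)

for ti, si in zip(t, survival):
    print(f"S({ti:>3} s) = {si:.2f}")
```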

12.
Repeatability (more precisely the common measure of repeatability, the intra‐class correlation coefficient, ICC) is an important index for quantifying the accuracy of measurements and the constancy of phenotypes. It is the proportion of phenotypic variation that can be attributed to between‐subject (or between‐group) variation. As a consequence, the non‐repeatable fraction of phenotypic variation is the sum of measurement error and phenotypic flexibility. There are several ways to estimate repeatability for Gaussian data, but there are no formal agreements on how repeatability should be calculated for non‐Gaussian data (e.g. binary, proportion and count data). In addition to point estimates, appropriate uncertainty estimates (standard errors and confidence intervals) and statistical significance for repeatability estimates are required regardless of the types of data. We review the methods for calculating repeatability and the associated statistics for Gaussian and non‐Gaussian data. For Gaussian data, we present three common approaches for estimating repeatability: correlation‐based, analysis of variance (ANOVA)‐based and linear mixed‐effects model (LMM)‐based methods, while for non‐Gaussian data, we focus on generalised linear mixed‐effects models (GLMM) that allow the estimation of repeatability on the original and on the underlying latent scale. We also address a number of methods for calculating standard errors, confidence intervals and statistical significance; the most accurate and recommended methods are parametric bootstrapping, randomisation tests and Bayesian approaches. We advocate the use of LMM‐ and GLMM‐based approaches mainly because of the ease with which confounding variables can be controlled for. Furthermore, we compare two types of repeatability (ordinary repeatability and extrapolated repeatability) in relation to narrow‐sense heritability. This review serves as a collection of guidelines and recommendations for biologists to calculate repeatability and heritability from both Gaussian and non‐Gaussian data.
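
For balanced Gaussian data, the ANOVA-based repeatability mentioned above can be computed directly from the mean squares: R = s²_B / (s²_B + s²_W), with s²_B = (MS_between − MS_within)/k. A minimal sketch on fabricated measurements:

```python
import numpy as np

# Fabricated data: 3 individuals, k = 3 repeated measurements each.
groups = [np.array([4.1, 4.3, 4.0]),
          np.array([5.2, 5.0, 5.4]),
          np.array([3.2, 3.1, 3.4])]
k = 3

ms_within = np.mean([g.var(ddof=1) for g in groups])
ms_between = k * np.var([g.mean() for g in groups], ddof=1)

s2_between = (ms_between - ms_within) / k   # between-individual variance
print(f"R = {s2_between / (s2_between + ms_within):.2f}")
```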

13.
Aim: The value of biodiversity informatics rests upon the capacity to assess data quality. Yet as these methods have developed, investigating the quality of the underlying specimen data has largely been neglected. Using an exceptionally large, densely sampled specimen data set for non‐flying small mammals of Utah, I evaluate measures of uncertainty associated with georeferenced localities and illustrate the implications of uncritical incorporation of data in the analysis of patterns of species richness and species range overlap along elevational gradients. Location: Utah, USA, with emphasis on the Uinta Mountains. Methods: Employing georeferenced specimen data from the Mammal Networked Information System (MaNIS), I converted estimates of areal uncertainty into elevational uncertainty using a geographic information system (GIS). Examining patterns in both areal and elevational uncertainty measures, I develop criteria for including localities in analyses along elevational gradients. Using the Uinta Mountains as a test case, I then examine patterns in species richness and species range overlap along an elevational gradient, with and without accounting for data quality. Results: Using a GIS, I provide a framework for post‐hoc 3‐dimensional georeferencing and demonstrate collector‐recorded elevations as a valuable technique for detecting potential errors in georeferencing. The criteria established for evaluating data quality when analysing patterns of species richness and species range overlap in the Uinta Mountains test case reduced the number of localities by 44% and the number of associated specimens by 22%. Decreasing the sample size in this manner resulted in the subsequent removal of one species from the analysis. With and without accounting for data quality, the pattern of species richness along the elevational gradient was hump‐shaped with a peak in richness at about mid‐elevation, between 2300 and 2600 m. In contrast, the frequencies of different pair‐wise patterns of elevational range overlap among species differed significantly when data quality was and was not accounted for. Main conclusions: These results indicate that failing to assess spatial error in data quality did not alter the shape of the observed pattern in species richness along the elevational gradient nor the pattern of species' first and last elevational occurrences. However, it did yield misleading estimates of species richness and community composition within a given elevational interval, as well as patterns of elevational range overlap among species. Patterns of range overlap among species are often used to infer processes underlying species distributions, suggesting that failure to account for data quality may alter interpretations of process as well as perceived patterns of distribution. These results illustrate that evaluating the quality of the underlying specimen data is a necessary component of analyses incorporating biodiversity informatics.
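
A minimal sketch of the post-hoc conversion of areal uncertainty into elevational uncertainty: collect the DEM cells within a locality's uncertainty radius and report their elevation range. The DEM here is simulated and a square window stands in for the circular buffer a GIS would use:

```python
import numpy as np

rng = np.random.default_rng(1)
dem = rng.uniform(2000, 3200, size=(100, 100))  # simulated elevations, m
cell = 100.0                                    # DEM cell size, m
row, col, radius_m = 50, 50, 800.0              # locality and its uncertainty

# All cells within the uncertainty radius could contain the true locality,
# so their elevation spread is the locality's elevational uncertainty.
r = int(radius_m // cell)
window = dem[row - r:row + r + 1, col - r:col + r + 1]
print(f"elevational uncertainty: {window.min():.0f}-{window.max():.0f} m")
```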

14.
Marques TA, Biometrics, 2004, 60(3): 757–763
Line transect sampling is one of the most widely used methods for animal abundance assessment. Standard estimation methods assume certain detection on the transect, no animal movement, and no measurement errors. Failure of the assumptions can cause substantial bias. In this work, the effect of measurement error on line transect estimators is investigated. Based on considerations of the process generating the errors, a multiplicative error model is presented and a simple way of correcting estimates based on knowledge of the error distribution is proposed. Using beta models for the error distribution, the effect of errors and of the proposed correction is assessed by simulation. Adequate confidence intervals for the corrected estimates are obtained using a bootstrap variance estimate for the correction and the delta method. As noted by Chen (1998, Biometrics 54, 899–908), even unbiased estimators of the distances might lead to biased density estimators, depending on the actual error distribution. In contrast with the findings of Chen, who used an additive model, unbiased estimation of distances, given a multiplicative model, leads to overestimation of density. Some error distributions result in observed distance distributions that make efficient estimation impossible, by removing the shoulder present in the original detection function. This indicates the need to improve field methods to reduce measurement error. An application of the new methods to a real data set is presented.
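
The direction of the bias can be seen from the distance density at zero: conventional distance sampling estimates density as n·f(0)/(2L), and with multiplicative error of mean one the observed f(0) is inflated by E[1/e] ≥ 1 (Jensen's inequality), hence the overestimation. A minimal simulation sketch; the lognormal error is an illustrative stand-in for the paper's beta models, and all parameters are assumed:

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 50.0
x_true = np.abs(rng.normal(0.0, sigma, 100_000))              # true distances
err = rng.lognormal(mean=-0.08, sigma=0.4, size=x_true.size)  # E[err] = 1
x_obs = x_true * err                                          # observed

def f0(x, h=5.0):
    # crude histogram estimate of the distance density at zero
    return np.mean(x < h) / h

# A larger f(0) means a larger density estimate n * f(0) / (2L).
print(f"f(0): true {f0(x_true):.4f}, with error {f0(x_obs):.4f}")
```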

15.
Conventional process-analysis-type techniques for compiling life-cycle inventories suffer from a truncation error, which is caused by the omission of resource requirements or pollutant releases of higher-order upstream stages of the production process. The magnitude of this truncation error varies with the type of product or process considered, but can be on the order of 50%. One way to avoid such significant errors is to incorporate input-output analysis into the assessment framework, resulting in a hybrid life-cycle inventory method. Using Monte-Carlo simulations, it can be shown that uncertainties of input-output-based life-cycle assessments are often lower than truncation errors in even extensive, third-order process analyses.
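
The truncation error has a clean algebraic form: total requirements are the Neumann series (I + A + A² + …)y, and an n-th-order process analysis keeps only the first terms. A minimal sketch with an illustrative A-matrix:

```python
import numpy as np

# Illustrative technical coefficients; not from any real inventory.
A = np.array([[0.1, 0.2, 0.0],
              [0.3, 0.1, 0.2],
              [0.0, 0.2, 0.1]])
y = np.array([1.0, 0.0, 0.0])

full = np.linalg.solve(np.eye(3) - A, y)           # complete supply chain
third = (np.eye(3) + A + A @ A + A @ A @ A) @ y    # third-order truncation

print(f"truncation error ~ {1 - third.sum() / full.sum():.1%}")
```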

16.
A diagnostic cut‐off point of a biomarker measurement is needed for classifying a random subject to be either diseased or healthy. However, the cut‐off point is usually unknown and needs to be estimated by some optimization criteria. One important criterion is the Youden index, which has been widely adopted in practice. The Youden index, which is defined as the maximum of (sensitivity + specificity − 1), directly measures the largest total diagnostic accuracy a biomarker can achieve. Therefore, it is desirable to estimate the optimal cut‐off point associated with the Youden index. Sometimes, taking the actual measurements of a biomarker is very difficult and expensive, while ranking them without the actual measurement can be relatively easy. In such cases, ranked set sampling can give more precise estimation than simple random sampling, as ranked set samples are more likely to span the full range of the population. In this study, kernel density estimation is utilized to numerically solve for an estimate of the optimal cut‐off point. The asymptotic distributions of the kernel estimators based on two sampling schemes are derived analytically and we prove that the estimators based on ranked set sampling are relatively more efficient than those based on simple random sampling and both estimators are asymptotically unbiased. Furthermore, the asymptotic confidence intervals are derived. Intensive simulations are carried out to compare the proposed method using ranked set sampling with simple random sampling, with the proposed method outperforming simple random sampling in all cases. A real data set is analyzed for illustrating the proposed method.
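
A minimal sketch of the Youden-optimal cut-off on simulated data, scanning candidate cut-offs for the maximum of sensitivity + specificity − 1. Plain empirical rates and simple random sampling are used here, whereas the paper smooths with kernel density estimates and draws ranked set samples:

```python
import numpy as np

rng = np.random.default_rng(0)
healthy = rng.normal(0.0, 1.0, 200)    # simulated biomarker, healthy group
diseased = rng.normal(1.5, 1.0, 200)   # simulated biomarker, diseased group

cuts = np.linspace(-2.0, 4.0, 601)
j = np.array([(diseased >= c).mean() + (healthy < c).mean() - 1
              for c in cuts])          # Youden index at each candidate cut

print(f"optimal cut-off ~ {cuts[j.argmax()]:.2f}, Youden J = {j.max():.2f}")
```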

17.
Estimating the evolutionary potential of quantitative traits and reliably predicting responses to selection in wild populations are important challenges in evolutionary biology. The genomic revolution has opened up opportunities for measuring relatedness among individuals with precision, enabling pedigree‐free estimation of trait heritabilities in wild populations. However, until now, most quantitative genetic studies based on a genomic relatedness matrix (GRM) have focused on long‐term monitored populations for which traditional pedigrees were also available, and have often had access to knowledge of genome sequence and variability. Here, we investigated the potential of RAD‐sequencing for estimating heritability in a free‐ranging roe deer (Capreolus capreolus) population for which no prior genomic resources were available. We propose a step‐by‐step analytical framework to optimize the quality and quantity of the genomic data and explore the impact of the single nucleotide polymorphism (SNP) calling and filtering processes on the GRM structure and GRM‐based heritability estimates. As expected, our results show that sequence coverage strongly affects the number of recovered loci, the genotyping error rate and the amount of missing data. Ultimately, this had little effect on heritability estimates and their standard errors, provided that the GRM was built from a minimum number of loci (above 7,000). Genomic relatedness matrix‐based heritability estimates thus appear robust to a moderate level of genotyping errors in the SNP data set. We also showed that quality filters, such as the removal of low‐frequency variants, affect the relatedness structure of the GRM, generating lower h² estimates. Our work illustrates the huge potential of RAD‐sequencing for estimating GRM‐based heritability in virtually any natural population.
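
A minimal sketch of building a GRM from a SNP matrix (individuals × loci, genotypes coded 0/1/2), using the common VanRaden-style standardization; whether this matches the study's exact GRM construction is an assumption, and the genotypes here are simulated rather than RAD-seq calls:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 20, 5000                         # individuals, loci
p = rng.uniform(0.1, 0.9, m)            # allele frequencies
geno = rng.binomial(2, p, size=(n, m)).astype(float)  # Hardy-Weinberg draws

# G = ZZ' / (2 * sum p(1-p)), with genotypes centered by their expectation.
Z = geno - 2 * p
grm = (Z @ Z.T) / np.sum(2 * p * (1 - p))
print(grm.shape, np.diag(grm).mean())   # diagonal averages ~1
```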

18.
Errors‐in‐variables models in high‐dimensional settings pose two challenges in application. First, the number of observed covariates is larger than the sample size, while only a small number of covariates are true predictors under an assumption of model sparsity. Second, the presence of measurement error can result in severely biased parameter estimates, and also affects the ability of penalized methods such as the lasso to recover the true sparsity pattern. A new estimation procedure called SIMulation‐SELection‐EXtrapolation (SIMSELEX) is proposed. This procedure makes double use of lasso methodology. First, the lasso is used to estimate sparse solutions in the simulation step, after which a group lasso is implemented to do variable selection. The SIMSELEX estimator is shown to perform well in variable selection, and has significantly lower estimation error than naive estimators that ignore measurement error. SIMSELEX can be applied in a variety of errors‐in‐variables settings, including linear models, generalized linear models, and Cox survival models. It is furthermore shown in the Supporting Information how SIMSELEX can be applied to spline‐based regression models. A simulation study is conducted to compare the SIMSELEX estimators to existing methods in the linear and logistic model settings, and to evaluate performance compared to naive methods in the Cox and spline models. Finally, the method is used to analyze a microarray dataset that contains gene expression measurements of favorable histology Wilms tumors.
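
SIMSELEX builds on the SIMEX idea, which is easiest to see in one dimension: add extra measurement error scaled by λ, track how the naive estimate degrades, and extrapolate the fitted trend back to λ = −1. A minimal sketch for a single linear slope; the paper's lasso/group-lasso machinery is omitted, and all parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n, beta, s2_u = 2000, 1.0, 0.5
x = rng.normal(0.0, 1.0, n)
w = x + rng.normal(0.0, np.sqrt(s2_u), n)   # observed, error-prone covariate
y = beta * x + rng.normal(0.0, 0.5, n)

# Simulation step: inflate the error variance by lambda and refit naively.
lambdas = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
est = [np.mean([np.polyfit(w + rng.normal(0, np.sqrt(lam * s2_u), n),
                           y, 1)[0]
                for _ in range(50)])
       for lam in lambdas]

# Extrapolation step: a quadratic trend evaluated at lambda = -1.
quad = np.polyfit(lambdas, est, 2)
print(np.polyval(quad, -1.0))   # approximately recovers beta = 1
```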

19.
Objective: To revisit cut‐off values of BMI, waist circumference (WC), and waist‐to‐stature ratio (WSR) based on their association with cardiorespiratory fitness (CRF). The derived cut‐off points were compared with current values (BMI, 25.0 kg/m2; WC, 80 cm) as recommended by the World Health Organization. Research Methods and Procedures: Anthropometric indices were measured in a cross‐sectional study of 358 Singaporean female employees of a large tertiary hospital (63% Singaporean Chinese, 28% Malays, and 9% Indians). CRF was determined by the 1‐mile walk test. Receiver operating characteristic curves were constructed to determine cut‐off points. Results: The cut‐off points for BMI, WC, and WSR were 23.6 kg/m2, 75.3 cm, and 0.48, respectively. The areas under the curve of BMI, WC, and WSR were 0.68, 0.74, and 0.74, respectively. For a given BMI, women with low CRF had higher WSR compared with women with high CRF. Discussion: These findings provide convergent evidence that the cut‐off points for Singaporean women were lower than the World Health Organization's criteria but were in good agreement with those reported for Asians.
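
The reported areas under the curve can be computed without constructing the ROC explicitly, via the Mann-Whitney identity: the AUC is the probability that a randomly chosen low-fitness subject has a higher index value than a high-fitness one. A minimal sketch on fabricated values, not the study's measurements:

```python
import numpy as np

rng = np.random.default_rng(0)
low_crf = rng.normal(25.5, 3.0, 120)    # e.g., BMI in the low-fitness group
high_crf = rng.normal(23.0, 3.0, 238)   # BMI in the high-fitness group

# Fraction of (low, high) pairs where the low-fitness value is larger.
auc = (low_crf[:, None] > high_crf[None, :]).mean()
print(f"AUC ~ {auc:.2f}")
```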

20.
Data (un)availability and uncertainty are recurring problems in life cycle assessment, and particularly inventory analysis. Advances in life cycle inventory have focused on the propagation and management of uncertainty, but this article addresses the question of how to account for unavailable data and corresponding uncertainty. Large and complicated systems often lack complete data due to confidential practices or the efforts required in the data collection process. Electricity production with multiple processes generating a single product is a classic example. Instead of the conventional process‐based models to estimate missing data, the approach developed in this article divides systems based on functionally equivalent objects. Each one of these objects is then described in terms of characteristic variables, such as power capacity. Kriging, a flexible statistical estimator, allows for the estimation of unknown material and energy flows based on the objects' characteristic variables. Both univariate and multivariate kriging are tested and compared to regression analysis. It is found that kriging performs better than linear regression, according to the mean absolute error criterion. Multivariate kriging provides an even more accurate joint estimation method to bridge data gaps scattered across inventories and when observable values of material and energy flows differ from one object to the next. Parameters of the underlying models are interpreted in terms of data uncertainty.
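
Kriging with a single characteristic variable amounts to Gaussian-process prediction with a chosen covariance function. A minimal simple-kriging sketch on fabricated capacity/flow data; the squared-exponential covariance and its parameters are assumptions, not the article's fitted model:

```python
import numpy as np

def cov(a, b, length=200.0, var=1.0):
    # squared-exponential covariance between two sets of capacities
    return var * np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length ** 2)

cap = np.array([100.0, 250.0, 400.0, 600.0, 800.0])  # MW, observed objects
flow = np.array([0.9, 1.8, 2.6, 4.1, 5.0])           # e.g., emissions, kt/yr

cap_new = np.array([500.0])                           # object with a data gap
K = cov(cap, cap) + 1e-4 * np.eye(cap.size)           # nugget for stability
w = np.linalg.solve(K, cov(cap, cap_new))             # kriging weights

pred = flow.mean() + w[:, 0] @ (flow - flow.mean())   # krige around the mean
print(f"estimated flow at {cap_new[0]:.0f} MW: {pred:.2f}")
```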

