Similar literature
20 similar records retrieved (search time: 15 ms)
1.
Process life cycle assessment (PLCA) is widely used to quantify environmental flows associated with the manufacturing of products and other processes. Because PLCA always depends on defining a system boundary, its application involves truncation errors. Different methods of estimating truncation errors have been proposed in the literature; most are based on artificially constructed, system-complete counterfactuals. In this article, we review the literature on truncation errors and their estimates and systematically explore the factors that influence truncation error estimates. We classify estimation approaches, together with the underlying factors influencing estimation results, according to where in the estimation procedure they occur. By contrasting different PLCA truncation-error modeling frameworks using the same underlying input-output (I-O) data set and varying cut-off criteria, we show that modeling choices can significantly influence estimates of PLCA truncation errors. In addition, we find that differences between I-O and process inventory databases, such as missing service sector activities, can significantly affect estimates of PLCA truncation errors. Our results expose the challenges related to explicit statements on the magnitude of PLCA truncation errors. They also indicate that increasing the strictness of cut-off criteria in PLCA has only limited influence on the resulting truncation errors. We conclude that applying an additional I-O life cycle assessment or a path exchange hybrid life cycle assessment to identify where significant contributions are located in upstream layers could significantly reduce PLCA truncation errors.
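To make the boundary-truncation idea concrete, the following toy sketch (hypothetical coefficient matrix and intensity vector, not the article's I-O data) compares a layer-by-layer upstream expansion against the full Leontief solution and reports the resulting truncation error.

```python
import numpy as np

# Toy illustration: truncation error of a process-LCA-style upstream expansion,
# measured against a "system-complete" input-output counterfactual.
A = np.array([            # hypothetical technical-coefficient matrix
    [0.10, 0.20, 0.05],
    [0.05, 0.10, 0.30],
    [0.02, 0.15, 0.10],
])
f = np.array([0.8, 0.3, 0.1])   # hypothetical direct emissions per unit sector output
y = np.array([1.0, 0.0, 0.0])   # functional unit: one unit of sector 0's output

# "Complete" system: the Leontief inverse captures all upstream layers.
total = f @ np.linalg.solve(np.eye(3) - A, y)

def truncated_total(k):
    # Keep only upstream production layers 0..k (power-series expansion of the inverse).
    x, layer = np.zeros(3), y.copy()
    for _ in range(k + 1):
        x += layer
        layer = A @ layer
    return f @ x

for k in (0, 1, 2, 5, 10):
    est = truncated_total(k)
    print(f"layers 0..{k:2d}: inventory = {est:.4f}, truncation error = {1 - est/total:6.2%}")
```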

2.
Two common problems in computer simulations are the decision to ignore or include a particular element of a system under study in a model, and the choice of an appropriate integration algorithm. To examine aspects of these problems, a simple exponential system is considered in which a large simulation error is induced by a rather small truncation error. The effect of computational precision, step size and hardware selection on this error is examined at standard and extended precisions over a range of step sizes and on a variety of computers. For this model, simulation accuracy is an exponential function of the number of bits in the mantissa of the computer word. Optimal step size is a function of the accuracy required and the precision used; a trade-off between truncation and round-off errors becomes important as accuracy requirements increase. Machine selection is important primarily in economic terms if the required precision is available. We conclude that the effect on a simulation of small terms such as truncation errors can be unexpectedly large, that solutions should always be checked, and that high precision and wide dynamic range are important to the successful computer simulation of models such as that examined.
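The trade-off described above can be reproduced with a minimal sketch (hypothetical system and step sizes): forward-Euler integration of a simple exponential decay in single and double precision, where shrinking the step size first reduces truncation error and then lets round-off accumulate.

```python
import numpy as np

def euler_error(h, dtype):
    # Forward Euler for dx/dt = -x on [0, 5], carried out entirely in the given precision.
    k, t_end = dtype(1.0), dtype(5.0)
    n = int(round(float(t_end) / h))
    x, hh = dtype(1.0), dtype(h)
    for _ in range(n):
        x = x - hh * k * x        # truncation error ~O(h); round-off grows as h shrinks
    return abs(float(x) - np.exp(-float(t_end)))

for h in (0.5, 0.1, 0.01, 0.001, 1e-4, 1e-5):
    print(f"h={h:8.5f}  float32 err={euler_error(h, np.float32):.2e}  "
          f"float64 err={euler_error(h, np.float64):.2e}")
```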

3.
We present an input-output analysis of the life-cycle labor, land, and greenhouse gas (GHG) requirements of alternative options for three case studies: investing money in a new vehicle versus in repairs of an existing vehicle (labor), passenger transport modes for a trip between Sydney and Melbourne (land use), and renewable electricity generation (GHG emissions). These case studies were chosen to demonstrate the possibility of rank crossovers in life-cycle inventory (LCI) results as system boundaries are expanded and upstream production inputs are taken into account. They demonstrate that differential convergence can cause crossovers in the ranking of inventories for alternative functional units occurring at second- and higher-order upstream production layers. These production layers are often excluded in conventional process-type life-cycle assessment (LCA) by the delineation of a finite system boundary, leading to a systematic truncation error within the LCI. The exclusion of higher-order upstream inputs can be responsible for ranking crossovers going unnoticed. In this case, an incomplete conventional process-type LCA of two alternative options can result in preferences and recommendations to decision makers that differ from those drawn from a complete hybrid input-output-based assessment. Therefore, the need to avoid misleading effects on the ranking of alternative functional units due to differential convergence supports the practice of hybrid input-output-based LCA techniques.
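A tiny numeric illustration of differential convergence (invented layer contributions, not the case-study data): the option preferred under a truncated boundary loses its advantage once higher-order upstream layers are included.

```python
layer_contributions = {
    # contribution of each upstream production layer to the life-cycle inventory
    "option A": [10.0, 0.5, 0.3, 0.1, 0.05],   # high direct burden, upstream converges quickly
    "option B": [ 8.0, 1.5, 1.2, 0.9, 0.70],   # lower direct burden, upstream converges slowly
}

print("layers included -> cumulative inventory (lower is better)")
for k in range(5):
    cum = {name: sum(layers[:k + 1]) for name, layers in layer_contributions.items()}
    best = min(cum, key=cum.get)
    print(f"  0..{k}: " + ", ".join(f"{n}={v:5.2f}" for n, v in cum.items())
          + f"  -> prefer {best}")
```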

4.
Shale gas fracturing is a complex, continuously operating system. If human errors occur, they can trigger a chain reaction from abnormal events to fracturing accidents, and may even lead to abandonment of shale gas wells. The fracturing process involves many complex production stages, the consequences of any error are serious, and the modes of human error are highly variable. Human error should therefore be studied systematically within a hybrid framework, that is, one that fully integrates identification, prioritization, reasoning, and control. This article presents a new structured method for identifying human error in shale gas fracturing operations within such a hybrid framework. First, human errors are identified in a structured way based on the human HAZOP method. Second, the fuzzy VIKOR method is applied for comprehensive prioritization. Finally, 4M element theory is used to analyze human error and control its evolution. The method improves the consistency of identification results through standardized identification steps and identification criteria. Results from a study of the feed-flow process indicate that 34 kinds of human error can be identified, with high-probability errors occurring in implementation and observation behaviors.
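As an illustration of the prioritization step, the sketch below runs a crisp VIKOR ranking on hypothetical error-mode scores; the article's fuzzy VIKOR replaces these crisp scores with fuzzy numbers, which is not reproduced here.

```python
import numpy as np

# Rows are hypothetical human-error modes scored on hypothetical criteria
# (e.g. likelihood, severity, detectability), where larger = more critical.
scores = np.array([
    [0.7, 0.9, 0.4],   # error mode 1
    [0.5, 0.6, 0.8],   # error mode 2
    [0.9, 0.3, 0.5],   # error mode 3
])
weights = np.array([0.4, 0.4, 0.2])
v = 0.5                                   # trade-off between group utility S and individual regret R

f_star, f_minus = scores.max(axis=0), scores.min(axis=0)   # "ideal" = most critical profile
d = weights * (f_star - scores) / (f_star - f_minus)       # weighted distance to that ideal
S, R = d.sum(axis=1), d.max(axis=1)
Q = v * (S - S.min()) / (S.max() - S.min()) + (1 - v) * (R - R.min()) / (R.max() - R.min())

# Smaller Q = closer to the most-critical profile = higher control priority.
for rank, i in enumerate(np.argsort(Q), start=1):
    print(f"priority {rank}: error mode {i + 1}  (S={S[i]:.3f}, R={R[i]:.3f}, Q={Q[i]:.3f})")
```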

5.
High levels of translational errors, both truncation and misincorporation, were observed in an Fc-fusion protein. Here, we demonstrate the impact of several commercially available codon optimization services and compare them to a targeted strategy in which only codons known to have translational errors are modified. For an Fc-fusion protein expressed in Escherichia coli, the targeted strategy, in combination with appropriate fermentation conditions, virtually eliminated misincorporation (proteins produced with a wrong amino acid sequence) and reduced the level of truncation. Full optimization using the commercially available strategies reduced the initial errors but introduced different misincorporations; however, truncation was higher with the targeted strategy than with most of the full optimization strategies. This targeted approach, along with monitoring of translation fidelity and careful attention to fermentation conditions, is key to minimizing translational error and ensuring high-quality expression. These findings should be useful for other biopharmaceutical products, as well as any other transgenic constructs where protein quality is important. Biotechnol. Bioeng. 2012; 109: 2770-2777. © 2012 Wiley Periodicals, Inc.

6.
Complex life cycles are a hallmark of parasitic trematodes. In several trematode taxa, however, the life cycle is truncated: fewer hosts are used than in a typical three-host cycle, with fewer transmission events. Eliminating one host from the life cycle can be achieved in at least three different ways. Some trematodes show even more extreme forms of life cycle abbreviation, using only a mollusc to complete their cycle, with or without sexual reproduction. The occurrence of these phenomena among trematode families is reviewed here, showing that life cycle truncation has evolved independently many times in the phylogeny of trematodes. The hypotheses proposed to account for life-cycle truncation, as well as the factors preventing the adoption of shorter cycles by all trematodes, are also discussed. The study of shorter life cycles offers an opportunity to understand the forces shaping the evolution of life cycles in general.

7.
Genotypic errors, whether due to mutation or laboratory error, can cause the genotypes of parents and their offspring to appear inconsistent with Mendelian inheritance. As a result, molecular parentage analyses are expected to benefit when allowances are made for the presence of genotypic errors. However, a cost of allowing for genotypic errors might also be expected under some analytical conditions, primarily because parentage analyses that assume nonzero genotypic error rates can neither assign nor exclude parentage with certainty. The goal of this work was therefore to determine whether or not such costs might be important under conditions relevant to parentage analyses, particularly in natural populations. Simulation results indicate that the costs may often outweigh the benefits of accounting for nonzero error rates, except in situations where data are available for many marker loci. Consequently, the most powerful approach to handling genotypic errors in parentage analyses might be to apply likelihood equations with error rates set to values substantially lower than the rates at which genotypic errors occur. When applying molecular parentage analyses to natural populations, we advocate an increased consideration of optimal strategies for handling genotypic errors. Currently available software packages contain procedures that can be used for this purpose.

8.
Obtaining reliable results from life-cycle assessment studies is often quite difficult because life-cycle inventory (LCI) data are usually erroneous, incomplete, and even physically meaningless. The real data must satisfy the laws of thermodynamics, so the quality of LCI data may be enhanced by adjusting them to satisfy these laws. This is not a new idea, but a formal thermodynamically sound and statistically rigorous approach for accomplishing this task is not yet available. This article proposes such an approach based on methods for data rectification developed in process systems engineering. This approach exploits redundancy in the available data and models and solves a constrained optimization problem to remove random errors and estimate some missing values. The quality of the results and presence of gross errors are determined by statistical tests on the constraints and measurements. The accuracy of the rectified data is strongly dependent on the accuracy and completeness of the available models, which should capture information such as the life-cycle network, stream compositions, and reactions. Such models are often not provided in LCI databases, so the proposed approach tackles many new challenges that are not encountered in process data rectification. An iterative approach is developed that relies on increasingly detailed information about the life-cycle processes from the user. A comprehensive application of the method to the chlor-alkali inventory being compiled by the National Renewable Energy Laboratory demonstrates the benefits and challenges of this approach.
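The core reconciliation step can be sketched as a weighted least-squares adjustment subject to linear balance constraints; the example below uses invented flows and variances rather than the chlor-alkali inventory.

```python
import numpy as np

A = np.array([                 # hypothetical mass-balance constraints over four flows
    [1.0, -1.0, -1.0,  0.0],   # flow1 = flow2 + flow3
    [0.0,  0.0,  1.0, -1.0],   # flow3 = flow4
])
x_hat = np.array([100.0, 61.0, 42.0, 38.0])           # raw (inconsistent) LCI measurements
sigma = np.diag(np.array([2.0, 3.0, 4.0, 4.0]) ** 2)  # measurement variances

# Closed-form solution of: min (x - x_hat)' Sigma^-1 (x - x_hat)  subject to  A x = 0
residual = A @ x_hat
x_rec = x_hat - sigma @ A.T @ np.linalg.solve(A @ sigma @ A.T, residual)

print("raw      :", x_hat)
print("rectified:", np.round(x_rec, 2))
print("constraint residuals after rectification:", np.round(A @ x_rec, 10))
```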

9.
The purpose of this paper is to describe how one pollution prevention tool, life-cycle assessment, can be used to identify and manage environmental issues associated with product systems. Specifically, this paper describes what life-cycle assessment is, identifies the key players in its development and application, and presents ideas on how life-cycle assessment can be used today. LCA provides a systematic means to broaden the perspective of a company's decision-making process to incorporate the consideration of energy and material use, transportation, post-consumer use, and disposal, and the environmental releases associated with the product system. LCA provides a framework to achieve a better understanding of the trade-offs associated with a specific change in a product, package, or process. This understanding lays the foundation for subsequent risk assessments and risk management efforts by decision-makers.

10.
Likelihood ratio tests are derived for bivariate normal structural relationships in the presence of group structure. These tests may also be applied to less restrictive models where only the errors are assumed to be normally distributed. Tests for a common slope among the slopes estimated from several datasets are derived for three different cases: when the assumed ratio of error variances is the same across datasets and either known or unknown, and when the standardised major axis model is used. Estimation of the slope in the case where the ratio of error variances is unknown can be considered a maximum likelihood grouping method. The derivations are accompanied by some small-sample simulations, and the tests are applied to data arising from work on seed allometry.

11.
The standard estimator for the cause-specific cumulative incidence function in a competing risks setting with left-truncated and/or right-censored data can be written in two alternative forms. One is a weighted empirical cumulative distribution function and the other a product-limit estimator. This equivalence suggests an alternative view of the analysis of time-to-event data with left truncation and right censoring: individuals who are still at risk or experienced an earlier competing event receive weights from the censoring and truncation mechanisms. As a consequence, inference on the cumulative scale can be performed using weighted versions of standard procedures. This holds for estimation of the cause-specific cumulative incidence function as well as for estimation of the regression parameters in the Fine and Gray proportional subdistribution hazards model. We show that, with the appropriate filtration, a martingale property holds that allows deriving asymptotic results for the proportional subdistribution hazards model in the same way as for the standard Cox proportional hazards model. Estimation of the cause-specific cumulative incidence function and regression on the subdistribution hazard can be performed using standard software for survival analysis if the software allows for inclusion of time-dependent weights. We show the implementation in the R statistical package. The proportional subdistribution hazards model is used to investigate the effect of calendar period as a deterministic external time-varying covariate, which can be seen as a special case of left truncation, on AIDS-related and non-AIDS-related cumulative mortality.
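For orientation, here is a small sketch of the product-limit (Aalen-Johansen) form of the cause-specific cumulative incidence estimator for right-censored data; left truncation and the Fine-Gray weighting scheme are not reproduced, and the data are invented (cause 0 denotes censoring).

```python
import numpy as np

time  = np.array([2.0, 3.0, 3.0, 5.0, 6.0, 7.0, 8.0, 9.0])
cause = np.array([1,   0,   2,   1,   1,   0,   2,   1])

def cumulative_incidence(time, cause, k):
    order = np.argsort(time)
    time, cause = time[order], cause[order]
    surv_before, cif, out = 1.0, 0.0, []
    for t in np.unique(time):
        at_risk = np.sum(time >= t)
        events  = (time == t) & (cause != 0)
        d_all   = np.sum(events)
        d_k     = np.sum(events & (cause == k))
        cif += surv_before * d_k / at_risk        # weighted increment at this event time
        surv_before *= 1.0 - d_all / at_risk      # Kaplan-Meier for the event-free state
        out.append((t, cif))
    return out

for t, p in cumulative_incidence(time, cause, k=1):
    print(f"t={t:4.1f}  CIF_cause1={p:.3f}")
```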

12.
For surface fluxes of carbon dioxide, the net daily flux is the sum of daytime and nighttime fluxes of approximately the same magnitude and opposite direction. The net flux is therefore significantly smaller than the individual flux measurements, and error assessment is critical in determining whether a surface is a net source or sink of carbon dioxide. For carbon dioxide flux measurements, it is an occasional misconception that the net flux is measured as the difference between the net upward and downward fluxes (i.e. a small difference between large terms). This is not the case. The net flux is the sum of individual (half-hourly or hourly) flux measurements, each with an associated error term. The question of errors and uncertainties in long-term flux measurements of carbon and water is addressed by first considering the potential for errors in flux measuring systems in general and thus errors which are relevant to a wide range of timescales of measurement. We also focus exclusively on flux measurements made by the micrometeorological method of eddy covariance. Errors can loosely be divided into random errors and systematic errors, although in reality any particular error may be a combination of both types. Systematic errors can be fully systematic errors (errors that apply over all of the daily cycle) or selectively systematic errors (errors that apply to only part of the daily cycle), which have very different effects. Random errors may also be full or selective, but these do not differ substantially in their properties. We describe an error analysis in which these three different types of error are applied to a long-term dataset to discover how errors may propagate through long-term data and which can be used to estimate the range of uncertainty in the reported sink strength of the particular ecosystem studied.
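A toy calculation (synthetic half-hourly fluxes and invented error magnitudes, not the paper's dataset) illustrates why the three error classes propagate so differently into an annual sum: random errors largely cancel, fully systematic errors scale the net flux, and selectively systematic errors shift it.

```python
import numpy as np

rng = np.random.default_rng(0)
hours = np.arange(0, 365 * 24, 0.5)
# Crude diurnal cycle: daytime uptake plus constant respiration, in umol CO2 m-2 s-1.
flux = -8.0 * np.clip(np.sin(2 * np.pi * (hours % 24 - 6) / 24), 0, None) + 3.0
to_annual = 0.5 * 3600 * 1e-6 * 12.0  # per half-hour: s * mol/umol * gC/mol -> gC m-2

def annual_sum(f):
    return np.sum(f) * to_annual

night = flux > 0  # crude stand-in for night-time (respiration-dominated) periods
print("true annual NEE          :", round(annual_sum(flux), 1), "gC m-2")
print("random error (zero mean) :", round(annual_sum(flux + rng.normal(0, 2.0, flux.size)), 1))
print("fully systematic (-10%)  :", round(annual_sum(flux * 0.9), 1))
print("selective systematic     :", round(annual_sum(np.where(night, flux * 0.8, flux)), 1),
      " <- a night-only bias does not cancel between day and night")
```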

13.
A study including eight microsatellite loci for 1,014 trees from seven mapped stands of the partially clonal Populus euphratica was used to demonstrate how genotyping errors influence estimates of clonality. With a threshold of 0 (identical multilocus genotypes constitute one clone) we identified 602 genotypes. A threshold of 1 (compensating for an error in one allele) lowered this number to 563. Genotyping errors can seemingly merge clones (type 1 error), split truly existing clones (type 2), or convert a unique genotype into another unique genotype (type 3). We used context information (sex and spatial position) to estimate the type 1 error. For thresholds of 0 and 1 the estimate was below 0.021, suggesting a high resolution for the marker system. The rate of genotyping errors was estimated by repeated genotyping for a cohort of 41 trees drawn at random (0.158), and for a second cohort of 40 trees deviating in one allele from another tree (0.368). For the latter cohort, most of these deviations turned out to be errors, but 8 out of 602 obtained multilocus genotypes may represent somatic mutations, corresponding to a mutation rate of 0.013. A simulation of genotyping errors for populations with varying clonality and evenness showed that the number of genotypes is always overestimated for a system with high resolution, and that this mistake increases with increasing clonality and evenness. Allowing a threshold of 1 compensates for most genotyping errors and leads to much more precise estimates of clonality compared with a threshold of 0. This lowers the resolution of the marker system, but comparison with context information can help to check whether the resolution is sufficient to apply a higher threshold. We recommend simulation procedures to investigate the behavior of a marker system for different thresholds and error rates to obtain the best estimate of clonality.
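The threshold idea can be sketched as follows (hypothetical genotypes, not the P. euphratica data): multilocus genotypes differing at no more than a given number of alleles are merged into one clone, so raising the threshold from 0 to 1 absorbs most single-allele errors at the cost of some resolution.

```python
import numpy as np

genotypes = np.array([   # rows = ramets, columns = alleles at 4 hypothetical loci (2 each)
    [101, 103, 140, 142, 88, 90, 200, 204],
    [101, 103, 140, 142, 88, 90, 200, 204],   # identical -> same clone at threshold 0
    [101, 103, 140, 142, 88, 92, 200, 204],   # one allele off -> merged only at threshold 1
    [105, 107, 144, 146, 94, 96, 210, 214],   # clearly distinct genotype
])

def count_clones(genotypes, threshold):
    n = len(genotypes)
    parent = list(range(n))
    def find(i):                               # union-find with path compression
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for i in range(n):
        for j in range(i + 1, n):
            if np.sum(genotypes[i] != genotypes[j]) <= threshold:
                parent[find(i)] = find(j)      # treat i and j as the same clone
    return len({find(i) for i in range(n)})

for thr in (0, 1):
    print(f"threshold {thr}: {count_clones(genotypes, thr)} distinct clones")
```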

14.
Sequencing reduced-representation libraries of restriction site-associated DNA (RADseq) to identify single nucleotide polymorphisms (SNPs) is quickly becoming a standard methodology for molecular ecologists. Because of the scale of RADseq data sets, putative loci cannot be assessed individually, making the process of filtering noise and correctly identifying biologically meaningful signal more difficult. Artefacts introduced during library preparation and/or bioinformatic processing of SNP data can create patterns that are incorrectly interpreted as indicative of population structure or natural selection. Therefore, it is crucial to carefully consider the types of errors that may be introduced during laboratory work and data processing, and how to minimize, detect and remove them. Here, we discuss issues inherent to RADseq methodologies that can result in artefacts during library preparation and locus reconstruction, leading to erroneous SNP calls and, ultimately, genotyping error. Further, we describe steps that can be implemented to create a rigorously filtered data set consisting of markers that accurately represent independent loci, and we compare the effect of different combinations of filters on four RAD data sets. Finally, we stress the importance of publishing raw sequence data along with final filtered data sets, in addition to detailed documentation of filtering steps and quality control measures.
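As a rough illustration of the filtering stage, the sketch below applies three common locus-level filters with invented thresholds and simulated genotypes; it is not the filter set compared in the article.

```python
import numpy as np

# Genotypes coded as 0/1/2 copies of the alternate allele, -1 = missing;
# depth is the per-locus mean read depth. All values are simulated.
rng = np.random.default_rng(1)
n_ind, n_loci = 50, 200
geno  = rng.choice([0, 1, 2, -1], size=(n_ind, n_loci), p=[0.55, 0.25, 0.1, 0.1])
depth = rng.gamma(shape=4.0, scale=5.0, size=n_loci)

called      = geno >= 0
call_rate   = called.mean(axis=0)
minor_count = np.minimum(np.where(called, geno, 0).sum(axis=0),
                         np.where(called, 2 - geno, 0).sum(axis=0))

keep = (call_rate >= 0.8) & (minor_count >= 3) & (depth >= 8) & (depth <= 60)
print(f"kept {keep.sum()} of {n_loci} loci "
      f"(call rate >= 0.8, minor allele count >= 3, mean depth in [8, 60])")
```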

15.
Hybrid Framework for Managing Uncertainty in Life Cycle Inventories
Life cycle assessment (LCA) is increasingly being used to inform decisions related to environmental technologies and policies, such as carbon footprinting and labeling, national emission inventories, and appliance standards. However, LCA studies of the same product or service often yield very different results, affecting the perception of LCA as a reliable decision tool. This does not imply that LCA is intrinsically unreliable; we argue instead that future development of LCA requires that much more attention be paid to assessing and managing uncertainties. In this article we review past efforts to manage uncertainty and propose a hybrid approach combining process and economic input-output (I-O) approaches to uncertainty analysis of life cycle inventories (LCI). Different categories of uncertainty are sometimes not tractable to analysis within a given model framework but can be estimated from another perspective. For instance, cutoff or truncation error induced by some processes not being included in a bottom-up process model can be estimated via a top-down approach such as the economic I-O model. A categorization of uncertainty types is presented (data, cutoff, aggregation, temporal, geographic) with a quantitative discussion of methods for evaluation, particularly for assessing temporal uncertainty. A long-term vision for LCI is proposed in which hybrid methods are employed to quantitatively estimate different uncertainty types, which are then reduced through an iterative refinement of the hybrid LCI method.

16.
17.
Satellite telemetry using ARGOS platform transmitter terminals (PTTs) is widely used to track the movements of animals, but little is known of the accuracy of these systems when used on active terrestrial mammals. An accurate estimate of the error, and therefore of the limitations of the data, is critical when assessing the level of confidence in results. ARGOS provides published 68th percentile error estimates for the three most accurate location classes (LCs), but studies have shown that the errors can be far greater when the devices are attached to free-living animals. Here we use data from a study of habitat use by the spectacled flying-fox in the wet tropics of Queensland to calculate these errors for all LCs in a free-living terrestrial mammal, and use the results to assess the level of confidence we would have in habitat-use assignment in the study area. The results showed that our calculated 68th percentile errors were larger than the published ARGOS errors for all LCs, and that for all classes the error frequency distribution had a very long tail. The habitat-use results showed that the size of the error, compared with the scale of the habitat in which the study was conducted, makes it unlikely that our data can be used to assess habitat use with great confidence. Overall, our results show that while satellite telemetry results are useful for assessing large-scale movements of animals, in complex landscapes they may not be accurate enough to be used for finer-scale analyses, including habitat-use assessment.
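The error calculation itself is straightforward; the sketch below (hypothetical fixes and a hypothetical known roost position, not the flying-fox dataset) computes the 68th percentile great-circle error per ARGOS location class.

```python
import numpy as np

def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle distance in kilometres.
    lat1, lon1, lat2, lon2 = map(np.radians, (lat1, lon1, lat2, lon2))
    a = (np.sin((lat2 - lat1) / 2) ** 2
         + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371.0 * np.arcsin(np.sqrt(a))

true_lat, true_lon = -17.30, 145.95          # hypothetical known roost position
fixes = {                                     # (lat, lon) fixes grouped by ARGOS location class
    "LC3": [(-17.301, 145.951), (-17.299, 145.948), (-17.302, 145.952)],
    "LC1": [(-17.31, 145.96), (-17.28, 145.93), (-17.33, 145.97)],
    "LCA": [(-17.40, 146.10), (-17.15, 145.80), (-17.35, 146.05)],
}
for lc, pts in fixes.items():
    lats, lons = np.array(pts).T
    err = haversine_km(lats, lons, true_lat, true_lon)
    print(f"{lc}: 68th percentile error = {np.percentile(err, 68):.2f} km (n={len(err)})")
```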

18.

Background

Evaluating the significance of a group of genes or proteins in a pathway or biological process for a disease can help researchers understand the mechanism of the disease. For example, identifying pathways or gene functions related to the chromatin states of tumor-specific T cells will help determine whether those T cells can be reprogrammed, and in turn help design cancer treatment strategies. Some existing p-value combination methods can be used in this scenario. However, these methods suffer from different disadvantages, and it therefore remains challenging to design a more powerful and robust statistical method.

Results

The existing Group combined p-value (GCP) method first partitions p-values into several groups using a set of truncation points, but it is often sensitive to the choice of those truncation points. Another method, the adaptive rank truncated product (ARTP) method, uses multiple truncation integers to adaptively combine the smallest p-values, but it loses statistical power because it ignores the larger p-values. To tackle these problems, we propose a robust p-value combination method (rPCMP) that considers multiple partitions of the p-values with different sets of truncation points. The proposed rPCMP statistic has a three-layer hierarchical structure. The inner layer computes a statistic that combines the p-values falling in an interval defined by two threshold points, the intermediate layer uses a GCP statistic that optimizes the inner-layer statistic over a partition set of threshold points, and the outer layer integrates the GCP statistics from multiple partitions of the p-values. The empirical distribution of the statistic under the null hypothesis can be estimated by a permutation procedure.

Conclusions

Our proposed rPCMP method is shown to be more robust and to have higher statistical power. A simulation study shows that the method effectively controls the type I error rate and has higher statistical power than existing methods. Finally, we apply rPCMP to an ATAC-seq dataset to discover gene functions related to chromatin states in mouse tumor T cells.
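The adaptive-truncation machinery can be illustrated with a compact ARTP-style sketch on simulated p-values; the full rPCMP adds a further layer over multiple partitions of truncation points, which is not reproduced here, and the Monte Carlo null below (independent uniform p-values) stands in for a genuine permutation of sample labels.

```python
import numpy as np

rng = np.random.default_rng(7)
ks = (1, 5, 10)                          # candidate truncation integers
p_obs = np.sort(np.concatenate([rng.uniform(0, 0.01, 4), rng.uniform(0, 1, 46)]))

def stats(p):
    # Sum of -log of the k smallest p-values, for each candidate k.
    s = np.sort(p)
    return np.array([-np.log(s[:k]).sum() for k in ks])

B = 2000
null = np.array([stats(rng.uniform(0, 1, p_obs.size)) for _ in range(B)])   # (B, len(ks))

# Layer 1: per-k Monte Carlo p-values for the observed data and for every null replicate.
p_k_obs  = (1 + (null >= stats(p_obs)).sum(axis=0)) / (B + 1)
ranks    = null.argsort(axis=0).argsort(axis=0)      # high statistic -> high rank
p_k_null = (B - ranks) / B

# Layer 2: the adaptive statistic is the minimum p over k, calibrated against the same null.
min_obs, min_null = p_k_obs.min(), p_k_null.min(axis=1)
print(f"per-k p-values: {np.round(p_k_obs, 4)}; adaptive p-value = "
      f"{(1 + (min_null <= min_obs).sum()) / (B + 1):.4f}")
```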

19.
Pseudo-observations have been introduced as a way to perform regression analysis of a mean value parameter related to a right-censored time-to-event outcome, such as the survival probability or the restricted mean survival time. Since the introduction of the approach there have been several extensions of the original setting. However, the proper definition and performance of pseudo-observations under left truncation has not yet been addressed. Here, we look at two types of pseudo-observations under right censoring and left truncation. We explore their performance in a simulation study and apply them to left-truncated data on diabetes patients.
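For the right-censored case, pseudo-observations reduce to a jackknife on the Kaplan-Meier estimator, as in the sketch below (invented data); how to define them under left truncation is exactly the question the paper addresses and is not handled here.

```python
import numpy as np

time  = np.array([2.0, 3.0, 4.0, 4.5, 6.0, 7.0, 8.0, 9.5])
event = np.array([1,   0,   1,   1,   0,   1,   1,   0])   # 1 = event, 0 = censored

def km(time, event, t):
    # Kaplan-Meier survival probability at time t.
    s = 1.0
    for u in np.unique(time[event == 1]):
        if u > t:
            break
        at_risk = np.sum(time >= u)
        s *= 1.0 - np.sum((time == u) & (event == 1)) / at_risk
    return s

def pseudo_observations(time, event, t):
    # Jackknife pseudo-observation for each subject: n*S_full - (n-1)*S_leave-one-out.
    n, idx = len(time), np.arange(len(time))
    s_full = km(time, event, t)
    return np.array([n * s_full - (n - 1) * km(time[idx != i], event[idx != i], t)
                     for i in range(n)])

po = pseudo_observations(time, event, t=5.0)
print("pseudo-observations for S(5):", np.round(po, 3), " mean =", round(po.mean(), 3))
# The mean approximately recovers the Kaplan-Meier estimate at t = 5, and the individual
# values can be used as outcomes in a GEE-type regression model.
```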

20.
Binding constant data K°(T) are commonly subjected to van't Hoff analysis to extract estimates of ΔH°, ΔS°, and ΔCp° for the process in question. When such analyses employ unweighted least-squares fitting of ln K° to an appropriate function of the temperature T, they are tacitly assuming constant relative error in K°. When this assumption is correct, the statistical errors in ΔG°, ΔH°, ΔS°, ΔCp°, and the T-derivative of ΔCp° (if determined) are all independent of the actual values of K° and can be computed from knowledge of just the T values at which K° is known and the percent error in K°. All of these statistical errors except that for the highest-order constant are functions of T, so they must normally be calculated using a form of the error propagation equation that is not widely known. However, this computation can be bypassed by defining ΔH° as a polynomial in (T − T0), the coefficients of which thus become ΔH°, ΔCp°, and ½ dΔCp°/dT at T = T0. The errors in the key quantities can then be computed by simply repeating the fit for different T0. Procedures for doing this are described for a representative data analysis program. Results of such calculations show that expanding the T range from 10–40 to 5–45 °C gives a significant improvement in the precision of all quantities. ΔG° is typically determined with a standard error a factor of approximately 30 smaller than that for ΔH°. Accordingly, the error in TΔS° is nearly identical to that in ΔH°. For 4% error in K°, the T-derivative of ΔCp° cannot be determined unless it is approximately 10 cal mol⁻¹ K⁻² or greater, and ΔCp° must be approximately 50 cal mol⁻¹ K⁻¹. Since all errors scale with the data error and inversely with the square root of the number of data points, the present results for 4% error cover any other relative error and number of points, for the same approximate T structure of the data.
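The recipe of writing ΔH° as a polynomial in (T − T0) and repeating the fit at different T0 can be sketched with synthetic data as follows (hypothetical "true" parameter values and noise level; the integrated van't Hoff expression is linear in the coefficients, so the fit itself is routine).

```python
import numpy as np
from scipy.optimize import curve_fit

R = 1.98720425  # cal mol-1 K-1

def ln_k_model(T, lnK0, dH0, dCp0, half_dCp_dT, T0):
    # Integrated van't Hoff equation with DeltaH(T) = dH0 + dCp0*(T-T0) + half_dCp_dT*(T-T0)^2,
    # obtained from d(lnK)/dT = DeltaH/(R*T^2).
    a, b, c = dH0, dCp0, half_dCp_dT
    integral = ((a - b * T0 + c * T0**2) * (1 / T0 - 1 / T)
                + (b - 2 * c * T0) * np.log(T / T0) + c * (T - T0))
    return lnK0 + integral / R

# Synthetic "measurements": 4% relative error in K, i.e. ~0.04 absolute error in ln K.
rng = np.random.default_rng(3)
T = np.arange(278.15, 318.16, 5.0)                         # 5-45 degC
true = dict(lnK0=11.5, dH0=-10000.0, dCp0=-300.0, half_dCp_dT=5.0, T0=298.15)
lnK = ln_k_model(T, **true) + rng.normal(0, 0.04, T.size)

# Repeating the fit at different T0 reads the standard errors of the T0-evaluated
# quantities straight off the covariance matrix.
for T0 in (288.15, 298.15, 308.15):
    popt, pcov = curve_fit(lambda T, lnK0, dH0, dCp0, c: ln_k_model(T, lnK0, dH0, dCp0, c, T0),
                           T, lnK, p0=(11.0, -8000.0, -200.0, 0.0))
    se = np.sqrt(np.diag(pcov))
    dG, dG_se = -R * T0 * popt[0], R * T0 * se[0]
    print(f"T0={T0 - 273.15:4.1f} C: dG={dG:8.1f}+/-{dG_se:.1f}  dH={popt[1]:8.1f}+/-{se[1]:.1f}  "
          f"dCp={popt[2]:7.1f}+/-{se[2]:.1f}  (cal mol-1)")
```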
