Similar Literature (20 similar documents retrieved)
1.

Purpose

Quantitative uncertainties are a direct consequence of averaging, a common procedure when building life cycle inventories (LCIs). This averaging can be amongst locations, times, products, scales or production technologies. To date, however, quantified uncertainties at the unit process level have largely been generated using a Numeral Unit Spread Assessment Pedigree (NUSAP) approach and often disregard inherent uncertainties (inaccurate measurements) and spread (variability around means).

Methods

A decision tree for primary and secondary data at the unit process level was initially created. Around this decision tree, a protocol was developed, recognizing that dispersions can result from inherent uncertainty, from spread amongst data points, or from unrepresentative data. In order to estimate the characteristics of uncertainties for secondary data, a method for weighting means amongst studies is proposed, as sketched below. As for unrepresentativeness, the origin of NUSAP and its adaptation to the field of life cycle assessment are discussed, and recommendations are given.
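The abstract does not specify the weighting scheme, so the following is only a minimal sketch assuming inverse-variance weighting, one common way of pooling means reported by several studies; the numbers are invented placeholders.

```python
# Illustrative sketch: pool reported means from several studies, weighting each
# by the inverse of its reported variance. The protocol's exact weighting
# scheme is not given in the abstract; inverse-variance weighting is assumed.
import numpy as np

# Hypothetical reported means and standard deviations from three studies
means = np.array([12.1, 10.8, 11.5])   # e.g., kg CO2-eq per functional unit
sds = np.array([1.2, 0.9, 1.5])

weights = 1.0 / sds**2                         # precision weights
weighted_mean = np.sum(weights * means) / np.sum(weights)
weighted_se = np.sqrt(1.0 / np.sum(weights))   # standard error of pooled mean

print(f"pooled mean = {weighted_mean:.2f} +/- {weighted_se:.2f}")
```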

Results and discussion

By using the proposed protocol, cross-referencing of outdated data is avoided, and user influence on results is reduced. At the same time, more accurate estimates, with accompanying spread and inherent uncertainties, can be made for horizontally averaged data, as these deviations often contribute substantially to the overall dispersion.

Conclusions

In this article, we highlight the importance of including inherent uncertainties and spread alongside the NUSAP pedigree. As uncertainty data are often missing from the LCI literature, we describe a method for estimating them by taking several reported values into account. While this protocol presents a practical way of estimating overall dispersion, better reporting in the literature is encouraged so that real uncertainty parameters can be determined.

2.
3.

Purpose

Life cycle costing (LCC) is a state-of-the-art method for analyzing investment decisions in infrastructure projects. However, uncertainties inherent in long-term planning call the credibility of LCC results into question. Previous research has not systematically linked sources of uncertainty with the methods available to address them. Part I of this series develops a framework to collect and categorize different sources of uncertainty and addressing methods. This systematization is a prerequisite for further analysis of the suitability of the methods and levels the playing field for part II.

Methods

Past reviews have dealt with selected issues of uncertainty in LCC; however, none has systematically collected sources of uncertainty and linked them to the methods that address them, and no comprehensive categorization has been published to date. Part I addresses these two research gaps by conducting a systematic literature review. In a rigorous four-step approach, we first scrutinized major databases. Second, we performed a practical and methodological screening to identify a total of 115 relevant publications, mostly case studies. Third, we applied content analysis using MAXQDA. Fourth, we illustrated the results and drew conclusions about the research gaps.

Results and discussion

We identified 33 sources of uncertainty and 24 addressing methods. Sources of uncertainty were categorized according to (i) their origin, i.e., parameter, model, and scenario uncertainty, and (ii) the nature of the uncertainty, i.e., aleatoric or epistemic. The methods to address uncertainty were classified as deterministic, probabilistic, possibilistic, or other. Among the sources of uncertainty, lack of data and poor data quality were analyzed most often, and most of the uncertainties discussed were located in the use stage. Among the methods, sensitivity analyses were applied most widely, while more complex methods such as Bayesian models were used less frequently. Data availability and the individual expertise of the LCC practitioner most strongly influence the selection of methods.

Conclusions

This article complements existing research by providing a thorough systematization of uncertainties in LCC. An unambiguous categorization of uncertainties is difficult, however, and overlaps occur. Such a systematization is nevertheless necessary for further analyses and levels the playing field for readers not yet familiar with the topic. Part I concludes the following: First, an investigation of which methods are best suited to address a given type of uncertainty is still outstanding. Second, an analysis of the types of uncertainty that have been insufficiently addressed in previous LCC cases is still missing. Part II will focus on these research gaps.

4.

Background  

DNA microarrays provide data on genome-wide patterns of expression between observation classes. Microarray studies often have small sample sizes, however, due to cost constraints or specimen availability. This can lead to poor random error estimates and inaccurate statistical tests of differential expression. We compare the performance of the standard t-test, fold change, and four small-n statistical test methods designed to circumvent these problems. We report results of various normalization methods for empirical microarray data and of various random error models for simulated data.
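For orientation, the sketch below computes the two baseline measures mentioned above, a per-gene two-sample t-test and a log2 fold change, on simulated small-n data; the four specialized small-n methods are not named in the abstract, so only the baselines are shown, and all data are synthetic.

```python
# Minimal sketch: standard t-test and fold change on a small-n two-class
# expression matrix (3 samples per class), with 50 genes truly changed.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_genes, n_per_class = 1000, 3                      # small sample size per class
a = rng.normal(8.0, 1.0, (n_genes, n_per_class))    # log2 expression, class A
b = rng.normal(8.0, 1.0, (n_genes, n_per_class))    # log2 expression, class B
b[:50] += 1.5                                       # spike in 50 changed genes

t_stat, p_val = stats.ttest_ind(a, b, axis=1)       # per-gene two-sample t-test
log2_fc = b.mean(axis=1) - a.mean(axis=1)           # fold change on log2 scale

print("genes with p < 0.01:", int((p_val < 0.01).sum()))
print("genes with |log2 FC| > 1:", int((np.abs(log2_fc) > 1).sum()))
```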

5.

Background  

Sustained research on the problem of determining which genes are differentially expressed on the basis of microarray data has yielded a plethora of statistical algorithms, each justified by theory, simulation, or ad hoc validation, yet differing in practical results from equally justified algorithms. Recently, a concordance method that measures agreement among gene lists has been introduced to assess various aspects of differential gene expression detection. This method has the advantage of basing its assessment solely on the results of real data analyses, but since it requires examining gene lists of given sizes, it may be unstable.
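A minimal sketch of the list-based concordance idea, assuming agreement is measured as the overlap between the top-k gene lists of two methods; the scores and the k values are synthetic placeholders, and the dependence on k illustrates the instability noted above.

```python
# Sketch: overlap between the top-k gene lists of two detection methods,
# evaluated at several list sizes k.
import numpy as np

rng = np.random.default_rng(1)
scores_a = rng.normal(size=2000)                         # method-A statistics
scores_b = scores_a + rng.normal(scale=0.5, size=2000)   # correlated method B

for k in (50, 100, 200):
    top_a = set(np.argsort(-np.abs(scores_a))[:k])
    top_b = set(np.argsort(-np.abs(scores_b))[:k])
    print(f"k={k}: overlap = {len(top_a & top_b) / k:.2f}")
```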

6.

Background, aim, and scope  

Propagation of parametric uncertainty in life cycle inventory (LCI) models is usually performed based on probabilistic Monte Carlo techniques. However, alternative approaches using interval or fuzzy numbers have been proposed based on the argument that these provide a better reflection of epistemological uncertainties inherent in some process data. Recent progress has been made to integrate fuzzy arithmetic into matrix-based LCI using decomposition into α-cut intervals. However, the proposed technique implicitly assumes that the lower bounds of the technology matrix elements give the highest inventory results, and vice versa, without providing rigorous proof.
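The monotonicity assumption questioned above can be checked numerically rather than taken for granted. The sketch below enumerates the corner matrices of a small interval-valued technology matrix A in s = A^-1 f and reports which corner maximizes a scaling factor; the matrix entries are invented for illustration.

```python
# Sketch: brute-force the corners of a 2x2 interval technology matrix to see
# which corner actually maximizes the scaling vector, instead of assuming the
# "all lower bounds" corner does.
import itertools
import numpy as np

f = np.array([1.0, 0.0])                       # final demand
A_lo = np.array([[0.9, -0.6], [-0.3, 0.9]])    # lower bounds of A entries
A_hi = np.array([[1.1, -0.4], [-0.1, 1.1]])    # upper bounds of A entries

results = []
for corner in itertools.product([0, 1], repeat=4):
    A = np.where(np.array(corner).reshape(2, 2) == 0, A_lo, A_hi)
    s = np.linalg.solve(A, f)                  # scaling vector for this corner
    results.append((corner, s[0]))

best = max(results, key=lambda r: r[1])
print("corner giving the largest s[0]:", best)
```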

7.

Background  

With the availability of Affymetrix exon arrays, a number of tools have been developed to enable their analysis. These, however, can be expensive or have several pre-installation requirements. This led us to develop a workflow for analysing differential splicing using freely available software packages that are already widely used for gene expression analysis. The workflow uses the packages in the standard installation of R and Bioconductor (BiocLite) to identify differential splicing, applying the splice index method within the LIMMA framework. The main drawback of this approach is that it relies on accurate estimates of gene expression from the probe-level data; methods such as RMA and PLIER may misestimate gene expression when a large proportion of exons is spliced. We therefore present the novel concept of a gene correlation coefficient calculated using only the probeset expression pattern within a gene, and we show that genes with lower correlation coefficients are likely to be differentially spliced.
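Although the workflow itself is R/Bioconductor-based, the two quantities it relies on are easy to sketch. Below is an illustrative Python version, assuming the splice index is the exon-level log2 signal minus the gene-level log2 signal and the gene correlation coefficient is the mean pairwise correlation of probeset patterns within a gene; all data are simulated.

```python
# Sketch: splice index and a within-gene probeset correlation coefficient on
# simulated exon-level data for one gene (8 exons, 6 samples).
import numpy as np

rng = np.random.default_rng(2)
n_exons, n_samples = 8, 6
gene = rng.normal(10, 1, n_samples)                  # gene-level log2 estimate
exons = gene + rng.normal(0, 0.3, (n_exons, n_samples))
exons[0, 3:] -= 2.0                                  # exon 0 spliced out in group 2

splice_index = exons - gene                          # log2(exon) - log2(gene)
print("exon 0 splice index, group 1 vs group 2:",
      splice_index[0, :3].mean().round(2), splice_index[0, 3:].mean().round(2))

# Mean pairwise correlation of probeset patterns within the gene; differential
# splicing should depress this coefficient.
corr = np.corrcoef(exons)
gene_corr = corr[np.triu_indices(n_exons, k=1)].mean()
print("mean within-gene probeset correlation:", round(gene_corr, 3))
```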

8.

Background  

Several mathematical and statistical methods have been proposed in the last few years to analyze microarray data. Most of these methods involve complicated formulas and software implementations that require advanced computer programming skills. Researchers from other areas may experience difficulties when attempting to use these methods in their research. Here we present a user-friendly toolbox that allows large-scale gene expression analysis to be carried out by biomedical researchers with limited programming skills.

9.

Background  

Gene expression analyses based on complex hybridization measurements have increased rapidly in recent years and have given rise to a large number of bioinformatic tools, such as image analysis and cluster analysis. However, comparatively little work has been done to integrate and evaluate these tools and the corresponding experimental procedures. Although complex hybridization experiments are based on a data production pipeline that incorporates a significant number of error parameters, these parameters have not yet been evaluated in sufficient detail.

10.

Background  

Nonnegative Matrix Factorization (NMF) is an unsupervised learning technique that has been applied successfully in several fields, including signal processing, face recognition and text mining. Recent applications of NMF in bioinformatics have demonstrated its ability to extract meaningful information from high-dimensional data such as gene expression microarrays. Developments in NMF theory and applications have resulted in a variety of algorithms and methods. However, most NMF implementations have been on commercial platforms, while those that are freely available typically require programming skills. This limits their use by the wider research community.
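As a pointer for readers without access to commercial platforms, the sketch below runs NMF on a random nonnegative matrix with scikit-learn, one freely available implementation; the factor count and data are arbitrary and purely illustrative.

```python
# Sketch: NMF on an expression-like nonnegative matrix with scikit-learn.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(3)
X = rng.random((100, 20))                 # 100 genes x 20 samples, nonnegative

model = NMF(n_components=4, init="nndsvda", max_iter=500, random_state=0)
W = model.fit_transform(X)                # gene "metagene" loadings (100 x 4)
H = model.components_                     # sample coefficients (4 x 20)

print("reconstruction error:", round(model.reconstruction_err_, 3))
```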

11.

Purpose

Life cycle inventory (LCI) databases provide generic data on the exchange values associated with unit processes. The "ecoinvent" LCI database estimates the uncertainty of all exchange values through the so-called pedigree approach. In the first release of the database, the uncertainty factors used were based on experts' judgments. In 2013, Ciroth et al. derived empirically based factors. These, however, assumed that the same uncertainty factors could be used for all industrial sectors and fell short of providing basic uncertainty factors. The work presented here aims to overcome these limitations.
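For orientation, the pedigree approach is commonly described (e.g., for ecoinvent v2) as mapping each data quality indicator score to an uncertainty factor U_i and combining these factors with a basic uncertainty into a squared geometric standard deviation. The sketch below uses that commonly cited combination formula with placeholder factor values.

```python
# Sketch of the pedigree-factor combination as commonly described for
# ecoinvent v2: GSD^2 = exp(sqrt(ln(U_b)^2 + sum_i ln(U_i)^2)).
# All factor values below are illustrative placeholders, not database values.
import math

U_basic = 1.05                                # basic uncertainty (assumed)
U_pedigree = [1.05, 1.02, 1.10, 1.02, 1.01]   # reliability, completeness, etc.

log_var = math.log(U_basic) ** 2 + sum(math.log(u) ** 2 for u in U_pedigree)
gsd_squared = math.exp(math.sqrt(log_var))
print(f"GSD^2 = {gsd_squared:.3f}")
```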

Methods

The proposed methodological framework is based on the assessment of more than 60 data sources (23,200 data points) and the use of Bayesian inference. Bayesian inference allows uncertainty factors to be updated by systematically combining experts' judgments, and other prior information about the factors, with new data.
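The abstract does not spell out the model, but the Bayesian-updating idea can be sketched with a simple conjugate example: encode the expert judgment as an inverse-gamma prior on the variance underlying an uncertainty factor, then update it with observed log-deviations. The prior parameters and data below are invented.

```python
# Sketch: conjugate Bayesian update of the variance behind an uncertainty
# factor. For normal data with known mean and an inverse-gamma prior IG(a0, b0)
# on sigma^2, the posterior is IG(a0 + n/2, b0 + sum(x^2)/2).
import numpy as np

rng = np.random.default_rng(4)
log_dev = rng.normal(0.0, 0.25, size=200)    # hypothetical observed log-deviations

a0, b0 = 3.0, 0.06                           # prior encoding the expert judgment
a_post = a0 + len(log_dev) / 2.0
b_post = b0 + 0.5 * np.sum(log_dev**2)       # known-mean conjugate update

sigma2_post_mean = b_post / (a_post - 1.0)   # posterior mean of sigma^2
factor = float(np.exp(np.sqrt(sigma2_post_mean)))
print("updated uncertainty factor:", round(factor, 3))
```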

Results and discussion

Applying the methodology to these data sources results in new uncertainty factors for all additional uncertainty indicators and for some specific industrial sectors, as well as in some basic uncertainty factors. In general, the factors obtained are higher than those obtained in previous work, which suggests that the experts had initially underestimated uncertainty. Furthermore, the presented methodology can be applied to update the uncertainty factors as new data become available.

Conclusions

In practice, these uncertainty factors can be systematically incorporated into LCI databases as estimates of exchange value uncertainty where more formal uncertainty information is not available. Bayesian inference is applied here to update uncertainty factors, but it can also be used in other life cycle assessment developments to improve experts' judgments or to update parameter values when new data become accessible.

12.

Background  

Numerous nonparametric approaches have been proposed in the literature to detect differential gene expression in the setting of two user-defined groups. However, there is a lack of nonparametric procedures for analyzing microarray data with multiple factors contributing to the gene expression. Furthermore, incorporating interaction effects into the analysis of microarray data has long been of great interest to biological scientists, yet little of this has been investigated in the nonparametric framework.

13.

Background  

Large discrepancies in signature composition and outcome concordance have been observed between different microarray breast cancer expression profiling studies. This is often ascribed to differences in array platform as well as biological variability. We conjecture that other reasons for the observed discrepancies are the measurement error associated with each feature and the choice of preprocessing method. Microarray data are known to be subject to technical variation and the confidence intervals around individual point estimates of expression levels can be wide. Furthermore, the estimated expression values also vary depending on the selected preprocessing scheme. In microarray breast cancer classification studies, however, these two forms of feature variability are almost always ignored and hence their exact role is unclear.

14.

Background  

The post-genomic era has brought new challenges regarding the understanding of the organization and function of the human genome. Many of these challenges center on the meaning of differential gene regulation under distinct biological conditions and can be addressed by analyzing the Multiple Differential Expression (MDE) of genes associated with normal and abnormal biological processes. Currently, MDE analyses are limited to the usual methods of differential expression, originally designed for paired analysis.

15.

Background  

As real-time quantitative PCR (RT-QPCR) is increasingly being relied upon for the enforcement of legislation and regulations dependent upon the trace detection of DNA, focus has increased on the quality issues related to the technique. Recent work has focused on the identification of factors that contribute towards significant measurement uncertainty in the real-time quantitative PCR technique, through investigation of the experimental design and operating procedure. However, measurement uncertainty contributions made during the data analysis procedure have not been studied in detail. This paper presents two additional approaches for standardising data analysis through the novel application of statistical methods to RT-QPCR, in order to minimise potential uncertainty in results.

16.
17.

Background  

A central task in contemporary biosciences is the identification of biological processes showing a response in genome-wide differential gene expression experiments. Two types of analysis are common: either one generates an ordered list based on the differential expression values of the probed genes and examines the tail areas of the list for over-representation of various functional classes, or one monitors the average differential expression level of the genes belonging to a given functional class. So far these two types of method have not been combined.
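A minimal sketch of the first analysis type, assuming over-representation in the top of the ranked list is tested with the hypergeometric distribution; the gene universe, gene set, and list sizes are synthetic.

```python
# Sketch: test whether a functional class is over-represented in the top-k of
# a ranked differential-expression list, via the hypergeometric distribution.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
n_genes, k_top, set_size = 5000, 250, 100
gene_set = set(rng.choice(n_genes, set_size, replace=False))
top_list = set(rng.choice(n_genes, k_top, replace=False))

overlap = len(gene_set & top_list)
# P(X >= overlap) with X ~ Hypergeom(M=n_genes, n=set_size, N=k_top)
p = stats.hypergeom.sf(overlap - 1, n_genes, set_size, k_top)
print(f"overlap = {overlap}, enrichment p = {p:.3g}")
```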

18.

Background  

In testing for differential gene expression involving multiple serial analysis of gene expression (SAGE) libraries, it is critical to account for both between- and within-library variation. Several methods have been proposed, including the t test, the t_w test, and an overdispersed logistic regression approach. The merits of these tests, however, have not been fully evaluated, and questions remain as to whether further improvements can be made.
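To illustrate why between-library variation matters, the sketch below compares per-library tag proportions with a plain Welch t-test rather than pooling counts across libraries; the t_w test additionally weights libraries by size, which is omitted here for simplicity, and all counts are invented.

```python
# Sketch: per-library tag proportions for one gene compared across conditions
# with Welch's t-test, so between-library variability enters the test.
import numpy as np
from scipy import stats

counts_a = np.array([12, 30, 8])            # tag counts, condition A libraries
totals_a = np.array([50000, 90000, 40000])  # library sizes, condition A
counts_b = np.array([45, 20, 60])           # tag counts, condition B libraries
totals_b = np.array([60000, 50000, 80000])  # library sizes, condition B

props_a = counts_a / totals_a               # per-library proportions
props_b = counts_b / totals_b

t, p = stats.ttest_ind(props_a, props_b, equal_var=False)
print(f"t = {t:.2f}, p = {p:.3f}")
```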

19.

Background, aim, and scope

Many studies apply different life cycle impact assessment (LCIA) methods to the same life cycle inventory (LCI) data and demonstrate that the assessment results differ depending on the LCIA method used. Although the importance of uncertainty is recognized, most studies focus on individual stages of LCA, such as the LCI and the normalization and weighting stages of LCIA. However, an important question has not been answered in previous studies: which part of the LCA process contributes the most uncertainty? Understanding the uncertainty contribution of each LCA component will help improve the credibility of LCA.

Methodology

A methodology is proposed to systematically analyze the uncertainties involved in the entire LCA procedure. Monte Carlo simulation is used to analyze the uncertainties associated with the LCI, LCIA, and normalization and weighting processes. Five LCIA methods are considered in this study: Eco-indicator 99, EDIP, EPS, IMPACT 2002+, and LIME. The uncertainty of the environmental performance for individual impact categories (e.g., global warming, ecotoxicity, acidification, eutrophication, photochemical smog, human health) is also calculated and compared. The LCA of municipal solid waste management strategies in Taiwan is used as a case study to illustrate the proposed methodology.
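A minimal sketch of this kind of Monte Carlo propagation, assuming lognormal LCI exchanges, normally distributed characterization factors, and a Dirichlet-sampled weighting step; the distributions and numbers are illustrative and are not taken from the Taiwan case study.

```python
# Sketch: propagate uncertainty from LCI exchanges through characterization
# and weighting, collecting the distribution of the final weighted score.
import numpy as np

rng = np.random.default_rng(6)
n_runs = 10_000

# Lognormal LCI exchanges (two exchanges per functional unit)
lci = rng.lognormal(mean=np.log([2.0, 0.5]), sigma=[0.2, 0.4],
                    size=(n_runs, 2))

# Uncertain characterization factors (2 exchanges x 2 impact categories)
cf = rng.normal(loc=[[1.0, 0.1], [25.0, 3.0]],
                scale=[[0.05, 0.02], [2.0, 0.5]],
                size=(n_runs, 2, 2))

impacts = np.einsum("ne,nec->nc", lci, cf)      # category indicator results
weights = rng.dirichlet([8, 2], size=n_runs)    # uncertain weighting step
score = (impacts * weights).sum(axis=1)

print("median score:", round(float(np.median(score)), 2),
      "| 95% interval:", np.round(np.percentile(score, [2.5, 97.5]), 2))
```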

Results

The primary source of uncertainty in the case study is the LCI stage, for any given LCIA method. Among the LCIA methods, EDIP has the highest uncertainty and Eco-indicator 99 the lowest. Setting aside the uncertainty caused by the LCI, the weighting step has higher uncertainty than the normalization step when Eco-indicator 99 is used. Among the impact categories, global warming has the lowest uncertainty, followed by eutrophication; ecotoxicity, human health, and photochemical smog have higher uncertainty.

Discussion

In this case study of municipal waste management, it is confirmed that different LCIA methods generate different assessment results; in other words, the selection of the LCIA method is an important source of uncertainty. In this study, the impacts on human health, ecotoxicity, and photochemical smog can vary considerably when the uncertainties of the LCI and LCIA procedures are considered. To reduce the errors in impact estimation caused by geographic differences, it is important to determine whether, and which, impact category assessments need to be modified to reflect local conditions.

Conclusions

This study develops a methodology for systematically evaluating the uncertainties involved in the entire LCA procedure in order to identify the contributions of the different assessment stages to the overall uncertainty. Which impact category assessments need modification can then be determined by comparing the uncertainties of the impact categories.

Recommendations and perspectives

Such an assessment of the system uncertainty of LCA will facilitate its improvement. If the main source of uncertainty is the LCI stage, researchers should focus on the quality of the LCI data. If the primary source of uncertainty is the LCIA stage, directly applying LCIA methods in nations for which they were not developed should be avoided.

20.

Background  

Although the use of clustering methods has rapidly become one of the standard computational approaches in the literature of microarray gene expression data analysis, little attention has been paid to uncertainty in the results obtained.
