Similar Articles
20 similar articles found
1.
2.
3.
It has recently been acknowledged that the quality of the data used in Life Cycle Assessment (LCA) is one of the most important factors limiting the application of the methodology. Early approaches to this problem, based solely on Data Quality Indicators (DQI), have revealed their limitations, and stochastic models are increasingly proposed as an alternative. Although they face methodological and practical difficulties, for instance in characterizing the distributions of input data, these stochastic models can significantly enhance decision-making in LCA. Uncertainty and data quality, however, are two distinct attributes. No matter how sophisticated the stochastic models are, they do not address whether the data used are adequate for the goal of the study. Actual data on the distribution of SO2 emissions from US coal-fired power plants, for instance, would be of low quality for a European study. It is therefore believed that mixed DQI/stochastic approaches should be developed in the future.
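As a rough illustration of the stochastic approach described above, the following Python sketch propagates assumed lognormal uncertainties on two hypothetical inventory flows through a simple impact calculation by Monte Carlo sampling. The flow names, medians, geometric standard deviations and characterisation factors are all invented for illustration and are not taken from the study.

    import numpy as np

    rng = np.random.default_rng(42)
    n_runs = 10_000

    # Hypothetical inventory flows (kg per functional unit) with assumed
    # lognormal uncertainty; the geometric standard deviations (gsd) stand in
    # for what a DQI-based pedigree assessment might suggest.
    flows = {
        "so2_power_plant": {"median": 1.2e-3, "gsd": 1.5, "cf": 1.0},  # kg SO2-eq/kg
        "so2_transport":   {"median": 4.0e-4, "gsd": 2.0, "cf": 1.0},
    }

    total = np.zeros(n_runs)
    for flow in flows.values():
        # Lognormal parameters: mu = ln(median), sigma = ln(gsd)
        samples = rng.lognormal(mean=np.log(flow["median"]),
                                sigma=np.log(flow["gsd"]),
                                size=n_runs)
        total += samples * flow["cf"]

    print(f"median impact: {np.median(total):.2e} kg SO2-eq per functional unit")
    print(f"95% interval: {np.percentile(total, 2.5):.2e} to {np.percentile(total, 97.5):.2e}")

A mixed DQI/stochastic scheme could, for example, translate pedigree scores into the geometric standard deviations used in such a simulation.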

4.
This protocol details the steps for data quality assessment and control that are typically carried out during case-control association studies. The steps described involve the identification and removal of DNA samples and markers that introduce bias. These steps are critical to the success of a case-control study and must be completed before statistically testing for association. We describe how to use PLINK, a tool for handling SNP data, to assess the failure rate per individual and per SNP and the degree of relatedness between individuals. We also detail other quality-control procedures, including the use of the SMARTPCA software to identify ancestral outliers. These platforms were selected because they are user-friendly, widely used and computationally efficient. The steps needed to detect and establish a disease association using case-control data are not discussed here. Issues concerning study design and marker selection in case-control studies have been discussed in our earlier protocols. This protocol, which is routinely used in our labs, should take approximately 8 h to complete.
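The protocol relies on PLINK's own reports (for example, the per-individual missingness written by its --missing command); purely as an illustration of the downstream filtering step, the hedged Python sketch below reads such a report and lists samples above a missingness cut-off. The file name and the 3% threshold are illustrative assumptions, not values prescribed by the protocol.

    import pandas as pd

    # PLINK 1.x writes per-individual missingness to <out>.imiss, with columns
    # FID, IID, MISS_PHENO, N_MISS, N_GENO and F_MISS (fraction missing).
    # The file name and the 3% cut-off below are illustrative assumptions.
    imiss = pd.read_csv("qc_step1.imiss", delim_whitespace=True)

    threshold = 0.03
    failed = imiss[imiss["F_MISS"] > threshold]

    print(f"{len(failed)} of {len(imiss)} samples exceed {threshold:.0%} missing genotypes")

    # Write FID/IID pairs in the two-column format accepted by plink --remove.
    failed[["FID", "IID"]].to_csv("samples_to_remove.txt",
                                  sep=" ", header=False, index=False)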

5.
6.
Attitudinal questions are widely used in statistical questionnaire surveys, but the reliability of the answers is questionable because respondents' psychological tendencies introduce systematic errors. In this study, control and experimental groups were set up to investigate and analyse these errors. The results show that the systematic selection tendencies arise from how the answer options are framed and follow identifiable patterns. Survey researchers should therefore take respondents' psychological tendencies into account and design statistical questionnaires scientifically and rationally, thereby improving the overall quality of survey data.

7.
In an analysis of avian and mammalian thermal tolerances recently published in this journal, Khaliq et al. (2015) reported that endotherm thermal niches are phylogenetically conserved in tropical, but not temperate, regions. However, closer examination of the data upon which this analysis was based reveals that many of the upper critical temperature (UCT) data are not valid. Approximately 55% and 42% of avian and mammalian UCT data, respectively, originated from studies in which animals were not exposed to air temperatures high enough to elicit an increase in metabolic rate above minimum levels; the cited UCT values are merely the highest air temperatures at which measurements took place. An additional 18% and 25% of avian and mammalian UCT data, respectively, represent values based on just one individual per species and/or measurements at too few air temperatures above the thermoneutral zone (TNZ) to reliably estimate the UCT.

8.
9.
10.

Aim

To assess the comparability of five performance indicator scores for treatment delay among patients diagnosed with ST-segment elevation myocardial infarction (STEMI) undergoing primary percutaneous coronary intervention in relation to the quality of the underlying data.

Methods

Secondary analyses were performed on data from 1017 patients in seven Dutch hospitals. Data were collected using standardised forms for patients discharged in 2012. Comparability was assessed as the number of occasions the indicator threshold was reached for each hospital.

Results

Hospitals recorded different time points based on different interpretations of the definitions. This led to substantial differences in indicator scores, ranging from 57 to 100% of cases in which the indicator threshold was reached. Some hospitals recorded all the data elements required to calculate the performance indicators, but none of the data elements could be retrieved in a fully automated way. Moreover, the recording, accessibility and completeness of time points varied widely within and between hospitals.

Conclusion

Hospitals use different definitions for treatment delay and vary greatly in the extent to which the necessary data are available, accessible and complete, impeding comparability between hospitals. Indicator developers, users and hospitals providing data should be aware of these issues and aim to improve data quality in order to facilitate comparability of performance indicators.
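The methods above score each hospital by the number of occasions on which the indicator threshold was reached. The abstract does not spell out the indicator definitions, so as an illustration only, the sketch below counts the proportion of patients whose delay falls within a hypothetical 90-minute threshold; both the records and the threshold are invented and do not correspond to the five indicators studied.

    from datetime import datetime

    # Invented (arrival, start-of-PCI) timestamps; the 90-minute threshold is
    # an assumption for illustration only.
    records = [
        (datetime(2012, 3, 1, 10, 5),  datetime(2012, 3, 1, 11, 10)),
        (datetime(2012, 3, 2, 22, 40), datetime(2012, 3, 3, 0, 35)),
        (datetime(2012, 3, 4, 14, 0),  datetime(2012, 3, 4, 15, 20)),
    ]

    threshold_minutes = 90
    met = sum(
        (pci_start - arrival).total_seconds() / 60 <= threshold_minutes
        for arrival, pci_start in records
    )
    print(f"indicator score: {met / len(records):.0%} of patients within {threshold_minutes} min")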

11.

12.
Data quality assurance and quality control are critical to the effective conduct of a clinical trial. In the present commentary, we discuss our experience in a large, multicenter stroke trial. In addition to standard data quality control techniques, we have developed novel methods to enhance the entire process. Central to our methods is the use of clinical monitors who are trained in the techniques of data monitoring.

13.
BACKGROUND: The recent development of semiautomated techniques for staining and analyzing flow cytometry samples has presented new challenges. Quality control and quality assessment are critical when developing new high-throughput technologies and their associated information services. Our experience suggests that significant bottlenecks remain in the development of high-throughput flow cytometry methods for data analysis and display. In particular, data quality control and quality assessment are crucial steps in processing and analyzing high-throughput flow cytometry data. METHODS: We propose a variety of graphical exploratory data-analytic tools for exploring ungated flow cytometry data. We have implemented a number of specialized functions and methods in the Bioconductor package rflowcyt. We demonstrate the use of these approaches by investigating two independent sets of high-throughput flow cytometry data. RESULTS: We found that graphical representations can reveal substantial nonbiological differences in samples. Empirical cumulative distribution function (ECDF) plots and summary scatterplots were especially useful in the rapid identification of problems not identified by manual review. CONCLUSIONS: Graphical exploratory data-analytic tools are a quick and useful means of assessing data quality. We propose that the described visualizations be used as quality assessment tools and, where possible, for quality control.
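rflowcyt itself is an R/Bioconductor package; purely to illustrate the kind of ECDF comparison advocated above, the following Python/matplotlib sketch overlays empirical cumulative distribution functions of one fluorescence channel for three made-up samples, so that a shifted (suspect) sample stands out. The sample names, channel and values are all invented.

    import numpy as np
    import matplotlib.pyplot as plt

    rng = np.random.default_rng(0)

    # Made-up fluorescence intensities; one well is deliberately shifted to
    # mimic a non-biological (e.g. staining or instrument) artefact.
    samples = {
        "plate1_A01": rng.normal(500, 80, 5000),
        "plate1_A02": rng.normal(510, 85, 5000),
        "plate1_A03": rng.normal(650, 80, 5000),  # suspect well
    }

    for name, values in samples.items():
        x = np.sort(values)
        y = np.arange(1, len(x) + 1) / len(x)  # empirical CDF
        plt.plot(x, y, label=name)

    plt.xlabel("FL1 intensity (arbitrary units)")
    plt.ylabel("Empirical cumulative probability")
    plt.legend()
    plt.title("ECDF overlay as a quick quality-assessment view")
    plt.savefig("ecdf_qc.png", dpi=150)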

14.
As RNA-seq has become widely used, there is increasing awareness of the importance of experimental design, bias removal, accurate quantification and control of false positives for proper data analysis. We introduce the NOISeq R package for quality control and analysis of count data. We show how the available diagnostic tools can be used to monitor quality issues, make pre-processing decisions and improve analysis. We demonstrate that the non-parametric NOISeqBIO method efficiently controls false discoveries in experiments with biological replication and outperforms state-of-the-art methods. NOISeq is a comprehensive resource that meets current needs for robust, data-aware analysis of RNA-seq differential expression.

15.
16.
Aim: To provide a comprehensive evaluation of the quality of the data at the Singapore Cancer Registry (SCR). Methods: Quantitative and semi-quantitative methods were used to assess the comparability, completeness, accuracy and timeliness of data for the period 1968–2013, with a focus on 2008–2012. Results: The SCR coding and classification systems follow international standards. Overall completeness for 2008–2012 was estimated at 98.1% using the flow method and 97.5% using the capture-recapture method. For the same period, 91.9% of cases were morphologically verified (site-specific range: 40.4–100%), with 1.1% death-certificate-only (DCO) cases. Under-reporting in 2011 and 2012 due to timely publication was estimated at 0.03% and 0.51%, respectively. Conclusion: This review shows that the processes in place at the SCR yield data that are internationally comparable, relatively complete, valid and timely, allowing for greater confidence in the use of these data for cancer prevention, treatment and control.
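The abstract does not give the formulae, but two-source capture-recapture completeness checks of this kind are commonly based on the Chandra Sekar-Deming estimator; the sketch below shows the arithmetic on invented counts, which are not the SCR's actual figures.

    def capture_recapture_completeness(n_source1: int, n_source2: int,
                                       n_both: int, n_registered: int) -> float:
        """Two-source (Chandra Sekar-Deming) completeness estimate.

        Estimated true case count: N = n_source1 * n_source2 / n_both;
        completeness is the number of registered cases divided by N.
        """
        if n_both == 0:
            raise ValueError("the two sources must overlap for the estimator to work")
        n_estimated = n_source1 * n_source2 / n_both
        return n_registered / n_estimated

    # Invented counts (e.g. pathology reports vs. hospital discharge records,
    # with the overlap notified by both) for illustration only.
    print(f"estimated completeness: "
          f"{capture_recapture_completeness(9400, 9200, 8700, 9700):.1%}")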

17.
A shortcoming in current data quality assessment schemes is that data quality information is not used systematically to identify the critical data in a life cycle inventory (LCI) model. In addition, the existing criteria used to evaluate representativeness lack relevance to the specific context of a study. A novel framework is proposed herein for evaluating the representativeness of LCI data, including an analysis of the importance of the data and a modification of quality criteria based on unit-process characteristics. Temporal characteristics are analyzed by identifying the technology shift, because data generated before this point are considered outdated. Geographical and technological characteristics are analyzed by defining a “related area” and a “related technology,” which is done by identifying a number of relevant geographical and technical factors and then comparing the collected data with these factors. The framework was illustrated in a case study on household waste incineration in Denmark. The results demonstrated the applicability of the method in practice and provided data quality criteria unique to waste incineration unit processes, for example different time intervals for evaluating temporal representativeness. The proposed method is time-demanding, however, so characteristic analyses at the sector level are a feasible alternative to each user carrying out the analyses individually.
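As a minimal sketch of the framework's temporal criterion (data generated before the identified technology shift are considered outdated), the following Python snippet flags a unit-process dataset as temporally representative or not; the dataset name, data year and assumed shift year are hypothetical.

    from dataclasses import dataclass

    @dataclass
    class UnitProcessData:
        name: str
        data_year: int

    def temporally_representative(data: UnitProcessData,
                                  technology_shift_year: int) -> bool:
        # Data generated before the technology shift are treated as outdated,
        # following the framework's temporal criterion.
        return data.data_year >= technology_shift_year

    # Hypothetical example: a 2005 waste-incineration dataset judged against an
    # assumed flue-gas-cleaning technology shift around 2008.
    dataset = UnitProcessData("DK household waste incineration", 2005)
    print(temporally_representative(dataset, technology_shift_year=2008))  # False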

18.
19.
20.
Data archiving
R Butlin. Heredity (2011) 106(5): 709

