20 similar documents found
3.
Over a few short years, microarray gene expression profiling has permeated most areas of biomedical research. Microarrays are now poised to enter the more demanding realm of clinical applications. The prospect of using microarray data to derive biomarkers of disease or toxicity, predict prognosis, or select treatments raises the validity and reliability bar substantially higher. The potential future payoffs are huge in terms of faster approval of more efficacious and safer medical interventions, and a more personalized implementation of them. Arriving at the future sooner rather than later is the motivation for the FDA-led MicroArray Quality Control (MAQC) project. The widespread collaboration aims to assess achievable technical performance of microarrays and capabilities and limitations of methods for microarray data analysis.
4.
BiQ Analyzer: visualization and quality control for DNA methylation data from bisulfite sequencing
Bock C, Reither S, Mikeska T, Paulsen M, Walter J, Lengauer T. Bioinformatics (Oxford, England), 2005, 21(21):4067-4068.
SUMMARY: Manual processing of DNA methylation data from bisulfite sequencing is a tedious and error-prone task. Here we present an interactive software tool that provides start-to-end support for this process. In an easy-to-use manner, the tool helps the user to import the sequence files from the sequencer, to align them, to exclude or correct critical sequences, to document the experiment, to perform basic statistics and to produce publication-quality diagrams. Emphasis is put on quality control: the program automatically assesses data quality and provides warnings and suggestions for dealing with critical sequences. The BiQ Analyzer program is implemented in the Java programming language and runs on any platform for which a recent Java virtual machine is available. AVAILABILITY: The program is available without charge for non-commercial users and can be downloaded from http://biq-analyzer.bioinf.mpi-inf.mpg.de/
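To make the processing steps concrete: the heart of bisulfite-sequencing analysis is comparing an aligned read against its genomic reference, calling methylation at CpG cytosines (C→C is methylated, C→T is unmethylated) and using non-CpG cytosines, which should all read as T, to estimate bisulfite conversion efficiency. The following is a minimal Python sketch of that bookkeeping, not BiQ Analyzer code; the gap-free alignment and the example sequences are invented for illustration.

```python
# Sketch of bisulfite-sequencing bookkeeping: compare a bisulfite-converted
# read to its genomic reference (assumed already aligned, gap-free).
def analyze_bisulfite(reference: str, read: str) -> dict:
    assert len(reference) == len(read), "sketch assumes a gap-free alignment"
    cpg_meth = cpg_unmeth = noncpg_c = noncpg_converted = 0
    for i, (ref_base, read_base) in enumerate(zip(reference, read)):
        if ref_base != "C":
            continue
        in_cpg = i + 1 < len(reference) and reference[i + 1] == "G"
        if in_cpg:
            if read_base == "C":      # cytosine protected: methylated
                cpg_meth += 1
            elif read_base == "T":    # cytosine converted: unmethylated
                cpg_unmeth += 1
        else:
            noncpg_c += 1             # non-CpG cytosines should convert;
            noncpg_converted += read_base == "T"  # leftovers flag poor conversion
    conversion = noncpg_converted / noncpg_c if noncpg_c else float("nan")
    return {"methylated_CpGs": cpg_meth,
            "unmethylated_CpGs": cpg_unmeth,
            "conversion_rate": conversion}

# Invented example: one methylated CpG, one unmethylated CpG, and two
# fully converted non-CpG cytosines (conversion rate 1.0).
print(analyze_bisulfite("ACGTACGTCATC", "ACGTATGTTATT"))
```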
5.
Hummingbirds are an important model system in avian biology, but to date the group has been the subject of remarkably few phylogenetic investigations. Here we present partitioned Bayesian and maximum likelihood phylogenetic analyses for 151 of approximately 330 species of hummingbirds and 12 outgroup taxa based on two protein-coding mitochondrial genes (ND2 and ND4), flanking tRNAs, and two nuclear introns (AK1 and BFib). We analyzed these data under several partitioning strategies ranging between unpartitioned and a maximum of nine partitions. In order to select a statistically justified partitioning strategy following partitioned Bayesian analysis, we considered four alternative criteria: Bayes factors, a modified version of the Akaike information criterion for small sample sizes (AICc), the Bayesian information criterion (BIC), and a decision-theoretic methodology (DT). Following partitioned maximum likelihood analyses, we selected a best-fitting strategy using hierarchical likelihood ratio tests (hLRTs), the conventional AICc, BIC, and DT, concluding that the most stringent criterion, the performance-based DT, was the most appropriate methodology for selecting amongst partitioning strategies. In the context of our well-resolved and well-supported phylogenetic estimate, we consider the historical biogeography of hummingbirds using ancestral state reconstructions of (1) primary geographic region of occurrence (i.e., South America, Central America, North America, Greater Antilles, Lesser Antilles), (2) Andean or non-Andean geographic distribution, and (3) minimum elevational occurrence. These analyses indicate that the basal hummingbird assemblages originated in the lowlands of South America, that most of the principal clades of hummingbirds (all but Mountain Gems and possibly Bees) originated on this continent, and that there have been many (at least 30) independent invasions of other primary landmasses, especially Central America.
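Two of the information criteria compared above have simple closed forms: AICc = -2 ln L + 2k + 2k(k+1)/(n-k-1) and BIC = -2 ln L + k ln n, where L is the maximized likelihood, k the number of free parameters, and n the sample size (for sequence data, typically the alignment length). A minimal Python sketch with invented likelihood values shows how adding partitions trades likelihood gains against the parameter penalty:

```python
import math

def aicc(lnL: float, k: int, n: int) -> float:
    """Small-sample Akaike information criterion (requires n > k + 1)."""
    return -2.0 * lnL + 2.0 * k + (2.0 * k * (k + 1)) / (n - k - 1)

def bic(lnL: float, k: int, n: int) -> float:
    """Bayesian information criterion."""
    return -2.0 * lnL + k * math.log(n)

# Invented numbers: an unpartitioned model versus a nine-partition model
# of the same alignment; lower scores are preferred by both criteria.
n_sites = 4500
models = {"unpartitioned": (-52000.0, 10), "nine partitions": (-51500.0, 90)}
for name, (lnL, k) in models.items():
    print(f"{name}: AICc={aicc(lnL, k, n_sites):.1f}, BIC={bic(lnL, k, n_sites):.1f}")
```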
6.
L. Belbin. Austral Ecology, 1992, 17(3):255-262.
Comparing a new set of samples to what may be considered a reference set is a common problem in ecology. The investigator may be interested in the degree of correspondence or any anomalies. For example, does a set of existing reserves adequately cover the range of communities sampled in a region? A technique for such comparisons is proposed. Being dependent solely on estimates of ecological resemblance, it is simple, efficient and robust. Significant difference is defined by means of a resemblance coefficient. A threshold value denoting significant difference can be defined either by species overlap or other attributes of the data. For presence/absence data the Czekanowski coefficient provides a suitable measure of ecological resemblance. Traditional discriminant analysis does not provide a viable alternative due to its limitations in accommodating ecological data.
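For presence/absence data, the Czekanowski coefficient mentioned above reduces to the familiar 2a/(2a + b + c), where a is the number of species shared by the two samples and b and c are the numbers found in only one or the other. A minimal Python sketch with invented species lists:

```python
def czekanowski(sample1: set, sample2: set) -> float:
    """Czekanowski coefficient for presence/absence data: 2a/(2a + b + c),
    where a = shared species, b and c = species unique to each sample.
    Ranges from 0 (no overlap) to 1 (identical species lists)."""
    a = len(sample1 & sample2)
    b = len(sample1 - sample2)
    c = len(sample2 - sample1)
    denom = 2 * a + b + c
    return 2 * a / denom if denom else 0.0

# Invented data: a new sample scored against a reference sample.
reference = {"sp1", "sp2", "sp3", "sp4"}
new_site = {"sp2", "sp3", "sp4", "sp5", "sp6"}
print(czekanowski(reference, new_site))  # 2*3 / (6 + 1 + 2) = 0.667
```

A threshold on this coefficient (for example, derived from the species overlap observed among the reference samples themselves) then serves as the significance cutoff the abstract describes.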
7.
We present MultiGO, a web-enabled tool for the identification of biologically relevant gene sets from hierarchically clustered gene expression trees (http://ekhidna.biocenter.helsinki.fi/poxo/multigo). High-throughput gene expression measuring techniques, such as microarrays, are nowadays often used to monitor the expression of thousands of genes. Since these experiments can produce overwhelming amounts of data, computational methods that assist the data analysis and interpretation are essential. MultiGO is a tool that automatically extracts the biological information for multiple clusters and determines their biological relevance, and hence facilitates the interpretation of the data. Since the entire expression tree is analysed, MultiGO is guaranteed to report all clusters that share a common enriched biological function, as defined by Gene Ontology annotations. The tool also identifies a plausible cluster set, which represents the key biological functions affected by the experiment. The performance is demonstrated by analysing drought-, cold- and abscisic acid-related expression data sets from Arabidopsis thaliana. The analysis not only identified known biological functions, but also brought into focus the less established connections to defense-related gene clusters. Thus, in comparison to analyses of manually selected gene lists, the systematic analysis of every cluster can reveal unexpected biological phenomena and produce much more comprehensive biological insights into the experiment of interest.
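The abstract does not name MultiGO's enrichment statistic; the standard choice for deciding whether a Gene Ontology term is over-represented in a cluster is a hypergeometric (one-sided Fisher) test. A minimal sketch with invented annotation counts:

```python
from scipy.stats import hypergeom

def go_enrichment_p(cluster_hits: int, cluster_size: int,
                    genome_hits: int, genome_size: int) -> float:
    """P(X >= cluster_hits) under the hypergeometric null: a cluster of
    cluster_size genes drawn from genome_size genes, of which genome_hits
    carry the GO term being tested."""
    return hypergeom.sf(cluster_hits - 1, genome_size, genome_hits, cluster_size)

# Invented counts: 12 of 40 clustered genes annotated with a drought-related
# GO term, versus 300 of 25,000 genes genome-wide.
print(go_enrichment_p(12, 40, 300, 25000))  # vanishingly small p-value
```

Applied to every node of the expression tree (with appropriate multiple-testing correction), such a test yields the kind of exhaustive cluster annotation MultiGO reports.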
11.
MOTIVATION: Microarrays are high-throughput tools for parallel miniaturized detection of biomolecules. In contrast to experiments using ratios of signals in two channels, experiments with only one fluorescent dye cause special problems for data analysis. The present work compares algorithms for quality filtering on spot level as well as array/slide level. RESULTS: Methods for quantitative spot filtering are discussed and new sets of quality scores for data preprocessing are designed. As measures of spot quality also reflect the quality of protocols, they were employed to find the optimal print buffer in an optimization experiment. In order to determine problematic arrays within a set of replicates we tested methods of outlier detection which can suitably replace the visual inspection of slides. CONTACT: Ursula.Sauer@arcs.ac.at.
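As an illustration of replacing visual slide inspection with a quantitative outlier screen (the rule below is a generic choice for illustration, not the paper's method): flag any replicate array whose median correlation with the other replicates is a robust-z outlier.

```python
import numpy as np

def flag_outlier_arrays(arrays: np.ndarray, z_cutoff: float = 3.5) -> np.ndarray:
    """Flag replicate arrays (rows) whose median correlation with the
    other replicates is unusually low, scored with a robust median/MAD z."""
    corr = np.corrcoef(arrays)              # replicate-by-replicate correlations
    np.fill_diagonal(corr, np.nan)
    med_corr = np.nanmedian(corr, axis=1)   # each slide's typical agreement
    center = np.median(med_corr)
    mad = max(np.median(np.abs(med_corr - center)), 1e-12)
    robust_z = 0.6745 * (med_corr - center) / mad
    return np.where(robust_z < -z_cutoff)[0]

# Simulated data: five concordant replicate slides plus one noisy slide.
rng = np.random.default_rng(0)
signal = rng.normal(10, 2, size=2000)
good = signal + rng.normal(0, 0.3, size=(5, 2000))
bad = signal + rng.normal(0, 5.0, size=2000)
print(flag_outlier_arrays(np.vstack([good, bad])))  # expect [5]
```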
12.
N. Dudding. Cytopathology, 1995, 6(2):95-99.
Rapid rescreening of approximately 30% of all negative and inadequate consecutive smears was carried out over a 26-month period. Smears (n = 24012) were rescreened using a ×6.3 objective only. Two minutes were allowed for each slide. Thirty-nine smears were found to have been incorrectly diagnosed as negative, a rate of 0.16%. This can be compared with the previous 26 months, during which the traditional 1 in 10 random rescreening of unsatisfactory and negative smears had been carried out at a routine pace and with a ×10 objective. A total of 6866 smears were rescreened. Eleven were found to have been incorrectly diagnosed as negative, a rate of 0.16%. Rapid rescreening is as sensitive as 1 in 10 rescreening, and allows a greater proportion of smears to be rescreened. We propose that rapid rescreening should replace the traditional 1 in 10 rescreening method.
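The equivalence claim rests on the two miss rates, which the reported counts reproduce:

```python
# Miss rates from the reported counts; both round to 0.16%.
print(f"rapid (x6.3, 2 min):   {39 / 24012:.4%}")   # 0.1624%
print(f"routine 1-in-10 (x10): {11 / 6866:.4%}")    # 0.1602%
```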
13.
Burgoon LD, Eckel-Passow JE, Gennings C, Boverhof DR, Burt JW, Fong CJ, Zacharewski TR. Nucleic Acids Research, 2005, 33(19):e172.
Microarrays represent a powerful technology that provides the ability to simultaneously measure the expression of thousands of genes. However, it is a multi-step process with numerous potential sources of variation that can compromise data analysis and interpretation if left uncontrolled, necessitating the development of quality control protocols to ensure assay consistency and high-quality data. In response to emerging standards, such as the minimum information about a microarray experiment standard, tools are required to ascertain the quality and reproducibility of results within and across studies. To this end, an intralaboratory quality control protocol for two-color spotted microarrays was developed using cDNA microarrays from in vivo and in vitro dose-response and time-course studies. The protocol combines (i) diagnostic plots monitoring the degree of feature saturation, global feature and background intensities, and feature misalignments; (ii) plots monitoring the intensity distributions within arrays; and (iii) a support vector machine (SVM) model. The protocol is applicable to any laboratory with sufficient datasets to establish historical high- and low-quality data.
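As an illustration of the SVM component: each slide is summarized by a handful of quality features and classified against historically labeled good and bad arrays. The sketch below uses scikit-learn with simulated data; the feature names are invented stand-ins for the quantities the diagnostic plots monitor, not the paper's actual inputs.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(1)
n = 60
# Per-slide QC summaries (invented): saturation, background, misalignment.
features = np.column_stack([
    rng.uniform(0.0, 0.3, n),   # fraction of saturated features
    rng.normal(7.0, 1.0, n),    # log2 median background intensity
    rng.normal(0.0, 2.0, n),    # feature misalignment score
])
# Simulated historical labels: 1 = acceptable slide, 0 = reject.
labels = ((features[:, 0] < 0.15) & (features[:, 1] < 8.0)).astype(int)

model = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
model.fit(features, labels)
# Score two new slides: a clean one and a saturated, high-background one.
print(model.predict([[0.05, 6.8, 0.4], [0.25, 9.1, 3.0]]))  # expect [1 0]
```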
16.
It has been suggested that codon volatility (the proportion of the point-mutation neighbors of a codon that encode different amino acids) can be used as an index of past positive selection. We compared codon volatility with patterns of synonymous and nonsynonymous nucleotide substitution in genome-wide comparisons of orthologous genes between three pairs of related genomes: (1) the protists Plasmodium falciparum and P. yoelii, (2) the fungi Saccharomyces cerevisiae and S. paradoxus, and (3) the mammals mouse and rat. Codon volatility was not consistently associated with an elevated rate of nonsynonymous substitution, as would be expected under positive selection. Rather, the most consistent and powerful correlate of elevated codon volatility was nucleotide content at the second codon position, as expected, given the nature of the genetic code.
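Volatility as defined above is mechanical to compute from the genetic code; one common convention, followed here, is to exclude neighbors that are stop codons from the denominator. A minimal Python sketch (the printed codons show how synonymous codons can differ in volatility, which is what makes the index attractive, and also why second-position nucleotide content drives it):

```python
BASES = "TCAG"
AMINO = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
CODON_TABLE = {a + b + c: AMINO[16 * i + 4 * j + k]
               for i, a in enumerate(BASES)
               for j, b in enumerate(BASES)
               for k, c in enumerate(BASES)}

def volatility(codon: str) -> float:
    """Fraction of a codon's single-nucleotide neighbors that encode a
    different amino acid; stop-codon neighbors are excluded from the
    denominator, and stop codons themselves return nan."""
    codon = codon.upper().replace("U", "T")
    aa = CODON_TABLE[codon]
    if aa == "*":
        return float("nan")
    different = total = 0
    for pos in range(3):
        for base in BASES:
            if base == codon[pos]:
                continue
            neighbor = CODON_TABLE[codon[:pos] + base + codon[pos + 1:]]
            if neighbor == "*":
                continue
            total += 1
            different += neighbor != aa
    return different / total

# Two serine codons differ in volatility (0.625 vs 0.889) despite encoding
# the same amino acid; glycine's GGG sits in between at 0.667.
for c in ("TCG", "AGT", "GGG"):
    print(c, CODON_TABLE[c], round(volatility(c), 3))
```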
18.
Partial screening was performed on 10 800 cervical smears, comprising 8640 filed negative and unsatisfactory smears and 2160 newly received smears prior to conventional screening. Each slide was screened for 30 s and those considered abnormal were reviewed by standard screening. Partial screening led to the detection of 27 additional infections and 44 additional cytological abnormalities. These detection rates are better than those obtained with the traditional method of rescreening only a proportion of smears. Amongst the smears partially screened before conventional screening, partial screening detected 37-66% of infections and 22-71% of cytological abnormalities. We recommend the use of partial rescreening of all negatively reported smears as a method of internal quality control in cervical cytology laboratories.
19.
Normalization is an essential step in the analysis of high-throughput data. Multi-sample global normalization methods, such as quantile normalization, have been successfully used to remove technical variation. However, these methods rely on the assumption that observed global changes across samples are due to unwanted technical variability. Applying global normalization methods has the potential to remove biologically driven variation. Currently, it is up to the subject matter experts to determine if the stated assumptions are appropriate. Here, we propose a data-driven alternative. We demonstrate the utility of our method (quantro) through examples and simulations. A software implementation is available from http://www.bioconductor.org/packages/release/bioc/html/quantro.html.
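quantro itself is distributed as an R/Bioconductor package; as a Python illustration of the global method whose assumption it tests, quantile normalization forces every sample to share one distribution by replacing each sample's sorted values with the across-sample means at each rank:

```python
import numpy as np

def quantile_normalize(x: np.ndarray) -> np.ndarray:
    """Quantile-normalize columns (samples) of x: rank each column, then
    substitute the mean across samples of the values at each rank.
    Ties are ignored for simplicity."""
    ranks = np.argsort(np.argsort(x, axis=0), axis=0)  # 0-based ranks per column
    mean_by_rank = np.sort(x, axis=0).mean(axis=1)     # shared reference distribution
    return mean_by_rank[ranks]

# Invented toy data: three samples (columns) with different global shifts.
x = np.array([[5.0, 7.0, 6.0],
              [2.0, 4.0, 3.5],
              [9.0, 11.0, 10.0],
              [4.0, 6.0, 5.0]])
print(quantile_normalize(x))  # every column now has the same sorted values
```

If the shifts between those columns were biological rather than technical, this step would erase them, which is precisely the assumption quantro is designed to check.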
20.
OBJECTIVE: The Delphi technique is a structured process commonly used to develop healthcare quality indicators, but there is little guidance for researchers who wish to use it. This study aimed 1) to describe reporting of the Delphi method used to develop quality indicators, 2) to discuss specific methodological issues in quality indicator selection, and 3) to give guidance about this practice. METHODOLOGY AND MAIN FINDINGS: Three electronic databases were searched over a 30-year period (1978-2009). All articles that used the Delphi method to select quality indicators were identified. A standardized data extraction form was developed. Four domains (questionnaire preparation, expert panel, progress of the survey and Delphi results) were assessed. Of 80 included studies, quality of reporting varied significantly between items (from 9% for the experts' years of experience to 98% for the type of Delphi used). Reporting of methodological aspects needed to evaluate the reliability of the survey was insufficient: only 39% (31/80) of studies reported response rates for all rounds, 60% (48/80) that feedback was given between rounds, 77% (62/80) the method used to achieve consensus, and 57% (48/80) listed the quality indicators selected at the end of the survey. A modified Delphi procedure was used in 49/78 (63%) of studies, with a physical meeting of the panel members, usually between Delphi rounds. Median number of panel members was 17 (Q1: 11; Q3: 31). In 40/70 (57%) studies, the panel included multiple stakeholders, who were healthcare professionals in 95% (38/40) of cases. Among 75 studies describing criteria used to select quality indicators, 28 (37%) used validity and 17 (23%) feasibility. CONCLUSION: The use and reporting of the Delphi method for quality indicator selection need to be improved. We provide some guidance to investigators to improve the use and reporting of the method in future surveys.