20 similar documents found (search time: 15 ms)
1.
MOTIVATION: Microarray experiments often involve hundreds or thousands of genes. In a typical experiment, only a fraction of genes are expected to be differentially expressed; in addition, the measured intensities among different genes may be correlated. Depending on the experimental objectives, sample size calculations can be based on one of three specified measures: the sensitivity, true discovery and accuracy rates. The sample size problem is formulated as: the number of arrays needed to achieve the desired fraction of the specified measure at the desired family-wise power, given the type I error and (standardized) effect size. RESULTS: We present a general approach for estimating sample size under independent and equally correlated models using binomial and beta-binomial models, respectively. The sample sizes needed for a two-sample z-test are computed; the computed theoretical numbers agree well with Monte Carlo simulation results. Under more general correlation structures, however, the beta-binomial model can underestimate the needed sample size by about 1-5 arrays. CONTACT: jchen@nctr.fda.gov.
2.
Background
Microarray experiments are often performed with a small number of biological replicates, resulting in low statistical power for detecting differentially expressed genes and concomitant high false positive rates. While increasing the sample size can increase statistical power and decrease error rates, with too many samples, valuable resources are not used efficiently. The issue of how many replicates are required in a typical experimental system needs to be addressed. Of particular interest is the difference in required sample sizes for similar experiments in inbred vs. outbred populations (e.g. mouse and rat vs. human).
3.
Microarray technology allows simultaneous comparison of expression levels of thousands of genes under each condition. This paper concerns sample size calculation in the identification of differentially expressed genes between a control and a treated sample. In a typical experiment, only a fraction of genes (altered genes) is expected to be differentially expressed between two samples. Sample size determination depends on a number of factors, including the specified significance level (α), the desired statistical power (1-β), the fraction (η) of truly altered genes out of the total g genes studied, and the effect sizes (Δ) for the altered genes. This paper proposes a method to calculate the number of arrays required to detect at least 100λ% (where 0 < λ ≤ 1) of the truly altered genes under the model of an equal effect size for all altered genes. The required numbers of arrays are tabulated for various values of α, β, Δ, η and λ for the one-sample and two-sample t-tests with g = 10,000. Based on the proposed approach, to identify up to 90% of truly altered genes among the unknown number of truly altered genes, the estimated numbers of arrays needed appear manageable. For instance, when the standardized effect size is at least 2.0, the number of arrays needed is at most 14 for the two-sample t-test and at most 10 for the one-sample t-test. As the cost per array declines, such array numbers become practical. The proposed method offers a simple, intuitive and practical way to determine the number of arrays needed in microarray experiments in which the true correlation structure among the genes under investigation cannot be reasonably assumed. An example dataset is used to illustrate the use of the proposed approach to plan microarray experiments.
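The "detect at least 100λ% of the altered genes" criterion above can be sketched with a binomial family-wise power calculation, assuming independent genes and a Bonferroni-adjusted two-sample z-test. The function names and the search over n are illustrative, not the paper's exact method:

```python
from math import ceil
from scipy.stats import norm, binom

def per_gene_power(n, delta, alpha):
    """Power of a two-sided, two-sample z-test with n arrays per
    group and standardized effect size delta, at level alpha."""
    z_a = norm.ppf(1 - alpha / 2)
    return norm.cdf(delta * (n / 2) ** 0.5 - z_a)

def prob_detect_fraction(n, delta, alpha, m1, lam):
    """P(detect at least ceil(lam * m1) of the m1 altered genes),
    treating per-gene detections as independent Bernoulli trials."""
    p = per_gene_power(n, delta, alpha)
    k = ceil(lam * m1)
    return binom.sf(k - 1, m1, p)

def arrays_per_group(delta, alpha, m1, lam, target=0.9, n_max=200):
    """Smallest per-group array count achieving the target
    probability of detecting the desired fraction of altered genes."""
    for n in range(2, n_max + 1):
        if prob_detect_fraction(n, delta, alpha, m1, lam) >= target:
            return n
    return None
```

The required n grows as λ, the target probability, or the number of tested genes (through the Bonferroni-adjusted α) increases.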
4.
Jung SH. Bioinformatics (Oxford, England) 2005, 21(14):3097-3104.
We consider identifying genes that are differentially expressed between two patient groups using microarray experiments. We propose a sample size calculation method for a specified number of true rejections while controlling the false discovery rate at a desired level. Input parameters for the sample size calculation include the allocation proportion in each group, the number of genes in each array, the number of differentially expressed genes and the effect sizes among the differentially expressed genes. A closed-form sample size formula is available if the projected effect sizes are equal among the differentially expressed genes; otherwise, our method requires numerically solving an equation. Simulation studies show that the calculated sample sizes are accurate in practical settings. The proposed method is demonstrated with a real study.
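The equal-effect-size case can be sketched as follows, using the usual two-group z-approximation. The FDR-to-marginal-level conversion below follows the common heuristic E[FDR] ≈ m0·α/(m0·α + r) and may differ in detail from the paper's formula:

```python
from math import ceil
from scipy.stats import norm

def fdr_sample_size(m, m1, r, fdr, delta, a1=0.5):
    """Sketch of FDR-based sample size: choose a marginal one-sided
    significance level alpha so the expected FDR at r true rejections
    meets the target, then size the two-sample z-test.
    m: genes per array, m1: DE genes, r: desired true rejections,
    delta: common standardized effect size, a1: allocation proportion."""
    m0 = m - m1
    # E[FDR] ~ m0*alpha / (m0*alpha + r)  =>  solve for alpha
    alpha = fdr * r / (m0 * (1 - fdr))
    power = r / m1                       # required marginal power
    z_a, z_b = norm.ppf(1 - alpha), norm.ppf(power)
    a2 = 1 - a1
    n = (z_a + z_b) ** 2 / delta ** 2 * (1 / a1 + 1 / a2)
    return ceil(n)                       # total sample size
```

Larger projected effect sizes sharply reduce the required total n, consistent with the abstract's input parameters.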
5.
Dell RB, Holleran S, Ramakrishnan R. ILAR Journal (National Research Council, Institute of Laboratory Animal Resources) 2002, 43(4):207-213.
Scientists who use animals in research must justify the number of animals to be used, and committees that review proposals to use animals in research must review this justification to ensure the appropriateness of the number of animals to be used. This article discusses when the number of animals to be used can best be estimated from previous experience and when a simple power and sample size calculation should be performed. Even complicated experimental designs requiring sophisticated statistical models for analysis can usually be simplified to a single key or critical question, so that simple formulae can be used to estimate the required sample size. Approaches to sample size estimation for various types of hypotheses are described, and equations are provided in the Appendix. Several web sites are cited for more information and for performing actual calculations.
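A minimal example of the "simple power and sample size calculation" the article recommends, using the classical normal-approximation formula for comparing two means (the article's appendix equations themselves are not reproduced here):

```python
from math import ceil
from scipy.stats import norm

def n_per_group(delta, sigma, alpha=0.05, power=0.8):
    """Classic normal-approximation sample size per group for a
    two-sided, two-sample comparison of means: detect a difference
    delta between group means with common standard deviation sigma."""
    z_a = norm.ppf(1 - alpha / 2)
    z_b = norm.ppf(power)
    return ceil(2 * ((z_a + z_b) * sigma / delta) ** 2)
```

For a difference of one standard deviation (delta/sigma = 1) at α = 0.05 and 80% power, this gives the familiar 16 animals per group.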
6.
7.
Microarray technology is rapidly emerging for genome-wide screening of differentially expressed genes between clinical subtypes or different conditions of human diseases. Traditional statistical testing approaches, such as the two-sample t-test or Wilcoxon test, are frequently used for evaluating statistical significance of informative expressions but require adjustment for large-scale multiplicity. Due to its simplicity, Bonferroni adjustment has been widely used to circumvent this problem. It is well known, however, that the standard Bonferroni test is often very conservative. In the present paper, we compare three multiple testing procedures in the microarray context: the original Bonferroni method, a Bonferroni-type improved single-step method and a step-down method. The latter two methods are based on nonparametric resampling, by which the null distribution can be derived with the dependency structure among gene expressions preserved and the family-wise error rate accurately controlled at the desired level. We also present a sample size calculation method for designing microarray studies. Through simulations and data analyses, we find that the proposed methods for testing and sample size calculation are computationally fast and control error and power precisely.
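The resampling-based single-step adjustment referred to above can be sketched in the Westfall-Young maxT style, which preserves the dependence structure among genes. This is an illustrative implementation, not the authors' exact procedure:

```python
import numpy as np

def maxT_adjusted_pvalues(X, y, n_perm=1000, seed=0):
    """Single-step maxT adjustment: adjusted p for gene g is the
    fraction of label permutations whose maximum |t| over all genes
    reaches that gene's observed |t|. Dependence among genes is
    preserved because whole sample columns are permuted together.
    X: genes x samples matrix; y: 0/1 group labels."""
    rng = np.random.default_rng(seed)

    def tstats(labels):
        a, b = X[:, labels == 0], X[:, labels == 1]
        num = a.mean(axis=1) - b.mean(axis=1)
        den = np.sqrt(a.var(axis=1, ddof=1) / a.shape[1] +
                      b.var(axis=1, ddof=1) / b.shape[1])
        return np.abs(num / den)

    t_obs = tstats(y)
    exceed = np.zeros_like(t_obs)
    for _ in range(n_perm):
        t_max = tstats(rng.permutation(y)).max()
        exceed += t_max >= t_obs
    return exceed / n_perm
```

Genes with a strong true shift receive small adjusted p-values, while the max-over-genes null distribution keeps the family-wise error rate controlled.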
8.
MOTIVATION: Two-color microarray experiments in which an aliquot derived from a common RNA sample is placed on each array are called reference designs. Traditionally, microarray experiments have used reference designs, but designs without a reference have recently been proposed as alternatives. RESULTS: We develop a statistical model that distinguishes the different levels of variation typically present in cancer data, including biological variation among RNA samples, experimental error and variation attributable to phenotype. Within the context of this model, we examine the reference design and two designs which do not use a reference, the balanced block design and the loop design, focusing particularly on efficiency of estimates and the performance of cluster analysis. We calculate the relative efficiency of designs when there are a fixed number of arrays available, and when there are a fixed number of samples available. Monte Carlo simulation is used to compare the designs when the objective is class discovery based on cluster analysis of the samples. The number of discrepancies between the estimated clusters and the true clusters was significantly smaller for the reference design than for the loop design. The efficiency of the reference design relative to the loop and block designs depends on the relation between inter- and intra-sample variance. These results suggest that if cluster analysis is a major goal of the experiment, then a reference design is preferable. If identification of differentially expressed genes is the main concern, then design selection may involve a consideration of several factors.
9.
Motivation: Owing to the experimental cost and difficulty in obtaining biological materials, it is essential to consider appropriate sample sizes in microarray studies. With the growing use of the False Discovery Rate (FDR) in microarray analysis, an FDR-based sample size calculation is essential. Method: We describe an approach to explicitly connect the sample size to the FDR and the number of differentially expressed genes to be detected. The method fits parametric models for the degree of differential expression using the Expectation-Maximization algorithm. Results: The applicability of the method is illustrated with simulations and studies of a lung microarray dataset. We propose to use a small training set or published data from relevant biological settings to calculate the sample size of an experiment. Availability: Code to implement the method in the statistical package R is available from the authors. Contact: jhu@mdanderson.org
10.
Nam J. Biometrics 1992, 48(2):389-395.
Woolson, Bean, and Rojas (1986, Biometrics 42, 927-932) present a simple approximation of sample size for Cochran's (1954, Biometrics 10, 417-451) test for detecting association between exposure and disease. It is useful in the design of case-control studies. We derive a sample size formula for Cochran's statistic with continuity correction which guarantees that the actual Type I error rate of the test does not exceed the nominal level. The corrected sample size is necessarily larger than the uncorrected one given by Woolson et al., and the relative difference between the two sample sizes is considerable. Allocation of an equal number of cases and controls within each stratum is asymptotically optimal when the costs per case and control are the same. When any effect of stratification is absent, Cochran's stratified test, although valid, is less efficient than the unstratified one, except in the important case of a balanced design.
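For the unstratified two-proportion case, the flavor of such a continuity correction can be illustrated with the Casagrande-Pike-Smith/Fleiss formula. This is an analogue of, not the paper's, corrected formula for Cochran's stratified statistic:

```python
from math import ceil, sqrt
from scipy.stats import norm

def n_uncorrected(p1, p2, alpha=0.05, power=0.8):
    """Normal-approximation n per group for comparing two proportions."""
    z_a, z_b = norm.ppf(1 - alpha / 2), norm.ppf(power)
    pbar = (p1 + p2) / 2
    num = (z_a * sqrt(2 * pbar * (1 - pbar)) +
           z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return num / (p1 - p2) ** 2

def n_corrected(p1, p2, alpha=0.05, power=0.8):
    """Continuity-corrected n per group (Casagrande-Pike-Smith form):
    always larger than the uncorrected n, which keeps the actual
    type I error rate at or below the nominal level."""
    n = n_uncorrected(p1, p2, alpha, power)
    return ceil(n / 4 * (1 + sqrt(1 + 4 / (n * abs(p1 - p2)))) ** 2)
```

For p1 = 0.5, p2 = 0.3 at α = 0.05 and 80% power, the uncorrected n per group is 93 and the corrected n is 103, illustrating the "considerable" relative difference noted in the abstract.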
11.
Many gene expression studies attempt to develop a predictor of pre-defined diagnostic or prognostic classes. If the classes are similar biologically, then the number of genes that are differentially expressed between the classes is likely to be small compared to the total number of genes measured. This motivates a two-step process for predictor development: a subset of differentially expressed genes is selected for use in the predictor, and the predictor is then constructed from these. Both of these steps introduce variability into the resulting classifier, so both must be incorporated in sample size estimation. We introduce a methodology for sample size determination for prediction in the context of high-dimensional data that captures the variability in both steps of predictor development. The methodology is based on a parametric probability model, but permits sample size computations to be carried out in a practical manner without extensive requirements for preliminary data. We find that many prediction problems do not require a large training set of arrays for classifier development.
12.
MOTIVATION: There is not a widely applicable method to determine the sample size for experiments basing statistical significance on the false discovery rate (FDR). RESULTS: We propose and develop the anticipated FDR (aFDR) as a conceptual tool for determining sample size. We derive mathematical expressions for the aFDR and anticipated average statistical power. These expressions are used to develop a general algorithm to determine sample size. We provide specific details on how to implement the algorithm for k-group (k ≥ 2) comparisons. The algorithm performs well for k-group comparisons in a series of traditional simulations and in a real-data simulation conducted by resampling from a large, publicly available dataset. AVAILABILITY: Documented S-plus and R code libraries are freely available from www.stjuderesearch.org/depts/biostats.
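The idea of an anticipated FDR can be illustrated by Monte Carlo: average the false discovery proportion over simulated experiments at a candidate sample size. This sketch uses a simple two-group z-statistic model and is not the authors' aFDR expressions:

```python
import numpy as np
from scipy.stats import norm

def anticipated_fdr(n, delta, m0, m1, alpha, n_sim=2000, seed=0):
    """Monte Carlo sketch of an 'anticipated FDR': the expected
    false discovery proportion at per-gene level alpha for a
    two-group comparison with n samples per group and standardized
    effect delta at the m1 truly changed genes. Returns the
    estimated FDR and average power."""
    rng = np.random.default_rng(seed)
    z_crit = norm.ppf(1 - alpha / 2)
    shift = delta * np.sqrt(n / 2)          # noncentrality of z-stat
    fdps, powers = [], []
    for _ in range(n_sim):
        z0 = rng.normal(0.0, 1.0, m0)       # null z-statistics
        z1 = rng.normal(shift, 1.0, m1)     # non-null z-statistics
        v = np.sum(np.abs(z0) >= z_crit)    # false rejections
        s = np.sum(np.abs(z1) >= z_crit)    # true rejections
        fdps.append(v / max(v + s, 1))
        powers.append(s / m1)
    return float(np.mean(fdps)), float(np.mean(powers))
```

Increasing n (or delta) raises power and drives the anticipated FDR down, which is the trade-off a sample size algorithm of this kind searches over.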
13.
14.
Single-nucleotide polymorphisms (SNPs), believed to determine human differences, are widely used to predict risk of diseases. Typically, clinical samples are limited and/or the sampling cost is high. Thus, it is essential to determine an adequate sample size needed to build a classifier based on SNPs. Such a classifier would facilitate correct classifications, while keeping the sample size to a minimum, thereby making the studies cost-effective. For coded SNP data from 2 classes, an optimal classifier and an approximation to its probability of correct classification (PCC) are derived. A linear classifier is constructed and an approximation to its PCC is also derived. These approximations are validated through a variety of Monte Carlo simulations. A sample size determination algorithm based on the criterion, which ensures that the difference between the 2 approximate PCCs is below a threshold, is given and its effectiveness is illustrated via simulations. For the HapMap data on Chinese and Japanese populations, a linear classifier is built using 51 independent SNPs, and the required total sample sizes are determined using our algorithm, as the threshold varies. For example, when the threshold value is 0.05, our algorithm determines a total sample size of 166 (83 for Chinese and 83 for Japanese) that satisfies the criterion.
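For intuition about the PCC approximations above, the Gaussian analogue has a closed form: with known parameters and equal priors, the optimal linear rule classifies correctly with probability Φ(Δ/2), where Δ is the Mahalanobis distance between the class means. This is a textbook result, not the paper's coded-SNP derivation:

```python
import numpy as np
from scipy.stats import norm

def lda_pcc(mu0, mu1, cov):
    """Probability of correct classification for the optimal linear
    rule under two Gaussian classes with common covariance and
    equal priors: PCC = Phi(Delta / 2), where Delta is the
    Mahalanobis distance between the class means."""
    d = np.asarray(mu1, float) - np.asarray(mu0, float)
    delta = float(np.sqrt(d @ np.linalg.solve(cov, d)))
    return norm.cdf(delta / 2)
```

Identical means give PCC = 0.5 (chance), and PCC grows monotonically with the separation Δ, which is why a sample size criterion can be phrased as a bound on the gap between two approximate PCCs.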
15.
Methods for identifying differentially expressed genes were compared on time-series microarray data simulated from artificial gene networks. Select methods were further analyzed on existing immune response data of Boldrick et al. (2002, Proc. Natl. Acad. Sci. USA 99, 972-977). Based on the simulations, we recommend the ANOVA variants of Cui and Churchill. Efron and Tibshirani's empirical Bayes Wilcoxon rank sum test is recommended when the background cannot be effectively corrected. Our proposed GSVD-based differential expression method was shown to detect subtle changes. ANOVA combined with GSVD was consistent on background-normalized simulation data. GSVD with empirical Bayes was consistent without background correction. Based on the Boldrick et al. data, ANOVA is best suited to detect changes in temporal data, while GSVD and empirical Bayes effectively detect individual spikes or overall shifts, respectively. For methods tested on simulation data, lowess after background correction improved results. On simulation data without background correction, lowess decreased performance compared to median centering.
16.
In this article, we compare Wald-type, logarithmic transformation, and Fieller-type statistics for the classical 2-sided equivalence testing of the rate ratio under matched-pair designs with a binary end point. These statistics can be implemented through sample-based, constrained least squares estimation and constrained maximum likelihood (CML) estimation methods. Sample size formulae based on the CML estimation method are developed. We consider formulae that control a prespecified power or confidence width. Our simulation studies show that statistics based on the CML estimation method generally outperform other statistics and methods with respect to actual type I error rate and average width of confidence intervals. Also, the corresponding sample size formulae are valid asymptotically in the sense that the exact power and actual coverage probability for the estimated sample size are generally close to their prespecified values. The methods are illustrated with a real example from a clinical laboratory study.
17.
A comparison of meta-analysis methods for detecting differentially expressed genes in microarray experiments
Motivation: The proliferation of public data repositories creates a need for meta-analysis methods to efficiently evaluate, integrate and validate related datasets produced by independent groups. A t-based approach has been proposed to integrate effect size from multiple studies by modeling both intra- and between-study variation. Recently, a non-parametric rank product method, which is derived based on biological reasoning of fold-change criteria, has been applied to directly combine multiple datasets into one meta study. Fisher's inverse χ² method, which only depends on P-values from individual analyses of each dataset, has been used in a couple of medical studies. While these methods address the question from different angles, it is not clear how they compare with each other. Results: We comparatively evaluate the three methods: t-based hierarchical modeling, rank products and Fisher's inverse χ² test with P-values from either the t-based or the rank product method. A simulation study shows that the rank product method, in general, has higher sensitivity and selectivity than the t-based method in both individual and meta-analysis, especially in the setting of small sample size and/or large between-study variation. Not surprisingly, Fisher's χ² method highly depends on the method used in the individual analysis. Application to real datasets demonstrates that meta-analysis achieves more reliable identification than an individual analysis, and rank products are more robust in gene ranking, which leads to a much higher reproducibility among independent studies. Though t-based meta-analysis greatly improves over the individual analysis, it suffers from a potentially large number of false positives when P-values serve as the threshold. We conclude that careful meta-analysis is a powerful tool for integrating multiple array studies. Contact: fxhong@jimmy.harvard.edu Supplementary information: Supplementary data are available at Bioinformatics online.
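A stripped-down version of the rank product statistic mentioned above (the published method also assesses significance by permutation, which is omitted here):

```python
import numpy as np

def rank_products(fold_changes):
    """Rank product per gene: the geometric mean, across k studies,
    of the gene's rank when genes are ordered by fold change
    (rank 1 = most up-regulated). fold_changes: genes x studies."""
    fc = np.asarray(fold_changes, float)
    order = np.argsort(-fc, axis=0)     # largest fold change first
    ranks = np.empty_like(fc)
    for j in range(fc.shape[1]):
        ranks[order[:, j], j] = np.arange(1, fc.shape[0] + 1)
    return ranks.prod(axis=1) ** (1.0 / fc.shape[1])
```

Because only ranks enter the statistic, genes consistently near the top across studies score well even when the studies' absolute intensity scales differ, which is the robustness property the abstract reports.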
18.
The local false discovery rate (LFDR) estimates the probability of falsely identifying specific genes with changes in expression. In computer simulations, LFDR <10% successfully identified genes with changes in expression, while LFDR >90% identified genes without changes. We used LFDR to compare different microarray experiments quantitatively: (i) Venn diagrams of genes with and without changes in expression, (ii) scatter plots of the genes, (iii) correlation coefficients in the scatter plots and (iv) distributions of gene function. To illustrate, we compared three methods for pre-processing microarray data. Correlations between methods were high (r = 0.84–0.92). However, responses were often different in magnitude, and sometimes discordant, even though the methods used the same raw data. LFDR complements functional assessments like gene set enrichment analysis. To illustrate, we compared responses to ultraviolet radiation (UV), ionizing radiation (IR) and tobacco smoke. Compared to unresponsive genes, genes responsive to both UV and IR were enriched for cell cycle, mitosis, and DNA repair functions. Genes responsive to UV but not IR were depleted for cell adhesion functions. Genes responsive to tobacco smoke were enriched for detoxification functions. Thus, LFDR reveals differences and similarities among experiments.
19.
20.
DNA microarray technology provides useful tools for profiling global gene expression patterns in different cell/tissue samples. One major challenge is the large number of genes relative to the number of samples. The use of all genes can suppress or reduce the performance of a classification rule due to the noise from nondiscriminatory genes. Selection of an optimal subset from the original gene set therefore becomes an important pre-step in sample classification. In this study, we propose a family-wise error (FWE) rate approach to the selection of discriminatory genes for two-sample or multiple-sample classification. The FWE approach controls the probability of one or more false positives at a prespecified level. A public colon cancer data set is used to evaluate the performance of the proposed approach for two classification methods: k nearest neighbors (k-NN) and support vector machine (SVM). The gene sets selected by the proposed procedure appear to perform better than, or comparably to, several results reported in the literature using univariate analysis without a multivariate search. In addition, we apply the FWE approach to a toxicogenomic data set with nine treatments (a control and eight metals: As, Cd, Ni, Cr, Sb, Pb, Cu and AsV) for a total of 55 samples for multisample classification. Two gene sets are considered: the gene set ωF formed by the ANOVA F-test, and the gene set ωT formed by the union of one-versus-all t-tests. The predicted accuracies are evaluated using internal and external cross-validation. Using the SVM classification, the overall accuracies in predicting the 55 samples into one of the nine treatments are above 80% for internal cross-validation. ωF has slightly higher accuracy rates than ωT. The overall predicted accuracies are above 70% for external cross-validation; the two gene sets ωT and ωF performed equally well.
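A minimal sketch of FWE-controlled gene selection followed by classification. Bonferroni stands in for the paper's FWE procedure, and a nearest-centroid rule stands in for k-NN/SVM:

```python
import numpy as np
from scipy import stats

def fwe_select(X, y, fwe=0.05):
    """Select genes whose two-sample t-test passes a Bonferroni
    family-wise error threshold (per-gene alpha = fwe / n_genes).
    X: samples x genes; y: 0/1 labels. Returns selected gene indices."""
    _, p = stats.ttest_ind(X[y == 0], X[y == 1], axis=0)
    return np.where(p < fwe / X.shape[1])[0]

def nearest_centroid_predict(X_train, y_train, X_test, genes):
    """Classify test samples by the nearer class centroid in the
    selected-gene subspace (a simple stand-in for k-NN/SVM)."""
    c0 = X_train[y_train == 0][:, genes].mean(axis=0)
    c1 = X_train[y_train == 1][:, genes].mean(axis=0)
    d0 = ((X_test[:, genes] - c0) ** 2).sum(axis=1)
    d1 = ((X_test[:, genes] - c1) ** 2).sum(axis=1)
    return (d1 < d0).astype(int)
```

Controlling the FWE of the selection step keeps noise genes out of the classifier, which is the rationale the abstract gives for selection as a pre-step.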