Similar Documents
20 similar records found (search time: 46 ms)
1.
Quantitative trait loci analysis using the false discovery rate   (cited by 15)
Benjamini Y  Yekutieli D 《Genetics》2005,171(2):783-790
False discovery rate control has become an essential tool in any study that has a very large multiplicity problem. False discovery rate-controlling procedures have also been found to be very effective in QTL analysis, ensuring reproducible results with few falsely discovered linkages and offering increased power to discover QTL, although their acceptance has been slower than in microarray analysis, for example. The reason is partly because the methodological aspects of applying the false discovery rate to QTL mapping are not well developed. Our aim in this work is to lay a solid foundation for the use of the false discovery rate in QTL mapping. We review the false discovery rate criterion, the appropriate interpretation of the FDR, and alternative formulations of the FDR that appeared in the statistical and genetics literature. We discuss important features of the FDR approach, some stemming from new developments in FDR theory and methodology, which make it especially useful in linkage analysis. We review false discovery rate-controlling procedures (the BH procedure, the resampling procedure, and the adaptive two-stage procedure) and discuss the validity of these procedures in single- and multiple-trait QTL mapping. Finally, we argue that the control of the false discovery rate has an important role in suggesting, indicating the significance of, and confirming QTL, and we present guidelines for its use.
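The BH step-up procedure reviewed in this abstract can be sketched in a few lines (an illustrative sketch, not code from the paper; the function name and the toy p-values are mine):

```python
import numpy as np

def benjamini_hochberg(pvals, alpha=0.05):
    """BH step-up: find the largest k with p_(k) <= (k/m)*alpha and
    reject the k smallest p-values; controls FDR at alpha under independence."""
    p = np.asarray(pvals, dtype=float)
    m = len(p)
    order = np.argsort(p)
    below = p[order] <= (np.arange(1, m + 1) / m) * alpha
    rejected = np.zeros(m, dtype=bool)
    if below.any():
        k = np.nonzero(below)[0].max()  # largest qualifying rank (0-based)
        rejected[order[:k + 1]] = True
    return rejected

# Eight hypothetical linkage tests: only the two smallest p-values survive at FDR 0.05
print(benjamini_hochberg([0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.3, 0.9]))
```

Note the step-up character: a p-value may be rejected even if it exceeds its own threshold, as long as some larger p-value meets its threshold down the sorted list.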

2.
Wei Zou  Zhao-Bang Zeng 《Genetica》2009,137(2):125-134
To find the correlations between genome-wide gene expression variations and sequence polymorphisms in inbred cross populations, we developed a statistical method to identify expression quantitative trait loci (eQTL) in a genome. The method is based on multiple interval mapping (MIM), a model selection procedure, and uses the false discovery rate (FDR) to measure the statistical significance of the large number of eQTL. We compared our method with a similar procedure proposed by Storey et al. and found that our method can be more powerful. We identified the features in the two methods that resulted in different statistical powers for eQTL detection, and confirmed them by simulation. We organized our computational procedure in an R package which can estimate FDR for positive findings from similar model selection procedures. The R package, MIM-eQTL, can be found at .

3.
Controlling the false discovery rate (FDR) has been proposed as an alternative to controlling the genome-wise error rate (GWER) for detecting quantitative trait loci (QTL) in genome scans. The objective here was to implement FDR in the context of regression interval mapping for multiple traits. Data on five traits from an F2 swine breed cross were used. FDR was implemented using tests at every 1 cM (FDR1) and using tests with the highest test statistic for each marker interval (FDRm). For the latter, a method was developed to predict comparison-wise error rates. At low error rates, FDR1 behaved erratically; FDRm was more stable but gave similar significance thresholds and number of QTL detected. At the same error rate, methods to control FDR gave less stringent significance thresholds and more QTL detected than methods to control GWER. Although testing across traits had limited impact on FDR, single-trait testing was recommended because there is no theoretical reason to pool tests across traits for FDR. FDR based on FDRm was recommended for QTL detection in interval mapping because it provides significance tests that are meaningful, yet not overly stringent, such that a more complete picture of QTL is revealed.

4.
In genome-wide genetic studies with a large number of markers, balancing the type I error rate and power is a challenging issue. Recently proposed false discovery rate (FDR) approaches are promising solutions to this problem. Using the 100 simulated datasets of a genome-wide marker map spaced about 3 cM and phenotypes from the Genetic Analysis Workshop 14, we studied the type I error rate and power of Storey's FDR approach, and compared it to the traditional Bonferroni procedure. We confirmed that Storey's FDR approach had a strong control of FDR. We found that Storey's FDR approach only provided weak control of family-wise error rate (FWER). For these simulated datasets, Storey's FDR approach only had slightly higher power than the Bonferroni procedure. In conclusion, Storey's FDR approach is more powerful than the Bonferroni procedure if strong control of FDR or weak control of FWER is desired. Storey's FDR approach has little power advantage over the Bonferroni procedure if there is low linkage disequilibrium among the markers. Further evaluation of the type I error rate and power of the FDR approaches for higher linkage disequilibrium and for haplotype analyses is warranted.
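Storey's approach differs from BH mainly in estimating the null proportion pi0 from the p-value distribution and using it to sharpen the FDR estimates. A minimal q-value sketch, assuming a single tuning parameter lambda (function name and example values are mine, not from the paper):

```python
import numpy as np

def storey_qvalues(pvals, lam=0.5):
    """Storey-style q-values: estimate pi0 from the share of p-values above
    lam, then take running minima of pi0 * m * p_(k) / k from the largest p down."""
    p = np.asarray(pvals, dtype=float)
    m = len(p)
    pi0 = min(1.0, (p > lam).mean() / (1.0 - lam))
    order = np.argsort(p)
    q = np.empty(m)
    running = 1.0
    for rank in range(m, 0, -1):      # ranks m..1, largest p-value first
        i = order[rank - 1]
        running = min(running, pi0 * m * p[i] / rank)
        q[i] = running
    return q

# Two small p-values among clear nulls: their q-values stay small
print(storey_qvalues([0.01, 0.02, 0.8, 0.9]))
```

When pi0 is estimated below 1, the q-values are uniformly smaller than the corresponding BH quantities, which is the source of the extra power discussed in the abstract.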

5.
Multiple testing (MT) with false discovery rate (FDR) control has been widely conducted in the “discrete paradigm” where p-values have discrete and heterogeneous null distributions. However, in this scenario existing FDR procedures often lose some power and may yield unreliable inference, and for this scenario there does not seem to be an FDR procedure that partitions hypotheses into groups, employs data-adaptive weights and is nonasymptotically conservative. We propose a weighted p-value-based FDR procedure, “weighted FDR (wFDR) procedure” for short, for MT in the discrete paradigm that efficiently adapts to both heterogeneity and discreteness of p-value distributions. We theoretically justify the nonasymptotic conservativeness of the wFDR procedure under independence, and show via simulation studies that, for MT based on p-values of binomial test or Fisher's exact test, it is more powerful than six other procedures. The wFDR procedure is applied to two examples based on discrete data, a drug safety study, and a differential methylation study, where it makes more discoveries than two existing methods.

6.
An interval quantitative trait locus (QTL) mapping method for complex polygenic diseases (as binary traits) showing QTL by environment interactions (QEI) was developed for outbred populations on a within-family basis. The main objectives, within the above context, were to investigate selection of genetic models and to compare liability or generalized interval mapping (GIM) and linear regression interval mapping (RIM) methods. Two different genetic models were used: one with main QTL and QEI effects (QEI model) and the other with only a main QTL effect (QTL model). Over 30 types of binary disease data as well as six types of continuous data were simulated and analysed by RIM and GIM. Using table values for significance testing, results show that RIM had an increased false detection rate (FDR) for testing interactions which was attributable to scale effects on the binary scale. GIM did not suffer from a high FDR for testing interactions. The use of empirical thresholds, which effectively means higher thresholds for RIM for testing interactions, could repair this increased FDR for RIM, but such empirical thresholds would have to be derived for each case because the amount of FDR depends on the incidence on the binary scale. RIM still suffered from higher biases (15-100% over- or under-estimation of true values) and high standard errors in QTL variance and location estimates than GIM for QEI models. Hence GIM is recommended for disease QTL mapping with QEI. In the presence of QEI, the model including QEI has more power (20-80% increase) to detect the QTL when the average QTL effect is small (in a situation where the model with a main QTL only is not too powerful). Top-down model selection is proposed in which a full test for QEI is conducted first and then the model is subsequently simplified. Methods and results will be applicable to human, plant and animal QTL mapping experiments.

7.
Mathematically-derived traits from two or more component traits, either by addition, subtraction, multiplication, or division, have been frequently used in genetics and breeding. When used in quantitative trait locus (QTL) mapping, derived traits sometimes show discrepancy with QTL identified for the component traits. We used three QTL distributions and three genetic effects models, and an actual maize mapping population, to investigate the efficiency of using derived traits in QTL mapping, and to understand the genetic and biological basis of derived-only QTL, i.e., QTL identified for a derived trait but not for any component trait. Results indicated that the detection power of the four putative QTL was consistently greater than 90% for component traits in simulated populations, each consisting of 200 recombinant inbred lines. Lower detection power and higher false discovery rate (FDR) were observed when derived traits were used. In an actual maize population, simulations were designed based on the observed QTL distributions and effects. When derived traits were used, QTL detected for both component and derived traits had comparable power, but those detected for component traits but not for derived traits had low detection power. The FDR from subtraction and division in the maize population were higher than the FDR from addition and multiplication. The use of derived traits increased the gene number, caused higher-order gene interactions than observed in component traits, and possibly complicated the linkage relationship between QTL as well. The increased complexity of the genetic architecture with derived traits may be responsible for the reduced detection power and the increased FDR. Derived-only QTL identified in practical genetic populations can be explained either as minor QTL that are not significant in QTL mapping of component traits, or as false positives.

8.
The use of multiple hypothesis testing procedures has recently been receiving a great deal of attention from statisticians in DNA microarray analysis. The traditional FWER-controlling procedures are not very useful in this situation, since the experiments are exploratory by nature and researchers are more interested in controlling the rate of false positives than in controlling the probability of making a single erroneous decision. This has led to increased use of FDR (false discovery rate) controlling procedures. Genovese and Wasserman proposed a single-step FDR procedure that is an asymptotic approximation to the original Benjamini and Hochberg stepwise procedure. In this paper, we modify the Genovese-Wasserman procedure to force the FDR control closer to the level alpha in the independence setting. Assuming that the data come from a mixture of two normals, we also propose to make this procedure adaptive by first estimating the parameters using the EM algorithm and then plugging these estimated parameters into the above modification of the Genovese-Wasserman procedure. We compare this procedure with the original Benjamini-Hochberg and the SAM thresholding procedures. The FDR control and other properties of this adaptive procedure are verified numerically.

9.
One of multiple testing problems in drug finding experiments is the comparison of several treatments with one control. In this paper we discuss a particular situation of such an experiment, i.e., a microarray setting, where the many-to-one comparisons need to be addressed for thousands of genes simultaneously. For a gene-specific analysis, Dunnett's single step procedure is considered within gene tests, while the FDR controlling procedures such as Significance Analysis of Microarrays (SAM) and Benjamini and Hochberg (BH) False Discovery Rate (FDR) adjustment are applied to control the error rate across genes. The method is applied to a microarray experiment with four treatment groups (three microarrays in each group) and 16,998 genes. Simulation studies are conducted to investigate the performance of the SAM method and the BH-FDR procedure with regard to controlling the FDR, and to investigate the effect of small-variance genes on the FDR in the SAM procedure.

10.
It is typical in QTL mapping experiments that the number of markers under investigation is large. This poses a challenge to commonly used regression models since the number of feature variables is usually much larger than the sample size, especially, when epistasis effects are to be considered. The greedy nature of the conventional stepwise procedures is well known and is even more conspicuous in such cases. In this article, we propose a two-phase procedure based on penalized likelihood techniques and extended Bayes information criterion (EBIC) for QTL mapping. The procedure consists of a screening phase and a selection phase. In the screening phase, the main and interaction features are alternatively screened by a penalized likelihood mechanism. In the selection phase, a low-dimensional approach using EBIC is applied to the features retained in the screening phase to identify QTL. The two-phase procedure has the asymptotic property that its positive detection rate (PDR) and false discovery rate (FDR) converge to 1 and 0, respectively, as sample size goes to infinity. The two-phase procedure is compared with both traditional and recently developed approaches by simulation studies. A real data analysis is presented to demonstrate the application of the two-phase procedure.

11.
The Newman-Keuls (NK) procedure for testing all pairwise comparisons among a set of treatment means, introduced by Newman (1939) and in a slightly different form by Keuls (1952), was proposed as a reasonable way to alleviate the inflation of error rates when a large number of means are compared. It was proposed before the concepts of different types of multiple error rates were introduced by Tukey (1952a, b; 1953). Although it was popular in the 1950s and 1960s, once control of the familywise error rate (FWER) was accepted generally as an appropriate criterion in multiple testing, and it was realized that the NK procedure does not control the FWER at the nominal level at which it is performed, the procedure gradually fell out of favor. Recently, a more liberal criterion, control of the false discovery rate (FDR), has been proposed as more appropriate in some situations than FWER control. This paper notes that the NK procedure and a nonparametric extension control the FWER within any set of homogeneous treatments. It proves that the extended procedure controls the FDR when there are well-separated clusters of homogeneous means and between-cluster test statistics are independent, and extensive simulation provides strong evidence that the original procedure controls the FDR under the same conditions and some dependent conditions when the clusters are not well-separated. Thus, the test has two desirable error-controlling properties, providing a compromise between FDR control with no subgroup FWER control and global FWER control. Yekutieli (2002) developed an FDR-controlling procedure for testing all pairwise differences among means, without any FWER-controlling criteria when there is more than one cluster. The empirical example in Yekutieli's paper was used to compare the Benjamini-Hochberg (1995) method with apparent FDR control in this context, Yekutieli's proposed method with proven FDR control, the Newman-Keuls method that controls FWER within equal clusters with apparent FDR control, and several methods that control FWER globally. The Newman-Keuls method is shown to be intermediate in number of rejections between the FWER-controlling methods and the FDR-controlling methods in this example, although it is not always more conservative than the other FDR-controlling methods.

12.
Epistasis is a commonly observed genetic phenomenon and an important source of variation of complex traits, which could maintain additive variance and therefore assure long-term genetic gain in breeding. Inclusive composite interval mapping (ICIM) is able to identify epistatic quantitative trait loci (QTLs) whether or not the two interacting QTLs have any additive effects. In this article, we conducted a simulation study to evaluate the detection power and false discovery rate (FDR) of ICIM epistatic mapping, considering F2 and doubled haploid (DH) populations, different F2 segregation ratios, and population sizes. Results indicated that estimates of QTL locations and effects were unbiased, and that the detection power of epistatic mapping was largely affected by population size, heritability of epistasis, and the amount and distribution of genetic effects. When the same logarithm of odds (LOD) threshold was used, QTL detection power was higher in the F2 population than in the DH population; meanwhile, the FDR in F2 was also higher than that in DH. Increasing marker density from 10 cM to 5 cM led to similar detection power but a higher FDR. In simulated populations, ICIM achieved better mapping results than multiple interval mapping (MIM) in the estimation of QTL positions and effects. Finally, we give the ICIM epistatic mapping results for an actual rice (Oryza sativa L.) population.

13.
Hybrids with low grain moisture (GM) at harvest are especially required in mid- to short-season environments. One of the most important factors determining this trait is field grain drying rate (FDR). To produce hybrids with low GM at harvest, inbred lines can be obtained through selection for either GM or FDR. Thus, a single-cross population (181 F2:3-generation plants) of two divergent inbred lines was evaluated to locate QTL affecting GM at harvest and FDR as a starting point for marker-assisted selection (MAS). Moisture measurements were made with a hand-held moisture meter. Detection of QTL was facilitated with interval mapping in one and two dimensions including an interaction term, and a genetic linkage map of 122 SSR loci covering 1,557.8 cM. The markers were arranged in ten linkage groups. QTL mapping was made for the mean trait performance of the F2:3 population across years. Ten QTL and an interaction were associated with GM. These QTL accounted for 54.8 and 65.2% of the phenotypic and genotypic variation, respectively. Eight QTL and two interactions were associated with FDR, accounting for 35.7 and 45.2% of the phenotypic and genotypic variation, respectively. Two regions were in common between traits. The interaction between QTL for GM at harvest had practical implications for MAS. We conclude that MAS per se will not be an efficient method for reducing GM at harvest and/or increasing FDR. A selection index including both molecular marker information and phenotypic values, each appropriately weighted, would be the best selection strategy.

14.
The search for pairs (dyads) of related individuals in large databases of DNA-profiles has become an increasingly important inference tool in ecology. However, the many, partly dependent, pairwise comparisons introduce statistical issues. We show that the false discovery rate (FDR) procedure is well suited to control for the proportion of false positives, i.e. dyads consisting of unrelated individuals, which under normal circumstances would have been labelled as related individuals. We verify the behaviour of the standard FDR procedure by simulation, demonstrating that the FDR procedure works satisfactory in spite of the many dependent pairwise comparisons involved in an exhaustive database screening. A computer program that implements this method is available online. In addition, we propose to implement a second stage in the procedure, in which additional independent genetic markers are used to identify the false positives. We demonstrate the application of the approach in an analysis of a DNA database consisting of 3300 individual minke whales (Balaenoptera acutorostrata) each typed at ten microsatellite loci. Applying the standard procedure with an FDR of 50% led to the identification of 74 putative dyads of 1st- or 2nd-order relatives. However, introducing the second step, which involved additional genotypes at 15 microsatellite loci, revealed that only 21 of the putative dyads can be claimed with high certainty to be true dyads.

15.
Recently, Efron (2007) provided methods for assessing the effect of correlation on the false discovery rate (FDR) in large-scale testing problems in the context of microarray data. Although the FDR procedure does not require independence of the tests, the presence of correlation can lead to gross under- or overestimation of the number of critical genes. Here, we briefly review Efron's method and apply it to a smaller mass spectrometry proteomics data set. We show that even here correlation can affect the FDR values and the number of proteins declared critical.

16.
The paper is concerned with expected type I errors of some stepwise multiple test procedures based on independent p-values controlling the so-called false discovery rate (FDR). We derive an asymptotic result for the supremum of the expected type I error rate (EER) when the number of hypotheses tends to infinity. Among others, it will be shown that when the original Benjamini-Hochberg step-up procedure controls the FDR at level α, its EER may approach a value slightly larger than α/4 as the number of hypotheses increases. Moreover, we derive some least favourable parameter configuration results, some bounds for the FDR and the EER, as well as easily computable formulae for the familywise error rate (FWER) of two FDR-controlling procedures. Finally, we discuss some undesirable properties of the FDR concept, especially the problem of cheating.

17.
The target-decoy database search strategy is widely accepted as a standard method for estimating the false discovery rate (FDR) of peptide identification, based on which peptide-spectrum matches (PSMs) from the target database are filtered. To improve the sensitivity of protein identification at a fixed accuracy (frequently defined by a protein FDR threshold), a postprocessing procedure is often used that integrates results from different peptide search engines that have assayed the same data set. In this work, we show that PSMs grouped by precursor charge, number of missed internal cleavage sites, modification state, and number of protease termini, as well as proteins grouped by their unique peptide count, should be filtered separately according to the given FDR. We also develop an iterative procedure to filter the PSMs and proteins simultaneously, according to the given FDR. Finally, we present a general framework to integrate the results from different peptide search engines using the same FDR threshold. Our method was tested with several shotgun proteomics data sets that were acquired by multiple LC/MS instruments from two different biological samples. The results showed satisfactory performance. We implemented the method in a user-friendly software package called BuildSummary, which can be downloaded for free from http://www.proteomics.ac.cn/software/proteomicstools/index.htm as part of the software suite ProteomicsTools.
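The target-decoy estimate that underlies this filtering is simple: decoy matches passing a score cutoff approximate the false matches among the targets. A minimal sketch (the function name and the toy PSM scores are invented, not from the paper):

```python
def target_decoy_fdr(target_scores, decoy_scores, threshold):
    """Estimate FDR at a score threshold: #decoys passing / #targets passing
    approximates the fraction of false positives among accepted targets."""
    n_target = sum(s >= threshold for s in target_scores)
    n_decoy = sum(s >= threshold for s in decoy_scores)
    return n_decoy / n_target if n_target else 0.0

# Toy PSM scores: lowering the cutoff admits decoys and inflates the estimated FDR
print(target_decoy_fdr([10, 9, 8, 7, 3, 2], [4, 3, 2, 1, 1, 0], threshold=5))  # 0.0
print(target_decoy_fdr([10, 9, 8, 7, 3, 2], [4, 3, 2, 1, 1, 0], threshold=2))  # 0.5
```

The paper's point is that this ratio should be computed separately within PSM groups (charge, missed cleavages, modification state, termini), since the score distributions differ across groups.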

18.
Wavelet thresholding with Bayesian false discovery rate control   (cited by 1)
The false discovery rate (FDR) procedure has become a popular method for handling multiplicity in high-dimensional data. The definition of FDR has a natural Bayesian interpretation; it is the expected proportion of null hypotheses mistakenly rejected given a measure of evidence for their truth. In this article, we propose controlling the positive FDR using a Bayesian approach where the rejection rule is based on the posterior probabilities of the null hypotheses. Correspondence between Bayesian and frequentist measures of evidence in hypothesis testing has been studied in several contexts. Here we extend the comparison to multiple testing with control of the FDR and illustrate the procedure with an application to wavelet thresholding. The problem consists of recovering signal from noisy measurements. This involves extracting wavelet coefficients that result from true signal and can be formulated as a multiple hypothesis-testing problem. We use simulated examples to compare the performance of our approach to the Benjamini and Hochberg (1995, Journal of the Royal Statistical Society, Series B 57, 289-300) procedure. We also illustrate the method with nuclear magnetic resonance spectral data from the human brain.

19.
Many recently developed nonparametric jump tests can be viewed as multiple hypothesis testing problems. For such multiple hypothesis tests, it is well known that controlling the type I error rate often yields a large proportion of erroneous rejections, and the situation becomes even worse when jump occurrence is a rare event. To obtain more reliable results, we aim to control the false discovery rate (FDR), an efficient compound error measure for erroneous rejections in multiple testing problems. We perform the test via the Barndorff-Nielsen and Shephard (BNS) test statistic, and control the FDR with the Benjamini and Hochberg (BH) procedure. We provide asymptotic results for the FDR control. Through simulations, we examine the relevant theoretical results and demonstrate the advantages of controlling the FDR. The hybrid approach is then applied to an empirical analysis of two benchmark stock indices with high-frequency data.

20.
Many exploratory microarray data analysis tools, such as gene clustering and relevance networks, rely on detecting pairwise gene co-expression. Traditional screening of pairwise co-expression controls either biological significance or statistical significance, but not both. The former approach does not provide stochastic error control, and the latter approach screens many co-expressions with excessively low correlation. We have designed and implemented a statistically sound two-stage co-expression detection algorithm that controls both statistical significance (false discovery rate, FDR) and biological significance (minimum acceptable strength, MAS) of the discovered co-expressions. Based on estimation of pairwise gene correlation, the algorithm provides an initial co-expression discovery that controls only FDR, which is then followed by a second-stage co-expression discovery that controls both FDR and MAS. It also computes and thresholds the set of FDR p-values for each correlation that satisfied the MAS criterion. Using simulated data, we validated the asymptotic null distributions of the Pearson and Kendall correlation coefficients and the two-stage error-control procedure; we also compared our two-stage test procedure with another two-stage test procedure using the receiver operating characteristic (ROC) curve. We then used yeast galactose metabolism data to illustrate the advantage of our method for clustering genes and constructing a relevance network. The method has been implemented in an R package "GeneNT" that is freely available from the Comprehensive R Archive Network (CRAN): www.cran.r-project.org/.


Copyright © Beijing Qinyun Technology Development Co., Ltd.   京ICP备09084417号