Similar Literature
 20 similar documents found
1.
The experimental power of a granddaughter design to detect quantitative trait loci (QTL) in dairy cattle is often limited by the availability of progeny-tested sires, by ignoring already identified QTL in the statistical analysis, and by the application of stringent experimentwise significance levels. This study describes an experiment that addressed these points. A large granddaughter design was set up that included sires from two countries (Germany and France), resulting in almost 2000 sires. The animals were genotyped for markers on nine different chromosomes. The QTL analysis was done for six traits separately using a multimarker regression that included putative QTL on other chromosomes as cofactors in the model. Different variants of the false discovery rate (FDR) were applied. Two of them accounted for the proportion of truly null hypotheses, which was estimated at 0.28 and 0.3, respectively, and were therefore tailored to the experiment. A total of 25 QTL could be mapped when cofactors were included in the model, 7 more than without cofactors. Controlling the FDR at 0.05 revealed 31 QTL for the two FDR methods that accounted for the proportion of truly null hypotheses. The relatively high power of this study can be attributed to the size of the experiment, to the QTL analysis with cofactors, and to the application of an appropriate FDR.
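The adaptive FDR idea can be made concrete with a short sketch. The following is a minimal illustration (not the authors' code), assuming a Storey-type estimate of the proportion of truly null hypotheses from the p-values above a tuning point lambda, which is then used to rescale Benjamini-Hochberg q-values:

```python
import numpy as np

def estimate_pi0(pvals, lam=0.5):
    # p-values above lam are assumed to come mostly from true nulls
    return min(1.0, np.mean(np.asarray(pvals) > lam) / (1.0 - lam))

def adaptive_qvalues(pvals):
    # Benjamini-Hochberg step-up q-values, rescaled by the pi0 estimate
    p = np.asarray(pvals, dtype=float)
    m = p.size
    pi0 = estimate_pi0(p)
    order = np.argsort(p)
    q = pi0 * p[order] * m / np.arange(1, m + 1)
    q = np.minimum.accumulate(q[::-1])[::-1]  # enforce monotonicity
    out = np.empty(m)
    out[order] = np.minimum(q, 1.0)
    return out

# declare a QTL wherever the q-value is at most the target FDR, e.g.:
# significant = adaptive_qvalues(pvals) <= 0.05
```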

2.
In this article, we consider the probabilistic identification of amino acid positions that evolve under positive selection as a multiple hypothesis testing problem. The null hypothesis "H0,s: site s evolves under negative selection or under a neutral process of evolution" is tested at each codon site of the alignment of homologous coding sequences. Standard hypothesis testing is based on controlling the expected proportion of falsely rejected null hypotheses, or type-I error rate. As the number of tests increases, however, the power of an individual test may become unacceptably low. Recent advances in statistics have shown that the false discovery rate (in this case, the expected proportion of sites that do not evolve under positive selection among those estimated to evolve under this selection regime) is a quantity that can be controlled. Keeping the proportion of false positives low among the significant results generally leads to an increase in power. In this article, we show that controlling the false discovery rate is relevant when searching for positively selected sites. We also compare this new approach to traditional methods using extensive simulations.

3.
Controlling the proportion of false positives in multiple dependent tests
Genome scan mapping experiments involve multiple tests of significance; thus, controlling the error rate in such experiments is important. Simple extension of classical concepts leads to attempts to control the genomewise error rate (GWER), i.e., the probability of even a single false positive among all tests. This results in very stringent comparisonwise error rates (CWER) and, consequently, low experimental power. Here we present an approach based on controlling the proportion of false positives (PFP) among all positive test results. The CWER needed to attain a desired PFP level does not depend on the correlation among the tests or on the number of tests, as it does in other approaches. To estimate the PFP it is necessary to estimate the proportion of true null hypotheses, and we show how this can be estimated directly from experimental results. The PFP approach is similar to the false discovery rate (FDR) and positive false discovery rate (pFDR) approaches. For a fixed CWER, we estimated PFP, FDR, pFDR, and GWER through simulation under a variety of models to illustrate practical and philosophical similarities and differences among the methods.
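As a rough illustration of the PFP idea, and not the authors' implementation: the expected number of false positives among m tests at comparisonwise level alpha is about pi0 * m * alpha, so the PFP can be estimated directly from an estimate of pi0 and the observed number of positives:

```python
def estimate_pfp(num_tests, pi0_hat, alpha, num_positives):
    # expected number of false positives at comparisonwise level alpha
    expected_false = pi0_hat * num_tests * alpha
    return min(1.0, expected_false / max(num_positives, 1))

# e.g. 1000 tests with pi0_hat = 0.9, alpha = 0.001 and 15 positives observed:
# estimate_pfp(1000, 0.9, 0.001, 15) -> 0.06
```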

4.
Chen L, Storey JD. Genetics 2006, 173(4): 2371-2381
Linkage analysis involves performing significance tests at many loci located throughout the genome. Traditional criteria for declaring a linkage statistically significant have been formulated with the goal of controlling the rate at which any single false positive occurs, called the genomewise error rate (GWER). As complex traits have become the focus of linkage analysis, it is increasingly common to expect that a number of loci are truly linked to the trait. This is especially true in mapping quantitative trait loci (QTL), where sometimes dozens of QTL may exist. Therefore, alternatives to the strict goal of preventing any single false positive have recently been explored, such as the false discovery rate (FDR) criterion. Here, we characterize some of the challenges that arise when defining relaxed significance criteria that allow for at least one false positive linkage to occur. In particular, we show that the FDR suffers from several problems when applied to linkage analysis of a single trait. We therefore conclude that the general applicability of the FDR for declaring significant linkages in the analysis of a single trait is dubious. Instead, we propose a significance criterion that is more relaxed than the traditional GWER but does not appear to suffer from the problems of the FDR: a generalized version of the GWER, called GWERk, that allows a more liberal balance between true positives and false positives at no additional cost in computation or assumptions.
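A natural way to compute a GWERk threshold is by permutation; the sketch below is one reading of the idea rather than the authors' code. The classical GWER uses the maximum statistic of each permuted genome scan (k = 0); tolerating up to k false positives means using the (k+1)-th largest statistic instead:

```python
import numpy as np

def gwer_k_threshold(null_stats, k=1, alpha=0.05):
    """null_stats: (n_permutations x n_loci) statistics from permuted data."""
    # (k+1)-th largest statistic in each permuted genome scan
    kth_max = np.sort(null_stats, axis=1)[:, -(k + 1)]
    # its (1 - alpha) quantile is the GWERk significance threshold
    return float(np.quantile(kth_max, 1.0 - alpha))
```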

5.
The simultaneous testing of a large number of hypotheses in a genome scan, using individual thresholds for significance, inherently leads to inflated genomewide false positive rates. There exist various approaches to approximating the correct genomewide p-values under various assumptions, either by way of asymptotics or simulations. We explore a philosophically different criterion, recently proposed in the literature, which controls the false discovery rate. The test statistics are assumed to arise from a mixture of distributions under the null and non-null hypotheses. We fit the mixture distribution using both a nonparametric approach and commingling analysis, and then apply the local false discovery rate to select cut-off points for regions to be declared interesting. Another criterion, the minimum total error, is also explored. Both criteria seem to be sensible alternatives to controlling the classical type-I and type-II error rates.
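The local false discovery rate used for cut-off selection has a simple form: if the test statistics follow the mixture f(z) = pi0*f0(z) + (1 - pi0)*f1(z), then locfdr(z) = pi0*f0(z)/f(z). Below is a toy sketch with Gaussian components of our own choosing; the paper itself fits the mixture nonparametrically and by commingling analysis:

```python
from scipy import stats

def local_fdr(z, pi0=0.8, null=stats.norm(0, 1), alt=stats.norm(3, 1)):
    f0 = null.pdf(z)                        # density under the null component
    f = pi0 * f0 + (1 - pi0) * alt.pdf(z)   # marginal mixture density
    return pi0 * f0 / f

# declare a region interesting where local_fdr(z) drops below a cut-off,
# e.g. local_fdr(z) < 0.2
```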

6.

Background

When conducting multiple hypothesis tests, it is important to control the number of false positives, or the false discovery rate (FDR). However, there is a tradeoff between controlling the FDR and maximizing power. Several methods, such as the q-value method, have been proposed to estimate the proportion of true null hypotheses among the tested hypotheses and to use this estimate in the control of the FDR. These methods usually depend on the assumption that the test statistics are independent (or only weakly correlated). However, many types of data, for example microarray data, often contain large-scale correlation structures. Our objective was to develop methods that control the FDR while maintaining greater power in highly correlated datasets by improving the estimation of the proportion of null hypotheses.

Results

We showed that when strong correlation exists among the data, which is common in microarray datasets, the estimate of the proportion of null hypotheses can be highly variable, resulting in a high level of variation in the FDR. We therefore developed a re-sampling strategy that reduces this variation by breaking the correlations between gene expression values, and then applies a conservative rule, selecting the upper quartile of the re-sampling estimates, to obtain strong control of the FDR.

Conclusion

In simulation studies and perturbations of actual microarray datasets, our method, compared to competing methods such as the q-value, generated slightly biased estimates of the proportion of null hypotheses but with lower mean squared errors. When genes are selected at the same nominal FDR level, our method attains, on average, a significantly lower realized false discovery rate in exchange for a minor reduction in power.
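A sketch of the upper-quartile rule described above, under assumptions of our own: the abstract does not spell out the re-sampling scheme, so the function producing p-values on re-sampled, decorrelated data is left abstract here, and only the conservative aggregation step is implemented:

```python
import numpy as np

def conservative_pi0(resample_pvalues, n_resamples=100, lam=0.5):
    """resample_pvalues: callable returning one array of p-values computed on
    re-sampled data in which the gene-gene correlations have been broken
    (hypothetical stand-in; the paper's exact scheme is not given in the abstract)."""
    estimates = [np.mean(resample_pvalues() > lam) / (1.0 - lam)
                 for _ in range(n_resamples)]
    # conservative rule from the abstract: keep the upper quartile
    return float(np.quantile(estimates, 0.75))
```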

7.
We have used the results of an experiment mapping quantitative trait loci (QTL) affecting milk yield and composition to estimate the total number of QTL affecting these traits. We did this by estimating the number of segregating QTL within a half-sib daughter design, using logic similar to that used to estimate the false discovery rate (FDR). In a half-sib daughter design with six sire families, we estimate that the average sire was heterozygous for approximately 5 QTL per trait. Also, in most cases only one sire was heterozygous for any one QTL; therefore, at least 30 QTL were likely to be segregating for these milk production traits in this Holstein population.

8.
Effects of individual quantitative trait loci (QTL) can be isolated with the aid of linked genetic markers. Most studies have analyzed each marker or pair of linked markers separately for each trait included in the analysis; thus, the number of contrasts tested can be quite large. The experimentwise type-I error can be readily derived from the nominal type-I error if all contrasts are statistically independent, but different traits are generally correlated. A new set of uncorrelated traits can be derived by applying a canonical transformation, and the total number of effective traits will generally be less than in the original set. An example is presented for the DNA microsatellite D21S4, which is used as a marker for milk production traits of Israeli dairy cattle. This locus had significant effects on milk and protein production but not on fat. It had a significant effect on only one of the canonical variables, which was highly correlated with both milk and protein, and this variable explained 82% of the total variance. Thus, it can be concluded that a single QTL is affecting both traits. The effects on the original traits could be derived by a reverse transformation of the effects on the canonical variable.
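A minimal sketch of such a canonical transformation, assuming standardized phenotypes and using an eigendecomposition of their correlation matrix (the paper's actual computation may differ):

```python
import numpy as np

def canonical_traits(traits):
    """traits: (n_animals x n_traits) matrix of standardized phenotypes."""
    corr = np.corrcoef(traits, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(corr)
    canonical = traits @ eigvecs              # mutually uncorrelated variables
    var_explained = eigvals / eigvals.sum()   # share of total variance each explains
    return canonical, eigvecs, var_explained

# marker effects estimated on the canonical variables map back to the original
# traits by the reverse transformation:
# effects_original = eigvecs @ effects_canonical
```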

9.
False discoveries and models for gene discovery
In the search for genes underlying complex traits, there is a tendency to impose increasingly stringent criteria to avoid false discoveries. These stringent criteria make it hard to find true effects, and we argue that it might be better to optimize our procedures for eliminating and controlling false discoveries. Focusing on achieving an acceptable ratio of true and false positives, we show that false discoveries could be eliminated much more efficiently using a stepwise approach. To avoid a relatively high false discovery rate, corrections for 'multiple testing' might also be needed in candidate gene studies. If the appropriate methods are used, the proportion of true effects appears to be a more important determinant of the genotyping burden than the desired false discovery rate. This raises the question of whether current models for gene discovery are shaped excessively by a fear of false discoveries.

10.
This article proposes resampling-based empirical Bayes multiple testing procedures for controlling a broad class of type-I error rates, defined as generalized tail probability (gTP) error rates, gTP(q, g) = Pr(g(V_n, S_n) > q), and generalized expected value (gEV) error rates, gEV(g) = E[g(V_n, S_n)], for arbitrary functions g(V_n, S_n) of the number of false positives V_n and the number of true positives S_n. Of particular interest are error rates based on the proportion g(V_n, S_n) = V_n/(V_n + S_n) of type-I errors among the rejected hypotheses, such as the false discovery rate (FDR), FDR = E[V_n/(V_n + S_n)]. The proposed procedures offer several advantages over existing methods. They provide type-I error control for general data-generating distributions, with arbitrary dependence structures among variables. Gains in power are achieved by deriving rejection regions based on guessed sets of true null hypotheses and null test statistics randomly sampled from joint distributions that account for the dependence structure of the data. The type-I error and power properties of an FDR-controlling version of the resampling-based empirical Bayes approach are investigated and compared to those of widely used FDR-controlling linear step-up procedures in a simulation study. The type-I error and power trade-off achieved by the empirical Bayes procedures under a variety of testing scenarios allows this approach to be competitive with or outperform the Storey and Tibshirani (2003) linear step-up procedure, as an alternative to the classical Benjamini and Hochberg (1995) procedure.
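For reference, the Benjamini and Hochberg (1995) linear step-up procedure that serves as the classical baseline can be stated in a few lines; this is the standard textbook form, not code from the article:

```python
import numpy as np

def bh_step_up(pvals, alpha=0.05):
    # reject the hypotheses with the i* smallest p-values, where i* is the
    # largest i such that p_(i) <= i * alpha / m
    p = np.asarray(pvals, dtype=float)
    m = p.size
    order = np.argsort(p)
    below = p[order] <= alpha * np.arange(1, m + 1) / m
    reject = np.zeros(m, dtype=bool)
    if below.any():
        cutoff = int(np.max(np.nonzero(below)[0]))
        reject[order[:cutoff + 1]] = True
    return reject
```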

11.
Controlling the false discovery rate (FDR) has been proposed as an alternative to controlling the genomewise error rate (GWER) for detecting quantitative trait loci (QTL) in genome scans. The objective here was to implement the FDR in the context of regression interval mapping for multiple traits. Data on five traits from an F2 swine breed cross were used. The FDR was implemented using tests at every 1 cM (FDR1) and using the tests with the highest test statistic in each marker interval (FDRm). For the latter, a method was developed to predict comparisonwise error rates. At low error rates, FDR1 behaved erratically; FDRm was more stable but gave similar significance thresholds and numbers of QTL detected. At the same error rate, methods to control the FDR gave less stringent significance thresholds and more QTL detected than methods to control the GWER. Although testing across traits had limited impact on the FDR, single-trait testing is recommended because there is no theoretical reason to pool tests across traits for the FDR. FDR control based on FDRm is recommended for QTL detection in interval mapping because it provides significance tests that are meaningful yet not overly stringent, such that a more complete picture of the QTL is revealed.

12.
Currently, mapping genes for complex human traits relies on two complementary approaches: linkage and association analyses. Both suffer from several methodological and theoretical limitations, which can considerably increase the type-I error rate and reduce the power to map human quantitative trait loci (QTL). This review focuses on linkage methods for QTL mapping. It summarizes the most common linkage statistics used, namely Haseman-Elston-based methods, variance components, and statistics that condition on trait values. Methods developed more recently that accommodate the X chromosome, parental imprinting, and allelic association in linkage analysis are also summarized. The type-I error rate and power of these methods are discussed. Finally, rough guidelines are provided to help guide the choice of linkage statistics.

13.
In segregating populations, large numbers of individuals are needed to detect linkage between markers, such as restriction fragment length polymorphisms (RFLPs), and quantitative trait loci (QTL), limiting the potential use of such markers for detecting linkage. Fewer individuals from inbred lines are needed to detect linkage. Simulation data were used to test the utility of two methods to detect linkage: maximum likelihood and comparison of marker genotype means. When there is tight linkage, the two methods have similar power, but when there is loose linkage, maximum likelihood is much more powerful. Once inbred lines have been established, they can be screened rapidly to detect QTL for several traits simultaneously. If there is sufficient coverage of the genome with RFLPs, several QTL for each trait may be detected.

14.
Signal detection in functional magnetic resonance imaging (fMRI) inherently involves the problem of testing a large number of hypotheses. A popular strategy to address this multiplicity is control of the false discovery rate (FDR). In this work we consider the case where prior knowledge is available to partition the set of all hypotheses into disjoint subsets or families, e.g., by a priori knowledge of the functionality of certain regions of interest. If the proportion of true null hypotheses differs between families, this structural information can be used to increase statistical power. We propose a two-stage multiple test procedure that first excludes from the analysis those families for which there is no strong evidence of containing true alternatives. We show control of the family-wise error rate at this first stage of testing. At the second stage, we proceed to test the hypotheses within each non-excluded family and obtain asymptotic control of the FDR within each family. Our main mathematical result is that this two-stage strategy implies asymptotic control of the FDR with respect to all hypotheses. In simulations we demonstrate the increased power of this new procedure in comparison with established procedures in situations with highly unbalanced families. Finally, we apply the proposed method to simulated and to real fMRI data.

15.
The discovery of markers linked to genes responsible for traits of interest to the dairy industry might prove useful because they could aid in selection and breeding decisions. We have developed a selective DNA pooling methodology that allows us to efficiently screen the bovine genome for genes responsible for production traits. Using markers on chromosome 14 as a test case, we identified a gene (DGAT1) previously known to affect three traits (fat yield, protein yield, and total milk yield). Furthermore, we predicted effects similar to those previously shown for DGAT1 in a New Zealand Holstein-Friesian herd. Additionally, we showed a low error rate (1.6%) for the pooling procedure. Hence we are confident that we can apply this procedure to an entire genome scan in the search for quantitative trait loci (QTL).

16.
Tsai CA, Hsueh HM, Chen JJ. Biometrics 2003, 59(4): 1071-1081
Testing for significance with gene expression data from DNA microarray experiments involves simultaneous comparisons of hundreds or thousands of genes. If R denotes the number of rejections (declared significant genes) and V denotes the number of false rejections, then V/R, if R > 0, is the proportion of falsely rejected hypotheses. This paper proposes a model for the distribution of the number of rejections R and for the conditional distribution of V given R, V | R. Under the independence assumption, the distribution of R is a convolution of two binomials and the distribution of V | R is noncentral hypergeometric. Under an equicorrelated model, the distributions are more complex and are also derived. Five false discovery rate probability error measures are considered: FDR = E(V/R), pFDR = E(V/R | R > 0) (positive FDR), cFDR = E(V/R | R = r) (conditional FDR), mFDR = E(V)/E(R) (marginal FDR), and eFDR = E(V)/r (empirical FDR). The pFDR, cFDR, and mFDR are shown to be equivalent under the Bayesian framework, in which the number of true null hypotheses is modeled as a random variable. We present a parametric and a bootstrap procedure to estimate the FDRs. Monte Carlo simulations were conducted to evaluate the performance of these two methods. The bootstrap procedure appears to perform reasonably well, even when the alternative hypotheses are correlated (rho = 0.25). An example from a toxicogenomic microarray experiment is presented for illustration.
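Under the independence model above, the measures are easy to compare by Monte Carlo; the sketch below uses illustrative parameter values of our own rather than the paper's settings:

```python
import numpy as np

def simulate_fdr_measures(m0=900, m1=100, alpha=0.01, power=0.8,
                          n_sims=100_000, seed=0):
    rng = np.random.default_rng(seed)
    V = rng.binomial(m0, alpha, n_sims)   # false rejections among m0 true nulls
    S = rng.binomial(m1, power, n_sims)   # true rejections among m1 alternatives
    R = V + S                             # total rejections: convolution of two binomials
    ratio = np.where(R > 0, V / np.maximum(R, 1), 0.0)
    return {"FDR":  ratio.mean(),          # E(V/R), taken as 0 when R = 0
            "pFDR": ratio[R > 0].mean(),   # E(V/R | R > 0)
            "mFDR": V.mean() / R.mean()}   # E(V)/E(R)
```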

17.
A new methodology is proposed for estimating the proportion of true null hypotheses in a large collection of tests. Each test concerns a single parameter δ whose value is specified by the null hypothesis. We combine a parametric model for the conditional cumulative distribution function (CDF) of the p-value given δ with a nonparametric spline model for the density g(δ) of δ under the alternative hypothesis. The proportion of true null hypotheses and the coefficients in the spline model are estimated by penalized least squares, subject to constraints that guarantee that the spline is a density. The estimator is computed efficiently using quadratic programming. Our methodology produces an estimate of the density of δ when the null is false and can address such questions as "when the null is false, is the parameter usually close to the null or far away?" This leads us to define a falsely interesting discovery rate (FIDR), a generalization of the false discovery rate. We contrast the FIDR approach with Efron's (2004, Journal of the American Statistical Association 99, 96-104) empirical null hypothesis technique. We also discuss the use of these estimates in sample size calculations based on the expected discovery rate (EDR). Our recommended estimator of the proportion of true nulls has less bias than estimators based upon the marginal density of the p-values at 1. In a simulation study, we compare our estimators to the convex, decreasing estimator of Langaas, Lindqvist, and Ferkingstad (2005, Journal of the Royal Statistical Society, Series B 67, 555-572). The most biased of our estimators is very similar in performance to the convex, decreasing estimator. As an illustration, we analyze differences in gene expression between resistant and susceptible strains of barley.

18.
Quantitative trait loci (QTL) affecting plant height and flowering were studied in the two Saccharum species from which modern sugarcane cultivars are derived. Two segregating populations derived from interspecific crosses between Saccharum officinarum and Saccharum spontaneum were genotyped with 735 DNA markers. Among the 65 significant associations found between these two traits and DNA markers, 35 of the loci were linked to sugarcane genetic maps and 30 were unlinked DNA markers. Twenty-one of the 35 mapped QTL were clustered in eight genomic regions of six sugarcane homologous groups. Some of these could be divergent alleles at homologous loci, making the actual number of genes implicated in these traits much smaller than 35. Four QTL clusters controlling plant height in sugarcane corresponded closely to four of the six plant-height QTL previously mapped in sorghum, and one QTL controlling flowering in sugarcane corresponded to one of three flowering QTL mapped in sorghum. The correspondence in the locations of QTL affecting plant height and flowering in sugarcane and sorghum reinforces the notion that the simple sorghum genome is a valuable "template" for molecular dissection of the much more complex sugarcane genome.

19.
MOTIVATION: Most biological traits may be correlated with underlying gene expression patterns that are partially determined by DNA sequence variation. The correlations between gene expression and quantitative traits are essential for understanding the functions of genes and dissecting gene regulatory networks. RESULTS: In the present study, we adopted a novel statistical method, the stochastic expectation and maximization (SEM) algorithm, to analyze the associations between gene expression levels and quantitative trait values and to identify genetic loci controlling the gene expression variation. In the first step, gene expression levels measured in microarray experiments were assigned to two different clusters based on the strengths of their association with the phenotypes of the quantitative trait under investigation. In the second step, genes associated with the trait were mapped to genetic loci of the genome. Because gene expression levels are quantitative, the genetic loci controlling the expression traits are called expression quantitative trait loci (eQTL). We applied the SEM algorithm to a real dataset collected from a barley genetic experiment with both quantitative traits and gene expression traits. For the first time, we identified genes associated with eight agronomic traits of barley; these genes were then mapped to the seven chromosomes of the barley genome. The SEM algorithm and the results of the barley data analysis are useful to scientists in the areas of bioinformatics and plant breeding. AVAILABILITY AND IMPLEMENTATION: The R program for the SEM algorithm can be downloaded from our website: http://www.statgen.ucr.edu.

20.
Motivated by the genomic application of expression quantitative trait loci (eQTL) mapping, we propose a new procedure for performing simultaneous testing of multiple hypotheses using Bayes factors as input test statistics. One of the most significant features of this method is its robustness in controlling the targeted false discovery rate even under misspecification of the parametric alternative models. Moreover, the proposed procedure is highly computationally efficient, which is ideal for treating both complex systems and big data in genomic applications. We discuss the theoretical properties of the new procedure and demonstrate its power and computational efficiency in applications to single-tissue and multi-tissue eQTL mapping.
