Similar Literature
 20 similar records found (search time: 234 ms)
1.
2.
Controlling the false discovery rate (FDR) has been proposed as an alternative to controlling the genome-wise error rate (GWER) for detecting quantitative trait loci (QTL) in genome scans. The objective here was to implement FDR in the context of regression interval mapping for multiple traits. Data on five traits from an F2 swine breed cross were used. FDR was implemented using tests at every 1 cM (FDR1) and using tests with the highest test statistic for each marker interval (FDRm). For the latter, a method was developed to predict comparison-wise error rates. At low error rates, FDR1 behaved erratically; FDRm was more stable but gave similar significance thresholds and number of QTL detected. At the same error rate, methods to control FDR gave less stringent significance thresholds and more QTL detected than methods to control GWER. Although testing across traits had limited impact on FDR, single-trait testing was recommended because there is no theoretical reason to pool tests across traits for FDR. FDR based on FDRm was recommended for QTL detection in interval mapping because it provides significance tests that are meaningful, yet not overly stringent, such that a more complete picture of QTL is revealed.

3.
J. I. Weller, J. Z. Song, D. W. Heyen, H. A. Lewin and M. Ron. Genetics, 1998, 150(4): 1699-1706
Saturated genetic marker maps are being used to map individual genes affecting quantitative traits. Controlling the "experimentwise" type-I error severely lowers power to detect segregating loci. For preliminary genome scans, we propose controlling the "false discovery rate," that is, the expected proportion of true null hypotheses within the class of rejected null hypotheses. Examples are given based on a granddaughter design analysis of dairy cattle and simulated backcross populations. By controlling the false discovery rate, power to detect true effects is not dependent on the number of tests performed. If no detectable genes are segregating, controlling the false discovery rate is equivalent to controlling the experimentwise error rate. If quantitative loci are segregating in the population, statistical power is increased as compared to control of the experimentwise type-I error. The difference between the two criteria increases with the increase in the number of false null hypotheses. The false discovery rate can be controlled at the same level whether the complete genome or only part of it has been analyzed. Additional levels of contrasts, such as multiple traits or pedigrees, can be handled without the necessity of a proportional decrease in the critical test probability.
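The false discovery rate criterion proposed in this abstract is conventionally controlled with the Benjamini-Hochberg step-up procedure. As a minimal sketch (a standard illustration applied to a vector of per-marker p-values, not the authors' granddaughter-design analysis):

```python
import numpy as np

def benjamini_hochberg(pvals, q=0.05):
    """Benjamini-Hochberg step-up procedure: return a boolean mask of
    hypotheses rejected while controlling the FDR at level q."""
    p = np.asarray(pvals, dtype=float)
    m = p.size
    order = np.argsort(p)                 # ranks, ascending p-values
    ranked = p[order]
    # find the largest rank k with p_(k) <= (k / m) * q
    below = ranked <= (np.arange(1, m + 1) / m) * q
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])  # index of largest qualifying rank
        reject[order[: k + 1]] = True     # reject all hypotheses up to rank k
    return reject
```

At q = 0.05 on ten p-values such as `[0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.074, 0.205, 0.5, 0.9]`, only the two smallest are rejected; note that the same two would also survive a Bonferroni cutoff here, but BH becomes noticeably less stringent as the number of true effects grows, which is the power gain the abstract describes.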

4.
Microarrays are widely used for examining differential gene expression, identifying single nucleotide polymorphisms, and detecting methylation loci. Multiple testing methods in microarray data analysis aim at controlling both Type I and Type II error rates; however, real microarray data do not always fit their distribution assumptions. Smyth's ubiquitous parametric method, for example, inadequately accommodates violations of normality assumptions, resulting in inflated Type I error rates. The Significance Analysis of Microarrays, another widely used microarray data analysis method, is based on a permutation test and is robust to non-normally distributed data; however, its fold-change criteria are problematic and can critically alter the conclusion of a study as a result of compositional changes of the control data set in the analysis. We propose a novel approach combining resampling with empirical Bayes methods: the Resampling-based empirical Bayes Methods. This approach not only reduces false discovery rates for non-normally distributed microarray data, but is also impervious to the fold-change threshold, since no control data set selection is needed. Through simulation studies, sensitivities, specificities, total rejections, and false discovery rates are compared across Smyth's parametric method, the Significance Analysis of Microarrays, and the Resampling-based empirical Bayes Methods. Differences in false discovery rate control between the approaches are illustrated through a preterm delivery methylation study. The results show that the Resampling-based empirical Bayes Methods offer significantly higher specificity and lower false discovery rates than Smyth's parametric method when data are not normally distributed. They also offer higher statistical power than the Significance Analysis of Microarrays method when the proportion of significantly differentially expressed genes is large, for both normally and non-normally distributed data. Finally, the Resampling-based empirical Bayes Methods are generalizable to next-generation sequencing RNA-seq data analysis.

5.
Coordinate based meta-analysis (CBMA) is widely used to find regions of consistent activation across fMRI studies that have been selected for their functional relevance to a given hypothesis. Only reported coordinates (foci), and a model of their spatial uncertainty, are used in the analysis. Results are clusters of foci where multiple studies have reported in the same spatial region, indicating functional relevance. There are several published methods that perform the analysis in a voxel-wise manner, resulting in around 10^5 statistical tests, with considerable emphasis placed on controlling the risk of type I statistical error. Here we address this issue by dramatically reducing the number of tests, and by introducing a new false discovery rate control: the false cluster discovery rate (FCDR). FCDR is particularly interpretable and relevant to the results of CBMA, controlling the type I error by limiting the proportion of clusters that are expected under the null hypothesis. We also introduce a data diagnostic scheme to help ensure quality of the analysis, and demonstrate its use in the example studies. Numerical experiments show that we control false clusters better than the widely used ALE method, and that our clustering scheme results in more complete reporting of structures relevant to the functional task.

6.

Background

Artificial selection for economically important traits in cattle is expected to have left distinctive selection signatures on the genome. Access to high-density genotypes facilitates the accurate identification of genomic regions that have undergone positive selection. These findings help to better elucidate the mechanisms of selection and to identify candidate genes of interest to breeding programs.

Results

Information on 705 243 autosomal single nucleotide polymorphisms (SNPs) in 3122 dairy and beef male animals from seven cattle breeds (Angus, Belgian Blue, Charolais, Hereford, Holstein-Friesian, Limousin and Simmental) was used to detect selection signatures by applying two complementary methods: the integrated haplotype score (iHS) and the global fixation index (FST). To control for false positive results, we used a false discovery rate (FDR) adjustment to calculate adjusted iHS within each breed; the genome-wide significance level was about 0.003. Using the iHS method, 83, 92, 91, 101, 85, 101 and 86 significant genomic regions were detected for Angus, Belgian Blue, Charolais, Hereford, Holstein-Friesian, Limousin and Simmental cattle, respectively. None of these regions was common to all seven breeds. Using the FST approach, 704 individual SNPs were detected across breeds. Annotation of the regions of the genome that showed selection signatures revealed several interesting candidate genes, i.e. DGAT1, ABCG2, MSTN, CAPN3, FABP3, CHCHD7, PLAG1, JAZF1, PRKG2, ACTC1, TBC1D1, GHR, BMP2, TSG1, LYN, KIT and MC1R, that play a role in milk production, reproduction, body size, muscle formation or coat color. Fifty-seven common candidate genes were found by both the iHS and global FST methods across the seven breeds. Moreover, many novel genomic regions and genes were detected within the regions that showed selection signatures; for some candidate genes, signatures of positive selection also exist in the human genome. Multilevel bioinformatic analyses of the detected candidate genes suggested that the PPAR pathway may have been subjected to positive selection.

Conclusions

This study provides a high-resolution bovine genomic map of positive selection signatures that are either specific to one breed or common to a subset of the seven breeds analyzed. Our results will contribute to the detection of functional candidate genes that have undergone positive selection in future studies.

Electronic supplementary material

The online version of this article (doi:10.1186/s12711-015-0127-3) contains supplementary material, which is available to authorized users.

7.
There is growing interest in understanding how the brain utilizes synchronized oscillatory activity to integrate information across functionally connected regions. Computing phase-locking values (PLV) between EEG signals is a popular method for quantifying such synchronization and elucidating its role in cognitive tasks. However, the high dimensionality of PLV data incurs a serious multiple testing problem. Standard multiple testing methods in neuroimaging research (e.g., false discovery rate, FDR) suffer a severe loss of power, because they fail to exploit the complex dependence structure between hypotheses that vary in the spectral, temporal and spatial dimensions. Previously, we showed that a hierarchical FDR and optimal discovery procedures could be effectively applied to PLV analysis to provide better power than FDR. In this article, we revisit the multiple comparison problem from a new Empirical Bayes perspective and propose the application of the local FDR method (locFDR; Efron, 2001) to PLV synchrony analysis, computing the FDR as the posterior probability that an observed statistic belongs to the null hypothesis. We demonstrate the application of Efron's Empirical Bayes approach to PLV synchrony analysis for the first time. We use simulations to validate the specificity and sensitivity of locFDR, and a real EEG dataset from a visual search study for experimental validation. We also compare locFDR with hierarchical FDR and optimal discovery procedures in both simulation and experimental analyses. Our simulation results show that locFDR can effectively control false positives without compromising the power of PLV synchrony inference. Applying locFDR to the experimental data detected more significant discoveries than our previously proposed methods, whereas the standard FDR method failed to detect any significant discoveries.
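The local FDR statistic discussed above is the posterior probability that an observed z-score belongs to the null. A toy sketch of Efron's two-group idea, assuming a theoretical N(0, 1) null and a crude histogram estimate of the mixture density (the function name, the fixed pi0, and the binning choice are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def local_fdr(z, pi0=0.9, bins=50):
    """Crude local-FDR sketch: locfdr(z) = pi0 * f0(z) / f(z), with the
    mixture density f estimated by a histogram and the null density f0
    taken as the theoretical standard normal."""
    z = np.asarray(z, dtype=float)
    counts, edges = np.histogram(z, bins=bins, density=True)
    idx = np.clip(np.digitize(z, edges) - 1, 0, bins - 1)
    f = np.maximum(counts[idx], 1e-12)           # estimated mixture density at each z
    f0 = np.exp(-z**2 / 2) / np.sqrt(2 * np.pi)  # standard normal null density
    return np.minimum(pi0 * f0 / f, 1.0)         # posterior null probability, capped at 1
```

Scores near zero get a local FDR close to 1 (almost certainly null), while scores far in the tails, where the mixture density is dominated by true effects, get values near 0; thresholding this posterior is what replaces the tail-area FDR cutoff in the abstract's approach.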

8.
The recent availability of next-generation sequencing (NGS) has made possible the use of dense genetic markers to identify regions of the genome that may be under the influence of selection. Several statistical methods have been developed recently for this purpose. Here, we present the results of an individual-based simulation study investigating the power and error rate of popular or recent genome scan methods: linear regression, Bayescan, BayEnv and LFMM. Contrary to previous studies, we focus on complex, hierarchical population structure and on polygenic selection. Additionally, we use a false discovery rate (FDR)-based framework, which provides a unified testing framework across frequentist and Bayesian methods. Finally, we investigate the influence of population allele frequencies versus individual genotype data specification for LFMM and the linear regression. The relative ranking between the methods is impacted by the consideration of polygenic selection, compared to a monogenic scenario. For strongly hierarchical scenarios with confounding effects between demography and environmental variables, the power of the methods can be very low. Except for one scenario, Bayescan exhibited moderate power and error rate. BayEnv performance was good under nonhierarchical scenarios, while LFMM provided the best compromise between power and error rate across scenarios. We found that it is possible to greatly reduce error rates by considering the results of all three methods when identifying outlier loci.

9.
10.
11.
Genome-wide association studies have been instrumental in identifying genetic variants associated with complex traits such as human disease or gene expression phenotypes. It has been proposed that extending existing analysis methods by considering interactions between pairs of loci may uncover additional genetic effects. However, the large number of possible two-marker tests presents significant computational and statistical challenges. Although several strategies to detect epistasis effects have been proposed and tested for specific phenotypes, so far there has been no systematic attempt to compare their performance using real data. We made use of thousands of gene expression traits from linkage and eQTL studies to compare the performance of different strategies. We found that using information from marginal associations between markers and phenotypes to detect epistatic effects yielded a lower false discovery rate (FDR) than a strategy solely using biological annotation in yeast, whereas results from human data were inconclusive. For future studies whose aim is to discover epistatic effects, we recommend incorporating information about marginal associations between SNPs and phenotypes instead of relying solely on biological annotation. Improved methods to discover epistatic effects will result in a more complete understanding of complex genetic effects.

12.
Controlling the proportion of false positives in multiple dependent tests   (Cited by: 4; self-citations: 0, citations by others: 4)
Genome scan mapping experiments involve multiple tests of significance. Thus, controlling the error rate in such experiments is important. Simple extension of classical concepts results in attempts to control the genomewise error rate (GWER), i.e., the probability of even a single false positive among all tests. This results in very stringent comparisonwise error rates (CWER) and, consequently, low experimental power. We here present an approach based on controlling the proportion of false positives (PFP) among all positive test results. The CWER needed to attain a desired PFP level does not depend on the correlation among the tests or on the number of tests as in other approaches. To estimate the PFP it is necessary to estimate the proportion of true null hypotheses. Here we show how this can be estimated directly from experimental results. The PFP approach is similar to the false discovery rate (FDR) and positive false discovery rate (pFDR) approaches. For a fixed CWER, we have estimated PFP, FDR, pFDR, and GWER through simulation under a variety of models to illustrate practical and philosophical similarities and differences among the methods.
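The PFP estimate described in this abstract plugs an estimate of the proportion of true nulls into PFP ≈ m·π0·α / R, where m is the number of tests, α the comparisonwise level, and R the number of significant results. A hedged sketch under those definitions (the function names and the Storey-type λ tuning are illustrative assumptions, not the authors' exact estimator):

```python
import numpy as np

def estimate_pi0(pvals, lam=0.5):
    """Estimate the proportion of true null hypotheses directly from
    the p-value distribution: nulls are roughly uniform, so the density
    of p-values above lambda reflects pi0 (Storey-type estimator)."""
    p = np.asarray(pvals, dtype=float)
    return min(1.0, float(np.mean(p > lam)) / (1.0 - lam))

def pfp(pvals, alpha, pi0=None):
    """Plug-in estimate of the proportion of false positives among the
    R tests significant at comparisonwise level alpha:
    PFP ~= (m * pi0 * alpha) / R."""
    p = np.asarray(pvals, dtype=float)
    m = p.size
    if pi0 is None:
        pi0 = estimate_pi0(p)
    r = int(np.sum(p <= alpha))
    return (m * pi0 * alpha) / max(r, 1)  # guard against R = 0
```

With p-values that are all large, `estimate_pi0` returns 1 and the PFP reduces to the plain m·α / R form; when many small p-values pull π0 below 1, the CWER needed for a target PFP relaxes accordingly, which is the abstract's central point.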

13.
14.
Magnetoencephalography (MEG) has recently revealed that the transitions between the parietal operculum (Pop) and the insula (area G) and the ventral end of the central sulcus (cs) were activated with the shortest latency by instrumental gustatory stimulation, which suggests that the location of the primary gustatory area is in these two regions. However, studies using other noninvasive brain-imaging methods such as positron-emission tomography or functional magnetic resonance imaging (fMRI) with manual application of tastants into the mouth have been unable to confirm this. The present study examined cortical activation by repetitive stimulation of the tongue tip with 1 M NaCl with a computer-controlled stimulator and used fMRI to detect it. In individual brains, activations were detected with multiple comparisons (false discovery rate) across the whole brain corrected (threshold at P < 0.05) at both area G and frontal operculum (Fop) in 8 of 11 subjects and at the rolandic operculum (Rop) in 7 subjects. Activations were also found at the ventral end of the cs (n = 3). Group analysis with random-effect models (multiple comparison using familywise error in regions of interest, P < 0.02) revealed activation at area G in both hemispheres and in the Fop, Rop, and ventral end of the cs on the left side. The present study revealed no activation on the gyrus of the external cerebral surface except for the Rop. Taking MEG findings into consideration, the present findings strongly indicate that the primary gustatory area is present at both the transition between the Pop and insula and the Rop including the gray matter within a ventral part of the cs.

15.
16.
A common goal of microarray and related high-throughput genomic experiments is to identify genes that vary across biological condition. Most often this is accomplished by identifying genes with changes in mean expression level, so called differentially expressed (DE) genes, and a number of effective methods for identifying DE genes have been developed. Although useful, these approaches do not accommodate other types of differential regulation. An important example concerns differential coexpression (DC). Investigations of this class of genes are hampered by the large cardinality of the space to be interrogated as well as by influential outliers. As a result, existing DC approaches are often underpowered, exceedingly prone to false discoveries, and/or computationally intractable for even a moderately large number of pairs. To address this, an empirical Bayesian approach for identifying DC gene pairs is developed. The approach provides a false discovery rate controlled list of significant DC gene pairs without sacrificing power. It is applicable within a single study as well as across multiple studies. Computations are greatly facilitated by a modification to the expectation-maximization algorithm and a procedural heuristic. Simulations suggest that the proposed approach outperforms existing methods in far less computational time; and case study results suggest that the approach will likely prove to be a useful complement to current DE methods in high-throughput genomic studies.

17.
MOTIVATION: Multiple hypothesis testing is a common problem in genome research, particularly in microarray experiments and genomewide association studies. Failure to account for the effects of multiple comparisons would result in an abundance of false positive results. The Bonferroni correction and Holm's step-down procedure are overly conservative, whereas the permutation test is time-consuming and is restricted to simple problems. RESULTS: We developed an efficient Monte Carlo approach to approximating the joint distribution of the test statistics along the genome. We then used the Monte Carlo distribution to evaluate the commonly used criteria for error control, such as familywise error rates and positive false discovery rates. This approach is applicable to any data structures and test statistics. Applications to simulated and real data demonstrate that the proposed approach provides accurate error control, and can be substantially more powerful than the Bonferroni and Holm methods, especially when the test statistics are highly correlated.
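Once Monte Carlo draws of the test-statistic vector under the joint null are available, a familywise error threshold follows from the distribution of the maximum statistic; because the draws preserve the correlation among tests, the resulting cutoff can be less conservative than Bonferroni. A minimal sketch under that assumption (the function name is illustrative; this is the generic max-statistic idea, not the paper's full approximation scheme):

```python
import numpy as np

def max_stat_threshold(null_draws, fwer=0.05):
    """Critical value controlling the familywise error rate, given
    Monte Carlo draws of the test-statistic vector under the joint
    null (rows = replicates, columns = tests).  Rejecting every test
    whose |statistic| exceeds this cutoff keeps the probability of
    even one false positive at about `fwer`."""
    max_per_rep = np.max(np.abs(null_draws), axis=1)   # maximum over tests, per replicate
    return float(np.quantile(max_per_rep, 1.0 - fwer)) # upper (1 - fwer) quantile
```

When the test statistics are highly correlated, the maxima concentrate at lower values than under independence, so this threshold sits well below the Bonferroni cutoff while still controlling the familywise error rate, which is the power gain the abstract reports.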

18.
DNA-encoded library (DEL) technology is a powerful tool for small molecule identification in drug discovery, yet reported DEL selection strategies have been applied primarily to protein targets, either in purified form or in a cellular context. To expand the application of this technology, we employed DEL selection on an RNA target, HIV-1 TAR (trans-acting responsive region), but found that the majority of signals resulted from false positive DNA–RNA binding. We thus developed an optimized selection strategy utilizing RNA patches and competitive elution to minimize unwanted DNA binding, followed by k-mer analysis and motif search to differentiate false positive signals. This optimized strategy resulted in a very clean background in a DEL selection against the Escherichia coli FMN riboswitch, and the enriched compounds were determined to have double-digit nanomolar binding affinity, as well as similar potency in a functional FMN competition assay. These results demonstrate the feasibility of small molecule identification against RNA targets using DEL selection. The developed experimental and computational strategy provides a promising opportunity for RNA ligand screening and expands the application of DEL selection to a much wider context in drug discovery.

19.
Predicting absolute protein–ligand binding affinities remains a frontier challenge in ligand discovery and design. This becomes more difficult when ionic interactions are involved because of the large opposing solvation and electrostatic attraction energies. In a blind test, we examined whether alchemical free-energy calculations could predict binding affinities of 14 charged and 5 neutral compounds previously untested as ligands for a cavity binding site in cytochrome c peroxidase. In this simplified site, polar and cationic ligands compete with solvent to interact with a buried aspartate. Predictions were tested by calorimetry, spectroscopy, and crystallography. Of the 15 compounds predicted to bind, 13 were experimentally confirmed, while 4 compounds were false negative predictions. Predictions had a root-mean-square error of 1.95 kcal/mol to the experimental affinities, and predicted poses had an average RMSD of 1.7 Å to the crystallographic poses. This test serves as a benchmark for these thermodynamically rigorous calculations at predicting binding affinities for charged compounds and gives insights into the existing sources of error, which are primarily electrostatic interactions inside proteins. Our experiments also provide a useful set of ionic binding affinities in a simplified system for testing new affinity prediction methods.

20.

Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司)  京ICP备09084417号