Similar articles
20 similar articles found (search time: 62 ms)
1.
The most commonly used method in evolutionary biology for combining information across multiple tests of the same null hypothesis is Fisher's combined probability test. This note shows that an alternative method called the weighted Z-test has more power and more precision than does Fisher's test. Furthermore, in contrast to some statements in the literature, the weighted Z-method is superior to the unweighted Z-transform approach. The results in this note show that, when combining P-values from multiple tests of the same hypothesis, the weighted Z-method should be preferred.
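For reference, both methods compared here can be sketched in a few lines of standard-library Python (function names are ours; `weights` would typically be taken proportional to the square root of each study's sample size):

```python
from math import exp, log, sqrt
from statistics import NormalDist

_N = NormalDist()

def fisher_combine(pvals):
    """Fisher's method: -2*sum(ln p) ~ chi-square with 2k df under H0."""
    stat = -2.0 * sum(log(p) for p in pvals)
    # The chi-square survival function for even df = 2k has a closed form:
    # exp(-x/2) * sum_{i=0}^{k-1} (x/2)^i / i!
    term, total = 1.0, 1.0
    for i in range(1, len(pvals)):
        term *= (stat / 2.0) / i
        total += term
    return exp(-stat / 2.0) * total

def weighted_z_combine(pvals, weights):
    """Weighted Z (Stouffer) method for one-sided P-values."""
    z = [_N.inv_cdf(1.0 - p) for p in pvals]
    zw = sum(w * zi for w, zi in zip(weights, z)) / sqrt(sum(w * w for w in weights))
    return 1.0 - _N.cdf(zw)
```

With equal weights, `weighted_z_combine` reduces to the unweighted Z-transform method discussed in the note.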

2.
Through simulation, Whitlock showed that when all the alternatives have the same effect size, the weighted z-test is superior to both the unweighted z-test and Fisher's method for combining P-values from independent studies. In this paper, we show that under the same conditions, the generalized Fisher method due to Lancaster outperforms the weighted z-test.
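Lancaster's generalization replaces the fixed chi-square(2) transform underlying Fisher's method with a chi-square transform whose degrees of freedom serve as per-study weights. A minimal standard-library sketch (even degrees of freedom only, so the survival function has a closed form and its inverse can be found by bisection; function names are ours):

```python
from math import exp

def chi2_sf_even(x, df):
    """Survival function of the chi-square distribution with even df."""
    k = df // 2
    term, total = 1.0, 1.0
    for i in range(1, k):
        term *= (x / 2.0) / i
        total += term
    return exp(-x / 2.0) * total

def chi2_isf_even(p, df, lo=0.0, hi=1e4):
    """Invert chi2_sf_even by bisection (the sf is decreasing in x)."""
    for _ in range(200):
        mid = (lo + hi) / 2.0
        if chi2_sf_even(mid, df) > p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

def lancaster(pvals, dfs):
    """Lancaster's generalized Fisher method: each P-value becomes a
    chi-square deviate with its own (even) df, and the sum is referred
    to a chi-square with the total df. dfs = (2, 2, ...) recovers
    Fisher's method exactly."""
    stat = sum(chi2_isf_even(p, df) for p, df in zip(pvals, dfs))
    return chi2_sf_even(stat, sum(dfs))
```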

3.
Alves G  Yu YK 《PloS one》2011,6(8):e22647
Given the expanding availability of scientific data and tools to analyze them, combining different assessments of the same piece of information has become increasingly important for social, biological, and even physical sciences. This task demands, to begin with, a method-independent standard, such as the P-value, that can be used to assess the reliability of a piece of information. Good's formula and Fisher's method combine independent P-values with respectively unequal and equal weights. Both approaches may be regarded as limiting instances of a general case of combining P-values from m groups; P-values within each group are weighted equally, while weight varies by group. When some of the weights become nearly degenerate, as cautioned by Good, numeric instability occurs in computation of the combined P-values. We deal explicitly with this difficulty by deriving a controlled expansion, in powers of differences in inverse weights, that provides both accurate statistics and stable numerics. We illustrate the utility of this systematic approach with a few examples. In addition, we also provide here an alternative derivation for the probability distribution function of the general case and show how the analytic formula obtained reduces to both Good's and Fisher's methods as special cases. A C++ program, which computes the combined P-values with equal numerical stability regardless of whether weights are (nearly) degenerate or not, is available for download at our group website http://www.ncbi.nlm.nih.gov/CBBresearch/Yu/downloads/CoinedPValues.html.
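A sketch of Good's formula under the stated assumptions (distinct weights; function name ours): under the null each -ln p_i is exponential with rate 1, so the weighted sum is hypoexponential and its tail is a sum of exponentials whose partial-fraction coefficients blow up as the weights approach one another, which is precisely the instability the authors address.

```python
from math import exp, log

def good_combined_p(pvals, weights):
    """Good's formula for the P-value of S = sum_i w_i * (-ln p_i),
    valid for pairwise distinct weights. S is hypoexponential with
    rates 1/w_i; its tail is a signed sum of exponentials whose
    coefficients diverge as two weights become (nearly) equal."""
    s = sum(-w * log(p) for w, p in zip(weights, pvals))
    rates = [1.0 / w for w in weights]
    tail = 0.0
    for i, li in enumerate(rates):
        coef = 1.0
        for j, lj in enumerate(rates):
            if j != i:
                coef *= lj / (lj - li)  # partial-fraction coefficient
        tail += coef * exp(-li * s)
    return tail
```

Passing two nearly equal weights makes the two coefficients huge and opposite in sign, and catastrophic cancellation sets in; that is the regime handled by the controlled expansion of the paper.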

4.
Information on statistical power is critical when planning investigations and evaluating empirical data, but actual power estimates are rarely presented in population genetic studies. We used computer simulations to assess and evaluate power when testing for genetic differentiation at multiple loci through combining test statistics or P values obtained by four different statistical approaches, viz. Pearson's chi-square, the log-likelihood ratio G-test, Fisher's exact test, and an F(ST)-based permutation test. Factors considered in the comparisons include the number of samples, their size, and the number and type of genetic marker loci. It is shown that power for detecting divergence may be substantial for frequently used sample sizes and sets of markers, even at quite low levels of differentiation. The choice of statistical method may be critical, though. For multi-allelic loci such as microsatellites, combining exact P values using Fisher's method is robust and generally provides a high resolving power. In contrast, for few-allele loci (e.g. allozymes and single nucleotide polymorphisms) and when making pairwise sample comparisons, this approach may yield a remarkably low power. In such situations the chi-square test typically represents a better alternative. The G-test without Williams's correction frequently tends to provide an unduly high proportion of false significances, and results from this test should be interpreted with great care. Our results are not confined to population genetic analyses but applicable to contingency testing in general.
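For intuition, the simplest of the four compared approaches, Pearson's chi-square, is easy to write out for a 2x2 table (a hypothetical two-allele, two-sample comparison; standard-library Python, function name ours). With one degree of freedom, the chi-square survival function reduces to a complementary error function:

```python
from math import erfc, sqrt

def pearson_chi2_2x2(a, b, c, d):
    """Pearson chi-square test of independence for the 2x2 table
    [[a, b], [c, d]]. With df = 1, P = erfc(sqrt(stat / 2))."""
    n = a + b + c + d
    row1, row2 = a + b, c + d
    col1, col2 = a + c, b + d
    stat = 0.0
    for obs, r, col in ((a, row1, col1), (b, row1, col2),
                        (c, row2, col1), (d, row2, col2)):
        expected = r * col / n
        stat += (obs - expected) ** 2 / expected
    return stat, erfc(sqrt(stat / 2.0))
```

For the few-allele, small-sample settings discussed above, an exact or permutation approach would replace this asymptotic reference distribution.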

5.
In combining several tests of significance the individual test statistics are allowed to be stochastically dependent. By choosing the weighted inverse normal method for the combination, the dependency of the original test statistics is then characterized by a correlation of the transformed statistics. For this correlation a confidence region, an unbiased estimator and an unbiased estimate of its variance are derived. The combined test statistic is extended to include the case of possibly dependent original test statistics. Simulation studies show the performance of the actual significance level.

6.
Flexible designs are provided by adaptive planning of sample sizes as well as by introducing the weighted inverse normal combining method and the generalized inverse chi-square combining method in the context of conducting trials consecutively, step by step. These general combining methods allow quite different weighting of the sequential study parts, also in a completely adaptive way, based on full information from unblinded data in previously performed stages. In reviewing some basic developments in flexible design, we consider a generalizing approach to group sequential clinical trials of Pocock type, O'Brien-Fleming type, and self-designing type. A clinical trial may originally be planned either to show non-inferiority or superiority. The proposed flexible designs, however, allow the objective to be switched at each interim analysis from showing non-inferiority to showing superiority, and vice versa. Several examples of clinical trials with normal and binary outcomes are worked out in detail. We demonstrate the practicable performance of the discussed approaches, confirmed in an extensive simulation study. This flexible design approach is a useful tool when a priori information about the parameters involved in the trial is unavailable or subject to uncertainty.

7.
A weighted inverse normal procedure for combining independent tests is considered. The procedure is utilized to combine results from six clinical studies of homovanillic acid concentrations in relation to the hypothesis that schizophrenia is the result of dopaminergic hyperactivity.

8.
MOTIVATION: Current methods for multiplicity adjustment do not make use of the graph structure of Gene Ontology (GO) when testing for association of expression profiles of GO terms with a response variable. RESULTS: We propose a multiple testing method, called the focus level procedure, that preserves the graph structure of Gene Ontology (GO). The procedure is constructed as a combination of a closed testing procedure with Holm's method. It requires the user to choose a 'focus level' in the GO graph, which reflects the level of specificity of terms in which the user is most interested. This choice also determines the level in the GO graph at which the procedure has most power. We prove that the procedure strongly controls the family-wise error rate without any additional assumptions on the joint distribution of the test statistics used. We also present an algorithm to calculate multiplicity-adjusted P-values. Because the focus level procedure preserves the structure of the GO graph, it does not generally preserve the ordering of the raw P-values in the adjusted P-values. AVAILABILITY: The focus level procedure has been implemented in the globaltest and GlobalAncova packages, both of which are available on www.bioconductor.org.
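The focus level procedure itself depends on the GO graph, but its Holm ingredient is simple to state. A plain, graph-free Holm step-down adjustment for reference (function name ours):

```python
def holm_adjust(pvals):
    """Holm step-down adjusted P-values: sort ascending, multiply the
    P-value of rank i (0-based) by (m - i), enforce monotonicity of the
    adjusted values, and cap them at 1."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    adjusted = [0.0] * m
    running = 0.0
    for rank, i in enumerate(order):
        running = max(running, (m - rank) * pvals[i])
        adjusted[i] = min(1.0, running)
    return adjusted
```

Rejecting whenever the adjusted P-value is below alpha controls the family-wise error rate; the focus level procedure combines this step-down idea with closed testing on the GO graph.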

9.
This paper applies the inverse probability weighted least‐squares method to predict total medical cost in the presence of censored data. Since survival time and medical costs may be subject to right censoring and therefore are not always observable, the ordinary least‐squares approach cannot be used to assess the effects of explanatory variables. We demonstrate how inverse probability weighted least‐squares estimation provides consistent asymptotic normal coefficients with easily computable standard errors. In addition, to assess the effect of censoring on coefficients, we develop a test comparing ordinary least‐squares and inverse probability weighted least‐squares estimators. We demonstrate the methods developed by applying them to the estimation of cancer costs using Medicare claims data. (© 2004 WILEY‐VCH Verlag GmbH & Co. KGaA, Weinheim)

10.
Zhang K  Traskin M  Small DS 《Biometrics》2012,68(1):75-84
For group-randomized trials, randomization inference based on rank statistics provides robust, exact inference against nonnormal distributions. However, in a matched-pair design, the currently available rank-based statistics lose significant power compared to normal linear mixed model (LMM) test statistics when the LMM is true. In this article, we investigate and develop an optimal test statistic over all statistics in the form of the weighted sum of signed Mann-Whitney-Wilcoxon statistics under certain assumptions. This test is almost as powerful as the LMM even when the LMM is true, but it is much more powerful for heavy tailed distributions. A simulation study is conducted to examine the power.

11.
Pharmacovigilance systems aim at early detection of adverse effects of marketed drugs. They maintain large spontaneous reporting databases for which several automatic signaling methods have been developed. One limitation of those methods is that the decision rules for signal generation are based on arbitrary thresholds. In this article, we propose a new signal-generation procedure. The decision criterion is formulated in terms of a critical region for the P-values resulting from the reporting odds ratio method as well as from Fisher's exact test; for the latter, we also study the use of mid-P-values. The critical region is defined by the false discovery rate, which can be estimated by adapting the P-value mixture model based procedures to one-sided tests. The methodology is mainly illustrated with the location-based estimator procedure. It is studied through a large simulation study and applied to the French pharmacovigilance database.

12.
Adaptive sample size calculations in group sequential trials
Lehmacher W  Wassmer G 《Biometrics》1999,55(4):1286-1290
A method for group sequential trials that is based on the inverse normal method for combining the results of the separate stages is proposed. Without inflating the Type I error rate, this method enables data-driven sample size reassessments during the course of the study. It uses the stopping boundaries of the classical group sequential tests. Furthermore, exact test procedures may be derived for a wide range of applications. The procedure is compared with the classical designs in terms of power and expected sample size.
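The inverse normal combination at the heart of this method is short to state: stage-wise P-values are transformed to Z-scores and combined with weights fixed in advance, so the combined statistic stays standard normal under the null even if the second-stage sample size is recomputed from first-stage data. A sketch (function name and the illustrative stage P-values are ours):

```python
from math import sqrt
from statistics import NormalDist

_N = NormalDist()

def inverse_normal_combination(p_stages, weights):
    """Combine stage-wise one-sided P-values with prespecified weights
    satisfying w1^2 + w2^2 + ... = 1. The combined Z is standard normal
    under H0 regardless of data-driven sample size reassessment."""
    z = sum(w * _N.inv_cdf(1.0 - p) for w, p in zip(weights, p_stages))
    return z, 1.0 - _N.cdf(z)

# Two stages planned with equal information: weights sqrt(1/2) each.
z, p = inverse_normal_combination([0.02, 0.10], [sqrt(0.5), sqrt(0.5)])
```

In a group sequential setting, `z` would be compared against the prespecified stopping boundary at each stage rather than against a fixed critical value.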

13.
Multiple lines of evidence (LOE) are often considered when examining the potential impact of contaminated sediment. Three strategies are explored for combining information within and/or among different LOE. One technique uses a multivariate strategy for clustering sites into groups of similar impact. A second method employs meta-analysis to pool empirically derived P-values. The third method uses a quantitative estimation of probability derived from odds ratios. These three strategies are compared with respect to a set of data describing reference conditions and a contaminated area in the Great Lakes. Common themes in these three strategies include the critical issue of defining an appropriate set of reference/control conditions, the definition of impact as a significant departure from the normal variation observed in the reference conditions, and the use of distance from the reference distribution to define any of the effect measures. Reasons for differences in results between the three approaches are explored and strategies for improving the approaches are suggested.

14.
This article investigates an augmented inverse selection probability weighted estimator for Cox regression parameter estimation when covariate data are incomplete, extending the Horvitz and Thompson (1952, Journal of the American Statistical Association 47, 663-685) weighted estimator. The estimator is doubly robust: it is consistent as long as either the selection probability model or the joint distribution of covariates is correctly specified. The augmentation term of the estimating equation depends on the baseline cumulative hazard and on a conditional distribution that can be implemented by using an EM-type algorithm. The method is compared with several previously proposed estimators via simulation studies and is applied to a real example.

15.
Mehrotra DV  Chan IS  Berger RL 《Biometrics》2003,59(2):441-450
Fisher's exact test for comparing response proportions in a randomized experiment can be overly conservative when the group sizes are small or when the response proportions are close to zero or one. This is primarily because the null distribution of the test statistic becomes too discrete, a partial consequence of the inference being conditional on the total number of responders. Accordingly, exact unconditional procedures have gained in popularity, on the premise that power will increase because the null distribution of the test statistic will presumably be less discrete. However, we caution researchers that a poor choice of test statistic for exact unconditional inference can actually result in a substantially less powerful analysis than Fisher's conditional test. To illustrate, we study a real example and provide exact test size and power results for several competing tests, for both balanced and unbalanced designs. Our results reveal that Fisher's test generally outperforms exact unconditional tests based on using as the test statistic either the observed difference in proportions, or the observed difference divided by its estimated standard error under the alternative hypothesis, the latter for unbalanced designs only. On the other hand, the exact unconditional test based on the observed difference divided by its estimated standard error under the null hypothesis (score statistic) outperforms Fisher's test, and is recommended. Boschloo's test, in which the p-value from Fisher's test is used as the test statistic in an exact unconditional test, is uniformly more powerful than Fisher's test, and is also recommended.
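The conditional null distribution that makes Fisher's test discrete is the hypergeometric. A minimal one-sided version for a 2x2 table (standard-library Python, function name ours):

```python
from math import comb

def fisher_exact_one_sided(a, b, c, d):
    """One-sided Fisher's exact test for the 2x2 table [[a, b], [c, d]]:
    P(A >= a) under the hypergeometric null, conditioning on all margins."""
    row1 = a + b
    col1 = a + c
    n = a + b + c + d
    denom = comb(n, col1)
    p = 0.0
    for k in range(a, min(row1, col1) + 1):
        if col1 - k <= c + d:  # second row count must stay non-negative
            p += comb(row1, k) * comb(n - row1, col1 - k) / denom
    return p
```

The few attainable values of this sum are what make the test conservative for small groups; the unconditional tests discussed above remove the conditioning on the column total instead.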

16.
MOTIVATION: A number of available program packages determine the significant enrichments and/or depletions of GO categories among a class of genes of interest. Whereas a correct formulation of the problem leads to a single exact null distribution, these GO tools use a large variety of statistical tests whose denominations often do not clarify the underlying P-value computations. SUMMARY: We review the different formulations of the problem and the tests they lead to: the binomial, chi-square, equality of two probabilities, Fisher's exact and hypergeometric tests. We clarify the relationships existing between these tests, in particular the equivalence between the hypergeometric test and Fisher's exact test. We recall that the other tests are valid only for large samples, the test of equality of two probabilities and the chi-square test being equivalent. We discuss the appropriateness of one- and two-sided P-values, as well as some discreteness and conservatism issues. SUPPLEMENTARY INFORMATION: Supplementary data are available at Bioinformatics online.

17.
Liu D  Zhou XH 《Biometrics》2011,67(3):906-916
Covariate-specific receiver operating characteristic (ROC) curves are often used to evaluate the classification accuracy of a medical diagnostic test or a biomarker, when the accuracy of the test is associated with certain covariates. In many large-scale screening tests, the gold standard is subject to missingness due to high cost or harmfulness to the patient. In this article, we propose a semiparametric estimation of the covariate-specific ROC curves with a partially missing gold standard. A location-scale model is constructed for the test result to model the covariates' effect, but the residual distributions are left unspecified. Thus the baseline and link functions of the ROC curve both have flexible shapes. Under the assumption that the gold standard is missing at random (MAR), we consider weighted estimating equations for the location-scale parameters, and weighted kernel estimating equations for the residual distributions. Three ROC curve estimators are proposed and compared, namely, imputation-based, inverse probability weighted, and doubly robust estimators. We derive the asymptotic normality of the estimated ROC curve, as well as the analytical form of the standard error estimator. The proposed method is motivated by, and applied to, data from an Alzheimer's disease study.

18.
Bennewitz J  Reinsch N  Kalm E 《Genetics》2002,160(4):1673-1686
The nonparametric bootstrap approach is known to be suitable for calculating central confidence intervals for the locations of quantitative trait loci (QTL). However, the distribution of the bootstrap QTL position estimates along the chromosome is peaked at the marker positions and is unequally tailed, which makes the confidence intervals conservative and wide. In this study three modified methods are proposed to calculate nonparametric bootstrap confidence intervals for QTL locations, which compute noncentral confidence intervals (uncorrected method I), correct for the impact of the markers (weighted method I), or both (weighted method II). Noncentral confidence intervals were computed with an analog of the highest posterior density method. The correction for the markers is based on the distribution of QTL estimates along the chromosome when the QTL is not linked with any marker, and it can be obtained with a permutation approach. In a simulation study the three methods were compared with the original bootstrap method. The results showed that it is useful, first, to compute noncentral confidence intervals and, second, to correct the bootstrap distribution of the QTL estimates for the impact of the markers. The weighted method II, combining both properties, produced the shortest and least biased confidence intervals over a large number of simulated configurations.

19.
Dalmasso C  Génin E  Trégouet DA 《Genetics》2008,180(1):697-702
In the context of genomewide association studies, where hundreds of thousands of polymorphisms are tested, stringent thresholds on the raw association test P-values are generally used to limit false-positive results. Instead of using thresholds based on raw P-values, as in the Bonferroni and sequential Sidak (SidakSD) corrections, we propose to use a weighted-Holm procedure with weights depending on the allele frequency of the polymorphisms. This method is shown to substantially improve the power to detect associations, in particular by favoring the detection of rare variants with high genetic effects over more frequent ones with lower effects.

20.
In the subwavelength regime, several nanophotonic configurations have been proposed to overcome the conventional light-trapping or light-absorption enhancement limit in solar cells, also known as the Yablonovitch limit. It has recently been suggested that establishing such a limit should rely on computational inverse electromagnetic design instead of the traditional approach combining intuition and a priori known physical effects. In the present work, by applying an inverse full-wave-vector electromagnetic computational approach, a 1D nanostructured optical cavity with a new resonance configuration is designed that provides an ultrabroadband (≈450 nm) light absorption enhancement when applied to a 107 nm thick active layer organic solar cell based on a low-bandgap (1.32 eV) nonfullerene acceptor. It is demonstrated computationally and experimentally that the absorption enhancement provided by such a cavity surpasses the conventional limit resulting from an ergodic optical geometry by 7% on average over a 450 nm band and by more than 20% in the NIR. In this cavity configuration the solar cells exhibit a maximum power conversion efficiency above 14%, the highest ever measured for devices based on the specific nonfullerene acceptor used.
