Similar Documents
20 similar documents retrieved.
1.
When analyzing clinical trials with a stratified population, homogeneity of treatment effects is a common assumption in survival analysis. However, in the context of recent developments in clinical trial design, which aim to test multiple targeted therapies in corresponding subpopulations simultaneously, the assumption that there is no treatment‐by‐stratum interaction seems inappropriate. It becomes an issue if the expected sample size of the strata makes it unfeasible to analyze the trial arms individually. Alternatively, one might choose as primary aim to prove efficacy of the overall (targeted) treatment strategy. When testing for the overall treatment effect, a violation of the no‐interaction assumption renders it necessary to deviate from standard methods that rely on this assumption. We investigate the performance of different methods for sample size calculation and data analysis under heterogeneous treatment effects. The commonly used sample size formula by Schoenfeld is compared to another formula by Lachin and Foulkes, and to an extension of Schoenfeld's formula allowing for stratification. Beyond the widely used (stratified) Cox model, we explore the lognormal shared frailty model, and a two‐step analysis approach as potential alternatives that attempt to adjust for interstrata heterogeneity. We carry out a simulation study for a trial with three strata and violations of the no‐interaction assumption. The extension of Schoenfeld's formula to heterogeneous strata effects provides the most reliable sample size with respect to desired versus actual power. The two‐step analysis and frailty model prove to be more robust against loss of power caused by heterogeneous treatment effects than the stratified Cox model and should be preferred in such situations.  相似文献   
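
As a point of reference for the sample size formulas compared in this abstract, the minimal sketch below implements the standard (unstratified) Schoenfeld approximation for the required number of events; the hazard ratio, allocation, and error rates are hypothetical inputs, and the stratified extension and the Lachin and Foulkes formula studied in the paper are not shown.

    from math import log
    from scipy.stats import norm

    def schoenfeld_events(hr, alpha=0.05, power=0.80, p_alloc=0.5):
        """Approximate number of events needed to detect hazard ratio `hr`
        with a two-sided level-`alpha` log-rank test (Schoenfeld's formula)."""
        z_a = norm.ppf(1 - alpha / 2)
        z_b = norm.ppf(power)
        return (z_a + z_b) ** 2 / (p_alloc * (1 - p_alloc) * log(hr) ** 2)

    # Hypothetical example: hazard ratio 0.7, 1:1 allocation, 80% power
    print(round(schoenfeld_events(0.7)))   # roughly 247 events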

2.
Ryman N, Jorde PE. Molecular Ecology, 2001, 10(10): 2361-2373
A variety of statistical procedures are commonly employed when testing for genetic differentiation. In a typical situation two or more samples of individuals have been genotyped at several gene loci by molecular or biochemical means, and in a first step a statistical test for allele frequency homogeneity is performed at each locus separately, using, e.g. the contingency chi-square test, Fisher's exact test, or some modification thereof. In a second step the results from the separate tests are combined for evaluation of the joint null hypothesis that there is no allele frequency difference at any locus, corresponding to the important case where the samples would be regarded as drawn from the same statistical and, hence, biological population. Presently, there are two conceptually different strategies in use for testing the joint null hypothesis of no difference at any locus. One approach is based on the summation of chi-square statistics over loci. Another method is employed by investigators applying the Bonferroni technique (adjusting the P-value required for rejection to account for the elevated alpha errors when performing multiple tests simultaneously) to test if the heterogeneity observed at any particular locus can be regarded significant when considered separately. Under this approach the joint null hypothesis is rejected if one or more of the component single locus tests is considered significant under the Bonferroni criterion. We used computer simulations to evaluate the statistical power and realized alpha errors of these strategies when evaluating the joint hypothesis after scoring multiple loci. We find that the 'extended' Bonferroni approach generally is associated with low statistical power and should not be applied in the current setting. Further, and contrary to what might be expected, we find that 'exact' tests typically behave poorly when combined in existing procedures for joint hypothesis testing. Thus, while exact tests are generally to be preferred over approximate ones when testing each particular locus, approximate tests such as the traditional chi-square seem preferable when addressing the joint hypothesis.  相似文献   
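
A minimal sketch of the two strategies contrasted above, applied to hypothetical allele-count tables; it is only meant to make the distinction concrete, not to reproduce the simulations of the paper.

    import numpy as np
    from scipy.stats import chi2, chi2_contingency

    # Hypothetical allele-count tables (rows = samples, columns = alleles), one per locus
    locus_tables = [np.array([[30, 20], [22, 28]]),
                    np.array([[15, 25, 10], [20, 18, 12]])]

    stats, dfs, pvals = [], [], []
    for table in locus_tables:
        stat, p, df, _ = chi2_contingency(table, correction=False)
        stats.append(stat); dfs.append(df); pvals.append(p)

    # Strategy 1: sum the contingency chi-square statistics over loci
    p_joint = chi2.sf(sum(stats), df=sum(dfs))

    # Strategy 2: 'extended' Bonferroni -- reject the joint null hypothesis if any
    # single-locus p-value is below alpha divided by the number of loci
    alpha = 0.05
    reject_bonferroni = min(pvals) < alpha / len(locus_tables)

    print(p_joint, reject_bonferroni)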

3.
A common problem encountered in medical applications is testing the overall homogeneity of survival distributions when two survival curves cross each other. A survey demonstrated that under this condition, which is an obvious violation of the proportional hazards assumption, the log-rank test was still used in 70% of studies. Several statistical methods have been proposed to solve this problem. However, in many applications, it is difficult to specify the types of survival differences and choose an appropriate method prior to analysis. Thus, we conducted an extensive series of Monte Carlo simulations to investigate the power and type I error rate of these procedures under various patterns of crossing survival curves with different censoring rates and distribution parameters. Our objective was to evaluate the strengths and weaknesses of the tests in different situations and for various censoring rates, and to recommend an appropriate test that will not fail for a wide range of applications. Simulation studies demonstrated that adaptive Neyman's smooth tests and the two-stage procedure offer higher power and greater stability than other methods when the survival distributions cross at early, middle or late times. Even for proportional hazards, both methods maintain acceptable power compared with the log-rank test. In terms of the type I error rate, the Renyi and Cramér-von Mises tests are relatively conservative, whereas the statistics of the Lin-Xu test exhibit apparent inflation as the censoring rate increases. Other tests produce results close to the nominal 0.05 level. In conclusion, adaptive Neyman's smooth tests and the two-stage procedure are found to be the most stable and feasible approaches for a variety of situations and censoring rates. Therefore, they are applicable to a wider spectrum of alternatives compared with other tests.

4.
A new statistical testing approach is developed for rodent tumorigenicity assays that have a single terminal sacrifice or occasionally interim sacrifices but not cause‐of‐death data. For experiments that lack cause‐of‐death data, statistically imputed numbers of fatal tumors and incidental tumors are used to modify Peto's cause‐of‐death test which is usually implemented using pathologist‐assigned cause‐of‐death information. The numbers of fatal tumors are estimated using a constrained nonparametric maximum likelihood estimation method. A new Newton‐based approach under inequality constraints is proposed for finding the global maximum likelihood estimates. In this study, the proposed method is concentrated on data with a single sacrifice experiment without implementing further assumptions. The new testing approach may be more reliable than Peto's test because of the potential for a misclassification of cause‐of‐death by pathologists. A Monte Carlo simulation study for the proposed test is conducted to assess size and power of the test. Asymptotic normality for the statistic of the proposed test is also investigated. The proposed testing approach is illustrated using a real data set.  相似文献   

5.
Monte Carlo simulation methods are commonly used for assessing the performance of statistical tests in finite-sample scenarios. They help us check whether tests with an approximate level, e.g. asymptotic tests, attain the nominal level. Additionally, a simulation can assess the quality of a test under the alternative. The latter can be used to compare new tests with established tests under certain assumptions in order to determine a preferable test given the characteristics of the data. The key problem for such investigations is the choice of a goodness criterion. We extend the expected p-value, as considered by Sackrowitz and Samuel-Cahn (1999), to the context of univariate equivalence tests. This provides an effective tool for evaluating new proposals for equivalence testing because it does not depend on the distribution of the test statistic under the null hypothesis. It helps to avoid the often tedious search for the null distribution of test statistics that have no considerable advantage over already available methods. To demonstrate the usefulness in biometry, a comparison of established equivalence tests with a nonparametric approach is conducted in a simulation study under three distributional assumptions.
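
The expected p-value criterion is easy to estimate by simulation; the sketch below does so for an ordinary two-sample t-test under a hypothetical location alternative, purely to illustrate the criterion (the equivalence-testing setting of the paper is not reproduced).

    import numpy as np
    from scipy.stats import ttest_ind

    def expected_p_value(n=20, shift=0.5, n_sim=5000, seed=1):
        """Monte Carlo estimate of the expected p-value of a two-sample t-test
        under a normal location-shift alternative (smaller is better)."""
        rng = np.random.default_rng(seed)
        pvals = np.empty(n_sim)
        for i in range(n_sim):
            x = rng.normal(0.0, 1.0, n)
            y = rng.normal(shift, 1.0, n)
            pvals[i] = ttest_ind(x, y).pvalue
        return pvals.mean()

    print(expected_p_value())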

6.
Summary We present an adaptive percentile modified Wilcoxon rank sum test for the two‐sample problem. The test is basically a Wilcoxon rank sum test applied on a fraction of the sample observations, and the fraction is adaptively determined by the sample observations. Most of the theory is developed under a location‐shift model, but we demonstrate that the test is also meaningful for testing against more general alternatives. The test may be particularly useful for the analysis of massive datasets in which quasi‐automatic hypothesis testing is required. We investigate the power characteristics of the new test in a simulation study, and we apply the test to a microarray experiment on colorectal cancer. These empirical studies demonstrate that the new test has good overall power and that it succeeds better in finding differentially expressed genes as compared to other popular tests. We conclude that the new nonparametric test is widely applicable and that its power is comparable to the power of the Baumgartner‐Weiß‐Schindler test.  相似文献   
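
The sketch below is only a simplified, non-adaptive illustration of the idea of applying a rank-sum statistic to a fraction of the pooled observations: the fraction is fixed rather than chosen adaptively, and a permutation step is used for calibration instead of the asymptotic theory developed in the paper. The data and the chosen fraction are hypothetical.

    import numpy as np
    from scipy.stats import rankdata

    def upper_fraction_ranksum(x, y, frac=0.5, n_perm=2000, seed=0):
        """Rank sum of group-x observations among the upper `frac` of the pooled
        sample, with a two-sided permutation p-value (illustrative only)."""
        rng = np.random.default_rng(seed)
        pooled = np.concatenate([x, y])
        labels = np.r_[np.ones(len(x)), np.zeros(len(y))]
        cutoff = np.quantile(pooled, 1.0 - frac)
        keep = pooled >= cutoff
        ranks = rankdata(pooled[keep])

        def stat(lab):
            return ranks[lab[keep] == 1].sum()

        obs = stat(labels)
        perm = np.array([stat(rng.permutation(labels)) for _ in range(n_perm)])
        p = (np.sum(np.abs(perm - perm.mean()) >= abs(obs - perm.mean())) + 1) / (n_perm + 1)
        return obs, p

    rng = np.random.default_rng(2)
    print(upper_fraction_ranksum(rng.normal(0, 1, 30), rng.normal(0.8, 1, 30)))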

7.

Background  

The detection of truly significant cases under multiple testing is becoming a fundamental issue when analyzing high-dimensional biological data. Unfortunately, known multitest adjustments lose statistical power as the number of tests increases. We propose a new multitest adjustment, based on a sequential goodness-of-fit metatest (SGoF), whose statistical power increases with the number of tests. The method is compared with Bonferroni and FDR-based alternatives by simulating a multitest context via two different kinds of tests: 1) the one-sample t-test, and 2) the homogeneity G-test.
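
A rough sketch of the SGoF idea as described here: compare the number of p-values at or below a threshold with its binomial expectation under the joint null. The threshold, the data, and the omission of the sequential refinement are all simplifications.

    import numpy as np
    from scipy.stats import binomtest   # scipy >= 1.7

    def sgof_style_count_test(pvals, gamma=0.05):
        """Is the number of p-values <= gamma larger than expected under the
        joint null hypothesis, where p-values are uniform on (0, 1)?"""
        pvals = np.asarray(pvals)
        k = int((pvals <= gamma).sum())
        return k, binomtest(k, n=len(pvals), p=gamma, alternative='greater').pvalue

    # Hypothetical mixture: 95 null p-values plus 5 very small ones
    rng = np.random.default_rng(0)
    pvals = np.concatenate([rng.uniform(size=95), rng.uniform(0.0, 0.01, size=5)])
    print(sgof_style_count_test(pvals))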

8.
Brown RP. Genetica, 1997, 101(1): 67-74
Heterogeneous phenotypic correlations may be suggestive of underlying changes in genetic covariance among life-history, morphology, and behavioural traits, and their detection is therefore relevant to many biological studies. Two new statistical tests are proposed and their performances compared with existing methods. Of all tests considered, the existing approximate test of homogeneity of product-moment correlations provides the greatest power to detect heterogeneous correlations, when based on Hotelling's z*-transformation. The use of this transformation and test is recommended under conditions of bivariate normality. A new distribution-free randomisation test of homogeneity of Spearman's rank correlations is described and recommended for use when the bivariate samples are taken from populations with non-normal or unknown distributions. An alternative randomisation test of homogeneity of product-moment correlations is shown to be a useful compromise between the approximate tests and the randomisation tests on Spearman's rank correlations: it is not as sensitive to departures from normality as the approximate tests, but has greater power than the rank correlation test. An example is provided that shows how choice of test will have a considerable influence on the conclusions of a particular study.
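
For reference, the classical approximate chi-square test of homogeneity of product-moment correlations based on Fisher's z-transformation looks as follows; the Hotelling z*-refinement recommended in the paper is not included, and the correlations and sample sizes are made up.

    import numpy as np
    from scipy.stats import chi2

    def fisher_z_homogeneity(r, n):
        """Approximate chi-square test that k product-moment correlations are
        equal, based on Fisher's z-transformation."""
        r, n = np.asarray(r, float), np.asarray(n, float)
        z = np.arctanh(r)                     # Fisher's z
        w = n - 3.0                           # approximate inverse variances
        z_bar = np.sum(w * z) / np.sum(w)
        q = np.sum(w * (z - z_bar) ** 2)      # ~ chi-square with k-1 df under H0
        return q, chi2.sf(q, df=len(r) - 1)

    print(fisher_z_homogeneity(r=[0.45, 0.62, 0.30], n=[40, 55, 35]))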

9.
In survival studies with families or geographical units, it may be of interest to test whether such groups are homogeneous for given explanatory variables. In this paper we consider score-type tests for group homogeneity based on a mixing model in which the group effect is modelled as a random variable. As opposed to hazard-based frailty models, this model yields survival times that, conditioned on the random effect, have an accelerated failure time representation. The test statistic requires only estimation of the conventional regression model without the random effect and does not require specifying the distribution of the random effect. The tests are derived for a Weibull regression model, and in the uncensored situation a closed form is obtained for the test statistic. A simulation study is used to compare the power of the tests. The proposed tests are applied to real data sets with censored data.

10.
Controlling for the multiplicity effect is an essential part of determining statistical significance in large-scale single-locus association genome scans on Single Nucleotide Polymorphisms (SNPs). Bonferroni adjustment is a commonly used approach due to its simplicity, but is conservative and has low power for large-scale tests. The permutation test, which is a powerful and popular tool, is computationally expensive and may mislead in the presence of family structure. We propose a computationally efficient and powerful multiple testing correction approach for Linkage Disequilibrium (LD) based Quantitative Trait Loci (QTL) mapping on the basis of graphical weighted-Bonferroni methods. The proposed multiplicity adjustment method synthesizes weighted Bonferroni-based closed testing procedures into a powerful and versatile graphical approach. By tailoring different priorities for the two hypothesis tests involved in LD based QTL mapping, we are able to increase power and maintain computational efficiency and conceptual simplicity. The proposed approach enables strong control of the familywise error rate (FWER). The performance of the proposed approach as compared to the standard Bonferroni correction is illustrated by simulation and real data. We observe a consistent and moderate increase in power under all simulated circumstances, among different sample sizes, heritabilities, and number of SNPs. We also applied the proposed method to a real outbred mouse HDL cholesterol QTL mapping project where we detected the significant QTLs that were highlighted in the literature, while still ensuring strong control of the FWER.  相似文献   
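
The abstract builds on the graphical weighted-Bonferroni framework; a generic sketch of that framework is given below with a hypothetical two-hypothesis graph and p-values, not the specific weights or graph used for the two LD-based QTL hypotheses in the paper.

    import numpy as np

    def graphical_bonferroni(pvals, w, G, alpha=0.05):
        """Graphical weighted-Bonferroni procedure: reject H_j when
        p_j <= w_j * alpha, then pass its weight along the graph G."""
        p = np.asarray(pvals, float)
        w = np.asarray(w, float).copy()
        G = np.asarray(G, float).copy()
        m = len(p)
        rejected = np.zeros(m, dtype=bool)
        while True:
            candidates = [j for j in range(m) if not rejected[j] and p[j] <= w[j] * alpha]
            if not candidates:
                return rejected
            j = candidates[0]
            rejected[j] = True
            w_new, G_new = w.copy(), np.zeros_like(G)
            for k in range(m):
                if rejected[k]:
                    continue
                w_new[k] = w[k] + w[j] * G[j, k]          # inherit the freed weight
                for l in range(m):
                    if l == k or rejected[l]:
                        continue
                    denom = 1.0 - G[k, j] * G[j, k]
                    if denom > 0:
                        G_new[k, l] = (G[k, l] + G[k, j] * G[j, l]) / denom
            w, G = w_new, G_new
            w[j] = 0.0

    # Hypothetical example: split alpha 0.8/0.2 between two hypotheses and pass
    # all weight to the other one upon rejection.
    print(graphical_bonferroni([0.012, 0.04], w=[0.8, 0.2], G=[[0.0, 1.0], [1.0, 0.0]]))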

11.
Knowledge of statistical power is essential for sampling design and data evaluation when testing for genetic differentiation. Yet, such information is typically missing in studies of conservation and evolutionary genetics, most likely because of complex interactions between the many factors that affect power. powsim is a 32‐bit Windows/DOS simulation‐based computer program that estimates power (and α error) for chi‐square and Fisher's exact tests when evaluating the hypothesis of genetic homogeneity. Optional combinations include the number of samples, sample sizes, number of loci and alleles, allele frequencies, and degree of differentiation (quantified as FST). powsim is available at http://www.zoologi.su.se/~ryman .  相似文献   

12.
The classical normal-theory tests for testing the null hypothesis of common variance and the classical estimates of scale have long been known to be quite nonrobust to even mild deviations from normality assumptions for moderate sample sizes. Levene (1960) suggested a one-way ANOVA type statistic as a robust test. Brown and Forsythe (1974) considered a modified version of Levene's test by replacing the sample means with sample medians as estimates of population locations, and their test is computationally the simplest among the three tests recommended by Conover, Johnson, and Johnson (1981) in terms of robustness and power. In this paper a new robust and powerful test for homogeneity of variances is proposed based on a modification of Levene's test using the weighted likelihood estimates (Markatou, Basu, and Lindsay, 1996) of the population means. For two and three populations the proposed test using the Hellinger distance based weighted likelihood estimates is observed to achieve better empirical level and power than Brown-Forsythe's test in symmetric distributions having a thicker tail than the normal, and higher empirical power in skew distributions under the use of F distribution critical values.
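
Both classical variants discussed above are available in scipy (center='mean' for Levene's original statistic, center='median' for the Brown-Forsythe modification); the heavy-tailed samples below are hypothetical, and the weighted-likelihood modification proposed in the paper is not implemented there.

    import numpy as np
    from scipy.stats import levene

    rng = np.random.default_rng(42)
    # Hypothetical heavy-tailed samples with unequal scale in the second group
    a = rng.standard_t(df=3, size=40)
    b = 1.5 * rng.standard_t(df=3, size=40)
    c = rng.standard_t(df=3, size=40)

    stat_levene, p_levene = levene(a, b, c, center='mean')    # Levene (1960)
    stat_bf, p_bf = levene(a, b, c, center='median')          # Brown and Forsythe (1974)
    print(p_levene, p_bf)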

13.
Delayed separation of survival curves is a common occurrence in confirmatory studies in immuno-oncology. Many novel statistical methods that aim to efficiently capture potential long-term survival improvements have been proposed in recent years. However, the vast majority do not consider stratification, which is a major limitation considering that most large confirmatory studies currently employ a stratified primary analysis. In this article, we combine recently proposed weighted log-rank tests that have been designed to work well under a delayed separation of survival curves, with stratification by a baseline variable. The aim is to increase the efficiency of the test when the stratifying variable is highly prognostic for survival. As there are many potential ways to combine the two techniques, we compare several possibilities in an extensive simulation study. We also apply the techniques retrospectively to two recent randomized clinical trials.  相似文献   
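
One natural way to combine the two ingredients is sketched below: compute a Fleming-Harrington G(rho, gamma) weighted log-rank contribution within each stratum and pool the numerators and variances across strata. This is only one of the possible combinations compared in the paper, and the two-group data layout, tie handling, and default weights are simplifying assumptions.

    import numpy as np

    def fh_logrank_parts(time, event, group, rho=0.0, gamma=1.0):
        """Fleming-Harrington G(rho, gamma) weighted log-rank pieces for two
        groups (group coded 0/1): returns (weighted O-E sum, variance)."""
        time, event, group = map(np.asarray, (time, event, group))
        U = V = 0.0
        S_prev = 1.0                               # pooled Kaplan-Meier just before t
        for t in np.unique(time[event == 1]):      # distinct event times, in order
            at_risk = time >= t
            n, n1 = at_risk.sum(), (at_risk & (group == 1)).sum()
            d = ((time == t) & (event == 1)).sum()
            d1 = ((time == t) & (event == 1) & (group == 1)).sum()
            w = S_prev ** rho * (1.0 - S_prev) ** gamma
            U += w * (d1 - d * n1 / n)
            if n > 1:
                V += w ** 2 * d * (n1 / n) * (1.0 - n1 / n) * (n - d) / (n - 1)
            S_prev *= 1.0 - d / n
        return U, V

    def stratified_fh_logrank(time, event, group, stratum, rho=0.0, gamma=1.0):
        """Pool the weighted O-E sums and variances over strata into one z-statistic."""
        time, event, group, stratum = map(np.asarray, (time, event, group, stratum))
        parts = [fh_logrank_parts(time[stratum == s], event[stratum == s],
                                  group[stratum == s], rho, gamma)
                 for s in np.unique(stratum)]
        U = sum(u for u, _ in parts)
        V = sum(v for _, v in parts)
        return U / np.sqrt(V)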

14.
Conditional exact tests of homogeneity of two binomial proportions are often used in small samples, because exact tests guarantee that the size stays at or below the nominal level. Fisher's exact test, the exact chi-squared test and the exact likelihood ratio test are popular and can be implemented in the software StatXact. In this paper we investigate which test is best in small samples in terms of unconditional exact power. In the equal-sample-size case it is proved that the three tests produce the same unconditional exact power. A symmetry of the unconditional exact power is also found. In the unequal-sample-size case the unconditional exact powers of the three tests are computed and compared. In most cases Fisher's exact test turns out to be best, but we characterize some cases in which the exact likelihood ratio test has the highest unconditional exact power.
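
To make the comparison concrete, here is a sketch of two of the conditional exact tests on a hypothetical 2x2 table. Both condition on the margins, so the table is indexed by a hypergeometric count; Fisher's test orders the tables by their null probability, whereas the exact chi-squared test orders them by the Pearson statistic. The exact likelihood ratio variant would simply swap in a different ordering statistic.

    import numpy as np
    from scipy.stats import hypergeom

    def conditional_exact_tests(x1, n1, x2, n2):
        """Exact conditional p-values for H0: p1 = p2 in a 2x2 table,
        conditioning on the total number of successes m = x1 + x2."""
        m, N = x1 + x2, n1 + n2
        support = np.arange(max(0, m - n2), min(n1, m) + 1)
        probs = hypergeom.pmf(support, N, m, n1)   # P(X = k | margins)

        def pearson(k):
            # Pearson chi-square of the table with k successes in sample 1
            expected = np.array([[n1 * m, n1 * (N - m)],
                                 [n2 * m, n2 * (N - m)]], float) / N
            observed = np.array([[k, n1 - k], [m - k, n2 - (m - k)]], float)
            return ((observed - expected) ** 2 / expected).sum()

        chi_obs, p_obs = pearson(x1), hypergeom.pmf(x1, N, m, n1)
        p_fisher = probs[probs <= p_obs + 1e-12].sum()
        mask = np.array([pearson(k) >= chi_obs - 1e-12 for k in support])
        p_exact_chi = probs[mask].sum()
        return p_fisher, p_exact_chi

    # Hypothetical small-sample table: 7/10 successes versus 2/9
    print(conditional_exact_tests(7, 10, 2, 9))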

15.
Freidlin B. Biometrics, 1999, 55(1): 264-267
By focusing on a confidence interval for a nuisance parameter, Berger and Boos (1994, Journal of the American Statistical Association 89, 1012-1016) proposed new unconditional tests. In particular, they showed that, for a 2 x 2 table, this procedure generally was more powerful than Fisher's exact test. This paper utilizes and extends their approach to obtain unconditional tests for combining several 2 x 2 tables and testing for trend and homogeneity in a 2 x K table. The unconditional procedures are compared to the conditional ones by reanalyzing some published biomedical data.  相似文献   

16.
Since the seminal work of Prentice and Pyke, the prospective logistic likelihood has become the standard method of analysis for retrospectively collected case‐control data, in particular for testing the association between a single genetic marker and a disease outcome in genetic case‐control studies. In the study of multiple genetic markers with relatively small effects, especially those with rare variants, various aggregated approaches based on the same prospective likelihood have been developed to integrate subtle association evidence among all the markers considered. Many of the commonly used tests are derived from the prospective likelihood under a common‐random‐effect assumption, which assumes a common random effect for all subjects. We develop the locally most powerful aggregation test based on the retrospective likelihood under an independent‐random‐effect assumption, which allows the genetic effect to vary among subjects. In contrast to the fact that disease prevalence information cannot be used to improve efficiency for the estimation of odds ratio parameters in logistic regression models, we show that it can be utilized to enhance the testing power in genetic association studies. Extensive simulations demonstrate the advantages of the proposed method over the existing ones. A real genome‐wide association study is analyzed for illustration.  相似文献   

17.
Preference testing is commonly used in consumer sensory evaluation. Traditionally, it is done without replication, effectively leading to a single 0/1 (binary) measurement on each panelist. However, to understand the nature of the preference, replicated preference tests are a better approach, resulting in binomial counts of preferences on each panelist. Variability among panelists then leads to overdispersion of the counts when the binomial model is used and to an inflated Type I error rate for statistical tests of preference. Overdispersion can be adjusted by Pearson correction or by other models such as correlated binomial or beta‐binomial. Several methods are suggested or reviewed in this study for analyzing replicated preference tests and their Type I error rates and power are compared. Simulation studies show that all methods have reasonable Type I error rates and similar power. Among them, the binomial model with Pearson adjustment is probably the safest way to analyze replicated preference tests, while a normal model in which the binomial distribution is not assumed is the easiest.  相似文献   
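
One common way to implement the Pearson adjustment described above is a quasi-binomial correction: estimate a dispersion factor from the panelist-level counts and inflate the binomial variance of the overall preference proportion accordingly. The counts below are hypothetical, and the correlated-binomial and beta-binomial models mentioned in the abstract are not shown.

    import numpy as np
    from scipy.stats import norm

    def pearson_adjusted_preference_test(x, m):
        """Test H0: preference probability = 0.5 from replicated preference counts
        x_i out of m replicates per panelist, with a Pearson dispersion correction."""
        x = np.asarray(x, float)
        k = len(x)
        p_hat = x.sum() / (k * m)
        # Pearson dispersion across panelists (floored at 1, i.e. no deflation)
        chi2_stat = np.sum((x - m * p_hat) ** 2 / (m * p_hat * (1.0 - p_hat)))
        phi = max(chi2_stat / (k - 1), 1.0)
        se = np.sqrt(phi * p_hat * (1.0 - p_hat) / (k * m))
        z = (p_hat - 0.5) / se
        return p_hat, phi, 2 * norm.sf(abs(z))

    # Hypothetical data: 20 panelists, 6 replicated preference choices each
    counts = [6, 5, 6, 1, 4, 6, 2, 5, 6, 3, 6, 5, 1, 6, 4, 6, 5, 6, 2, 6]
    print(pearson_adjusted_preference_test(counts, m=6))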

18.
In two‐stage group sequential trials with a primary and a secondary endpoint, the overall type I error rate for the primary endpoint is often controlled by an α‐level boundary, such as an O'Brien‐Fleming or Pocock boundary. Following a hierarchical testing sequence, the secondary endpoint is tested only if the primary endpoint achieves statistical significance either at an interim analysis or at the final analysis. To control the type I error rate for the secondary endpoint, this is tested using a Bonferroni procedure or any α‐level group sequential method. In comparison with marginal testing, there is an overall power loss for the test of the secondary endpoint since a claim of a positive result depends on the significance of the primary endpoint in the hierarchical testing sequence. We propose two group sequential testing procedures with improved secondary power: the improved Bonferroni procedure and the improved Pocock procedure. The proposed procedures use the correlation between the interim and final statistics for the secondary endpoint while applying graphical approaches to transfer the significance level from the primary endpoint to the secondary endpoint. The procedures control the familywise error rate (FWER) strongly by construction and this is confirmed via simulation. We also compare the proposed procedures with other commonly used group sequential procedures in terms of control of the FWER and the power of rejecting the secondary hypothesis. An example is provided to illustrate the procedures.  相似文献   

19.
Multiple diagnostic tests and risk factors are commonly available for many diseases. This information can be either redundant or complementary. Combining them may improve diagnostic/predictive accuracy, but may also unnecessarily increase complexity, risks, and/or costs. The improvement in accuracy gained by including additional variables can be evaluated by the increment of the area under the receiver-operating characteristic curve (AUC) with and without the new variable(s). In this study, we derive a new test statistic to accurately and efficiently determine the statistical significance of this incremental AUC under a multivariate normality assumption. Our test links the AUC difference to a quadratic form of a standardized mean shift in the metric of the inverse covariance matrix, through an appropriate linear transformation of all diagnostic variables. The distribution of the quadratic estimator is related to the multivariate Behrens-Fisher problem. We provide explicit mathematical solutions for the estimator and its approximate non-central F-distribution, type I error rate, and sample size formula. We use simulation studies to show that our new test maintains prespecified type I error rates as well as reasonable statistical power under practical sample sizes. We use data from the Study of Osteoporotic Fractures as an application example to illustrate our method.
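
Under multivariate normality with a common covariance matrix (a simplification of the setting above, which allows unequal covariances and hence the Behrens-Fisher issue), the AUC of the best linear combination of markers has the closed form Phi(sqrt(delta' Sigma^{-1} delta / 2)), so an incremental AUC can be computed directly from hypothetical parameters as below; this is not the paper's test statistic itself.

    import numpy as np
    from scipy.stats import norm

    def auc_best_linear(delta, sigma):
        """AUC of the optimal linear combination of normal markers with mean
        difference `delta` and common covariance matrix `sigma`."""
        delta = np.asarray(delta, float)
        d2 = delta @ np.linalg.solve(np.asarray(sigma, float), delta)  # squared Mahalanobis distance
        return norm.cdf(np.sqrt(d2 / 2.0))

    # Hypothetical parameters: two existing markers plus one candidate marker
    delta = np.array([0.6, 0.4, 0.5])
    sigma = np.array([[1.0, 0.3, 0.2],
                      [0.3, 1.0, 0.1],
                      [0.2, 0.1, 1.0]])
    auc_full = auc_best_linear(delta, sigma)
    auc_reduced = auc_best_linear(delta[:2], sigma[:2, :2])
    print(auc_reduced, auc_full, auc_full - auc_reduced)   # incremental AUC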

20.
This paper investigates homogeneity test of rate ratios in stratified matched-pair studies on the basis of asymptotic and bootstrap-resampling methods. Based on the efficient score approach, we develop a simple and computationally tractable score test statistic. Several other homogeneity test statistics are also proposed on the basis of the weighted least-squares estimate and logarithmic transformation. Sample size formulae are derived to guarantee a pre-specified power for the proposed tests at the pre-given significance level. Empirical results confirm that (i) the modified score statistic based on the bootstrap-resampling method performs better in the sense that its empirical type I error rate is much closer to the pre-specified nominal level than those of other tests and its power is greater than those of other tests, and is hence recommended, whilst the statistics based on the weighted least-squares estimate and logarithmic transformation are slightly conservative under some of the considered settings; (ii) the derived sample size formulae are rather accurate in the sense that their empirical powers obtained from the estimated sample sizes are very close to the pre-specified nominal powers. A real example is used to illustrate the proposed methodologies.  相似文献   
