Similar Articles
20 similar articles found.
1.
Adaptive two‐stage designs allow a data‐driven change of design characteristics during the ongoing trial. One of the available options is an adaptive choice of the test statistic for the second stage of the trial based on the results of the interim analysis. Since there is often only vague knowledge of the distribution shape of the primary endpoint in the planning phase of a study, a change of the test statistic may be considered if the data indicate that the assumptions underlying the initial choice of the test are not correct. Collings and Hamilton proposed a bootstrap method for estimating the power of the two‐sample Wilcoxon test for shift alternatives. We use this approach for the selection of the test statistic. By means of a simulation study, we show that the gain in power may be considerable when the initial assumption about the underlying distribution was wrong, whereas the loss is relatively small when the optimal test statistic was chosen in the first instance. The results also hold for comparison with a one‐stage design. Application of the method is illustrated by a clinical trial example.
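A minimal sketch of the Collings–Hamilton-style bootstrap power estimate for the two‐sample Wilcoxon (Mann–Whitney) test, resampling from pilot (interim) data and applying a shift to one arm. The function name, parameters, and the normal stand-in for the pilot sample are illustrative assumptions, not the authors' implementation:

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)

def bootstrap_power(pilot, delta, n_per_arm, n_boot=500, alpha=0.05):
    """Estimate the power of the two-sample Wilcoxon test against a
    location shift `delta` by resampling pilot data with replacement."""
    rejections = 0
    for _ in range(n_boot):
        x = rng.choice(pilot, size=n_per_arm, replace=True)
        y = rng.choice(pilot, size=n_per_arm, replace=True) + delta
        p = mannwhitneyu(x, y, alternative="two-sided").pvalue
        if p < alpha:
            rejections += 1
    return rejections / n_boot

pilot = rng.normal(0.0, 1.0, size=60)   # stand-in for interim data
power = bootstrap_power(pilot, delta=1.0, n_per_arm=30)
```

The same loop can be run with a different test statistic in place of `mannwhitneyu` to compare candidate second-stage tests on the interim data.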

2.
3.
Paired data arise in a wide variety of applications where the underlying distribution of the paired differences is often unknown. When the differences are normally distributed, the t‐test is optimal. On the other hand, if the differences are not normal, the t‐test can have substantially less power than the appropriate optimal test, which depends on the unknown distribution. When the normality of the differences is questionable, textbooks typically suggest the non‐parametric Wilcoxon signed rank test. An adaptive procedure that uses the Shapiro–Wilk test of normality to decide between the t‐test and the Wilcoxon signed rank test has been employed in several studies. Faced with data from heavy tails, the U.S. Environmental Protection Agency (EPA) introduced another approach: it applies both the sign and t‐tests to the paired differences, and the alternative hypothesis is accepted if either test is significant. This paper investigates the statistical properties of a currently used adaptive test and the EPA's method, and suggests an alternative technique. The new procedure is easy to use and generally has higher empirical power than currently used methods, especially when the differences are heavy‐tailed.
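The two decision rules described above can be sketched as follows (a hedged illustration with scipy; function names and the significance threshold for the pretest are illustrative assumptions):

```python
import numpy as np
from scipy.stats import shapiro, ttest_1samp, wilcoxon, binomtest

def adaptive_paired_test(diff, alpha=0.05):
    """Shapiro-Wilk pretest: use the paired t-test if the differences
    look normal, otherwise the Wilcoxon signed rank test."""
    if shapiro(diff).pvalue > alpha:
        return "t", ttest_1samp(diff, 0.0).pvalue
    return "wilcoxon", wilcoxon(diff).pvalue

def epa_composite_test(diff, alpha=0.05):
    """EPA-style rule: apply both the sign test and the t-test to the
    paired differences; significant if either test rejects."""
    p_t = ttest_1samp(diff, 0.0).pvalue
    n_nonzero = int(np.sum(diff != 0))
    n_pos = int(np.sum(diff > 0))
    p_sign = binomtest(n_pos, n_nonzero, 0.5).pvalue
    return bool(p_t < alpha or p_sign < alpha)
```

Note that the pretest-based rule changes the sampling distribution of the final p-value, which is exactly the kind of property the paper investigates.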

4.
For nonnormal data we suggest a test of location based on a broader family of distributions than the normal. Such a test falls, in a sense, between the standard parametric and nonparametric tests. We show that the Wald tests based on this family of distributions have some advantages over the score tests and that they perform well in comparison with standard parametric and nonparametric tests in a variety of situations. We also consider when and how to apply such tests in practice.

5.
Summary A two‐stage design is cost‐effective for genome‐wide association studies (GWAS) testing hundreds of thousands of single nucleotide polymorphisms (SNPs). In this design, each SNP is genotyped in stage 1 using a fraction of the case–control samples. Top‐ranked SNPs are selected and genotyped in stage 2 using additional samples. A joint analysis, combining statistics from both stages, is applied in the second stage. Follow‐up studies can be regarded as a two‐stage design: once some potential SNPs are identified, independent samples are further genotyped and analyzed separately or jointly with the previous data to confirm the findings. When the underlying genetic model is known, an asymptotically optimal trend test (TT) can be used at each analysis. In practice, however, the genetic models for SNPs with true associations are usually unknown. In this case, the existing methods for analysis of the two‐stage design and follow‐up studies are not robust across different genetic models. We propose a simple robust procedure with genetic model selection for the two‐stage GWAS. Our results show that, if the optimal TT has about 80% power when the genetic model is known, then the existing methods for analysis of the two‐stage design have minimum power of about 20% across the four common genetic models (when the true model is unknown), while our robust procedure has minimum power of about 70% across the same genetic models. The results can also be applied to follow‐up and replication studies with a joint analysis.

6.
Summary Gilbert, Rossini, and Shankarappa (2005, Biometrics 61, 106–117) present four U‐statistic based tests to compare genetic diversity between different samples. The proposed tests improved upon previously used methods by accounting for the correlations in the data. We find, however, that the same correlations introduce an unacceptable bias in the sample estimators used for the variance and covariance of the inter‐sequence genetic distances for modest sample sizes. Here, we compute unbiased estimators for these quantities and test the resulting improvement using simulated data. We also show that, contrary to the claims in Gilbert et al., it is not always possible to apply the Welch–Satterthwaite approximate t‐test, and we provide explicit formulas for the degrees of freedom to be used when such an approximation is possible.

7.
The application of stabilized multivariate tests is demonstrated in the analysis of a two‐stage adaptive clinical trial with three treatment arms. Because of the clinical problem, the multiple comparisons include tests of superiority as well as a test for non‐inferiority, where non‐inferiority is (in the absence of absolute tolerance limits) expressed as a linear contrast of the three treatments. Special emphasis is placed on the combination of the three sources of multiplicity: multiple endpoints, multiple treatments, and the two stages of the adaptive design. In particular, the adaptation after the first stage comprises a change of the a priori order of hypotheses.

8.
The conditional exact tests of homogeneity of two binomial proportions are often used in small samples, because exact tests guarantee that the size stays below the nominal level. Fisher's exact test, the exact chi‐squared test and the exact likelihood ratio test are popular and can be implemented in the software StatXact. In this paper we investigate which test is best in small samples in terms of the unconditional exact power. When the sample sizes are equal, it is proved that the three tests produce the same unconditional exact power. A symmetry of the unconditional exact power is also found. When the sample sizes are unequal, the unconditional exact powers of the three tests are computed and compared. In most cases Fisher's exact test turns out to be best, but we characterize some cases in which the exact likelihood ratio test has the highest unconditional exact power. (© 2004 WILEY‐VCH Verlag GmbH & Co. KGaA, Weinheim)
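The unconditional exact power of such a test can be computed by enumerating all possible 2 × 2 tables and weighting the rejection region by the two binomial likelihoods. A small-sample sketch for Fisher's exact test (the function name and parameters are illustrative; the paper's computations for the other two exact tests are analogous):

```python
from scipy.stats import fisher_exact, binom

def fisher_unconditional_power(n1, n2, p1, p2, alpha=0.05):
    """Unconditional exact power of Fisher's exact test of H0: p1 = p2,
    enumerating every table (x1, x2) of successes out of (n1, n2)."""
    power = 0.0
    for x1 in range(n1 + 1):
        for x2 in range(n2 + 1):
            _, pval = fisher_exact([[x1, n1 - x1], [x2, n2 - x2]])
            if pval <= alpha:
                # weight this rejection by its probability under (p1, p2)
                power += binom.pmf(x1, n1, p1) * binom.pmf(x2, n2, p2)
    return power

size = fisher_unconditional_power(15, 15, 0.3, 0.3)   # attained size under H0
power = fisher_unconditional_power(15, 15, 0.2, 0.8)  # power at a large difference
```

Evaluating `size` at equal proportions shows the conservatism of the conditional test: the attained size stays below the nominal 0.05.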

9.
When applying the Cochran‐Armitage (CA) trend test for an association between a candidate allele and a disease in a case‐control study, a set of scores must be assigned to the genotypes. Sasieni (1997, Biometrics 53, 1253–1261) suggested scores for the recessive, additive, and dominant models but did not examine their statistical properties. Using the criterion of minimizing the sample size required for the CA trend test to achieve prespecified type I and type II errors, we show that the scores given by Sasieni (1997) are optimal for the recessive and dominant models and locally optimal for the additive one. Moreover, the additive scores are shown to be locally optimal for the multiplicative model. The tests are applied to a real dataset.
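A compact sketch of the CA trend test for a 2 × 3 genotype table, parameterized by the score vector so that Sasieni's choices, (0, 0, 1) recessive, (0, 1/2, 1) additive, (0, 1, 1) dominant, can be plugged in. This is a standard textbook form of the statistic, not the paper's optimality derivation; names are illustrative:

```python
import numpy as np
from scipy.stats import norm

def ca_trend_test(cases, controls, scores):
    """Cochran-Armitage trend test for case/control counts per genotype.
    Returns the z statistic and a two-sided asymptotic p-value."""
    r = np.asarray(cases, dtype=float)           # case counts per genotype
    n = r + np.asarray(controls, dtype=float)    # genotype column totals
    s = np.asarray(scores, dtype=float)
    N, R = n.sum(), r.sum()
    # score-weighted deviation of case counts from their null expectation
    U = np.sum(s * (r - R * n / N))
    var = R * (N - R) / N * (np.sum(s**2 * n) / N - (np.sum(s * n) / N) ** 2)
    z = U / np.sqrt(var)
    return z, 2.0 * norm.sf(abs(z))
```

The statistic is invariant to affine transformations of the scores, so (0, 1/2, 1) and (0, 1, 2) give identical results; only the relative spacing matters.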

10.
Brent A. Coull, Biometrics 2011, 67(2):486–494
Summary In many biomedical investigations, a primary goal is the identification of subjects who are susceptible to a given exposure or treatment of interest. We focus on methods for addressing this question in longitudinal studies when susceptibility is to be related to a subject's baseline or mean outcome level. In this context, we propose a random intercepts–functional slopes model that relaxes the assumption of linear association between random coefficients in existing mixed models and yields an estimate of the functional form of this relationship. We propose a penalized spline formulation for the nonparametric function that represents this relationship, and implement a fully Bayesian approach to model fitting. We investigate the frequentist performance of our method via simulation, and apply the model to data on the effects of particulate matter on coronary blood flow from an animal toxicology study. The general principles introduced here apply more broadly to settings in which interest focuses on the relationship between baseline and change over time.

11.
The available power tables for use in experimental design serve only limited practical purposes, since they are restricted to very few significance levels such as .01, .05, and .10. With these values, however, corrections for cumulating error probabilities, for example by the Dunn–Bonferroni method, usually cannot be carried out, because (very) low values of a, and sometimes even of α, are necessary. Therefore, power tables are presented that encompass a wide range of values for a (.0005 to .40), for power (.50 to .9995), and for 45 different values of the degrees of freedom of the numerator of the F ratio (u = 1 to 150). Four of the 16 tables are printed. Their use is demonstrated for some paradigmatic problems in univariate and multivariate analyses of variance and regression.

12.
A Monte Carlo simulation was conducted to determine the size and power of two proposed tests (the covariance and correlation tests) for three-factor interaction in 2 × 2 × 2 contingency tables. Results were compared to the log-odds ratio test statistic. The simulation showed the correlation test to be more conservative than the covariance test, but less so than the log-odds ratio test. The correlation test, however, was the most powerful of the three tests.

13.
The size and power of two proposed tests for linkage disequilibrium between two genes, each with two alleles, were investigated by Monte Carlo simulation. Results were compared with two commonly used statistics, the correlation coefficient r and the log-odds ratio test. Depending on the sign of the linkage disequilibrium, the new tests were found to be more powerful than either the correlation or the log-odds ratio test. On average over positive and negative linkage disequilibrium, however, the chi-square test based on the correlation coefficient was slightly more powerful than the other tests.
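The correlation-based chi-square test mentioned above has a simple closed form for gamete (haplotype) counts at two biallelic loci: X² = n r², with one degree of freedom under no disequilibrium. A sketch (function name and return convention are illustrative):

```python
import numpy as np
from scipy.stats import chi2

def ld_correlation_test(n_AB, n_Ab, n_aB, n_ab):
    """Chi-square test for linkage disequilibrium from the four gamete
    counts. Returns D, the correlation r, X^2 = n*r^2, and the p-value."""
    n = n_AB + n_Ab + n_aB + n_ab
    pA = (n_AB + n_Ab) / n            # allele frequency of A at locus 1
    pB = (n_AB + n_aB) / n            # allele frequency of B at locus 2
    D = n_AB / n - pA * pB            # disequilibrium coefficient
    r = D / np.sqrt(pA * (1 - pA) * pB * (1 - pB))
    x2 = n * r**2
    return D, r, x2, chi2.sf(x2, df=1)
```

Comparing this statistic against alternatives over many simulated tables, for both signs of D, reproduces the kind of size/power study the abstract describes.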

14.
Anthropogenic migration barriers fragment many populations and limit the ability of species to respond to climate‐induced biome shifts. Conservation actions designed to conserve habitat connectivity and mitigate barriers are needed to unite fragmented populations into larger, more viable metapopulations, and to allow species to track their climate envelope over time. Landscape genetic analysis provides an empirical means to infer landscape factors influencing gene flow and thereby inform such conservation actions. However, there are currently many methods available for model selection in landscape genetics, and considerable uncertainty as to which provide the greatest accuracy in identifying the true landscape model influencing gene flow among competing alternative hypotheses. In this study, we used population genetic simulations to evaluate the performance of seven regression‐based model selection methods on a broad array of landscapes that varied by the number and type of variables contributing to resistance, the magnitude and cohesion of resistance, as well as the functional relationship between variables and resistance. We also assessed the effect of transformations designed to linearize the relationship between genetic and landscape distances. We found that linear mixed effects models had the highest accuracy in every way we evaluated model performance; however, other methods also performed well in many circumstances, particularly when landscape resistance was high and the correlation among competing hypotheses was limited. Our results provide guidance for which regression‐based model selection methods provide the most accurate inferences in landscape genetic analysis and thereby best inform connectivity conservation actions.

15.
When a case‐control study is planned to include an internal validation study, the sample size of the study and the proportion of validated observations have to be determined. There are a variety of methods to accomplish this. In this article some possible procedures are compared in order to clarify whether considerable differences in the suggested optimal designs occur depending on the method used.

16.
We present a survey of sample size formulas derived in other papers for pairwise comparisons of k treatments and for comparisons of k treatments with a control. We consider the calculation of sample sizes with preassigned per‐pair, any‐pair and all‐pairs power for tests that control either the comparisonwise or the experimentwise type I error rate. The comparison exhibits interesting similarities between the parametric, nonparametric and binomial cases.
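As a concrete instance of such a formula, the per-group sample size for all k(k−1)/2 pairwise two-sample comparisons with preassigned per-pair power, controlling the experimentwise error by Bonferroni, can be sketched under a normal approximation (this is a generic textbook formula, not one of the surveyed paper's specific derivations; names are illustrative):

```python
import math
from scipy.stats import norm

def n_per_group(delta, sigma, k, alpha=0.05, power=0.80):
    """Per-group n so each pairwise two-sample comparison of k treatments
    has the given per-pair power at Bonferroni-adjusted level alpha/m."""
    m = k * (k - 1) // 2                        # number of pairwise comparisons
    z_a = norm.ppf(1.0 - alpha / (2 * m))       # adjusted two-sided quantile
    z_b = norm.ppf(power)
    return math.ceil(2.0 * ((z_a + z_b) * sigma / delta) ** 2)
```

For k = 2 and delta equal to one standard deviation, this recovers the familiar n ≈ 16 per group; as k grows, the Bonferroni adjustment drives n upward.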

17.
18.
Summary Absence of a perfect reference test is an acknowledged source of bias in diagnostic studies. In the case of tuberculous pleuritis, standard reference tests such as smear microscopy, culture and biopsy have poor sensitivity. Yet meta‐analyses of new tests for this disease have always assumed the reference standard is perfect, leading to biased estimates of the new test's accuracy. We describe a method for joint meta‐analysis of sensitivity and specificity of the diagnostic test under evaluation, while considering the imperfect nature of the reference standard. We use a Bayesian hierarchical model that takes into account within‐ and between‐study variability. We show how to obtain pooled estimates of sensitivity and specificity, and how to plot a hierarchical summary receiver operating characteristic curve. We describe extensions of the model to situations where multiple reference tests are used, and where index and reference tests are conditionally dependent. The performance of the model is evaluated using simulations and illustrated using data from a meta‐analysis of nucleic acid amplification tests (NAATs) for tuberculous pleuritis. The estimate of NAAT specificity was higher and the sensitivity lower compared to a model that assumed that the reference test was perfect.

19.
The data from the newly available 50 K SNP chip were used for tagging the genome‐wide footprints of positive selection in Holstein–Friesian cattle. For this purpose, we employed the recently described Extended Haplotype Homozygosity (EHH) test, which detects selection by measuring the characteristics of haplotypes within a single population. To assess the significance of these results formally, we compared the combination of frequency and the Relative Extended Haplotype Homozygosity value of each core haplotype with equally frequent haplotypes across the genome. A subset of the putative regions showing the highest significance in the genome‐wide EHH tests was mapped. We annotated genes to identify the possible influence they have on beneficial traits, using the Gene Ontology database. A panel of genes, including FABP3, CLPN3, SPERT, HTR2A5, ABCE1, BMP4 and PTGER2, was detected that overlapped with the most extreme P‐values. This panel comprises some interesting candidate genes and QTL, representing a broad range of economically important traits such as milk yield and composition, as well as reproductive and behavioural traits. We also report high values of linkage disequilibrium and a slower decay of haplotype homozygosity for some candidate regions harbouring major genes related to dairy quality. The results of this study provide a genome‐wide map of selection footprints in the Holstein genome, and can be used to better understand the mechanisms of selection in dairy cattle breeding.

20.
We consider a conceptual correspondence between the missing data setting and joint modeling of longitudinal and time‐to‐event outcomes. Based on this correspondence, we formulate an extended shared random effects joint model, for which we provide a characterization of missing at random that is in line with the one used in the missing data setting. The ideas are illustrated using data from a study on liver cirrhosis, contrasting the new framework with conventional joint models.


Copyright © Beijing Qinyun Technology Development Co., Ltd. 京ICP备09084417号