Similar Articles
20 similar articles found (search time: 15 ms).
1.
For two independent binomial samples, the usual exact confidence interval for the odds ratio based on the conditional approach can be very conservative. Recently, Agresti and Min (2002) showed that unconditional intervals are preferable to conditional intervals with small sample sizes. We use the unconditional approach to obtain a modified interval that is shorter and whose coverage probability is closer to, while remaining at least, the nominal confidence coefficient.
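The coverage behaviour described above can be explored numerically. The following sketch (Python with NumPy/SciPy assumed; the interval evaluated is an ordinary Woolf-type log-odds-ratio interval with a 0.5 cell correction, not the authors' modified unconditional interval, and the example values are hypothetical) computes the exact unconditional coverage probability of a confidence-interval method by summing two independent binomial probabilities over all possible outcomes:

```python
import numpy as np
from scipy.stats import binom, norm

def woolf_ci(x1, n1, x2, n2, level=0.95):
    """Woolf (log-OR +/- z*SE) interval with a 0.5 cell correction so that
    tables containing zero cells still yield a finite interval."""
    a, b = x1 + 0.5, n1 - x1 + 0.5
    c, d = x2 + 0.5, n2 - x2 + 0.5
    log_or = np.log(a * d / (b * c))
    se = np.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    z = norm.ppf(0.5 + level / 2)
    return np.exp(log_or - z * se), np.exp(log_or + z * se)

def exact_coverage(p1, p2, n1, n2, level=0.95):
    """Exact unconditional coverage: sum the two independent binomial
    probabilities of every outcome (x1, x2) whose interval contains the
    true odds ratio."""
    true_or = (p1 / (1 - p1)) / (p2 / (1 - p2))
    cover = 0.0
    for x1 in range(n1 + 1):
        for x2 in range(n2 + 1):
            lo, hi = woolf_ci(x1, n1, x2, n2, level)
            if lo <= true_or <= hi:
                cover += binom.pmf(x1, n1, p1) * binom.pmf(x2, n2, p2)
    return cover

# Example: coverage of this simple interval need not reach the nominal 95%.
print(exact_coverage(0.3, 0.6, n1=15, n2=15))
```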

2.
In the situation of several 2 × 2 tables, the asymptotic relative efficiencies of certain jackknife estimators of a common odds ratio are investigated in the case where the number of tables is fixed while the sample sizes within each table tend to infinity. The estimators show very good results over a wide range of parameters. Some situations in which the estimators have low asymptotic relative efficiency are pointed out.
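The specific jackknife estimators analysed in the paper are not reproduced here; as a hedged illustration of the general approach, the sketch below (Python, NumPy assumed, hypothetical tables) applies a leave-one-table-out jackknife to the Mantel-Haenszel common odds ratio across several 2 × 2 tables:

```python
import numpy as np

def mantel_haenszel_or(tables):
    """Mantel-Haenszel common odds ratio for a list of 2x2 tables [[a, b], [c, d]]."""
    num = sum(a * d / (a + b + c + d) for (a, b), (c, d) in tables)
    den = sum(b * c / (a + b + c + d) for (a, b), (c, d) in tables)
    return num / den

def jackknife_or(tables):
    """Leave-one-table-out jackknife estimate and standard error on the
    log-OR scale (illustrative; not the estimator analysed in the paper)."""
    k = len(tables)
    full = np.log(mantel_haenszel_or(tables))
    loo = np.array([np.log(mantel_haenszel_or(tables[:i] + tables[i + 1:]))
                    for i in range(k)])
    pseudo = k * full - (k - 1) * loo          # pseudo-values on the log scale
    est, se = pseudo.mean(), pseudo.std(ddof=1) / np.sqrt(k)
    return np.exp(est), se                     # OR estimate, SE of log-OR

tables = [[(12, 8), (5, 15)], [(20, 10), (9, 21)], [(7, 13), (4, 16)]]
print(jackknife_or(tables))
```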

3.
Liu Q, Chi GY. Biometrics 2001, 57(1):172-177
Proschan and Hunsberger (1995, Biometrics 51, 1315-1324) proposed a two-stage adaptive design that maintains the Type I error rate. For practical applications, a two-stage adaptive design is also required to achieve a desired statistical power while limiting the maximum overall sample size. In our proposal, a two-stage adaptive design consists of a main stage and an extension stage, where the main stage has sufficient power to reject the null under the anticipated effect size and the extension stage allows increasing the sample size in case the true effect size is smaller than anticipated. For statistical inference, methods for obtaining the overall adjusted p-value, point estimate and confidence intervals are developed. An exact two-stage test procedure is also outlined for robust inference.

4.
A 2 × 2 table is analyzed from the conditional viewpoint, using Fisher's exact test, for which there is abundant software nowadays (StatXact, SPSS with the module "Exact Test", …). Nevertheless, because it is well-nigh impossible to work the test out "by hand", it is customary in many books to analyze the table, in an approximate fashion, by using the classic chi-square test with the appropriate continuity correction, and this is what many researchers do in practice. Unfortunately, little research has been carried out on the validity conditions of the test (remember the classic advice that the minimum expected quantity E should be larger than or equal to 5), so it is applied indiscriminately, with the obvious danger of obtaining erroneous results. In this paper the exact validity conditions (which depend not only on E, but also on the real P-value, on the size of the sample, on the number of tails of the test and on the continuity correction used) are determined, showing that the classic condition E ≥ 5 can be either very strict or quite liberal (depending on the circumstances), and it can even be necessary for the test to have E > 20. A similar study is made for the binomial approximation, and the Poisson approximation is also discussed. The article ends with a list of rules and precautions to be borne in mind by researchers using the approximate methods. The strict rules are complex enough to make the researcher want to use the exact test, but simplified (and conservative) rules are also given for those unable to do so.
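For readers who want to compare the approximate and exact analyses directly, a minimal sketch (Python, SciPy assumed; the table values are hypothetical) that reports the Yates-corrected chi-square p-value, the Fisher exact p-value and the minimum expected count E:

```python
from scipy.stats import chi2_contingency, fisher_exact

table = [[8, 2],
         [1, 9]]          # hypothetical 2 x 2 table

# Classic chi-square test with Yates' continuity correction.
chi2, p_approx, dof, expected = chi2_contingency(table, correction=True)

# Fisher's exact (conditional) test, two-sided.
_, p_exact = fisher_exact(table, alternative="two-sided")

print("minimum expected count E:", expected.min())
print("chi-square (Yates) p-value:", p_approx)
print("Fisher exact p-value:      ", p_exact)
```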

5.
Al-Shiha and Yang (1999) proposed a multistage procedure for analysing unreplicated factorial experiments, based on a statistic derived from the generalised likelihood ratio test statistic under the assumption of normality. Their simulation study showed that the method is quite competitive with Lenth's (1989) method. In their paper, because of the difficulty of determining the null distribution analytically, the quantiles of the null distribution were simulated empirically. In this paper, we give the exact null distribution of their test statistic, which makes it possible to calculate the critical values of the test.

6.
An efficient recursive polynomial multiplication method is proposed for exact unconditional power calculation for unordered 2 × K contingency tables with up to moderate sample sizes. Our method can be applied to the family of cell-additive statistics, which includes the Freeman-Halton statistic, the Pearson χ2 statistic and the likelihood ratio statistic. We illustrate the proposed method with several numerical examples.

7.
In this paper the analysis of several proportions in a comparative study setting is discussed. The case of ordinal grouping variates is considered. F statistics are formulated to test for trend in the proportions over the scored values of the determinant variate. The null chi-square or F (t²) functions are presented separately for the unstratified and stratified analyses, and in either situation the corresponding functions with angular-transformed proportions are also expressed. Generalizations to deal with the parameters in the non-null range are outlined. Throughout, the close relation between the presented statistics and standard methods is pointed out.

8.
Case-parent trio studies considering genotype data from children affected by a disease and their parents are frequently used to detect single nucleotide polymorphisms (SNPs) associated with disease. The most popular statistical tests for this study design are transmission/disequilibrium tests (TDTs). Several types of these tests have been developed, for example, procedures based on alleles or genotypes. Therefore, it is of great interest to examine which of these tests have the highest statistical power to detect SNPs associated with disease. Comparisons of the allelic and the genotypic TDT for individual SNPs have so far been conducted based on simulation studies, since the test statistic of the genotypic TDT was determined numerically. Recently, however, it has been shown that this test statistic can be presented in closed form. In this article, we employ this analytic solution to derive equations for calculating the statistical power and the required sample size for different types of the genotypic TDT. The power of this test is then compared with that of the corresponding score test assuming the same mode of inheritance, as well as the allelic TDT based on a multiplicative mode of inheritance, which is equivalent to the score test assuming an additive mode of inheritance. This is thus the first time the powers of these tests are compared based on equations, yielding instant results and omitting the need for time-consuming simulation studies. This comparison reveals that these tests have almost the same power, with the score test being slightly more powerful.

9.
Regal RR, Hook EB. Biometrics 1999, 55(4):1241-1246
An exact conditional test for an M-way log-linear interaction in a fully observed 2^M contingency table is formulated. From this a procedure is derived for interval estimation of the total count N in a 2^M contingency table, one of whose entries is unobserved. This procedure has an immediate application to interval estimation of the size of a closed population from incomplete, overlapping lists of records, as in capture-recapture analysis of epidemiological data. Data on the prevalence of spina bifida in live births in upstate New York in 1969-1974 illustrate this application.
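The log-linear machinery for 2^M tables is not reproduced here; as a minimal illustration of the capture-recapture idea for the simplest case of M = 2 lists, the sketch below (plain Python, hypothetical counts) computes the classical Lincoln-Petersen estimate of the total count N, together with Chapman's bias-corrected variant:

```python
# Two overlapping lists of cases: n1 on list 1, n2 on list 2, m on both.
n1, n2, m = 120, 95, 40          # hypothetical counts

# Lincoln-Petersen estimator of the total count N.
N_lp = n1 * n2 / m

# Chapman's bias-corrected variant, usable even when m = 0.
N_chapman = (n1 + 1) * (n2 + 1) / (m + 1) - 1

print(N_lp, N_chapman)
```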

10.
Computations have been performed to find an adequate definition of exact two-sided probabilities in 2 × 2 contingency tables. It turns out that both the uncorrected χ2 statistic and Yates' correction for continuity give only unsatisfactory approximations to the exact probabilities of the hypergeometric distribution. The latter are therefore recommended for general use.
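The exact two-sided probability referred to above can be computed directly from the hypergeometric distribution by summing all point probabilities that do not exceed that of the observed table; a minimal sketch (Python, SciPy assumed, hypothetical counts) comparing it with the uncorrected and Yates-corrected chi-square approximations:

```python
import numpy as np
from scipy.stats import hypergeom, chi2_contingency

a, b, c, d = 3, 7, 9, 2                        # hypothetical 2 x 2 table
N, K, n = a + b + c + d, a + b, a + c          # total, row-1 total, column-1 total

# Exact two-sided probability: sum of all hypergeometric point probabilities
# that do not exceed that of the observed count a (small tolerance for ties).
support = np.arange(max(0, n - (N - K)), min(K, n) + 1)
pmf = hypergeom.pmf(support, N, K, n)
p_exact = pmf[pmf <= hypergeom.pmf(a, N, K, n) * (1 + 1e-9)].sum()

# Approximate p-values: uncorrected chi-square and Yates' continuity correction.
table = [[a, b], [c, d]]
_, p_uncorrected, _, _ = chi2_contingency(table, correction=False)
_, p_yates, _, _ = chi2_contingency(table, correction=True)

print(p_exact, p_uncorrected, p_yates)
```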

11.
In clinical trials, sample size reestimation is a useful strategy for mitigating the risk of uncertainty in design assumptions and ensuring sufficient power for the final analysis. In particular, sample size reestimation based on the unblinded interim effect size can often lead to a sample size increase, and statistical adjustment is usually needed for the final analysis to ensure that the type I error rate is appropriately controlled. In the current literature, sample size reestimation and the corresponding type I error control are discussed in the context of maintaining the original randomization ratio across treatment groups, which we refer to as "proportional increase." In practice, not all studies are designed based on an optimal randomization ratio, due to practical reasons. In such cases, when the sample size is to be increased, it is more efficient to allocate the additional subjects such that the randomization ratio is brought closer to an optimal ratio. In this research, we propose an adaptive randomization ratio change when a sample size increase is warranted. We refer to this strategy as "nonproportional increase," as the number of subjects added to each treatment group is no longer proportional to the original randomization ratio. The proposed method boosts power not only through the increase of the sample size but also via efficient allocation of the additional subjects. The control of the type I error rate is shown analytically. Simulations are performed to illustrate the theoretical results.
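A rough numerical illustration of the efficiency argument (not the authors' adjusted test; a simple normal-approximation power formula for a standardized difference is assumed, with hypothetical sample sizes) shows that the same number of additional subjects buys more power when allocated so as to move an unbalanced design toward a 1:1 ratio:

```python
from scipy.stats import norm

def power_two_sample(n1, n2, delta, sigma=1.0, alpha=0.05):
    """Approximate power of a two-sided two-sample z-test for a
    standardized difference delta."""
    se = sigma * (1 / n1 + 1 / n2) ** 0.5
    z_crit = norm.ppf(1 - alpha / 2)
    return norm.sf(z_crit - delta / se) + norm.cdf(-z_crit - delta / se)

# Original unbalanced design (3:1) and 80 additional subjects.
n1, n2, extra, delta = 150, 50, 80, 0.4
print("proportional increase:    ", power_two_sample(n1 + 60, n2 + 20, delta))
print("non-proportional increase:", power_two_sample(n1 + 30, n2 + 50, delta))
```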

12.
The paper considers the problem of determining the number of matched sets in 1 : M matched case-control studies with a categorical exposure having k + 1 categories, k ≥ 1. The basic interest lies in constructing a test statistic to test whether the exposure is associated with the disease. Estimates of the k odds ratios for 1 : M matched case-control studies with dichotomous exposure and for 1 : 1 matched case-control studies with exposure at several levels are presented in Breslow and Day (1980), but results holding in full generality were not available so far. We propose a score test for testing the hypothesis of no association between disease and the polychotomous exposure. We exploit the power function of this test statistic to calculate the required number of matched sets to detect specific departures from the null hypothesis of no association. We also consider the situation where there is a natural ordering among the levels of the exposure variable. For ordinal exposure variables, we propose a test for detecting trend in disease risk with increasing levels of the exposure variable. Our methods are illustrated with two datasets: a real dataset on colorectal cancer in rats and a simulated dataset for studying disease-gene association.

13.
In comparing two independent binomial proportions by modified χ2 tests (Gart's modified likelihood ratio, Overall and Starbuck's "tailored F", Pearson's adjusted χ2 [= χ2 multiplied by (n − 1)/n] and its skewness-corrected version proposed by Berchtold), it was found that the last two have error probabilities that are, on average, close to the significance level.

14.
15.
In a randomized clinical trial (RCT), noncompliance with an assigned treatment can occur due to serious side effects, while outcomes may be missing due to patients' withdrawal or loss to follow-up. To avoid the possible loss of power to detect a given risk difference (RD) of interest between two treatments, it is essential to incorporate the information on noncompliance and missing outcomes into the sample size calculation. Under the compound exclusion restriction model proposed elsewhere, we first derive the maximum likelihood estimator (MLE) of the RD among compliers between two treatments for an RCT with noncompliance and missing outcomes, and its asymptotic variance in closed form. Based on the MLE with the tanh⁻¹(x) transformation, we develop an asymptotic test procedure for testing equality of two treatment effects among compliers. We further derive a sample size calculation formula accounting for both noncompliance and missing outcomes for a desired power 1 − β at a nominal α level. To evaluate the performance of the test procedure and the accuracy of the sample size calculation formula, we employ Monte Carlo simulation to calculate the estimated Type I error and power of the proposed test procedure corresponding to the resulting sample size in a variety of situations. We find that both the test procedure and the sample size formula developed here perform well. Finally, we include a discussion on the effects of various parameters, including the proportion of compliers, the probability of non-missing outcomes, and the ratio of sample size allocation, on the minimum required sample size.
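The MLE-based formula derived in the paper is not reproduced here; the sketch below only illustrates, under common rough approximations and hypothetical inputs, how noncompliance (which dilutes the intention-to-treat effect) and missing outcomes inflate a sample size computed for full compliance and complete follow-up:

```python
# Hypothetical inputs: n_ideal is the per-arm sample size that would be
# needed with full compliance and no missing outcomes.
n_ideal = 200
p_compliers = 0.85       # proportion of compliers
p_observed = 0.90        # probability that the outcome is not missing

# Rough adjustment (not the paper's MLE-based formula): the ITT effect is
# diluted by the complier proportion, inflating n by 1/p_compliers**2,
# and missing outcomes inflate it further by 1/p_observed.
n_required = n_ideal / (p_compliers ** 2 * p_observed)
print(round(n_required))   # about 308 per arm for these hypothetical inputs
```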

16.
Tests for a monotonic trend between an ordered categorical exposure and disease status are routinely carried out from case-control data using the Mantel-extension trend test or the asymptotically equivalent Cochran-Armitage test. In this study, we considered two alternative tests based on isotonic regression, namely an order-restricted likelihood ratio test and an isotonic modification of the Mantel-extension test extending the recent proposal by Mancuso, Ahn and Chen (2001) to case-control data. Furthermore, we considered three tests based on contrasts, namely a single contrast (SC) test based on Schaafsma's coefficients, the Dosemeci and Benichou (DB) test, and a multiple contrast (MC) test based on the Helmert, reverse-Helmert and linear contrasts, and we derived their case-control versions. Using simulations, we compared the statistical properties of these five alternative tests to those of the Mantel-extension test under various patterns, including no relationship as well as monotonic and non-monotonic relationships between exposure and disease status. In the case of no relationship, all tests had close to nominal type I error except in situations combining a very unbalanced exposure distribution and small sample size, where the asymptotic versions of the three tests based on contrasts were highly anticonservative. The use of bootstrap instead of asymptotic versions corrected this anticonservatism. For monotonic patterns, all tests had similar power. For non-monotonic patterns, the DB test showed the most favourable results as it was the least powerful test. The two tests based on isotonic regression were the most powerful, and the Mantel-extension test, the SC test and the MC test had in-between powers. The six tests were applied to data from a case-control study investigating the relationship between alcohol consumption and risk of laryngeal cancer in Turkey. In situations with no evidence of a monotonic relationship between exposure and disease status, the three tests based on contrasts did not conclude in favour of a significant trend, whereas all the other tests did.
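The Cochran-Armitage trend test mentioned above has a simple closed form; a minimal sketch (Python, NumPy/SciPy assumed, hypothetical counts) for case-control counts over ordered exposure categories:

```python
import numpy as np
from scipy.stats import norm

def cochran_armitage(cases, controls, scores=None):
    """Cochran-Armitage test for trend in proportions across ordered
    exposure categories (asymptotic two-sided p-value)."""
    cases, controls = np.asarray(cases, float), np.asarray(controls, float)
    n = cases + controls                       # subjects per category
    N, R = n.sum(), cases.sum()                # total subjects, total cases
    t = np.arange(len(n)) if scores is None else np.asarray(scores, float)
    p_bar = R / N
    stat = np.sum(t * (cases - n * p_bar))
    var = p_bar * (1 - p_bar) * (np.sum(t ** 2 * n) - np.sum(t * n) ** 2 / N)
    z = stat / np.sqrt(var)
    return z, 2 * norm.sf(abs(z))

# Hypothetical case-control counts for four increasing exposure levels.
print(cochran_armitage(cases=[10, 18, 25, 32], controls=[40, 35, 28, 20]))
```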

17.
Matched case-control studies often include pairs with incomplete exposure information. This work presents and compares two estimators for the odds ratio that can be used when the exposures of some of the cases and controls are missing. A simulation study shows that the estimator that uses the marginal exposure frequencies is usually more efficient than the estimator based on discordant pairs.
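For complete pairs, the discordant-pairs estimator referred to above is simply the ratio of the two discordant counts; a minimal sketch with a Wald-type confidence interval (Python, SciPy assumed, hypothetical counts; the marginal-frequency estimator for incomplete pairs is not reproduced here):

```python
import numpy as np
from scipy.stats import norm

def discordant_pairs_or(n10, n01, level=0.95):
    """Conditional ML odds ratio from matched pairs: n10 pairs with an
    exposed case and unexposed control, n01 the reverse."""
    or_hat = n10 / n01
    se = np.sqrt(1 / n10 + 1 / n01)            # SE on the log-OR scale
    z = norm.ppf(0.5 + level / 2)
    lo, hi = np.exp(np.log(or_hat) - z * se), np.exp(np.log(or_hat) + z * se)
    return or_hat, lo, hi

print(discordant_pairs_or(n10=30, n01=15))     # hypothetical discordant counts
```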

18.
Let d = p2 − p1 be the difference between two binomial proportions obtained from two independent trials. For the parameter d, three pairs of hypotheses may be of interest: H1: d ≤ δ vs. K1: d > δ; H2: d ∉ (δ1, δ2) vs. K2: d ∈ (δ1, δ2); and H3: d ∈ [δ1, δ2] vs. K3: d ∉ [δ1, δ2], where Hi is the null hypothesis and Ki is the alternative hypothesis. These tests are useful in clinical trials, pharmacological and vaccine studies, and in statistics generally. The three problems may be investigated by exact unconditional tests when the sample sizes are moderate. Otherwise, one should use approximate (or asymptotic) tests, generally based on Z-statistics like those suggested in the paper. The article defines a new procedure for testing H2 or H3, demonstrates that it is more powerful than tests based on confidence intervals (the classic TOST, two one-sided tests), defines two corrections for continuity which reduce the liberality of the three tests, and selects the one that behaves better. The programs for executing the unconditional exact and asymptotic tests described in the paper can be downloaded at http://www.ugr.es/~bioest/software.htm.
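A minimal asymptotic version of the classic TOST comparison mentioned above (not the new procedure or the continuity corrections proposed in the paper; Python with SciPy assumed, hypothetical counts) can be sketched as follows:

```python
from math import sqrt
from scipy.stats import norm

def tost_two_proportions(x1, n1, x2, n2, delta1, delta2):
    """Asymptotic TOST for H2: d outside (delta1, delta2) vs.
    K2: d inside (delta1, delta2), with d = p2 - p1."""
    p1, p2 = x1 / n1, x2 / n2
    d = p2 - p1
    se = sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    p_lower = norm.sf((d - delta1) / se)       # one-sided test of d <= delta1
    p_upper = norm.cdf((d - delta2) / se)      # one-sided test of d >= delta2
    return max(p_lower, p_upper)               # reject H2 if this is small

# Hypothetical equivalence margins of +/- 0.10.
print(tost_two_proportions(45, 100, 48, 100, delta1=-0.10, delta2=0.10))
```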

19.
20.
The Mantel test is widely used to test the linear or monotonic independence of the elements in two distance matrices. It is one of the few appropriate tests when the hypothesis under study can only be formulated in terms of distances; this is often the case with genetic data. In particular, the Mantel test has been widely used to test for spatial relationship between genetic data and spatial layout of the sampling locations. We describe the domain of application of the Mantel test and derived forms. Formula development demonstrates that the sum-of-squares (SS) partitioned in Mantel tests and regression on distance matrices differs from the SS partitioned in linear correlation, regression and canonical analysis. Numerical simulations show that in tests of significance of the relationship between simple variables and multivariate data tables, the power of linear correlation, regression and canonical analysis is far greater than that of the Mantel test and derived forms, meaning that the former methods are much more likely than the latter to detect a relationship when one is present in the data. Examples of difference in power are given for the detection of spatial gradients. Furthermore, the Mantel test does not correctly estimate the proportion of the original data variation explained by spatial structures. The Mantel test should not be used as a general method for the investigation of linear relationships or spatial structures in univariate or multivariate data. Its use should be restricted to tests of hypotheses that can only be formulated in terms of distances.
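A minimal permutation implementation of the Mantel test itself (Python, NumPy assumed; the example distance matrices are synthetic):

```python
import numpy as np

def mantel_test(D1, D2, n_perm=9999, rng=None):
    """Mantel permutation test: Pearson correlation between the off-diagonal
    entries of two symmetric distance matrices, with significance assessed
    by permuting the objects of one matrix."""
    rng = np.random.default_rng(rng)
    n = D1.shape[0]
    iu = np.triu_indices(n, k=1)                   # upper-triangle entries
    r_obs = np.corrcoef(D1[iu], D2[iu])[0, 1]
    count = 1                                      # include the observed value
    for _ in range(n_perm):
        perm = rng.permutation(n)
        r_perm = np.corrcoef(D1[iu], D2[perm][:, perm][iu])[0, 1]
        if r_perm >= r_obs:
            count += 1
    return r_obs, count / (n_perm + 1)             # one-sided p-value

# Synthetic example: geographic distances vs. noisy "genetic" distances.
rng = np.random.default_rng(0)
xy = rng.normal(size=(20, 2))
D1 = np.linalg.norm(xy[:, None] - xy[None, :], axis=-1)
D2 = D1 + rng.normal(scale=0.5, size=D1.shape)
D2 = (D2 + D2.T) / 2
np.fill_diagonal(D2, 0)
print(mantel_test(D1, D2))
```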

