Similar Articles
20 similar articles found.
1.
Sample size determination for case-control studies of chronic disease is often based on the simple 2 × 2 tabular cross-classification of exposure and disease, thereby ignoring stratification that may be considered in the analysis. One consequence of this approach is that the sample size may be inadequate to attain a specified power and size when performing a statistical analysis on J 2 × 2 tables using Cochran's (1954, Biometrics 10, 417-451) statistic or the Mantel-Haenszel (1959, Journal of the National Cancer Institute 22, 719-748) statistic. A sample size formula is derived from Cochran's statistic and compared with the corresponding formula derived when the data are treated as unstratified, and also with two other formulas proposed for stratified data analysis. The formula developed yields values slightly higher than one recently proposed by Muñoz and Rosner (1984, Biometrics 40, 995-1004), which assumes that both margins of each 2 × 2 table are fixed, whereas the present study considers only the case-control margin to be fixed.
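For orientation, a minimal sketch of the unstratified two-proportion sample size that the stratified formula is benchmarked against; the significance level, power, and exposure probabilities below are illustrative assumptions, not values from the paper.

```python
from scipy.stats import norm

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    """Sample size per group for a two-sided test of p1 vs p2 (unstratified)."""
    z_a = norm.ppf(1 - alpha / 2)
    z_b = norm.ppf(power)
    p_bar = (p1 + p2) / 2
    num = (z_a * (2 * p_bar * (1 - p_bar)) ** 0.5
           + z_b * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
    return num / (p1 - p2) ** 2

print(n_per_group(0.30, 0.15))  # about 121 per group for these illustrative rates
```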

2.
R J Connor 《Biometrics》1987,43(1):207-211
Miettinen (1968, Biometrics 24, 339-352) presented an approximation for power and sample size for testing the differences between proportions in the matched-pair case. Duffy (1984, Biometrics 40, 1005-1015) gave the exact power for this case and showed that Miettinen's approximation tends to slightly overestimate the power or underestimate the sample size necessary for the design power. A simple alternative approximation that is more conservative is presented here. In many cases, the sample size for the independent-sample case provides a conservative approximation for the matched-pair design.
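A sketch of the usual normal approximation for the number of matched pairs, in the spirit of the methods compared above; p10 and p01 denote the discordant-pair probabilities and are illustrative. This is the standard textbook approximation, not necessarily Connor's exact expression.

```python
from scipy.stats import norm

def n_pairs(p10, p01, alpha=0.05, power=0.80):
    """Pairs needed for a two-sided McNemar-type test of p10 vs p01."""
    z_a = norm.ppf(1 - alpha / 2)
    z_b = norm.ppf(power)
    pd = p10 + p01                 # total discordant probability
    diff = p10 - p01
    return (z_a * pd ** 0.5 + z_b * (pd - diff ** 2) ** 0.5) ** 2 / diff ** 2

print(n_pairs(0.25, 0.10))  # about 120 pairs for these hypothetical probabilities
```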

3.
Liu Q  Chi GY 《Biometrics》2001,57(1):172-177
Proschan and Hunsberger (1995, Biometrics 51, 1315-1324) proposed a two-stage adaptive design that maintains the Type I error rate. For practical applications, a two-stage adaptive design is also required to achieve a desired statistical power while limiting the maximum overall sample size. In our proposal, a two-stage adaptive design comprises a main stage and an extension stage: the main stage has sufficient power to reject the null hypothesis under the anticipated effect size, and the extension stage allows the sample size to be increased in case the true effect size is smaller than anticipated. For statistical inference, methods for obtaining the overall adjusted p-value, point estimate, and confidence intervals are developed. An exact two-stage test procedure is also outlined for robust inference.
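As a sketch of how two-stage adaptive tests can preserve the Type I error rate when the second-stage sample size changes, the weighted inverse-normal combination below is a standard device; it illustrates the general idea rather than the specific Liu and Chi procedure. The weights are fixed at the design stage, which is what keeps the test valid under adaptation.

```python
from scipy.stats import norm

def combined_p(p1, p2, w1=0.5):
    """Overall p-value from one-sided stagewise p-values p1, p2."""
    z = (w1 ** 0.5 * norm.ppf(1 - p1)
         + (1 - w1) ** 0.5 * norm.ppf(1 - p2))
    return 1 - norm.cdf(z)

print(combined_p(0.04, 0.10))  # hypothetical stagewise p-values, overall p ~ 0.016
```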

4.
Rosner B  Glynn RJ 《Biometrics》2011,67(2):646-653
The Wilcoxon rank sum test is widely used for two-group comparisons of nonnormal data. An assumption of this test is independence of sampling units both within and between groups, which is violated in clustered data settings such as ophthalmological clinical trials, where the unit of randomization is the subject but the unit of analysis is the individual eye. For this purpose, we have proposed the clustered Wilcoxon test to account for clustering among multiple subunits within the same cluster (Rosner, Glynn, and Lee, 2003, Biometrics 59, 1089-1098; 2006, Biometrics 62, 1251-1259). However, power estimation is needed to plan studies that use this analytic approach. We have recently published methods for estimating power and sample size for the ordinary Wilcoxon rank sum test (Rosner and Glynn, 2009, Biometrics 65, 188-197). In this article we present extensions of this approach to estimate power for the clustered Wilcoxon test. Simulation studies show good agreement between estimated and empirical power. These methods are illustrated with examples from randomized trials in ophthalmology. Enhanced power is achieved by using the subunit rather than the cluster as the unit of analysis, relative to applying the ordinary Wilcoxon rank sum test at the cluster level.
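A minimal Monte Carlo power estimate for the ordinary Wilcoxon rank sum test, the building block extended above; a clustered analysis would substitute the Rosner-Glynn-Lee statistic. The distributions, shift, and sample sizes are assumptions for illustration.

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(1)

def wilcoxon_power(n1, n2, shift, reps=2000, alpha=0.05):
    hits = 0
    for _ in range(reps):
        x = rng.lognormal(size=n1)          # skewed control group
        y = rng.lognormal(size=n2) + shift  # shifted treatment group
        if mannwhitneyu(x, y, alternative="two-sided").pvalue < alpha:
            hits += 1
    return hits / reps

print(wilcoxon_power(40, 40, shift=0.8))
```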

5.
Confidence intervals and sample sizes.
A P Grieve 《Biometrics》1991,47(4):1597-1602; discussion 1602-1603
In a recent paper, Beal (1989, Biometrics 45, 969-977) considers the problem of determining the appropriate sample size when inference about a parameter θ is to be made on the basis of a confidence interval (CI). He suggests that the sample size should be chosen so that the probability that the length of the CI is less than a given value, conditional on the interval including the true θ, is greater than a specified level. In this note, in which we concentrate on two-sided intervals, this suggestion is examined, as is the effect of uncertainty in our knowledge of the population variance σ² on estimates of sample size.
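A sketch of one way to pick n so that the two-sided CI for a normal mean is shorter than L with probability at least γ, using the chi-square distribution of the sample variance; this follows the general Beal/Grieve idea but is not their exact conditional criterion, and σ is a planning value.

```python
from scipy.stats import t, chi2

def n_for_ci_length(sigma, L, alpha=0.05, gamma=0.90):
    """Smallest n with P(CI length <= L) >= gamma for a normal mean."""
    n = 2
    while True:
        tq = t.ppf(1 - alpha / 2, n - 1)
        # length = 2*tq*s/sqrt(n) <= L  iff  chi2_{n-1} <= (n-1)*n*L^2/(4*tq^2*sigma^2)
        prob = chi2.cdf((n - 1) * n * L ** 2 / (4 * tq ** 2 * sigma ** 2), n - 1)
        if prob >= gamma:
            return n
        n += 1

print(n_for_ci_length(sigma=1.0, L=0.5))
```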

6.
Horn M  Vollandt R  Dunnett CW 《Biometrics》2000,56(3):879-881
Laska and Meisner (1989, Biometrics 45, 1139-1151) dealt with the problem of testing whether an identified treatment belonging to a set of k + 1 treatments is better than each of the other k treatments. They calculated sample size tables for k = 2 when using multiple t-tests or Wilcoxon-Mann-Whitney tests, both under normality assumptions. In this paper, we provide sample size formulas as well as tables for sample size determination for k ≥ 2 when t-tests under normality or Wilcoxon-Mann-Whitney tests under general distribution assumptions are used.
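A simulation sketch of the Laska-Meisner "min test" power: the identified treatment must win each of k one-sided t-tests against the comparators. Effect size, n, and replication count are illustrative assumptions.

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(2)

def min_test_power(n, k, delta, reps=2000, alpha=0.05):
    hits = 0
    for _ in range(reps):
        treat = rng.normal(delta, 1.0, n)
        ok = all(
            ttest_ind(treat, rng.normal(0.0, 1.0, n),
                      alternative="greater").pvalue < alpha
            for _ in range(k)
        )
        hits += ok
    return hits / reps

print(min_test_power(n=60, k=2, delta=0.5))
```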

7.
J M Nam 《Biometrics》1987,43(3):701-705
A simple approximate formula for the sample size needed to detect a linear trend in proportions is derived. Formulas are given for both the uncorrected and the continuity-corrected Cochran-Armitage test. For two binomial proportions these reduce to those given by Casagrande, Pike, and Smith (1978, Biometrics 34, 483-486). Numerical results from a power study for small sample sizes show that the nominal power corresponding to the approximate sample size is a reasonably good approximation to the actual power.
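A sketch of the uncorrected Cochran-Armitage trend statistic in a common score-test form; the event counts, trials, and dose scores below are made up.

```python
import numpy as np
from scipy.stats import norm

def cochran_armitage(events, trials, scores):
    """Uncorrected Cochran-Armitage trend test; returns (z, two-sided p)."""
    events, trials, scores = map(np.asarray, (events, trials, scores))
    N, R = trials.sum(), events.sum()
    p_bar = R / N
    x_bar = (trials * scores).sum() / N
    num = (events * (scores - x_bar)).sum()
    var = p_bar * (1 - p_bar) * (trials * (scores - x_bar) ** 2).sum()
    z = num / var ** 0.5
    return z, 2 * norm.sf(abs(z))

print(cochran_armitage([5, 10, 18], [50, 50, 50], [0, 1, 2]))
```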

8.
Two-stage clinical trial stopping rules
J D Elashoff  T J Reedy 《Biometrics》1984,40(3):791-795
Two-stage stopping rules for clinical trials are considered. The nominal significance level needed for the second-stage test, for any choice of first-stage significance level, is derived for rules with overall significance levels of .01 and .05 and for studies with either half or two-thirds of the patients analyzed in the first stage. A graphical demonstration is given of the inherent tradeoff between power and expected sample size (or probability of early termination). A specific rule, intermediate to those advocated by Pocock (1977, Biometrika 64, 191-199) and O'Brien and Fleming (1979, Biometrics 35, 549-556), is recommended.
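A sketch of the computation behind such tables: given the first-stage level, solve for the second-stage critical value using the bivariate normal law of the stagewise statistics. A one-sided rule is shown for brevity, whereas the paper tabulates two-sided rules.

```python
from scipy.stats import norm, multivariate_normal
from scipy.optimize import brentq

def stage2_level(alpha=0.05, alpha1=0.01, frac=0.5):
    """Second-stage critical value and nominal level for a one-sided two-stage rule."""
    c1 = norm.ppf(1 - alpha1)
    rho = frac ** 0.5  # corr(Z1, Z2) = sqrt(information fraction)
    bvn = multivariate_normal(mean=[0, 0], cov=[[1, rho], [rho, 1]])
    def excess(c2):
        # P(reject) = P(Z1 >= c1) + P(Z1 < c1, Z2 >= c2)
        return alpha1 + (norm.cdf(c1) - bvn.cdf([c1, c2])) - alpha
    c2 = brentq(excess, 0.5, 4.0)
    return c2, norm.sf(c2)

print(stage2_level())
```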

9.
Bilder CR  Loughin TM 《Biometrics》2002,58(1):200-208
Survey respondents are often prompted to pick any number of responses from a set of possible responses. Categorical variables that summarize this kind of data are called pick any/c variables. Counts from surveys that contain a pick any/c variable along with a group variable (r levels) and a stratification variable (q levels) can be marginally summarized into an r × c × q contingency table. A question that naturally arises from this setup is whether the group and pick any/c variables are marginally independent given the stratification variable. A test for conditional multiple marginal independence (CMMI) can be used to answer this question. Since subjects may pick any number out of c possible responses, the Cochran (1954, Biometrics 10, 417-451) and Mantel and Haenszel (1959, Journal of the National Cancer Institute 22, 719-748) tests cannot be used directly because they assume that units in the contingency table are independent of each other. Therefore, new testing methods are developed. Cochran's test statistic is extended to r × 2 × q tables, and a modified version of this statistic is proposed to test CMMI. Its sampling distribution can be approximated through bootstrapping. Other CMMI testing methods discussed are bootstrap p-value combination methods and Bonferroni adjustments. Simulation findings suggest that the proposed bootstrap procedures and the Bonferroni adjustments consistently hold the correct size and provide power against various alternatives.

10.
Pfeiffer RM  Ryan L  Litonjua A  Pee D 《Biometrics》2005,61(4):982-991
The case-cohort design for longitudinal data consists of a subcohort sampled at the beginning of the study that is followed repeatedly over time, and a case sample that is ascertained through the course of the study. Although some members in the subcohort may experience events over the study period, we refer to it as the "control-cohort." The case sample is a random sample of subjects not in the control-cohort, who have experienced at least one event during the study period. Different correlations among repeated observations on the same individual are accommodated by a two-level random-effects model. This design allows consistent estimation of all parameters estimable in a cohort design and is a cost-effective way to study the effects of covariates on repeated observations of relatively rare binary outcomes when exposure assessment is expensive. It is an extension of the case-cohort design (Prentice, 1986, Biometrika 73, 1-11) and the bidirectional case-crossover design (Navidi, 1998, Biometrics 54, 596-605). A simulation study compares the efficiency of the longitudinal case-cohort design to a full cohort analysis, and we find that in certain situations up to 90% efficiency can be obtained with half the sample size required for a full cohort analysis. A bootstrap method is presented that permits testing for intra-subject homogeneity in the presence of unidentifiable nuisance parameters in the two-level random-effects model. As an illustration we apply the design to data from an ongoing study of childhood asthma.

11.
Li Z  Li Y 《Biometrics》2000,56(1):134-138
It is known that using statistical stopping rules in clinical trials can create an artificial heterogeneity of treatment effects in overviews of related trials (Hughes, Freedman, and Pocock, 1992, Biometrics 48, 41-53). If the true treatment effect being tested is small, as is often the case, the homogeneity test of DerSimonian and Laird (1986, Controlled Clinical Trials 7, 177-188) severely violates the size of the test. This paper provides a new homogeneity test that preserves the size of the test more accurately. The operating characteristics of the new test are examined through simulations.
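For reference, a sketch of the usual Q homogeneity statistic underlying the DerSimonian-Laird approach, with inverse-variance weights and a chi-square reference distribution on k - 1 degrees of freedom; the trial estimates below are made up.

```python
import numpy as np
from scipy.stats import chi2

def q_homogeneity(theta, var):
    """Cochran-style Q statistic and p-value for k trial estimates."""
    theta, var = np.asarray(theta), np.asarray(var)
    w = 1 / var
    theta_bar = (w * theta).sum() / w.sum()
    q = (w * (theta - theta_bar) ** 2).sum()
    return q, chi2.sf(q, len(theta) - 1)

print(q_homogeneity([0.12, 0.30, -0.05], [0.02, 0.03, 0.02]))
```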

12.
Wang YG  Chen Z  Liu J 《Biometrics》2004,60(2):556-561
Nahhas, Wolfe, and Chen (2002, Biometrics 58, 964-971) considered the optimal set size for ranked set sampling (RSS) with fixed operational costs. This framework can be very useful in practice for determining whether RSS is beneficial and for obtaining the optimal set size that minimizes the variance of the population estimator for a fixed total cost. In this article, we propose a scheme of general RSS in which more than one observation can be taken from each ranked set. This is shown to be more cost-effective in some cases when the cost of ranking is not small. Using the example in Nahhas, Wolfe, and Chen (2002, Biometrics 58, 964-971), we demonstrate that taking two or more observations from each set can be more beneficial, even relative to the optimal set size of the standard RSS design.
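A sketch of one cycle of balanced ranked set sampling with set size k, the baseline scheme being generalized above. Ranking uses the true values here for simplicity; in practice it is done by judgment or a cheap covariate.

```python
import numpy as np

rng = np.random.default_rng(3)

def rss_cycle(draw, k):
    """One balanced RSS cycle: k sets of k units, keep the i-th order stat of set i."""
    sample = []
    for i in range(k):
        ranked = np.sort(draw(k))   # rank a fresh set of k units
        sample.append(ranked[i])    # quantify only the i-th ranked unit
    return np.array(sample)

print(rss_cycle(lambda n: rng.normal(size=n), k=3))
```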

13.
A technique is discussed for analyzing a two-period crossover design for a multicenter trial using identical study protocols. The technique is a modification of the analysis originally proposed by Grizzle (1965, Biometrics 21, 467-480; 1974, Biometrics 30, 727) for analyzing a two-period crossover design when study is not a factor. A mixed model using the first baseline as a covariate is analyzed to increase the power of the test of significance of the treatment-by-period interaction. The baseline values are also used in a preliminary test.
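A sketch of the kind of mixed model described, assuming a long-format data frame; the column names (y, baseline, treatment, period, center, subject) are my assumptions about the layout, not the paper's.

```python
import statsmodels.formula.api as smf

def fit_crossover(df):
    # first baseline as covariate, treatment-by-period interaction,
    # center as a fixed effect, random intercept per subject
    model = smf.mixedlm("y ~ baseline + treatment * period + center",
                        df, groups=df["subject"])
    return model.fit()

# result = fit_crossover(df); print(result.summary())
```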

14.
J J Gart  J M Nam 《Biometrics》1990,46(3):637-643
Recently, Beal (1987, Biometrics 43, 941-950) found Mee's modification of Anbar's approximate interval estimate for the difference in binomial parameters to be a good choice for small sample sizes. As this method can be derived from the score theory of Bartlett, it is easily corrected for skewness. Exact numerical evaluation shows that this correction is not as important in this case as for the ratio of binomial parameters (Gart and Nam, 1988, Biometrics 44, 323-338). The score theory is also used to extend the method to the stratified or multiple-table case. Thus, good approximate interval estimates for differences, ratios, and odds ratios of binomial parameters can all be derived from the same general theory.
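A sketch of a score-type interval for p1 - p2 obtained by inverting the score statistic with the restricted MLE found numerically, in the spirit of the Mee/Anbar interval discussed above but without the skewness correction; a grid search stands in for a root finder, and the counts are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import norm

def score_z(x1, n1, x2, n2, delta):
    """Score statistic for H0: p1 - p2 = delta, restricted MLE via 1-D search."""
    def negll(p2):
        p1 = min(max(p2 + delta, 1e-10), 1 - 1e-10)
        p2 = min(max(p2, 1e-10), 1 - 1e-10)
        return -(x1 * np.log(p1) + (n1 - x1) * np.log(1 - p1)
                 + x2 * np.log(p2) + (n2 - x2) * np.log(1 - p2))
    lo, hi = max(0.0, -delta), min(1.0, 1.0 - delta)
    p2t = minimize_scalar(negll, bounds=(lo, hi), method="bounded").x
    p1t = p2t + delta
    se = (p1t * (1 - p1t) / n1 + p2t * (1 - p2t) / n2) ** 0.5
    return (x1 / n1 - x2 / n2 - delta) / se

def score_ci(x1, n1, x2, n2, alpha=0.05):
    zc = norm.ppf(1 - alpha / 2)
    grid = np.linspace(-0.999, 0.999, 4000)
    keep = [d for d in grid if abs(score_z(x1, n1, x2, n2, d)) <= zc]
    return min(keep), max(keep)

print(score_ci(8, 20, 3, 25))  # hypothetical counts
```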

15.
Jung BC  Jhun M  Lee JW 《Biometrics》2005,61(2):626-628
Ridout, Hinde, and Demétrio (2001, Biometrics 57, 219-223) derived a score test of a zero-inflated Poisson (ZIP) regression model against zero-inflated negative binomial (ZINB) alternatives. They noted that the actual significance level of the score test based on the normal approximation may fall below the nominal level in small samples. To remedy this problem, a parametric bootstrap method is proposed. It is shown that the bootstrap method keeps the significance level close to the nominal one and has uniformly greater power than the normal approximation for testing the hypothesis.
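A sketch of the parametric bootstrap scheme: fit the null ZIP model by maximum likelihood, simulate datasets from the fit, and recompute the statistic. A simple variance-to-mean ratio stands in for the Ridout et al. score statistic, so this shows the resampling logic only, not their test.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln, expit

rng = np.random.default_rng(4)

def zip_negll(par, y):
    """Negative log-likelihood of an intercept-only ZIP model."""
    pi, lam = expit(par[0]), np.exp(par[1])
    ll0 = np.log(pi + (1 - pi) * np.exp(-lam))
    llp = np.log(1 - pi) - lam + y * np.log(lam) - gammaln(y + 1)
    return -np.where(y == 0, ll0, llp).sum()

def fit_zip(y):
    res = minimize(zip_negll, x0=[0.0, np.log(y.mean() + 0.5)], args=(y,))
    return expit(res.x[0]), np.exp(res.x[1])

def stat(y):                      # variance-to-mean ratio as a stand-in statistic
    return y.var() / y.mean()

def boot_pvalue(y, B=500):
    pi, lam = fit_zip(y)          # null (ZIP) fit to the observed data
    t_obs, n = stat(y), len(y)
    t_star = np.empty(B)
    for b in range(B):
        z = rng.random(n) < pi    # structural zeros
        yb = np.where(z, 0, rng.poisson(lam, n))
        t_star[b] = stat(yb)
    return (1 + (t_star >= t_obs).sum()) / (B + 1)

y = np.where(rng.random(200) < 0.2, 0, rng.poisson(2.0, 200))  # simulated ZIP data
print(boot_pvalue(y))
```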

16.
We develop sample size formulas for studies aiming to test mean differences between a treatment and a control group when all-or-none nonadherence (noncompliance) and selection bias are expected. Recent work by Fay, Halloran, and Follmann (2007, Biometrics 63, 465-474) addressed the increased variances within groups defined by treatment assignment when nonadherence occurs, compared to the scenario of full adherence, under the assumption of no selection bias. In this article, we extend that approach to allow selection bias in the form of systematic differences in means and variances among latent adherence subgroups. We illustrate the approach by performing sample size calculations to plan clinical trials with and without pilot adherence data. Sample size formulas and tests for normally distributed outcomes are also developed in a Web Appendix that account for uncertainty of estimates from external or internal pilot data.
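A sketch of the variance-inflation idea: the within-arm variance is a mixture over latent adherence subgroups, and the inflated variance feeds a standard two-sample formula. Subgroup proportions, means, and SDs below are illustrative assumptions.

```python
from scipy.stats import norm

def mixture_var(props, means, sds):
    """Variance of a mixture with component proportions, means, and SDs."""
    mu = sum(p * m for p, m in zip(props, means))
    return sum(p * (s ** 2 + (m - mu) ** 2)
               for p, m, s in zip(props, means, sds))

def n_per_arm(var_t, var_c, delta, alpha=0.05, power=0.80):
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return z ** 2 * (var_t + var_c) / delta ** 2

# treatment arm: 70% adherers (mean 1.0), 30% nonadherers (mean 0.2)
v_t = mixture_var([0.7, 0.3], [1.0, 0.2], [1.0, 1.0])
delta_itt = 0.7 * 1.0 + 0.3 * 0.2 - 0.0   # intention-to-treat difference
print(n_per_arm(v_t, 1.0, delta_itt))
```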

17.
Cheng Y  Shen Y 《Biometrics》2004,60(4):910-918
For confirmatory trials in regulatory decision making, it is important that adaptive designs under consideration provide inference with the correct nominal level, as well as unbiased estimates and confidence intervals for the treatment comparisons in the actual trials. However, the naive point estimate and its confidence interval are often biased in adaptive sequential designs. We develop a new procedure for estimation following a test from a sample size reestimation design. The method for obtaining an exact confidence interval and point estimate is based on a general distributional property of a pivot function of the self-designing group sequential clinical trial of Shen and Fisher (1999, Biometrics 55, 190-197). A modified estimate is proposed to explicitly account for the futility stopping boundary, with reduced bias when block sizes are small. The proposed estimates are shown to be consistent. The computation of the estimates is straightforward. We also provide a modified weight function to improve the power of the test. Extensive simulation studies show that the exact confidence intervals have accurate nominal coverage probability, and the proposed point estimates are nearly unbiased with practical sample sizes.

18.
Decady and Thomas (2000, Biometrics 56, 893-896) propose a first-order corrected Umesh-Loughin-Scherer statistic to test for association in an r × c contingency table with multiple column responses. Agresti and Liu (1999, Biometrics 55, 936-943) point out that such statistics are not invariant to the arbitrary designation of a zero or one to a positive response. This paper shows that, in addition, the proposed testing procedure does not hold the correct size when there are strong pairwise associations between responses.

19.
Song R  Kosorok MR  Cai J 《Biometrics》2008,64(3):741-750
Recurrent events data are frequently encountered in clinical trials. This article develops robust covariate-adjusted log-rank statistics for recurrent events data with arbitrary numbers of events under independent censoring, together with the corresponding sample size formula. The proposed log-rank tests are robust with respect to different data-generating processes and are adjusted for predictive covariates. The statistic reduces to the Kong and Slud (1997, Biometrika 84, 847-862) setting in the case of a single event. The sample size formula is derived from the asymptotic normality of the covariate-adjusted log-rank statistics under certain local alternatives and a working model for baseline covariates in the recurrent event context. When the effect size is small and the baseline covariates do not contain significant information about event times, the formula reduces to the same form as that of Schoenfeld (1983, Biometrics 39, 499-503) for a single event or independent event times within a subject. We carry out simulations to study the control of Type I error and to compare power between several methods in finite samples. The proposed sample size formula is illustrated using data from an rhDNase study.
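For the single-event special case mentioned above, Schoenfeld's formula gives the number of events required by a log-rank test; pi is the allocation proportion, and the hazard ratio below is hypothetical.

```python
import numpy as np
from scipy.stats import norm

def schoenfeld_events(hr, pi=0.5, alpha=0.05, power=0.80):
    """Events needed for a two-sided log-rank test at a given hazard ratio."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return z ** 2 / (pi * (1 - pi) * np.log(hr) ** 2)

print(schoenfeld_events(0.70))  # about 247 events for a 30% hazard reduction
```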

20.
Testing for random dropouts in repeated measurement data.
M S Ridout 《Biometrics》1991,47(4):1617-1619; discussion 1619-1621
Diggle (1989, Biometrics 45, 1255-1258) proposes a test for random dropouts in repeated measurement data when the experiment has a completely randomized design. It is argued here that logistic regression is a comparable but more flexible technique for studying the occurrence of dropouts.
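A sketch of the logistic-regression alternative, assuming a long-format data set with one row per subject-occasion at risk of dropout; the column names (dropout, prev_y, occasion) are my assumptions about the layout.

```python
import statsmodels.formula.api as smf

def dropout_model(df):
    # 'dropout' is 1 if the subject is lost before the next occasion;
    # regress it on the previously observed response and the occasion index
    return smf.logit("dropout ~ prev_y + occasion", df).fit()

# res = dropout_model(long_df); print(res.summary())
```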
