Similar Articles
20 similar articles found (search time: 359 ms).
1.
J Nam 《Biometrics》1992,48(2):389-395
Woolson, Bean, and Rojas (1986, Biometrics 42, 927-932) present a simple approximation of sample size for Cochran's (1954, Biometrics 10, 417-451) test for detecting association between exposure and disease, which is useful in the design of case-control studies. We derive a sample size formula for Cochran's statistic with continuity correction which guarantees that the actual Type I error rate of the test does not exceed the nominal level. The corrected sample size is necessarily larger than the uncorrected one given by Woolson et al., and the relative difference between the two sample sizes is considerable. Allocation of equal numbers of cases and controls within each stratum is asymptotically optimal when the costs per case and per control are the same. When stratification has no effect, Cochran's stratified test, although valid, is less efficient than the unstratified one except in the important case of a balanced design.
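For a feel of the magnitudes involved, here is a minimal sketch of the unstratified two-proportion analogue: the usual normal-approximation sample size per group, plus the Casagrande-Pike-Smith/Fleiss continuity correction. This is not Nam's stratified formula; the function names and the 80%-power example values are our own.

```python
from scipy.stats import norm

def n_uncorrected(p1, p2, alpha=0.05, power=0.80):
    """Per-group sample size for a two-sided two-proportion test, no correction."""
    z_a, z_b = norm.ppf(1 - alpha / 2), norm.ppf(power)
    pbar = (p1 + p2) / 2
    num = (z_a * (2 * pbar * (1 - pbar)) ** 0.5
           + z_b * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
    return num / (p1 - p2) ** 2

def n_corrected(p1, p2, alpha=0.05, power=0.80):
    """Continuity-corrected size; always larger than the uncorrected one."""
    n = n_uncorrected(p1, p2, alpha, power)
    return (n / 4) * (1 + (1 + 4 / (n * abs(p1 - p2))) ** 0.5) ** 2

# Example: exposure prevalence 0.20 in controls vs 0.35 in cases
print(round(n_uncorrected(0.20, 0.35)), round(n_corrected(0.20, 0.35)))  # ~138 vs ~151
```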

2.
Bilder CR  Loughin TM 《Biometrics》2002,58(1):200-208
Survey respondents are often prompted to pick any number of responses from a set of possible responses. Categorical variables that summarize this kind of data are called pick any/c variables. Counts from surveys that contain a pick any/c variable along with a group variable (r levels) and a stratification variable (q levels) can be marginally summarized into an r x c x q contingency table. A question that naturally arises from this setup is whether the group and pick any/c variables are marginally independent given the stratification variable. A test for conditional multiple marginal independence (CMMI) can be used to answer this question. Because subjects may pick any number out of c possible responses, the Cochran (1954, Biometrics 10, 417-451) and Mantel and Haenszel (1959, Journal of the National Cancer Institute 22, 719-748) tests cannot be used directly, since they assume that units in the contingency table are independent of each other. Therefore, new testing methods are developed. Cochran's test statistic is extended to r x 2 x q tables, and a modified version of this statistic is proposed to test CMMI. Its sampling distribution can be approximated through bootstrapping. Other CMMI testing methods discussed are bootstrap p-value combination methods and Bonferroni adjustments. Simulation findings suggest that the proposed bootstrap procedures and the Bonferroni adjustments consistently hold the correct size and provide power against various alternatives.
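The bootstrap approximation mentioned here follows a generic skeleton: compute the observed statistic, regenerate data under the null many times, and take the exceedance fraction as the p-value. The sketch below shows only that skeleton; `stat_fn` and `resample_h0_fn` are placeholders of ours, and a real CMMI application must resample in a way that respects marginal independence within each stratum.

```python
import numpy as np

def bootstrap_pvalue(stat_fn, resample_h0_fn, data, B=2000, seed=1):
    """Approximate a p-value by resampling under the null hypothesis.

    stat_fn        : data -> scalar statistic (larger = more evidence vs H0)
    resample_h0_fn : (data, rng) -> resampled data set consistent with H0
    """
    rng = np.random.default_rng(seed)
    t_obs = stat_fn(data)
    t_boot = np.array([stat_fn(resample_h0_fn(data, rng)) for _ in range(B)])
    # add-one adjustment keeps the p-value away from exactly zero
    return (1 + np.sum(t_boot >= t_obs)) / (B + 1)
```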

3.
Decady and Thomas (2000, Biometrics 56, 893-896) propose a first-order corrected Umesh-Loughin-Scherer statistic to test for association in an r x c contingency table with multiple column responses. Agresti and Liu (1999, Biometrics 55, 936-943) point out that such statistics are not invariant to the arbitrary designation of a zero or one to a positive response. This paper shows that, in addition, the proposed testing procedure does not hold the correct size when there are strong pairwise associations between responses.

4.
Halperin, Gilbert, and Lachin (1987, Biometrics 43, 71-80) obtain confidence intervals for Pr(X < Y) based on the two-sample Wilcoxon statistic for continuous data. Their approach is applied here to ordered categorical data and right-censored continuous data, using the generalization ζ = Pr(X < Y) + (1/2)Pr(X = Y) to account for ties. Deviations from nominal coverage probability for various sample sizes and values of ζ are obtained via simulation of either three or six ordered categories based on underlying Poisson or exponential distributions. The simulation results indicate that the proposed method performs quite well, and it is apparently superior to the approach of Hochberg (1981, Communications in Statistics--Theory and Methods A10, 1719-1732) for values of ζ far from 1/2.
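The point estimate of ζ is simply the tie-corrected Mann-Whitney statistic divided by the number of (X, Y) pairs. A minimal sketch of the estimate only (not the Halperin-Gilbert-Lachin interval construction):

```python
import numpy as np

def zeta_hat(x, y):
    """Estimate zeta = Pr(X < Y) + 0.5 * Pr(X = Y) over all (x_i, y_j) pairs."""
    x, y = np.asarray(x)[:, None], np.asarray(y)[None, :]
    return (x < y).mean() + 0.5 * (x == y).mean()

# Three ordered categories coded 0 < 1 < 2
print(zeta_hat([0, 0, 1, 1, 2], [1, 2, 2, 2, 0]))  # 0.70
```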

5.
Song R  Kosorok MR  Cai J 《Biometrics》2008,64(3):741-750
Recurrent events data are frequently encountered in clinical trials. This article develops robust covariate-adjusted log-rank statistics applied to recurrent events data with arbitrary numbers of events under independent censoring, together with the corresponding sample size formula. The proposed log-rank tests are robust with respect to different data-generating processes and are adjusted for predictive covariates. They reduce to the Kong and Slud (1997, Biometrika 84, 847-862) setting in the case of a single event. The sample size formula is derived from the asymptotic normality of the covariate-adjusted log-rank statistics under certain local alternatives and a working model for baseline covariates in the recurrent event data context. When the effect size is small and the baseline covariates do not contain significant information about event times, it reduces to the same form as that of Schoenfeld (1983, Biometrics 39, 499-503) for cases of a single event or independent event times within a subject. We carry out simulations to study the control of Type I error and to compare power between several methods in finite samples. The proposed sample size formula is illustrated using data from an rhDNase study.
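For reference, the single-event special case named at the end is Schoenfeld's classical event count for the log-rank test; a sketch, where the 60% event probability used to convert events into subjects is an arbitrary assumption of the example:

```python
from math import ceil, log
from scipy.stats import norm

def schoenfeld_events(hazard_ratio, alpha=0.05, power=0.80, alloc=0.5):
    """Events required for a two-sided log-rank test (Schoenfeld, 1983)."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return z ** 2 / (alloc * (1 - alloc) * log(hazard_ratio) ** 2)

d = schoenfeld_events(0.75)           # detect HR = 0.75 with 80% power
print(ceil(d), ceil(d / 0.60))        # ~380 events; subjects if 60% have an event
```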

6.
Tang ML  Tang NS  Carey VJ 《Biometrics》2004,60(2):550-5; discussion 555
In this article, we consider problems with correlated data that can be summarized in a 2 x 2 table with a structural zero in one of the off-diagonal cells. Data of this kind sometimes appear in infectious disease studies and two-step procedure studies. Lui (1998, Biometrics 54, 706-711) considered confidence interval estimation of the rate ratio based on Fieller-type, Wald-type, and logarithmic transformation statistics. We reexamine the same problem in the context of confidence interval construction for the false-negative rate ratio in diagnostic performance when combining two diagnostic tests. We propose a score statistic for testing the null hypothesis of a nonunity false-negative rate ratio. Score test-based confidence interval construction for the false-negative rate ratio is also discussed. Simulation studies are conducted to compare the performance of the newly derived score test statistic and existing statistics for small to moderate sample sizes. In terms of confidence interval construction, our asymptotic score test-based confidence interval estimator possesses significantly shorter expected width, with coverage probability close to the anticipated confidence level. In terms of hypothesis testing, our asymptotic score test procedure has an actual Type I error rate close to the pre-assigned nominal level. We illustrate our methodologies with real examples from a clinical laboratory study and a cancer study.

7.
A multiple testing procedure for clinical trials.   Cited by: 57 (self-citations: 0; citations by others: 57)
A multiple testing procedure is proposed for comparing two treatments when response to treatment is both dichotomous (i.e., success or failure) and immediate. The proposed test statistic for each test is the usual (Pearson) chi-square statistic based on all data collected to that point. The maximum number (N) of tests and the number (m1 + m2) of observations collected between successive tests are fixed in advance. The overall size of the procedure is shown to be controlled with virtually the same accuracy as the single chi-square test based on N(m1 + m2) observations. The power is also found to be virtually the same. However, by affording the opportunity to terminate early when one treatment performs markedly better than the other, the multiple testing procedure may eliminate the ethical dilemmas that often accompany clinical trials.
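A sketch of the mechanics under our own simplifying assumptions: 2 x 2 counts accumulate by (m1 + m2) observations per look, and the Pearson statistic is recomputed on all data so far. The sketch reuses the nominal chi-square critical value at every look; the paper's point is that a common critical value can control the overall size, so `crit` should be calibrated rather than taken at face value.

```python
import numpy as np
from scipy.stats import chi2, chi2_contingency

def sequential_chisq(successes1, successes2, m1, m2, N, crit):
    """Up to N looks; reject and stop early if the cumulative Pearson
    chi-square statistic exceeds crit at any look."""
    s1 = s2 = 0
    for k in range(1, N + 1):
        s1 += successes1[k - 1]
        s2 += successes2[k - 1]
        table = np.array([[s1, k * m1 - s1], [s2, k * m2 - s2]])
        stat = chi2_contingency(table, correction=False)[0]
        if stat > crit:
            return "reject", k, stat
    return "accept", N, stat

crit = chi2.ppf(0.95, df=1)  # nominal value; calibrate for strict size control
print(sequential_chisq([12, 15, 14], [6, 7, 9], m1=20, m2=20, N=3, crit=crit))
```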

8.
T Tango 《Biometrics》1990,46(2):351-357
Tango (1984, Biometrics 40, 15-26) proposed an index for disease clustering in time, applicable to grouped data with the assumption that the population at risk remains fairly uniform over the study period. However, the asymptotic distribution of the index derived under the hypothesis of no clustering was rather complex for simple use. Recently, Whittemore and Keller (1986, Biometrics 42, 218) and Whittemore et al. (1987, Biometrika 74, 631-635) proved that the distribution of the index is asymptotically normal. The present paper indicates that their approximation may be poor for moderately large sample sizes and suggests a central chi-square distribution as a better approximation to the asymptotic distribution of this index.

9.
J M Nam 《Biometrics》1987,43(3):701-705
A simple approximate formula for the sample size needed to detect a linear trend in proportions is derived. Formulas for both the uncorrected and the continuity-corrected Cochran-Armitage test are given. For two binomial proportions these reduce to those given by Casagrande, Pike, and Smith (1978, Biometrics 34, 483-486). Numerical results from a power study for small sample sizes show that the nominal power corresponding to the approximate sample size is a reasonably good approximation to the actual power.
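For concreteness, a sketch of the uncorrected Cochran-Armitage trend statistic on which the sample size formula is based; the equally spaced default scores and the example counts are assumptions of the sketch:

```python
import numpy as np
from scipy.stats import norm

def cochran_armitage(successes, totals, scores=None):
    """Z statistic and two-sided p-value of the uncorrected trend test."""
    r, n = np.asarray(successes, float), np.asarray(totals, float)
    x = np.arange(len(r), dtype=float) if scores is None else np.asarray(scores, float)
    N, pbar = n.sum(), r.sum() / n.sum()
    t = np.sum(r * x) - pbar * np.sum(n * x)
    var = pbar * (1 - pbar) * (np.sum(n * x ** 2) - np.sum(n * x) ** 2 / N)
    z = t / np.sqrt(var)
    return z, 2 * norm.sf(abs(z))

# Three dose groups, 50 subjects each, increasing response counts
print(cochran_armitage([2, 6, 12], [50, 50, 50]))
```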

10.
A R Willan 《Biometrics》1988,44(1):211-218
In a two-period crossover trial where residual carryover is suspected, it is often advised that only first-period data be used, in an analysis appropriate for a parallel design. However, it has been shown (Willan and Pater, 1986, Biometrics 42, 593-599) that the crossover analysis is more powerful than the parallel analysis if the residual carryover, expressed as a proportion of the treatment effect, is less than 2 − √(2(1 − ρ)), where ρ is the intrasubject correlation coefficient. Choosing between the analyses based on the empirical evaluation of this condition is equivalent to choosing the analysis with the larger corresponding test statistic. Approximate nominal significance levels are presented that maintain the desired level when basing the analysis on the maximum test statistic. Furthermore, the power and precision of the analysis based on the maximum test statistic are compared to those of the crossover and parallel analyses.
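The condition is simple arithmetic; a short sketch evaluating the threshold 2 − √(2(1 − ρ)) for a few values of ρ:

```python
from math import sqrt

def carryover_threshold(rho):
    """Residual carryover (as a proportion of treatment effect) below which
    the crossover analysis beats the first-period parallel analysis."""
    return 2 - sqrt(2 * (1 - rho))

for rho in (0.0, 0.5, 0.8):
    print(rho, round(carryover_threshold(rho), 3))  # 0.586, 1.0, 1.368
```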

11.
Sun L  Kim YJ  Sun J 《Biometrics》2004,60(3):637-643
Doubly censored failure time data arise when the survival time of interest is the elapsed time between two related events and observations on the occurrences of both events may be censored. Regression analysis of doubly censored data has recently attracted considerable attention, and a few methods have been proposed for it (Kim et al., 1993, Biometrics 49, 13-22; Sun et al., 1999, Biometrics 55, 909-914; Pan, 2001, Biometrics 57, 1245-1250). However, all of these methods are based on the proportional hazards model, which is well known not to fit failure time data well in some settings. This article investigates regression analysis of such data using the additive hazards model, and an estimating equation approach is proposed for inference about the regression parameters of interest. The proposed method can be easily implemented, and the properties of the proposed estimates of the regression parameters are established. The method is applied to a set of doubly censored data from an AIDS cohort study.

12.
Wang YG  Chen Z  Liu J 《Biometrics》2004,60(2):556-561
Nahhas, Wolfe, and Chen (2002, Biometrics 58, 964-971) considered the optimal set size for ranked set sampling (RSS) with fixed operational costs. This framework can be very useful in practice for determining whether RSS is beneficial and for obtaining the optimal set size that minimizes the variance of the population estimator for a fixed total cost. In this article, we propose a scheme of general RSS in which more than one observation can be taken from each ranked set. This is shown to be more cost-effective in some cases when the cost of ranking is not so small. Using the example in Nahhas, Wolfe, and Chen (2002, Biometrics 58, 964-971), we demonstrate that taking two or more observations from each set, even with the optimal set size from the RSS design, can be more beneficial.
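To fix ideas, a sketch of classical balanced RSS under perfect ranking (one observation per ranked set). It does not implement the paper's general RSS, in which several observations are taken from each set:

```python
import numpy as np

def ranked_set_sample(draw, k, cycles, rng):
    """Balanced RSS: for each rank i, draw a set of k units, sort them
    (perfect ranking assumed), and measure only the i-th order statistic."""
    out = []
    for _ in range(cycles):
        for i in range(k):
            out.append(np.sort(draw(k, rng))[i])
    return np.array(out)

rng = np.random.default_rng(0)
draw = lambda size, rng: rng.normal(size=size)
rss = ranked_set_sample(draw, k=3, cycles=200, rng=rng)  # 600 measured units
srs = draw(rss.size, rng)                                # SRS of the same size
print(rss.mean(), srs.mean())  # both unbiased; the RSS mean is less variable
```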

13.
Two-stage designs are a well-known cost-effective way of conducting biomedical studies when the exposure variable is expensive or difficult to measure. Recent research developments further allow one or both stages of the two-stage design to be outcome dependent on a continuous outcome variable. This outcome-dependent sampling feature enables further efficiency gains in parameter estimation and overall cost reduction of the study (e.g., Wang, X. and Zhou, H., 2010, Design and inference for cancer biomarker study with an outcome and auxiliary-dependent subsampling, Biometrics 66, 502-511; Zhou, H., Song, R., Wu, Y. and Qin, J., 2011, Statistical inference for a two-stage outcome-dependent sampling design with a continuous outcome, Biometrics 67, 194-202). In this paper, we develop a semiparametric mixed effects regression model for data from a two-stage design in which the second-stage data are sampled with an outcome-auxiliary-dependent sampling (OADS) scheme. Our method allows the cluster- or center-effects of the study subjects to be accounted for. We propose an estimated likelihood function to estimate the regression parameters. A simulation study indicates that greater study efficiency gains can be achieved under the proposed two-stage OADS design with center-effects than with alternative sampling schemes. We illustrate the proposed method by analyzing a dataset from the Collaborative Perinatal Project.

14.
Kim MY  Xue X  Du Y 《Biometrics》2006,62(3):929-33; discussion 933
An approach for determining the power of a case-cohort study for a single binary exposure variable and a low failure rate was recently proposed by Cai and Zeng (2004, Biometrics 60, 1015-1024). In this article, we show that computing power for a case-cohort study using a standard case-control method yields nearly identical levels of power. An advantage of the case-control approach is that existing sample size software can be used for the calculations. We also propose an additional formula for computing the power of a case-cohort study for the situation when the event is not rare.
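The case-control calculation advocated here reduces to a standard two-proportion power formula; a sketch, with the exposure prevalences and sample sizes in the example being arbitrary:

```python
from math import sqrt
from scipy.stats import norm

def case_control_power(p_cases, p_controls, n_cases, n_controls, alpha=0.05):
    """Power of the two-sided two-proportion z-test of exposure prevalence."""
    z_a = norm.ppf(1 - alpha / 2)
    pbar = (n_cases * p_cases + n_controls * p_controls) / (n_cases + n_controls)
    se0 = sqrt(pbar * (1 - pbar) * (1 / n_cases + 1 / n_controls))
    se1 = sqrt(p_cases * (1 - p_cases) / n_cases
               + p_controls * (1 - p_controls) / n_controls)
    return norm.cdf((abs(p_cases - p_controls) - z_a * se0) / se1)

print(case_control_power(0.30, 0.15, n_cases=200, n_controls=400))  # ~0.99
```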

15.
New tests for trend in proportions, in the presence of historical control data, are proposed. One such test is a simple score statistic based on a binomial likelihood for the "current" study and beta-binomial likelihoods for each historical control series. A closely related trend statistic based on estimating equations is also proposed. Trend statistics that allow overdispersed proportions in the current study are also developed, including a version of Tarone's (1982, Biometrics 38, 215-220) test that acknowledges sampling variation in the beta distribution parameters, and a trend statistic based on estimating equations. Each such trend test is evaluated with respect to size and power under both binomial and beta-binomial sampling conditions for the current study, and illustrations are provided.
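The beta-binomial piece of the likelihood is easy to write down explicitly; a sketch of the log-likelihood for a set of historical control series, with arbitrary (a, b) values in the example:

```python
import numpy as np
from scipy.special import betaln, gammaln

def betabinom_loglik(k, n, a, b):
    """Log-likelihood of counts k out of n under a Beta(a, b)-binomial:
    P(k) = C(n, k) * B(k + a, n - k + b) / B(a, b)."""
    k, n = np.asarray(k, float), np.asarray(n, float)
    logchoose = gammaln(n + 1) - gammaln(k + 1) - gammaln(n - k + 1)
    return float(np.sum(logchoose + betaln(k + a, n - k + b) - betaln(a, b)))

# Four historical control series: responders k out of n per study
print(betabinom_loglik([1, 3, 0, 2], [50, 48, 52, 49], a=1.0, b=20.0))
```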

16.
C T Le 《Biometrics》1988,44(1):299-303
This paper is concerned with the issue of testing for trend with correlated binary data. We consider the problem where one has either one or two ears (or eyes) available for analysis at baseline and one wishes to look at changes over time in a dichotomous outcome taking into account the correlation between responses from two ears. A reparameterization of Rosner's (1982, Biometrics 38, 105-114) correlated binary data model is presented and applied to a test for trend where the stratifying variable is age (or any other subject-specific variable). Observed and expected values are calculated for the trend statistic separately for both unilateral and bilateral cases and are then summed to obtain an overall summary statistic. The proposed method is illustrated by a reanalysis of data presented in a published study of the efficacy of antibiotics for the treatment of otitis media.

17.
Rosner B  Glynn RJ 《Biometrics》2011,67(2):646-653
The Wilcoxon rank sum test is widely used for two-group comparisons of nonnormal data. An assumption of this test is independence of sampling units both within and between groups, which will be violated in clustered data settings such as ophthalmological clinical trials, where the unit of randomization is the subject but the unit of analysis is the individual eye. For this setting, we have proposed the clustered Wilcoxon test to account for clustering among multiple subunits within the same cluster (Rosner, Glynn, and Lee, 2003, Biometrics 59, 1089-1098; 2006, Biometrics 62, 1251-1259). However, power estimation is needed to plan studies that use this analytic approach. We have recently published methods for estimating power and sample size for the ordinary Wilcoxon rank sum test (Rosner and Glynn, 2009, Biometrics 65, 188-197). In this article we extend this approach to estimate power for the clustered Wilcoxon test. Simulation studies show good agreement between estimated and empirical power. The methods are illustrated with examples from randomized trials in ophthalmology. Enhanced power is achieved by using the subunit, rather than the cluster, as the unit of analysis, relative to applying the ordinary Wilcoxon rank sum test at the cluster level.
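scipy ships no clustered Wilcoxon test, but the simulation logic behind power estimation is generic. A sketch of Monte Carlo power for the ordinary (independent-observations) Wilcoxon rank sum test under a normal location shift; the shift and sample sizes are arbitrary example values:

```python
import numpy as np
from scipy.stats import mannwhitneyu

def wilcoxon_power_mc(n1, n2, shift, alpha=0.05, sims=2000, seed=2):
    """Fraction of simulated trials in which the rank sum test rejects."""
    rng = np.random.default_rng(seed)
    rejections = 0
    for _ in range(sims):
        x = rng.normal(0.0, 1.0, n1)
        y = rng.normal(shift, 1.0, n2)
        if mannwhitneyu(x, y, alternative="two-sided").pvalue < alpha:
            rejections += 1
    return rejections / sims

print(wilcoxon_power_mc(30, 30, shift=0.7))  # roughly 0.7-0.8
```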

18.
It is very common in regression analysis to encounter incompletely observed covariate information. A recent approach to analysing such data is weighted estimating equations (Robins, J. M., Rotnitzky, A. and Zhao, L. P., 1994, JASA 89, 846-866; Zhao, L. P., Lipsitz, S. R. and Lew, D., 1996, Biometrics 52, 1165-1182). With weighted estimating equations, the contribution to the estimating equation from a complete observation is weighted by the inverse of the probability of being observed. We propose a test statistic to assess whether the weighted estimating equations produce biased estimates. Our test statistic is similar to the test statistic proposed by DuMouchel and Duncan (1983) for weighted least squares estimates in sample survey data. The method is illustrated using data from a randomized clinical trial on chemotherapy for multiple myeloma.
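The core idea, weighting each complete case by the inverse of its observation probability, is easy to sketch for a linear model. The logistic observation model and all numbers below are our own illustration; in practice the probabilities must themselves be estimated:

```python
import numpy as np

def ipw_least_squares(X, y, observed, pi):
    """Solve the weighted normal equations with weights R_i / pi_i,
    i.e., the inverse-probability-weighted estimating equations."""
    w = observed / pi                        # weight is 0 for incomplete cases
    Xw = X * w[:, None]
    return np.linalg.solve(Xw.T @ X, Xw.T @ y)

rng = np.random.default_rng(3)
n = 500
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = X @ np.array([1.0, 2.0]) + rng.normal(size=n)
pi = 1 / (1 + np.exp(-(0.5 + X[:, 1])))      # known observation probabilities
observed = (rng.uniform(size=n) < pi).astype(float)
print(ipw_least_squares(X, y, observed, pi))  # close to [1.0, 2.0]
```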

19.
Lui KJ  Kelly C 《Biometrics》2000,56(1):309-315
Lipsitz et al. (1998, Biometrics 54, 148-160) discussed testing the homogeneity of the risk difference for a series of 2 x 2 tables. They proposed and evaluated several weighted test statistics, including the commonly used weighted least squares test statistic. Here we suggest several important improvements on these test statistics. First, we propose using the one-sided analogues of the test procedures proposed by Lipsitz et al., because we should only reject the null hypothesis of homogeneity when the variation of the estimated risk differences between centers is large. Second, we generalize their study by redesigning the simulations to include the situations considered by Lipsitz et al. (1998) as special cases. Third, we consider a logarithmic transformation of the weighted least squares test statistic to improve the normal approximation of its sampling distribution. On the basis of Monte Carlo simulations, we note that, as long as the mean treatment group size per table is moderate or large (≥ 16), this simple test statistic, in conjunction with the commonly used adjustment procedure for sparse data, can be useful when the number of 2 x 2 tables is small or moderate (≤ 32). In these situations, in fact, we find that our proposed method generally outperforms all the statistics considered by Lipsitz et al. Finally, we include a general guideline about which test statistic should be used in a variety of situations.
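For reference, a sketch of the weighted least squares homogeneity statistic that these improvements start from: with K tables, T is referred to a chi-square distribution with K − 1 degrees of freedom. No sparse-data adjustment is applied here (zero cells would need the usual 0.5 correction first), and the example counts are arbitrary:

```python
import numpy as np
from scipy.stats import chi2

def rd_homogeneity(x1, n1, x2, n2):
    """WLS test of homogeneity of risk differences across K 2x2 tables."""
    p1, p2 = np.asarray(x1) / np.asarray(n1), np.asarray(x2) / np.asarray(n2)
    d = p1 - p2
    var = p1 * (1 - p1) / np.asarray(n1) + p2 * (1 - p2) / np.asarray(n2)
    w = 1.0 / var
    dbar = np.sum(w * d) / np.sum(w)
    T = np.sum(w * (d - dbar) ** 2)
    return T, chi2.sf(T, len(d) - 1)

# Three centers: (events, n) in the treatment and control arms
print(rd_homogeneity([10, 14, 8], [50, 60, 40], [4, 6, 7], [50, 55, 45]))
```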

20.
Keleş S 《Biometrics》2007,63(1):10-21
Chromatin immunoprecipitation followed by DNA microarray analysis (the ChIP-chip methodology) is an efficient way of mapping genome-wide protein-DNA interactions. Data from tiling arrays encompass DNA-protein interaction measurements on thousands or millions of short oligonucleotides (probes) tiling a whole chromosome or genome. We propose a new model-based method for analyzing ChIP-chip data. The proposed model is motivated by the widely used two-component multinomial mixture model of de novo motif finding. It utilizes a hierarchical gamma mixture model of binding intensities while incorporating the inherent spatial structure of the data. In this model, genomic regions belong to one of two general groups: regions with a local protein-DNA interaction (peak) and regions lacking this interaction. Individual probes within a genomic region are allowed to have different localization rates, accommodating different binding affinities. A novel feature of this model is the incorporation of a distribution for the peak size derived from the experimental design and parameters. This leads to relaxation of the fixed peak size assumption that is commonly employed when computing a test statistic for these types of spatial data. Simulation studies and a real data application demonstrate good operating characteristics of the method, including high sensitivity with small sample sizes when compared with available alternative methods.
