Similar Articles
20 similar articles found.
1.
There is growing interest in conducting cluster randomized trials (CRTs). For simplicity in sample size calculation, the cluster sizes are often assumed to be identical across all clusters. However, equal cluster sizes are not guaranteed in practice. Therefore, the relative efficiency (RE) of unequal versus equal cluster sizes has been investigated when testing the treatment effect. One of the most important approaches to analyzing a set of correlated data is the generalized estimating equation (GEE) approach proposed by Liang and Zeger, in which a “working correlation structure” is introduced and the association pattern depends on a vector of association parameters denoted by ρ. In this paper, we utilize GEE models to test the treatment effect in a two‐group comparison for continuous, binary, or count data in CRTs. The variances of the estimator of the treatment effect are derived for the different types of outcome. RE is defined as the ratio of the variance of the treatment-effect estimator under equal cluster sizes to that under unequal cluster sizes. We focus on the exchangeable working correlation structure, which is commonly used in CRTs, and derive simpler RE formulas for continuous, binary, and count outcomes. Finally, REs are investigated for several scenarios of cluster size distributions through simulation studies. We propose an adjusted sample size to compensate for the efficiency loss. We also propose an optimal sample size estimation based on the GEE models under a fixed budget, for both known and unknown association parameter ρ in the working correlation structure within the cluster.
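For the continuous-outcome case, the RE above can be computed directly once the cluster sizes and ρ are given. The following is a minimal sketch assuming the standard GEE form in which a cluster of size m contributes information proportional to m/(1 + (m − 1)ρ) under an exchangeable working correlation; the function name and example cluster sizes are illustrative, not from the paper.

```python
import numpy as np

def re_unequal_vs_equal(cluster_sizes, rho):
    """RE = Var(equal) / Var(unequal) for a cluster-level treatment effect,
    holding the total number of observations fixed. Values below 1 indicate
    the efficiency loss caused by unequal cluster sizes."""
    m = np.asarray(cluster_sizes, dtype=float)
    k, m_bar = len(m), m.mean()
    info_unequal = np.sum(m / (1.0 + (m - 1.0) * rho))      # unequal sizes
    info_equal = k * m_bar / (1.0 + (m_bar - 1.0) * rho)    # equal sizes
    return info_unequal / info_equal

sizes = np.random.poisson(lam=20, size=24) + 1   # 24 clusters, varying sizes
re = re_unequal_vs_equal(sizes, rho=0.05)
print(re, 1.0 / re)   # 1/RE is the sample size inflation needed to compensate
```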

2.
Longitudinal studies are often used in biomedical research and clinical trials to evaluate the treatment effect. The association pattern within subjects must be considered in both the sample size calculation and the analysis. One of the most important approaches to analyzing such a study is the generalized estimating equation (GEE) approach proposed by Liang and Zeger, in which a “working correlation structure” is introduced and the association pattern within subjects depends on a vector of association parameters denoted by ρ. Explicit sample size formulas for two‐group comparisons in linear and logistic regression models were obtained by Liu and Liang based on the GEE method. For cluster randomized trials (CRTs), researchers have proposed optimal sample sizes at both the cluster and the individual level as a function of sampling costs and the intracluster correlation coefficient (ICC). In these approaches, the optimal sample sizes depend strongly on the ICC. However, the ICC is usually unknown for CRTs and multicenter trials. To overcome this shortcoming, Van Breukelen et al. consider a range of possible ICC values identified from literature reviews and present maximin designs (MMDs) based on relative efficiency (RE) and efficiency under budget and cost constraints. In this paper, the optimal sample size and number of repeated measurements are derived using GEE models with an exchangeable working correlation matrix under a fixed budget, where “optimal” refers to maximum power for a given sampling budget. The equations for the sample size and the number of repeated measurements are derived for a known parameter value ρ, and a straightforward algorithm is developed for unknown ρ. Applications in practice are discussed. We also discuss the existence of the optimal design when an AR(1) working correlation matrix is assumed. Our proposed method can be extended to scenarios in which the true and working correlation matrices differ.
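The budget trade-off described here (more subjects versus more repeated measurements per subject) can be illustrated with a short grid search, assuming an exchangeable correlation so that the variance of a two-group comparison is proportional to (1 + (m − 1)ρ)/(nm), with a recruitment cost c1 per subject and a cost c2 per measurement; the cost values below are hypothetical, not from the paper.

```python
import numpy as np

def optimal_design(budget, c1, c2, rho, m_max=100):
    """Choose the number of repeated measurements m and subjects n that
    minimize Var ∝ (1 + (m-1)ρ)/(n m) subject to n(c1 + c2 m) <= budget,
    assuming an exchangeable (compound symmetry) correlation."""
    best = None
    for m in range(1, m_max + 1):
        n = int(budget // (c1 + c2 * m))          # subjects affordable at this m
        if n < 2:
            break
        var = (1.0 + (m - 1.0) * rho) / (n * m)   # variance up to a constant
        if best is None or var < best[2]:
            best = (n, m, var)
    return best

n, m, _ = optimal_design(budget=10000, c1=50, c2=5, rho=0.3)
print(n, m)  # the closed form suggests m* ≈ sqrt(c1(1-ρ)/(c2 ρ)) ≈ 4.8 here
```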

3.
Regulatory authorities require that the sample size of a confirmatory trial be calculated prior to the start of the trial. However, the sample size quite often depends on parameters that might not be known in advance of the study. Misspecification of these parameters can lead to under‐ or overestimation of the sample size. Both situations are unfavourable, as the former decreases the power and the latter leads to a waste of resources. Hence, designs have been suggested that allow a re‐assessment of the sample size in an ongoing trial. These methods usually focus on estimating the variance. However, for some methods the performance depends not only on the variance but also on the correlation between measurements. We develop and compare different methods for blinded estimation of the correlation coefficient that are less likely to introduce operational bias when the blinding is maintained. Their performance with respect to bias and standard error is compared with that of the unblinded estimator. We simulated two different settings: one assuming that all group means are the same and one assuming that different groups have different means. Simulation results show that the naïve (one‐sample) estimator is only slightly biased and has a standard error comparable to that of the unblinded estimator. However, if the group means differ, other estimators perform better, depending on the sample size per group and the number of groups.
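A minimal simulation sketch contrasting the naïve (one-sample) blinded estimator with the unblinded pooled within-group estimator, for a bivariate normal outcome measured twice per subject; all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n, delta, rho = 100, 0.5, 0.6   # per-group size, mean difference, true correlation
cov = [[1.0, rho], [rho, 1.0]]
x0 = rng.multivariate_normal([0, 0], cov, size=n)          # control group
x1 = rng.multivariate_normal([delta, delta], cov, size=n)  # treatment group

# Blinded (naive one-sample) estimator: ignore group labels entirely.
pooled = np.vstack([x0, x1])
r_blinded = np.corrcoef(pooled.T)[0, 1]

# Unblinded estimator: pool the within-group cross-products.
def centered(x):
    return x - x.mean(axis=0)

s = centered(x0).T @ centered(x0) + centered(x1).T @ centered(x1)
r_unblinded = s[0, 1] / np.sqrt(s[0, 0] * s[1, 1])

print(r_blinded, r_unblinded)  # blinded estimate drifts upward when delta != 0
```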

4.
The Petersen–Lincoln estimator has been used to estimate the size of a population in a single mark–release experiment. However, the estimator is not valid when the capture sample and recapture sample are not independent. We provide an intuitive interpretation of “independence” between samples based on the 2 × 2 categorical data formed by capture/non‐capture in each of the two samples. From this interpretation, we review a general measure of “dependence” and quantify the correlation bias of the Petersen–Lincoln estimator when two types of dependence (local list dependence and heterogeneity of capture probability) exist. An important implication for the census undercount problem is that instead of using a post‐enumeration sample to assess the undercount of a census, one should conduct a prior enumeration sample to avoid correlation bias. We extend the Petersen–Lincoln method to the case of two populations: a new estimator of the size of the shared population is proposed and its variance is derived. We discuss a special case in which the correlation bias of the proposed estimator due to dependence between samples vanishes. The proposed method is applied to a study of the relapse rate of illicit drug use in Taiwan.
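The classical single mark–release estimator that this paper starts from is N̂ = n₁n₂/m; a small sketch, with the dependence caveat the abstract raises noted in the comments:

```python
def petersen_lincoln(n1, n2, m):
    """Petersen–Lincoln abundance estimate from a single mark–release:
    n1 animals marked in sample 1, n2 caught in sample 2, m recaptures.
    Valid only if the two samples are independent."""
    if m == 0:
        raise ValueError("no recaptures: estimator undefined")
    return n1 * n2 / m

# Under positive dependence (previously caught animals are more likely to be
# recaptured), m is inflated and N is underestimated; negative dependence
# (trap avoidance) has the opposite effect -- the "correlation bias" above.
print(petersen_lincoln(n1=200, n2=150, m=30))  # -> 1000.0
```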

5.
Cluster randomized studies are common in community trials. The standard method for estimating the sample size of a cluster randomized study assumes a common cluster size. In practice, however, cluster sizes often vary. In this paper, we derive a sample size formula for continuous outcomes in cluster randomized studies that accounts for the variability in cluster size. It is shown that the proposed formula for the total number of clusters can be obtained by adding a correction term to the traditional formula based on the average cluster size. Application of these results to the design of a health promotion educational intervention study is discussed.
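The paper derives its own correction term; a related, widely used approximation inflates the usual design effect by the coefficient of variation (CV) of the cluster sizes. The sketch below uses that CV-based approximation (not the paper's exact formula) for a two-arm comparison of means; all inputs are illustrative.

```python
import numpy as np
from scipy.stats import norm

def n_clusters_unequal(delta, sigma, rho, m_bar, cv, alpha=0.05, power=0.8):
    """Clusters per arm for a two-arm comparison of means, inflating the
    standard design effect for cluster-size variability via the CV of
    cluster sizes (an approximation)."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    deff = 1 + ((cv**2 + 1) * m_bar - 1) * rho     # design effect, unequal sizes
    n_ind = 2 * (z * sigma / delta) ** 2           # individuals/arm, no clustering
    return int(np.ceil(n_ind * deff / m_bar))

print(n_clusters_unequal(delta=0.3, sigma=1.0, rho=0.02, m_bar=25, cv=0.0))  # 11
print(n_clusters_unequal(delta=0.3, sigma=1.0, rho=0.02, m_bar=25, cv=0.6))  # 12
```

Setting cv=0 recovers the traditional equal-cluster-size formula, so the difference between the two calls is exactly the correction for cluster-size variability.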

6.
Standard sample size calculation formulas for stepped wedge cluster randomized trials (SW-CRTs) assume that cluster sizes are equal. When cluster sizes vary substantially, ignoring this variation may lead to an under-powered study. We investigate the efficiency of a SW-CRT with varying cluster sizes relative to one with equal cluster sizes, and derive variance estimators for the intervention effect that account for this variation under a mixed effects model—a commonly used approach for analyzing data from cluster randomized trials. When cluster sizes vary, the power of a SW-CRT depends on the order in which clusters receive the intervention, which is determined through randomization. We first derive a variance formula that corresponds to any particular realization of the randomized sequence and propose efficient algorithms to identify upper and lower bounds of the power. We then obtain an “expected” power based on a first-order approximation to the variance formula, where the expectation is taken with respect to all possible randomization sequences. Finally, we provide a variance formula for more general settings where only the arithmetic mean and coefficient of variation of the cluster sizes, instead of the exact cluster sizes, are known at the design stage. We evaluate our methods through simulations and illustrate that the average power of a SW-CRT decreases as the variation in cluster sizes increases, and that the impact is largest when the number of clusters is small.
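A sketch of the sequence-specific variance computation for a cross-sectional stepped wedge design with one cluster per sequence, working on cluster-period means under a random cluster intercept. The variance components, cluster sizes, and brute-force enumeration over orderings are illustrative only; the paper's algorithms bound the power without enumerating all sequences.

```python
import numpy as np
from itertools import permutations

def sw_var(sizes_in_seq_order, T, sigma2=1.0, tau2=0.05):
    """GLS variance of the intervention effect in a stepped wedge design
    (one cluster per sequence, T periods), using cluster-period means:
    Var(mean) = sigma2/m_i + tau2 on the diagonal, Cov = tau2 off it."""
    info = np.zeros((T + 1, T + 1))               # [period effects, treatment]
    for i, m in enumerate(sizes_in_seq_order):
        V = np.full((T, T), tau2) + np.eye(T) * (sigma2 / m)
        Z = np.zeros((T, T + 1))
        Z[:, :T] = np.eye(T)                      # period fixed effects
        Z[:, T] = np.arange(T) >= i + 1           # treated from period i+1 on
        info += Z.T @ np.linalg.inv(V) @ Z
    return np.linalg.inv(info)[T, T]

sizes = [10, 20, 40, 80, 160]                     # 5 clusters, highly variable
vars_ = [sw_var(p, T=6) for p in permutations(sizes)]
print(min(vars_), np.mean(vars_), max(vars_))     # power depends on the ordering
```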

7.
The stepped wedge cluster randomized trial (SW-CRT) is an increasingly popular design for evaluating health service delivery or policy interventions. An essential consideration of this design is the need to account for both within-period and between-period correlations in sample size calculations. Especially when embedded in health care delivery systems, many SW-CRTs may have subclusters nested in clusters, within which outcomes are collected longitudinally. However, existing sample size methods that account for between-period correlations have not allowed for multiple levels of clustering. We present computationally efficient sample size procedures that properly differentiate within-period and between-period intracluster correlation coefficients in SW-CRTs in the presence of subclusters. We introduce an extended block exchangeable correlation matrix to characterize the complex dependencies of outcomes within clusters. For Gaussian outcomes, we derive a closed-form sample size expression that depends on the correlation structure only through two eigenvalues of the extended block exchangeable correlation structure. For non-Gaussian outcomes, we present a generic sample size algorithm based on linearization and elucidate simplifications under canonical link functions. For example, we show that the approximate sample size formula under a logistic linear mixed model depends on three eigenvalues of the extended block exchangeable correlation matrix. We provide an extension to accommodate unequal cluster sizes and validate the proposed methods via simulations. Finally, we illustrate our methods in two real SW-CRTs with subclusters.
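The key structural fact — that such correlation matrices have only a handful of distinct eigenvalues despite their size — is easy to verify numerically. The sketch below builds one plausible block exchangeable structure over periods and subclusters; the parameter labeling (a0/a1 within-period, b0/b1 between-period) is my own shorthand and not necessarily the paper's.

```python
import numpy as np

def ext_block_exch(T, K, m, a0, a1, b0, b1):
    """Block exchangeable correlation for T periods, K subclusters per
    cluster, m individuals per subcluster. a0/a1: same-period correlations
    within/between subclusters; b0/b1: the between-period analogues."""
    I, J = np.eye, lambda d: np.ones((d, d))
    W = (1 - a0) * I(K * m) + (a0 - a1) * np.kron(I(K), J(m)) + a1 * J(K * m)
    B = (b0 - b1) * np.kron(I(K), J(m)) + b1 * J(K * m)
    return np.kron(I(T), W) + np.kron(J(T) - I(T), B)

R = ext_block_exch(T=4, K=3, m=5, a0=0.10, a1=0.05, b0=0.04, b1=0.02)
lam = np.unique(np.round(np.linalg.eigvalsh(R), 10))
print(lam)  # only a few distinct eigenvalues despite a 60 x 60 matrix
```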

8.
The performance of diagnostic tests is often evaluated by estimating their sensitivity and specificity with respect to a traditionally accepted standard test regarded as a “gold standard” for making the diagnosis. Correlated samples of binary data arise in many fields of application, and in site-specific studies the fundamental unit of analysis is occasionally the site rather than the subject. Statistical methods that take into account the within-subject correlation should be employed to estimate the sensitivity and specificity of diagnostic tests, since site-specific results within a subject can be highly correlated. I introduce several statistical methods for estimating the sensitivity and specificity of site-specific diagnostic tests. I apply these techniques to data from a study involving an enzymatic diagnostic test to motivate and illustrate the estimation of the sensitivity and specificity of periodontal diagnostic tests. I present results from a simulation study on the estimation of diagnostic sensitivity when the data are correlated within subjects. Through the simulation study, I compare the performance of the binomial estimator pCBE, the ratio estimator pCRE, the weighted estimator pCWE, the intracluster correlation estimator pCIC, and the generalized estimating equation (GEE) estimator pCGEE in terms of bias, observed variance, mean squared error (MSE), relative efficiency of their variances, and 95 per cent coverage proportions. I recommend using pCBE when σ = 0, and the weighted estimator pCWE when σ = 0.6. When σ = 0.2 or σ = 0.4 and the number of subjects is at least 30, pCGEE performs well.

9.
“Covariate adjustment” in the randomized trial context refers to an estimator of the average treatment effect that adjusts for chance imbalances between study arms in baseline variables (called “covariates”). The baseline variables could include, for example, age, sex, disease severity, and biomarkers. According to two surveys of clinical trial reports, there is confusion about the statistical properties of covariate adjustment. We focus on the analysis of covariance (ANCOVA) estimator, which involves fitting a linear model for the outcome given the treatment arm and baseline variables, and trials that use simple randomization with equal probability of assignment to treatment and control. We prove the following new (to the best of our knowledge) robustness property of ANCOVA to arbitrary model misspecification: Not only is the ANCOVA point estimate consistent (as proved by Yang and Tsiatis, 2001) but so is its standard error. This implies that confidence intervals and hypothesis tests conducted as if the linear model were correct are still asymptotically valid even when the linear model is arbitrarily misspecified, for example, when the baseline variables are nonlinearly related to the outcome or there is treatment effect heterogeneity. We also give a simple, robust formula for the variance reduction (equivalently, sample size reduction) from using ANCOVA. By reanalyzing completed randomized trials for mild cognitive impairment, schizophrenia, and depression, we demonstrate how ANCOVA can achieve variance reductions of 4 to 32%.
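A sketch of the ANCOVA estimator, here paired with a heteroskedasticity-robust (HC0 sandwich) standard error, under 1:1 simple randomization and a deliberately misspecified (nonlinear) outcome model; per the result above, even the usual model-based standard error would be asymptotically valid in this setting. Function names and parameter values are illustrative.

```python
import numpy as np

def ancova_robust(y, a, x):
    """ANCOVA estimate of the average treatment effect with an HC0
    sandwich standard error: regress y on (1, a, x), report the
    coefficient and SE for the treatment indicator a."""
    X = np.column_stack([np.ones_like(a, dtype=float), a, x])
    XtX_inv = np.linalg.inv(X.T @ X)
    beta = XtX_inv @ X.T @ y
    resid = y - X @ beta
    meat = X.T @ (X * resid[:, None] ** 2)         # sum of x_i x_i' e_i^2
    cov = XtX_inv @ meat @ XtX_inv
    return beta[1], np.sqrt(cov[1, 1])

rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)
a = rng.integers(0, 2, size=n)                     # 1:1 simple randomization
y = 1.0 * a + np.sin(3 * x) + rng.normal(size=n)   # nonlinear in x
est, se = ancova_robust(y, a, x)
print(est, se)  # consistent for the ATE despite the misspecified linear model
```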

10.
Many late-phase clinical trials recruit subjects at multiple study sites. This introduces a hierarchical structure into the data that can result in a power loss compared to a more homogeneous single-center trial. Building on a recently proposed approach to sample size determination, we suggest a sample size recalculation procedure for multicenter trials with continuous endpoints. The procedure estimates nuisance parameters at interim from noncomparative data and recalculates the required sample size based on these estimates. In contrast to other sample size calculation methods for multicenter trials, our approach assumes a mixed effects model and does not rely on balanced data within centers. It is therefore advantageous, especially for sample size recalculation at interim. We illustrate the proposed methodology with a study evaluating a diabetes management system. Monte Carlo simulations are carried out to evaluate the operating characteristics of the sample size recalculation procedure using comparative as well as noncomparative data, assessing their dependence on parameters such as between-center heterogeneity, residual variance of observations, treatment effect size, and number of centers. We compare two different estimators for between-center heterogeneity, an unadjusted and a bias-adjusted estimator, both based on quadratic forms. For the proposed unadjusted estimator, the Type I error probability and statistical power are close to their nominal levels for all parameter combinations considered in our simulation study, whereas the adjusted estimator exhibits some Type I error rate inflation. Overall, the sample size recalculation procedure can be recommended to mitigate risks arising from misspecified nuisance parameters at the planning stage.

11.
Sample size calculations based on two‐sample comparisons of slopes in repeated measurements have been reported by many investigators. In contrast, the literature has paid relatively little attention to the design and analysis of K‐sample trials in repeated measurement studies where K is 3 or greater. Jung and Ahn (2003) derived a closed-form sample size formula for two‐sample comparisons of slopes by taking into account the impact of missing data. We extend their method to compare K‐sample slopes in repeated measurement studies using the generalized estimating equation (GEE) approach based on an independent working correlation structure. Since the sample size formula is based on asymptotic theory, we investigate its finite-sample performance. The proposed sample size formula is illustrated using a clinical trial example.

12.
Shults J, Morrow AL. Biometrics 2002;58(3):521–530.
This article considers data with two levels of association. Our motivating example is an international intervention trial with repeated observations on subjects who reside within geographically defined clusters. To account for the potential correlation within clusters and within the repeated measurements that pertain to each subject, we apply a method based on generalized estimating equations for a correlation structure proposed by Lefkopoulou, Moore, and Ryan (1989, Journal of the American Statistical Association 84, 810-815).

13.
Fewster RM. Biometrics 2011;67(4):1518–1531.
In spatial surveys for estimating the density of objects in a survey region, systematic designs will generally yield lower variance than random designs. However, estimating the systematic variance is well known to be a difficult problem. Existing methods tend to overestimate the variance, so although the variance is genuinely reduced, it is over‐reported, and the gain from the more efficient design is lost. The current approaches to estimating a systematic variance for spatial surveys are to approximate the systematic design by a random design, or approximate it by a stratified design. Previous work has shown that approximation by a random design can perform very poorly, while approximation by a stratified design is an improvement but can still be severely biased in some situations. We develop a new estimator based on modeling the encounter process over space. The new “striplet” estimator has negligible bias and excellent precision in a wide range of simulation scenarios, including strip‐sampling, distance‐sampling, and quadrat‐sampling surveys, and including populations that are highly trended or have strong aggregation of objects. We apply the new estimator to survey data for the spotted hyena (Crocuta crocuta) in the Serengeti National Park, Tanzania, and find that the reported coefficient of variation for estimated density is 20% using approximation by a random design, 17% using approximation by a stratified design, and 11% using the new striplet estimator. This large reduction in reported variance is verified by simulation.

14.
The Aalen–Johansen estimator is the standard nonparametric estimator of the cumulative incidence function in competing risks. Estimating its variance in small samples has attracted some interest recently, together with a critique of the usual martingale‐based estimators. We show that the preferred estimator equals a Greenwood‐type estimator that has been derived as a recursion formula using counting processes and martingales in a more general multistate framework. We also extend previous simulation studies on estimating the variance of the Aalen–Johansen estimator in small samples to left‐truncated observation schemes, which may conveniently be handled within the counting processes framework. This investigation is motivated by a real data example on spontaneous abortion in pregnancies exposed to coumarin derivatives, where both competing risks and left‐truncation have recently been shown to be crucial methodological issues (Meister and Schaefer (2008), Reproductive Toxicology 26, 31–35). Multistate‐type software and data are available online to perform the analyses. The Greenwood‐type estimator is recommended for use in practice.
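For reference, the Aalen–Johansen point estimator of the cumulative incidence function is straightforward to compute in the simplest setting. A sketch for right-censored data without left-truncation (variance estimation, the focus of the paper, is omitted; the toy data are illustrative):

```python
import numpy as np

def aalen_johansen_cif(time, cause, cause_of_interest=1):
    """Aalen–Johansen estimate of the cumulative incidence function for one
    competing risk (censoring coded as cause 0, no left truncation):
    CIF(t) = sum over event times s <= t of S(s-) * d_k(s)/n(s)."""
    time, cause = np.asarray(time, float), np.asarray(cause)
    grid = np.unique(time[cause > 0])
    surv, cif, out = 1.0, 0.0, []
    for s in grid:
        n_risk = np.sum(time >= s)                 # at risk just before s
        d_any = np.sum((time == s) & (cause > 0))  # events of any cause at s
        d_k = np.sum((time == s) & (cause == cause_of_interest))
        cif += surv * d_k / n_risk                 # increment for cause k
        surv *= 1.0 - d_any / n_risk               # update overall survival
        out.append((s, cif))
    return out

t = [2, 3, 3, 5, 6, 7, 9, 11]
c = [1, 0, 2, 1, 0, 2, 1, 0]                       # 0 = censored
for s, f in aalen_johansen_cif(t, c):
    print(s, round(f, 3))
```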

15.
Cai J, Sen PK, Zhou H. Biometrics 1999;55(1):182–189.
A random effects model for analyzing multivariate failure time data is proposed. The work is motivated by the need to assess the mean treatment effect in a multicenter clinical trial, assuming that the centers are a random sample from an underlying population. An estimating equation for the mean hazard ratio parameter is proposed. The proposed estimator is shown to be consistent and asymptotically normally distributed. A variance estimator, based on large sample theory, is proposed. Simulation results indicate that the proposed estimator performs well in finite samples. The proposed variance estimator effectively corrects the bias of the naive variance estimator, which assumes independence of individuals within a group. The methodology is illustrated with a clinical trial data set from the Studies of Left Ventricular Dysfunction, which shows that the variability of the treatment effect is higher than that found with simpler models.

16.
Wang CY. Biometrics 2000;56(1):106–112.
Consider the problem of estimating the correlation between two nutrient measurements, such as the percent energy from fat obtained from a food frequency questionnaire (FFQ) and that from repeated food records or 24-hour recalls. Under a classical additive model for repeated food records, it is known that there is an attenuation effect on the correlation estimation if the sample average of repeated food records for each subject is used to estimate the underlying long-term average. This paper considers the case in which the selection probability of a subject for participation in the calibration study, in which repeated food records are measured, depends on the corresponding FFQ value, and the repeated longitudinal measurement errors have an autoregressive structure. This paper investigates a normality-based estimator and compares it with a simple method of moments. Both methods are consistent if the first two moments of nutrient measurements exist. Furthermore, joint estimating equations are applied to estimate the correlation coefficient and related nuisance parameters simultaneously. This approach provides a simple sandwich formula for the covariance estimation of the estimator. Finite sample performance is examined via a simulation study, and the proposed weighted normality-based estimator performs well under various distributional assumptions. The methods are applied to real data from a dietary assessment study.
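The attenuation effect mentioned above, and its basic method-of-moments correction, can be sketched under the classical additive error model with independent replicate errors. Note that the paper's setting additionally handles FFQ-dependent selection and autoregressive errors, which this sketch ignores; all names and parameter values are illustrative.

```python
import numpy as np

def deattenuated_corr(q, w):
    """Method-of-moments deattenuation when correlating an FFQ value q
    (one per subject) with the mean of k replicate records w (n x k),
    under the classical additive model w_ij = T_i + e_ij."""
    n, k = w.shape
    w_bar = w.mean(axis=1)
    sig2_e = np.mean(w.var(axis=1, ddof=1))        # within-person error variance
    sig2_t = w_bar.var(ddof=1) - sig2_e / k        # variance of true intake
    r_obs = np.corrcoef(q, w_bar)[0, 1]
    return r_obs * np.sqrt(1.0 + sig2_e / (k * sig2_t))

rng = np.random.default_rng(2)
n, k, rho = 400, 4, 0.6
t = rng.normal(size=n)                             # true long-term intake
q = rho * t + np.sqrt(1 - rho**2) * rng.normal(size=n)
w = t[:, None] + rng.normal(size=(n, k))           # noisy replicate records
print(np.corrcoef(q, w.mean(axis=1))[0, 1],        # attenuated, below 0.6
      deattenuated_corr(q, w))                     # corrected, near 0.6
```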

17.
This paper reviews and discusses the role of Empirical Bayes methodology in medical statistics in the last 50 years. It gives some background on the origin of the empirical Bayes approach and its link with the famous Stein estimator. The paper describes the application in four important areas in medical statistics: disease mapping, health care monitoring, meta‐analysis, and multiple testing. It ends with a warning that the application of the outcome of an empirical Bayes analysis to the individual “subjects” is a delicate matter that should be handled with prudence and care.
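The link with the Stein estimator can be made concrete: shrinking many noisy estimates toward a common value reduces aggregate risk, which is the mechanism behind empirical Bayes disease mapping and health care monitoring. A minimal positive-part James–Stein sketch (the simulation setup is illustrative):

```python
import numpy as np

def james_stein(x, sigma2=1.0):
    """Positive-part James–Stein shrinkage of p >= 3 independent means
    X_i ~ N(theta_i, sigma2) toward zero; dominates the MLE x in total
    squared-error risk."""
    p = len(x)
    shrink = max(0.0, 1.0 - (p - 2) * sigma2 / np.sum(x**2))
    return shrink * x

rng = np.random.default_rng(3)
theta = rng.normal(scale=0.5, size=50)     # 50 "subjects", true effects
x = theta + rng.normal(size=50)            # one noisy observation each
mse = lambda est: np.mean((est - theta) ** 2)
print(mse(x), mse(james_stein(x)))         # shrinkage lowers the aggregate MSE
```

The paper's closing warning applies directly here: the aggregate risk improves, but any individual shrunken estimate can be worse than the unshrunken one.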

18.
Clinical trials are often planned with high uncertainty about the variance of the primary outcome variable. A poor estimate of the variance, however, may lead to an over‐ or underpowered study. In the internal pilot study design, the sample variance is calculated at an interim step and the sample size can be adjusted if necessary. The available recalculation procedures base the sample size recalculation only on data from patients who have already completed the study. In this article, we consider a variance estimator that takes into account both the data at the endpoint and at an intermediate point of the treatment phase. We derive asymptotic properties of this estimator and of the related sample size recalculation procedure. In a simulation study, the performance of the proposed approach is evaluated and compared with that of the procedure that uses only long‐term data. Simulation results demonstrate that the sample size resulting from the proposed procedure generally shows smaller variability. At the same time, the Type I error rate is not inflated and the achieved power is close to the desired value.

19.
Marginal structural models for time‐fixed treatments fit using inverse‐probability weighted estimating equations are increasingly popular. Nonetheless, the resulting effect estimates are subject to finite‐sample bias when data are sparse, as is typical for large‐sample procedures. Here we propose a semi‐Bayes estimation approach which penalizes or shrinks the estimated model parameters to improve finite‐sample performance. This approach uses simple symmetric data‐augmentation priors. Limited simulation experiments indicate that the proposed approach reduces finite‐sample bias and improves confidence‐interval coverage when the true values lie within the central “hill” of the prior distribution. We illustrate the approach with data from a nonexperimental study of HIV treatments.
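Data-augmentation priors are typically implemented by appending prior pseudo-records to the data set; for a single coefficient, the resulting shrinkage is well approximated by an inverse-variance-weighted average of the estimate and the prior mean. A sketch of that approximation (the prior scale below is a hypothetical choice, not the paper's):

```python
import numpy as np

def semi_bayes(beta_hat, se_hat, prior_mean=0.0, prior_sd=np.log(4) / 1.96):
    """Approximate semi-Bayes shrinkage of a log-odds-ratio (or log-hazard)
    estimate toward a normal prior via inverse-variance weighting. The
    default prior puts ~95% of its mass on odds ratios between 1/4 and 4."""
    w_data, w_prior = 1.0 / se_hat**2, 1.0 / prior_sd**2
    post_mean = (w_data * beta_hat + w_prior * prior_mean) / (w_data + w_prior)
    post_se = np.sqrt(1.0 / (w_data + w_prior))
    return post_mean, post_se

# A sparse-data IPW estimate (OR = 3.0, wide CI) gets pulled toward the null.
b, se = semi_bayes(beta_hat=np.log(3.0), se_hat=0.6)
print(np.exp(b), np.exp(b - 1.96 * se), np.exp(b + 1.96 * se))
```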

20.
We discuss a method for simultaneously estimating the fixed parameters of a generalized linear mixed-effects model and the random-effects distribution, for which no parametric assumption is made. In addition, classifying subjects into clusters according to their random regression coefficients is a natural by-product of the proposed method. An alternative to maximum likelihood, the maximum-penalized-likelihood method, is used to avoid estimating “too many” clusters. Consistency and asymptotic normality of the estimators are presented. We also provide robust variance estimators for the fixed-parameter estimates that remain consistent even in the presence of misspecification. The methodology is illustrated by an application to a weight loss study.
