Similar Articles (20 results)
1.
Stepped wedge cluster randomised trials introduce interventions to groups of clusters in a random order and have been used to evaluate interventions for health and wellbeing. Standardised guidance for reporting stepped wedge trials is currently absent, and a range of potential analytic approaches have been described. We systematically identified and reviewed recently published (2010 to 2014) analyses of stepped wedge trials. We extracted data and described the range of reporting and analysis approaches taken across all studies. We critically appraised the strategy described by three trials chosen to reflect a range of design characteristics. Ten reports of completed analyses were identified. Reporting varied: seven of the studies included a CONSORT diagram, and only five also included a diagram of the intervention rollout. Seven assessed the balance achieved by randomisation, and there was considerable heterogeneity among the approaches used. Only six reported the trend in the outcome over time. All used both ‘horizontal’ and ‘vertical’ information to estimate the intervention effect: eight adjusted for time with a fixed effect, one used time as a condition using a Cox proportional hazards model, and one did not account for time trends. The majority used simple random effects to account for clustering and repeat measures, assuming a common intervention effect across clusters. Outcome data from before and after the rollout period were often included in the primary analysis. Potential lags in the outcome response to the intervention were rarely investigated. We use three case studies to illustrate different approaches to analysis and reporting. There is considerable heterogeneity in the reporting of stepped wedge cluster randomised trials. Correct specification of the time-trend underlies the validity of the analytical approaches. The possibility that intervention effects vary by cluster or over time should be considered. 
Further work should be done to standardise the reporting of the design, attrition, balance, and time-trends in stepped wedge trials.
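The rollout structure underlying the analyses reviewed above can be made concrete with a small sketch. The four-cluster, five-period layout below is an illustrative assumption, not one of the reviewed trials; it shows where the 'vertical' (within-period) and 'horizontal' (within-cluster, before/after crossover) information comes from, and why the time trend must be modelled.

```python
# Sketch of a stepped wedge rollout: I = 4 clusters, T = 5 periods,
# one cluster crossing over per step after a baseline period.
# These dimensions are illustrative only.

def stepped_wedge_design(n_clusters, n_periods):
    """Return X[i][t] = 1 if cluster i is on intervention in period t.
    Clusters cross over one at a time and never return to control."""
    X = [[0] * n_periods for _ in range(n_clusters)]
    for i in range(n_clusters):
        for t in range(n_periods):
            if t >= i + 1:          # cluster i switches at period i + 1
                X[i][t] = 1
    return X

X = stepped_wedge_design(4, 5)
# 'Vertical' information: within a period, some clusters are treated and
# some are not. 'Horizontal' information: within a cluster, outcomes are
# compared before vs after crossover -- which is why the secular time
# trend must be specified correctly, e.g. as a fixed effect per period.
```

Each row is one cluster's exposure history; each column is one period's treated/untreated contrast.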

2.
BACKGROUND: There is evidence to suggest that delivery of diabetes self-management support by diabetes educators in primary care may improve patient care processes and patient clinical outcomes; however, the evaluation of such a model in primary care is nonexistent in Canada. This article describes the design for the evaluation of the implementation of Mobile Diabetes Education Teams (MDETs) in primary care settings in Canada. METHODS: This study will use a non-blinded, stepped wedge cluster-randomized controlled trial design to evaluate the Mobile Diabetes Education Teams' intervention in improving patient clinical and care process outcomes. A total of 1,200 patient charts at participating primary care sites will be reviewed for data extraction. Eligible patients will be those aged ≥18 who have type 2 diabetes and a hemoglobin A1c (HbA1c) of ≥8%. Clusters (that is, primary care sites) will be randomized to the intervention and control groups using a block randomization procedure with practice size as the blocking factor. A stepped wedge design will be used to sequentially roll out the intervention so that all clusters eventually receive it. The time at which each cluster begins the intervention is randomized to one of four rollout periods (0, 6, 12, and 18 months). Clusters that are randomized into the intervention later will act as controls for those receiving the intervention earlier. The primary outcome measure will be the difference between the intervention and control groups in the proportion of patients who achieve the recommended HbA1c target of ≤7%. Qualitative work (in-depth interviews with primary care physicians, MDET educators and patients; and MDET educators' field notes and debriefing sessions) will be undertaken to assess the implementation process and effectiveness of the MDET intervention. Trial registration: ClinicalTrials.gov NCT01553266.

3.
Generalized estimating equations (GEE) are used in the analysis of cluster randomized trials (CRTs) because: 1) the resulting intervention effect estimate has the desired marginal or population-averaged interpretation, and 2) most statistical packages contain programs for GEE. However, GEE tends to underestimate the standard error of the intervention effect estimate in CRTs. In contrast, penalized quasi-likelihood (PQL) estimates the standard error of the intervention effect in CRTs much better than GEE but is used less frequently because: 1) it generates an intervention effect estimate with a conditional, or cluster-specific, interpretation, and 2) PQL is not a part of most statistical packages. We propose taking the variance estimator from PQL and re-expressing it as a sandwich-type estimator that could be easily incorporated into existing GEE packages, thereby making GEE useful for the analysis of CRTs. Using numerical examples and data from an actual CRT, we compare the performance of this variance estimator to others proposed in the literature, and we find that our variance estimator performs as well as or better than its competitors.
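The PQL-derived sandwich estimator above is specific to that paper's derivation, but the general shape of a sandwich (robust) variance for a clustered comparison can be sketched as follows. The two-arm toy data and the arm-mean model are illustrative assumptions, not the authors' example; the point is only that the "meat" is built from whole-cluster residual sums, which is what an independence-working analysis without correction understates.

```python
# Hedged sketch: a Liang-Zeger-type sandwich variance for the difference
# in arm means of a cluster randomized trial, contrasted with the naive
# (independence) variance. Data are invented for illustration.

def arm_stats(clusters):
    """clusters: list of lists of outcomes, one inner list per cluster."""
    obs = [y for cl in clusters for y in cl]
    n = len(obs)
    mean = sum(obs) / n
    # Naive variance of the arm mean: s^2 / n, ignoring clustering.
    s2 = sum((y - mean) ** 2 for y in obs) / (n - 1)
    naive = s2 / n
    # Sandwich "meat": squared within-cluster sums of residuals.
    robust = sum(sum(y - mean for y in cl) ** 2 for cl in clusters) / n ** 2
    return mean, naive, robust

control = [[1, 1], [3, 3]]   # toy clusters, control arm
treated = [[2, 2], [6, 6]]   # toy clusters, intervention arm

m0, naive0, robust0 = arm_stats(control)
m1, naive1, robust1 = arm_stats(treated)
effect = m1 - m0                    # difference in arm means
var_naive = naive0 + naive1         # understates uncertainty here
var_robust = robust0 + robust1      # accounts for within-cluster correlation
```

Because outcomes are identical within each toy cluster, the robust variance exceeds the naive one, mirroring the standard-error underestimation the abstract describes.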

4.
Standard sample size calculation formulas for stepped wedge cluster randomized trials (SW-CRTs) assume that cluster sizes are equal. When cluster sizes vary substantially, ignoring this variation may lead to an under-powered study. We investigate the relative efficiency of a SW-CRT with varying cluster sizes to equal cluster sizes, and derive variance estimators for the intervention effect that account for this variation under a mixed effects model—a commonly used approach for analyzing data from cluster randomized trials. When cluster sizes vary, the power of a SW-CRT depends on the order in which clusters receive the intervention, which is determined through randomization. We first derive a variance formula that corresponds to any particular realization of the randomized sequence and propose efficient algorithms to identify upper and lower bounds of the power. We then obtain an “expected” power based on a first-order approximation to the variance formula, where the expectation is taken with respect to all possible randomization sequences. Finally, we provide a variance formula for more general settings where only the cluster size arithmetic mean and coefficient of variation, instead of exact cluster sizes, are known in the design stage. We evaluate our methods through simulations and illustrate that the average power of a SW-CRT decreases as the variation in cluster sizes increases, and the impact is largest when the number of clusters is small.

5.
Liu Q, Chi GY. Biometrics 2001, 57(1):172-177.
Proschan and Hunsberger (1995, Biometrics 51, 1315-1324) proposed a two-stage adaptive design that maintains the Type I error rate. For practical applications, a two-stage adaptive design is also required to achieve a desired statistical power while limiting the maximum overall sample size. In our proposal, a two-stage adaptive design comprises a main stage and an extension stage, where the main stage has sufficient power to reject the null under the anticipated effect size and the extension stage allows increasing the sample size in case the true effect size is smaller than anticipated. For statistical inference, methods for obtaining the overall adjusted p-value, point estimate and confidence intervals are developed. An exact two-stage test procedure is also outlined for robust inference.

6.
Stepped wedge cluster randomized trials (SWCRTs) are increasingly used for the evaluation of complex interventions in health services research. They randomly allocate treatments to clusters that switch to the intervention under investigation at variable time points without returning to the control condition. The resulting unbalanced allocation over time periods and the uncertainty about the underlying correlation structures at cluster level render designing and analyzing SWCRTs a challenge. Adjusting for time trends is recommended; appropriate parameterizations depend on the particular context. For sample size calculation, the covariance structure and covariance parameters are usually assumed to be known. These assumptions greatly affect the influence single cluster-period cells have on the effect estimate. Thus, it is important to understand how cluster-period cells contribute to the treatment effect estimate. We therefore discuss two measures of cell influence that are functions of the design characteristics and covariance structure only, and can thus be calculated at the planning stage: the coefficient matrix as discussed by Matthews and Forbes, and the information content (IC) as introduced by Kasza and Forbes. The main result is a new formula for IC that is more general and computationally more efficient. The formula applies to any generalized least squares estimator, in particular for any type of time trend adjustment or non-block-diagonal covariance matrices. We further show a functional relationship between IC and the coefficient matrix. We give two examples that tie in with the current literature. All discussed tools and methods are implemented in the R package SteppedPower.

7.
The genetic mapping of complex traits has been challenging and has required new statistical methods that are robust to misspecified models. Liang et al. proposed a robust multipoint method that can be used to simultaneously estimate, on the basis of sib-pair linkage data, both the position of a trait locus on a chromosome and its effect on disease status. The advantage of their method is that it does not require specification of an underlying genetic model, so estimation of the position of a trait locus on a specified chromosome and of its standard error is robust to a wide variety of genetic mechanisms. If multiple loci influence the trait, the method models the marginal effect of a locus on a specified chromosome. The main critical assumption is that there is only one trait locus on the chromosome of interest. We extend this method to different types of affected relative pairs (ARPs) by two approaches. One approach is to estimate the position of a trait locus yet allow unconstrained trait-locus effects across different types of ARPs. This robust approach allows for differences in sharing alleles identical-by-descent across different types of ARPs. Some examples for which an unconstrained model would apply are differences due to secular changes in diagnostic methods that can change the frequency of phenocopies among different types of relative pairs, environmental factors that modify the genetic effect, epistasis, and variation in marker-information content. However, this unconstrained model requires a parameter for each type of relative pair. To reduce the number of parameters, we propose a second approach that models the marginal effect of a susceptibility locus. This constrained model is robust for a trait caused by either a single locus or by multiple loci without epistasis. To evaluate the adequacy of the constrained model, we developed a robust score statistic. 
These methods are applied to a prostate cancer linkage study, which highlights their potential advantages and limitations.

8.
Population stratification may confound the results of genetic association studies among unrelated individuals from admixed populations. Several methods have been proposed to estimate ancestral information in admixed populations and have been used to adjust for population stratification in genetic association tests. We evaluate the performance of three such methods: maximum likelihood estimation, ADMIXMAP and Structure, using various simulated data sets and real data from Latino subjects participating in a genetic study of asthma. All three methods provide similarly accurate ancestry estimates and control the type I error rate at approximately the same level. The most important factor in determining the accuracy of the ancestry estimate and in minimizing the type I error rate is the number of markers used to estimate ancestry. We demonstrate that approximately 100 ancestry informative markers (AIMs) are required to obtain ancestry estimates that correlate with the true individual ancestral proportions with correlation coefficients above 0.9. In addition, after accounting for the ancestry information in association tests, the excess type I error rate is controlled at the 5% level when 100 markers are used to estimate ancestry. However, since the effect of admixture on the type I error rate worsens with sample size, the accuracy of ancestry estimates also needs to increase to make the appropriate correction. Using data from the Latino subjects, we also apply these methods to an association study between body mass index and 44 AIMs. These simulations are meant to provide some practical guidelines for investigators conducting association studies in admixed populations.

9.
In population-based case-control association studies, the standard chi-square (χ²) test is often used to investigate association between a candidate locus and disease. However, it is well known that this test may be biased in the presence of population stratification and/or genotyping error. Unlike some other biases, this bias does not go away with increasing sample size; on the contrary, the false-positive rate becomes much larger as the sample size increases. The usual family-based designs are robust against population stratification, but they are sensitive to genotyping error. In this article, we propose a novel method of simultaneously correcting for the bias arising from population stratification and/or genotyping error in case-control studies. The appropriate corrections depend on the sample odds ratios of the standard 2×3 genotype-by-status tables for cases and controls from null loci, so the test is simple to apply. The corrected test is robust against misspecification of the genetic model. If the null hypothesis of no association is rejected, the corrections can further be used to estimate the effect of the genetic factor. We conducted a simulation study to investigate the performance of the new method, using parameter values similar to those found in real-data examples. The results show that the corrected test approximately maintains the expected type I error rate under various simulation conditions. It also improves the power of the association test in the presence of population stratification and/or genotyping error. The discrepancy in power between the tests with and without correction tends to be more extreme as the magnitude of the bias becomes larger. The bias-correction method proposed in this article should therefore be useful for the genetic analysis of complex traits.
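For reference, the uncorrected test that the above method starts from is the ordinary Pearson χ² on a 2×3 genotype-by-status table. The sketch below implements only that naive test, not the proposed bias correction, and the counts are invented for illustration.

```python
# Pearson chi-square statistic for a 2x3 (case/control by genotype) table.
# This is the uncorrected test only; the article's bias correction is not
# reproduced here. Counts are purely illustrative.

def chi_square_2x3(table):
    rows = [sum(r) for r in table]
    cols = [sum(table[i][j] for i in range(2)) for j in range(3)]
    total = sum(rows)
    stat = 0.0
    for i in range(2):
        for j in range(3):
            expected = rows[i] * cols[j] / total
            stat += (table[i][j] - expected) ** 2 / expected
    return stat                      # compare to a chi-square with 2 df

cases    = [30, 50, 20]              # genotype counts AA, Aa, aa (made up)
controls = [50, 40, 10]
stat = chi_square_2x3([cases, controls])
```

Under stratification or genotyping error, the null distribution of this statistic is no longer χ² with 2 df, which is the bias the abstract's correction targets.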

10.
The ability to accurately estimate the sample size required by a stepped wedge (SW) cluster randomized trial (CRT) routinely depends upon the specification of several nuisance parameters. If these parameters are misspecified, the trial could be overpowered, leading to increased cost, or underpowered, increasing the likelihood of a false negative. We address this issue here for cross-sectional SW-CRTs, analyzed with a particular linear mixed model, by proposing methods for blinded and unblinded sample size reestimation (SSRE). First, blinded estimators for the variance parameters of a SW-CRT analyzed using the Hussey and Hughes model are derived. Following this, procedures for blinded and unblinded SSRE after any time period in a SW-CRT are detailed. The performance of these procedures is then examined and contrasted using two example trial design scenarios. We find that if the two key variance parameters were underspecified by 50%, the SSRE procedures were able to increase power over the conventional SW-CRT design by up to 41%, resulting in an empirical power above the desired level. Thus, though there are practical issues to consider, the performance of the procedures means researchers should consider incorporating SSRE into future SW-CRTs.
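A sketch of why these nuisance parameters matter: under the Hussey and Hughes cross-sectional model, the treatment-effect variance has a closed form driven by the cell-mean variance σ² and the cluster-level variance τ², and power follows directly. The design layout and parameter values below are illustrative assumptions; the formula is the standard Hussey and Hughes (2007) expression, not this paper's blinded estimators.

```python
import math

def hh_variance(X, sigma2, tau2):
    """Closed-form variance of the treatment effect under the Hussey-Hughes
    cross-sectional model. X[i][t] is the treatment indicator, sigma2 the
    variance of a cluster-period mean, tau2 the cluster-level variance."""
    I, T = len(X), len(X[0])
    U = sum(sum(row) for row in X)
    W = sum(sum(X[i][t] for i in range(I)) ** 2 for t in range(T))
    V = sum(sum(row) ** 2 for row in X)
    num = I * sigma2 * (sigma2 + T * tau2)
    den = (I * U - W) * sigma2 + (U * U + I * T * U - T * W - I * V) * tau2
    return num / den

def power(theta, var, z_alpha=1.959964):
    """Two-sided power via the normal CDF (math.erf)."""
    z = theta / math.sqrt(var) - z_alpha
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Standard 4-cluster, 5-period wedge: cluster i crosses over at period i+1.
X = [[1 if t >= i + 1 else 0 for t in range(5)] for i in range(4)]
p_true = power(0.5, hh_variance(X, sigma2=0.25, tau2=0.02))
p_optimistic = power(0.5, hh_variance(X, sigma2=0.25, tau2=0.01))
# Underspecifying tau2 at the design stage makes the trial look better
# powered than it is -- the motivation for mid-trial reestimation.
```

Comparing `p_true` with `p_optimistic` quantifies the optimism from an underspecified τ², which SSRE procedures are designed to correct.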

11.
We develop a Bayes factor-based approach for the design of non-inferiority clinical trials with a focus on controlling type I error and power. Historical data are incorporated in the Bayesian design via the power prior discussed in Ibrahim and Chen (Stat Sci 15:46–60, 2000). The properties of the proposed method are examined in detail. An efficient simulation-based computational algorithm is developed to calculate the Bayes factor, type I error, and power. The proposed methodology is applied to the design of a non-inferiority medical device clinical trial.

12.
Yip PS, Chan KS, Wan EC. Biometrics 2002, 58(4):852-861.
We consider the problem of estimating the population size for an open population where the data are collected over secondary periods within primary periods according to a robust design suggested by Pollock (1982, Journal of Wildlife Management 46, 757-760). A conditional likelihood is used to estimate the parameters associated with a generalized linear model in which the capture probability is assumed to have a logistic form depending on individual covariates. A Horvitz-Thompson-type estimator is used to estimate the population size for each primary period and the survival probabilities between primary periods. The properties of the proposed estimators are investigated through simulation, and the estimators are found to perform well. Data from a small-mammal capture-recapture study with this robust design, conducted at Dummy Bottom within Browns Park National Wildlife Refuge, are analyzed.
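The Horvitz-Thompson idea in the abstract can be sketched in a few lines: each captured animal is weighted by the inverse of its modelled capture probability. The logistic coefficients and the simulation below are illustrative assumptions, not the paper's data or fitted model.

```python
import math
import random

def capture_prob(x, b0, b1):
    """Logistic capture probability given an individual covariate x."""
    return 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))

def ht_estimate(captured_x, b0, b1):
    """Horvitz-Thompson-type abundance estimate for one primary period:
    sum over captured animals of 1 / (capture probability)."""
    return sum(1.0 / capture_prob(x, b0, b1) for x in captured_x)

# Illustrative simulation: N = 1000 animals, covariate-dependent capture.
random.seed(1)
N, b0, b1 = 1000, 1.0, 0.5
covariates = [random.uniform(-1.0, 1.0) for _ in range(N)]
captured = [x for x in covariates if random.random() < capture_prob(x, b0, b1)]
N_hat = ht_estimate(captured, b0, b1)   # close to N when the model is right
```

In practice the coefficients would come from the conditional-likelihood fit rather than being known, which is where the estimator's sampling variability enters.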

13.
The stepped wedge cluster randomized trial (SW-CRT) is an increasingly popular design for evaluating health service delivery or policy interventions. An essential consideration of this design is the need to account for both within-period and between-period correlations in sample size calculations. Especially when embedded in health care delivery systems, many SW-CRTs may have subclusters nested in clusters, within which outcomes are collected longitudinally. However, existing sample size methods that account for between-period correlations have not allowed for multiple levels of clustering. We present computationally efficient sample size procedures that properly differentiate within-period and between-period intracluster correlation coefficients in SW-CRTs in the presence of subclusters. We introduce an extended block exchangeable correlation matrix to characterize the complex dependencies of outcomes within clusters. For Gaussian outcomes, we derive a closed-form sample size expression that depends on the correlation structure only through two eigenvalues of the extended block exchangeable correlation structure. For non-Gaussian outcomes, we present a generic sample size algorithm based on linearization and elucidate simplifications under canonical link functions. For example, we show that the approximate sample size formula under a logistic linear mixed model depends on three eigenvalues of the extended block exchangeable correlation matrix. We provide an extension to accommodate unequal cluster sizes and validate the proposed methods via simulations. Finally, we illustrate our methods in two real SW-CRTs with subclusters.

14.
Population-level laterality is generally considered to reflect functional brain specialization. Consequently, the strength of population-level laterality in manipulatory tasks is predicted to positively correlate with task complexity. This relationship has not been investigated in tool manufacture. Here, we report the correlation between strength of laterality and design complexity in the manufacture of New Caledonian crows' three pandanus tool designs: wide, narrow and stepped designs. We documented indirect evidence of over 5,800 tool manufactures on 1,232 pandanus trees at 23 sites. We found that the strength of laterality in tool manufacture was correlated with design complexity in three ways: (i) the strongest effect size among the population-level edge biases for each design was for the more complex, stepped design, (ii) the strength of laterality at individual sites was on average greater for the stepped design than it was for the simpler wide and narrow, non-stepped designs, and (iii) there was a positive, but non-significant, trend for a correlation between the strength of laterality and the number of steps on a stepped tool. These three aspects together indicate that greater design complexity generally elicits stronger lateralization of crows' pandanus tool manufacture.

15.
This paper investigates the use of the quasi-likelihood, extended quasi-likelihood, and pseudo-likelihood approaches to estimating and testing the mean parameters under two variance models, M1: φμ^θ(1 + μφ) and M2: φμ^θ(1 + τ). A simulation was conducted to compare the bias, standard deviation, and type I error of the Wald tests, based on the model-based and robust variance estimates, using the three semi-parametric approaches under four mixed Poisson models, two variance structures, and two sample sizes. All methods perform reasonably well in terms of bias. The type I error of the Wald test, based on either the model-based or the robust estimate, tends to be larger than the nominal level when over-dispersion is moderate. The extended quasi-likelihood method with variance model M1 performs more consistently, in terms of efficiency and control of the type I error, than with model M2, and better than the pseudo-likelihood approach with either the M1 or the M2 model. The model-based estimate appears to perform better than the robust estimate when the sample size is small.
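The variance models above hinge on an over-dispersion parameter. A common moment-style estimate, often used to calibrate quasi-likelihood analyses, is the Pearson statistic divided by its residual degrees of freedom; the sketch below shows only that estimate for an intercept-only model, with invented counts, and is not the paper's full comparison.

```python
# Hedged sketch: moment estimate of over-dispersion for count data under
# an intercept-only quasi-Poisson model (variance = phi * mu). Counts are
# illustrative; phi > 1 signals over-dispersion relative to the Poisson.

def pearson_dispersion(counts):
    n = len(counts)
    mu = sum(counts) / n                      # fitted mean (intercept only)
    pearson = sum((y - mu) ** 2 / mu for y in counts)
    return pearson / (n - 1)                  # n - 1 residual df here

counts = [0, 2, 4, 10, 4]
phi_hat = pearson_dispersion(counts)          # 3.5 for these counts
```

A value of φ̂ well above 1, as here, is the situation in which the abstract finds Wald tests exceeding their nominal type I error.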

16.
Araki H, Blouin MS. Molecular Ecology 2005, 14(13):4097-4109.
Parentage assignment is widely applied to studies on mating systems, population dynamics and natural selection. However, little is known about the consequences of assignment errors, especially when some parents are not sampled. We investigated the effects of two types of error in parentage assignment, failing to assign a true parent (type A) and assigning an untrue parent (type B), on an estimate of the relative reproductive success (RRS) of two groups of parents. Employing a mathematical approach, we found that (i) when all parents are sampled, minimizing either type A or type B error ensures the minimum bias on RRS, and (ii) when a large number of parents are not sampled, type B error substantially biases the estimated RRS towards one. Interestingly, however, (iii) when all parents were sampled and both error rates were moderately high, type A error biased the estimated RRS even more than type B error. We propose new methods to obtain unbiased estimates of RRS and of the number of offspring whose parents are not sampled (zW(z)) by correcting for these error effects. Applying them to genotypic data from steelhead trout (Oncorhynchus mykiss), we illustrate how to estimate and control the assignment errors. In the data, we observed up to a 30% assignment error and a strong trade-off between the two types of error, depending on the stringency of the assignment decision criterion. We show that our methods can efficiently estimate an unbiased RRS and zW(z) regardless of assignment method, and how to maximize the statistical power to detect a difference in reproductive success between groups.

17.
Song R, Kosorok MR, Cai J. Biometrics 2008, 64(3):741-750.
Recurrent events data are frequently encountered in clinical trials. This article develops robust covariate-adjusted log-rank statistics applied to recurrent events data with arbitrary numbers of events under independent censoring, together with the corresponding sample size formula. The proposed log-rank tests are robust with respect to different data-generating processes and are adjusted for predictive covariates. The statistic reduces to the Kong and Slud (1997, Biometrika 84, 847-862) setting in the case of a single event. The sample size formula is derived based on the asymptotic normality of the covariate-adjusted log-rank statistics under certain local alternatives and a working model for baseline covariates in the recurrent event data context. When the effect size is small and the baseline covariates do not contain significant information about event times, it reduces to the same form as that of Schoenfeld (1983, Biometrics 39, 499-503) for cases of a single event or independent event times within a subject. We carry out simulations to study the control of type I error and to compare power between several methods in finite samples. The proposed sample size formula is illustrated using data from an rhDNase study.
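The Schoenfeld (1983) special case mentioned above has a simple closed form for the required number of events, which can be sketched as follows for two balanced arms. The hazard ratio and the 5%/80% error-rate targets are illustrative choices.

```python
import math

def schoenfeld_events(hazard_ratio, z_alpha=1.959964, z_beta=0.841621):
    """Required number of events for a two-arm log-rank test with 1:1
    allocation (Schoenfeld, 1983): d = 4 * (z_a + z_b)^2 / log(HR)^2.
    Defaults correspond to two-sided alpha = 0.05 and 80% power."""
    return math.ceil(4.0 * (z_alpha + z_beta) ** 2 / math.log(hazard_ratio) ** 2)

d = schoenfeld_events(0.7)   # about 247 events for HR 0.7 at 80% power
```

Stronger effects (hazard ratios further from 1) need far fewer events, which is why the covariate-adjusted formula in the abstract only collapses to this form when the effect size is small and covariates are uninformative.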

18.
A robust model matching control of the immune response is proposed for therapeutic enhancement, to match a prescribed immune response under uncertain initial states and environmental disturbances, including continuous intrusion of exogenous pathogens. The worst-case effect of all possible environmental disturbances and uncertain initial states on the matching of a desired immune response is minimized for the enhanced immune system; that is, a robust control is designed to track a prescribed immune model response from a minimax matching perspective. This minimax matching problem can be transformed into an equivalent dynamic game problem: the exogenous pathogens and environmental disturbances are considered as one player maximizing (worsening) the matching error, while the therapeutic control agents are considered as another player minimizing the matching error. Since the innate immune system is highly nonlinear, it is not easy to solve the robust model matching control problem by the nonlinear dynamic game method directly. A fuzzy model is proposed to interpolate several linearized immune systems at different operating points, approximating the innate immune system via smooth fuzzy membership functions. With the help of this fuzzy approximation, the minimax matching control problem can be solved by the proposed fuzzy dynamic game method via the linear matrix inequality (LMI) technique, using the Robust Control Toolbox in Matlab. Finally, in silico examples are given to illustrate the design procedure and to confirm the efficiency and efficacy of the proposed method.

19.
Ibrahim JG, Chen MH, Xia HA, Liu T. Biometrics 2012, 68(2):578-586.
Recent guidance from the Food and Drug Administration for the evaluation of new therapies in the treatment of type 2 diabetes (T2DM) calls for a program-wide meta-analysis of cardiovascular (CV) outcomes. In this context, we develop a new Bayesian meta-analysis approach using survival regression models to assess whether the size of a clinical development program is adequate to evaluate a particular safety endpoint. We propose a Bayesian sample size determination methodology for meta-analysis clinical trial design with a focus on controlling the type I error and power. We also propose the partial borrowing power prior to incorporate the historical survival meta-data into the statistical design. Various properties of the proposed methodology are examined, and an efficient Markov chain Monte Carlo sampling algorithm is developed to sample from the posterior distributions. In addition, we develop a simulation-based algorithm for computing quantities such as the power and the type I error in the Bayesian meta-analysis trial design. The proposed methodology is applied to the design of a phase 2/3 development program including a noninferiority clinical trial for CV risk assessment in T2DM studies.

20.
The internal pilot study design enables estimation of the nuisance parameters required for sample size calculation on the basis of data accumulated in an ongoing trial. In this way, misspecifications made when determining the sample size in the planning phase can be corrected using the updated knowledge. According to regulatory guidelines, the blindness of all personnel involved in the trial has to be preserved and the specified type I error rate has to be controlled when the internal pilot study design is applied. Especially in the late phase of drug development, most clinical studies are run in more than one centre. In these multicentre trials, one may have to deal with an unequal distribution of patient numbers among the centres. Depending on the type of analysis (weighted or unweighted), unequal centre sample sizes may lead to a substantial loss of power. Like the variance, the magnitude of this imbalance is difficult to predict in the planning phase. We propose a blinded sample size recalculation procedure for the internal pilot study design in multicentre trials with a normally distributed outcome and two balanced treatment groups that are analysed with the weighted or the unweighted approach. The method addresses both uncertainty with respect to the variance of the endpoint and the extent of disparity in the centre sample sizes. The actual type I error rate as well as the expected power and sample size of the procedure are investigated in simulation studies. For both the weighted and the unweighted analysis, the maximal type I error rate was not exceeded, or only minimally so. Furthermore, application of the proposed procedure led to an expected power that achieves the specified value in many cases and is consistently very close to it.
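One common blinded approach in the internal pilot literature, the "one-sample" or lumped variance of Gould and Shih, can be sketched as follows. This is a generic single-centre illustration under invented data and targets, not the authors' multicentre weighted/unweighted procedure.

```python
import math

def blinded_sample_size(pooled, delta, z_alpha=1.959964, z_beta=0.841621):
    """Recalculate the per-group sample size from blinded (pooled) interim
    data: the one-sample variance stands in for the unknown within-group
    variance. This is slightly conservative, since the lumped variance
    also absorbs any treatment-group separation. Defaults: two-sided
    alpha = 0.05, 80% power."""
    n = len(pooled)
    mean = sum(pooled) / n
    s2 = sum((y - mean) ** 2 for y in pooled) / (n - 1)   # lumped variance
    return math.ceil(2.0 * s2 * (z_alpha + z_beta) ** 2 / delta ** 2)

interim = [0.0, 2.0, 4.0]        # toy blinded interim outcomes (s2 = 4)
n_per_group = blinded_sample_size(interim, delta=1.0)
```

Because group labels are never used, the recalculation preserves blindness; controlling the type I error after such an adjustment is the part that requires the careful analysis described in the abstract.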
