Similar Literature
20 similar documents found
1.
The rigorous evaluation of the impact of combination HIV prevention packages at the population level will be critical for the future of HIV prevention. In this review, we discuss important considerations for the design and interpretation of cluster randomized controlled trials (C-RCTs) of combination prevention interventions. We focus on three large C-RCTs that will start soon and are designed to test the hypothesis that combination prevention packages, including expanded access to antiretroviral therapy, can substantially reduce HIV incidence. Using a general framework to integrate mathematical modelling analysis into the design, conduct, and analysis of C-RCTs will complement traditional statistical analyses and strengthen the evaluation of the interventions. Importantly, even with combination interventions, it may be challenging to substantially reduce HIV incidence over the 2- to 3-year duration of a C-RCT, unless interventions are scaled up rapidly and key populations are reached. Thus, we propose the innovative use of mathematical modelling to conduct interim analyses, when interim HIV incidence data are not available, to allow the ongoing trials to be modified or adapted to reduce the likelihood of inconclusive outcomes. The preplanned, interactive use of mathematical models during C-RCTs will also provide a valuable opportunity to validate and refine model projections.

2.

Objective

Systematic reviews can include cluster-randomised controlled trials (C-RCTs), which require different analysis compared with standard individual-randomised controlled trials. However, it is not known whether review authors follow the methodological and reporting guidance when including these trials. The aim of this study was to assess the methodological and reporting practice of Cochrane reviews that included C-RCTs against criteria developed from existing guidance.

Methods

Criteria were developed, based on methodological literature and personal experience supervising review production and quality. Criteria were grouped into four themes: identifying, reporting, assessing risk of bias, and analysing C-RCTs. The Cochrane Database of Systematic Reviews was searched (2nd December 2013), and the 50 most recent reviews that included C-RCTs were retrieved. Each review was then assessed using the criteria.

Results

The 50 reviews we identified were published by 26 Cochrane Review Groups between June 2013 and November 2013. For identifying C-RCTs, only 56% of the reviews stated in their eligibility criteria that C-RCTs were eligible for inclusion. For reporting C-RCTs, only eight (24%) of the 33 reviews reported the method of cluster adjustment for their included C-RCTs. For assessing risk of bias, only one review assessed all five C-RCT-specific risk-of-bias criteria. For analysing C-RCTs, of the 27 reviews that presented unadjusted data, only nine (33%) provided a warning that confidence intervals may be artificially narrow. Of the 34 reviews that reported data from unadjusted C-RCTs, only 13 (38%) excluded the unadjusted results from the meta-analyses.

Conclusions

The methodological and reporting practices in Cochrane reviews incorporating C-RCTs could be greatly improved, particularly with regard to analyses. Criteria developed as part of the current study could be used by review authors or editors to identify errors and improve the quality of published systematic reviews incorporating C-RCTs.
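The cluster adjustment the criteria above call for is commonly performed by inflating variances with a design effect, DEFF = 1 + (m - 1) * ICC, where m is the average cluster size and ICC the intracluster correlation coefficient. The sketch below (function names are illustrative, not from the review) shows the standard calculation; without it, confidence intervals from C-RCTs are artificially narrow, exactly the warning the review found missing:

```python
import math

def design_effect(avg_cluster_size: float, icc: float) -> float:
    """DEFF = 1 + (m - 1) * ICC for average cluster size m."""
    return 1.0 + (avg_cluster_size - 1.0) * icc

def effective_sample_size(n: int, avg_cluster_size: float, icc: float) -> float:
    """Shrink a C-RCT arm's sample size by DEFF before entering it into a meta-analysis."""
    return n / design_effect(avg_cluster_size, icc)

def adjusted_se(se_unadjusted: float, avg_cluster_size: float, icc: float) -> float:
    """Widen an unadjusted standard error by sqrt(DEFF)."""
    return se_unadjusted * math.sqrt(design_effect(avg_cluster_size, icc))

# Example: clusters averaging 21 patients with ICC = 0.05 double the variance,
# halving the effective sample size.
deff = design_effect(21, 0.05)
```

Even a modest ICC can have a large effect when clusters are big, which is why reviews that pool unadjusted C-RCT results can badly overstate precision.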

3.
Meta-regression is widely used in systematic reviews to investigate sources of heterogeneity and the association of study-level covariates with treatment effectiveness. Existing meta-regression approaches are successful in adjusting for baseline covariates, which include real study-level covariates (e.g., publication year) that are invariant within a study and aggregated baseline covariates (e.g., mean age) that differ for each participant but are measured before randomization within a study. However, these methods have several limitations in adjusting for post-randomization variables. Although post-randomization variables share a handful of similarities with baseline covariates, they differ in several aspects. First, baseline covariates can be aggregated at the study level presumably because they are assumed to be balanced by the randomization, while post-randomization variables are not balanced across arms within a study and are commonly aggregated at the arm level. Second, post-randomization variables may interact dynamically with the primary outcome. Third, unlike baseline covariates, post-randomization variables are themselves often important outcomes under investigation. In light of these differences, we propose a Bayesian joint meta-regression approach adjusting for post-randomization variables. The proposed method simultaneously estimates the treatment effect on the primary outcome and on the post-randomization variables. It takes into consideration both between- and within-study variability in post-randomization variables. Studies with missing data in either the primary outcome or the post-randomization variables are included in the joint model to improve estimation. Our method is evaluated by simulations and a real meta-analysis of major depressive disorder treatments.

4.
In this paper, we describe a new restricted randomization method called run-reversal equilibrium (RRE), which is a Nash equilibrium of a game where (1) the clinical trial statistician chooses a sequence of medical treatments, and (2) clinical investigators make treatment predictions. RRE randomization counteracts how each investigator could observe treatment histories in order to forecast upcoming treatments. Computation of a run-reversal equilibrium reflects how the treatment history at a particular site is imperfectly correlated with the treatment imbalance for the overall trial. An attractive feature of RRE randomization is that treatment imbalance follows a random walk at each site, while treatment balance is tightly constrained and regularly restored for the overall trial. For multi-site clinical trials, run-reversal equilibrium can thus facilitate less predictable, and therefore more scientifically valid, experiments.

5.
In many experiments, researchers would like to compare treatments with respect to an outcome that exists only in a subset of participants selected after randomization. For example, in preventive HIV vaccine efficacy trials it is of interest to determine whether randomization to vaccine causes lower HIV viral load, a quantity that only exists in participants who acquire HIV. To make a causal comparison and account for potential selection bias we propose a sensitivity analysis following the principal stratification framework set forth by Frangakis and Rubin (2002, Biometrics 58, 21-29). Our goal is to assess the average causal effect of treatment assignment on viral load at a given baseline covariate level in the always infected principal stratum (those who would have been infected whether they had been assigned to vaccine or placebo). We assume stable unit treatment values (SUTVA), randomization, and that subjects randomized to the vaccine arm who became infected would also have become infected if randomized to the placebo arm (monotonicity). It is not known which of those subjects infected in the placebo arm are in the always infected principal stratum, but this can be modeled conditional on covariates, the observed viral load, and a specified sensitivity parameter. Under parametric regression models for viral load, we obtain maximum likelihood estimates of the average causal effect conditional on covariates and the sensitivity parameter. We apply our methods to the world's first phase III HIV vaccine trial.

6.
“Covariate adjustment” in the randomized trial context refers to an estimator of the average treatment effect that adjusts for chance imbalances between study arms in baseline variables (called “covariates”). The baseline variables could include, for example, age, sex, disease severity, and biomarkers. According to two surveys of clinical trial reports, there is confusion about the statistical properties of covariate adjustment. We focus on the analysis of covariance (ANCOVA) estimator, which involves fitting a linear model for the outcome given the treatment arm and baseline variables, and trials that use simple randomization with equal probability of assignment to treatment and control. We prove the following new (to the best of our knowledge) robustness property of ANCOVA to arbitrary model misspecification: Not only is the ANCOVA point estimate consistent (as proved by Yang and Tsiatis, 2001) but so is its standard error. This implies that confidence intervals and hypothesis tests conducted as if the linear model were correct are still asymptotically valid even when the linear model is arbitrarily misspecified, for example, when the baseline variables are nonlinearly related to the outcome or there is treatment effect heterogeneity. We also give a simple, robust formula for the variance reduction (equivalently, sample size reduction) from using ANCOVA. By reanalyzing completed randomized trials for mild cognitive impairment, schizophrenia, and depression, we demonstrate how ANCOVA can achieve variance reductions of 4 to 32%.
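To make the ANCOVA estimator concrete, here is a minimal pure-Python sketch: it fits the working model y ~ 1 + a + x by ordinary least squares and returns the coefficient on the treatment indicator. The solver and toy data are illustrative only, not the paper's implementation; a real analysis would use a regression library.

```python
def ancova_estimate(y, a, x):
    """ANCOVA point estimate: OLS coefficient on the treatment indicator a
    from the working model y ~ 1 + a + x, via the 3x3 normal equations."""
    n = len(y)
    cols = [[1.0] * n, [float(v) for v in a], [float(v) for v in x]]
    # Normal equations X'X beta = X'y.
    xtx = [[sum(ci[k] * cj[k] for k in range(n)) for cj in cols] for ci in cols]
    xty = [sum(ci[k] * y[k] for k in range(n)) for ci in cols]
    for i in range(3):                      # forward elimination
        for j in range(i + 1, 3):
            f = xtx[j][i] / xtx[i][i]
            xtx[j] = [xtx[j][k] - f * xtx[i][k] for k in range(3)]
            xty[j] -= f * xty[i]
    beta = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):                     # back substitution
        beta[i] = (xty[i] - sum(xtx[i][k] * beta[k] for k in (1, 2) if k > i)) / xtx[i][i]
    return beta[1]                          # coefficient on the treatment arm

# Toy data generated as y = 1 + 2*a + 3*x, so the estimate recovers 2 exactly.
a = [0, 0, 1, 1, 0, 1]
x = [0, 1, 2, 3, 4, 5]
y = [1 + 2 * ai + 3 * xi for ai, xi in zip(a, x)]
effect = ancova_estimate(y, a, x)
```

The abstract's robustness result is about what happens when this linear working model is wrong: the point estimate and (with equal randomization) the model-based standard error remain asymptotically valid anyway.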

7.
8.
In randomized trials, the treatment influences not only endpoints but also other variables measured after randomization which, when used as covariates to adjust for the observed imbalance, become pseudo-covariates. There is a logical circularity in adjusting for a pseudo-covariate because the variability in the endpoint that is attributed not to the treatment but rather to the pseudo-covariate may actually represent an effect of the treatment modulated by the pseudo-covariate. This potential bias is well known, but we offer new insight into how it can lead to reversals in the direction of the apparent treatment effect by way of stage migration. We then discuss a related problem that is not generally appreciated, specifically how the absence of allocation concealment can lead to this reversal of the direction of the apparent treatment effect even when adjustment is for a true covariate measured prior to randomization.

9.
The most common objective for response-adaptive clinical trials is to ensure that patients within a trial have a high chance of receiving the best treatment available, by altering the chance of allocation on the basis of accumulating data. Approaches that yield good patient benefit properties suffer from low power from a frequentist perspective when testing for a treatment difference at the end of the study, due to the high imbalance in treatment allocations. In this work we develop an alternative pairwise test for treatment difference on the basis of allocation probabilities of the covariate-adjusted response-adaptive randomization with forward-looking Gittins Index (CARA-FLGI) Rule for binary responses. The performance of the novel test is evaluated in simulations for two-armed studies and then its applications to multiarmed studies are illustrated. The proposed test has markedly improved power over the traditional Fisher exact test when this class of nonmyopic response adaptation is used. We also find that the test's power is close to the power of a Fisher exact test under equal randomization.

10.
Follmann DA, Proschan MA. Biometrics 1999, 55(4): 1151-1155
An important issue in clinical trials is whether the effect of treatment is essentially homogeneous as a function of baseline covariates. Covariates that have the potential for an interaction with treatment may be suspected on the basis of treatment mechanism or may be known risk factors, as it is often thought that the sickest patients may benefit most from treatment. If disease severity is more accurately determined by a collection of baseline covariates rather than a single risk factor, methods that examine each covariate in turn for interaction may be inadequate. We propose a procedure whereby treatment interaction is examined along a single severity index that is a linear combination of baseline covariates. Formally, we derive a likelihood ratio test of the null hypothesis β0 = β1 versus the alternative aβ0 = β1, where X'βk (k = 0, 1) corresponds to the severity index in arm k and X is a vector of baseline covariates. While our explicit test requires a Gaussian response, it can be readily implemented whenever the estimates of β0 and β1 are approximately multivariate normal. For example, it is appropriate for large clinical trials where βk is based on a logistic or Cox regression of response on X.

11.
In randomized clinical trials involving survival time, a challenge that arises frequently, for example, in cancer studies (Manegold, Symanowski, Gatzemeier, Reck, von Pawel, Kortsik, Nackaerts, Lianes and Vogelzang, 2005. Second-line (post-study) chemotherapy received by patients treated in the phase III trial of pemetrexed plus cisplatin versus cisplatin alone in malignant pleural mesothelioma. Annals of Oncology 16, 923-927), is that subjects may initiate secondary treatments during the follow-up. The marginal structural Cox model and the method of inverse probability of treatment weighting (IPTW) have been proposed, originally for observational studies, to make causal inference on time-dependent treatments. In this paper, we adopt the marginal structural Cox model and propose an inferential method that improves the efficiency of the usual IPTW method by tailoring it to the setting of randomized clinical trials. The improvement in efficiency does not depend on any additional assumptions other than those required by the IPTW method, which is achieved by exploiting the knowledge that the study treatment is independent of baseline covariates due to randomization. The finite-sample performance of the proposed method is demonstrated via simulations and by application to data from a cancer clinical trial.

12.
Repeated cross-sectional cluster randomization trials are cluster randomization trials in which the response variable is measured on a sample of subjects from each cluster at baseline and on a different sample of subjects from each cluster at follow-up. One can estimate the effect of the intervention on the follow-up response alone, on the follow-up responses after adjusting for baseline responses, or on the change in the follow-up response from the baseline response. We used Monte Carlo simulations to determine the relative statistical power of different methods of analysis. We examined methods of analysis based on generalized estimating equations (GEE) and a random effects model to account for within-cluster homogeneity. We also examined cluster-level analyses that treated the cluster as the unit of analysis. We found that the use of random effects models to estimate the effect of the intervention on the change in the follow-up response from the baseline response had lower statistical power compared to the other competing methods across a wide range of scenarios. The other methods tended to have similar statistical power in many settings. However, in some scenarios, those analyses that adjusted for the baseline response tended to have marginally greater power than did methods that did not account for the baseline response.

13.
Begg CB, Kalish LA. Biometrics 1984, 40(2): 409-420
Many clinical trials have a binary outcome variable. If covariate adjustment is necessary in the analysis, the logistic-regression model is frequently used. Optimal designs for allocating treatments for this model, or for any nonlinear or heteroscedastic model, are generally unbalanced with regard to overall treatment totals and totals within strata. However, all treatment-allocation methods that have been recommended for clinical trials in the literature are designed to balance treatments within strata, either directly or asymptotically. In this paper, the efficiencies of balanced sequential allocation schemes are measured relative to sequential Ds-optimal designs for the logistic model, using as examples completed trials conducted by the Eastern Cooperative Oncology Group and systematic simulations. The results demonstrate that stratified, balanced designs are quite efficient, in general. However, complete randomization is frequently inefficient, and will occasionally result in a trial that is very inefficient.

14.
Randomized trials with continuous outcomes are often analyzed using analysis of covariance (ANCOVA), with adjustment for prognostic baseline covariates. The ANCOVA estimator of the treatment effect is consistent under arbitrary model misspecification. In an article recently published in the journal, Wang et al proved the model-based variance estimator for the treatment effect is also consistent under outcome model misspecification, assuming the probability of randomization to each treatment is 1/2. In this reader reaction, we derive explicit expressions which show that when randomization is unequal, the model-based variance estimator can be biased upwards or downwards. In contrast, robust sandwich variance estimators can provide asymptotically valid inferences under arbitrary misspecification, even when randomization probabilities are not equal.

15.
Minimization as an alternative to randomization is gaining popularity for small clinical trials. In response to critics' questions about the proper analysis of such a trial, proponents have argued that a rerandomization approach, akin to a permutation test with conventional randomization, can be used. However, they add that this computationally intensive approach is not necessary because its results are very similar to those of a t-test or test of proportions unless the sample size is very small. We show that minimization applied with unequal allocation causes problems that challenge this conventional wisdom.
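For readers unfamiliar with minimization, the following is a small sketch of a Pocock-Simon-style scheme for two arms: each new patient is assigned, with high probability, to whichever arm would minimize the summed imbalance across their prognostic factor levels. This is an illustrative toy (equal allocation, biased coin p = 0.8), not a validated allocation system, and it does not reproduce the unequal-allocation setting the abstract analyzes.

```python
import random
from collections import defaultdict

def minimization_trial(patients, p_best=0.8, seed=0):
    """Sequentially allocate patients to arms 0/1 by minimization.

    Each patient is a tuple of prognostic factor levels. For each candidate
    arm we compute the summed marginal imbalance that assignment would
    create, then pick the imbalance-minimizing arm with probability p_best
    (ties broken by a fair coin)."""
    rng = random.Random(seed)
    counts = defaultdict(lambda: [0, 0])  # (factor index, level) -> [n_arm0, n_arm1]
    assignments = []
    for levels in patients:
        imbalance = [0, 0]
        for arm in (0, 1):
            for f, lv in enumerate(levels):
                c = counts[(f, lv)][:]       # prospective counts if assigned to `arm`
                c[arm] += 1
                imbalance[arm] += abs(c[0] - c[1])
        if imbalance[0] == imbalance[1]:
            arm = rng.randint(0, 1)
        else:
            best = 0 if imbalance[0] < imbalance[1] else 1
            arm = best if rng.random() < p_best else 1 - best
        for f, lv in enumerate(levels):
            counts[(f, lv)][arm] += 1
        assignments.append(arm)
    return assignments

# 100 patients with two binary prognostic factors.
patients = [(i % 2, (i // 2) % 2) for i in range(100)]
arms = minimization_trial(patients, seed=1)
```

The abstract's point is that analyzing such a trial as if it were simply randomized can mislead; the defensible analysis re-runs the allocation algorithm many times to build a reference distribution.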

16.
Zhang L, Rosenberger WF. Biometrics 2006, 62(2): 562-569
We provide an explicit asymptotic method to evaluate the performance of different response-adaptive randomization procedures in clinical trials with continuous outcomes. We use this method to investigate four different response-adaptive randomization procedures. Their performance, especially in power and treatment assignment skewing to the better treatment, is thoroughly evaluated theoretically. These results are then verified by simulation. Our analysis concludes that the doubly adaptive biased coin design procedure targeting optimal allocation is the best one for practical use. We also consider the effect of delay in responses and nonstandard responses, for example, Cauchy distributed response. We illustrate our procedure by redesigning a real clinical trial.

17.
Zhou B, Latouche A, Rocha V, Fine J. Biometrics 2011, 67(2): 661-670
For competing risks data, the Fine-Gray proportional hazards model for subdistribution has gained popularity for its convenience in directly assessing the effect of covariates on the cumulative incidence function. However, in many important applications, proportional hazards may not be satisfied, including multicenter clinical trials, where the baseline subdistribution hazards may not be common due to varying patient populations. In this article, we consider a stratified competing risks regression, to allow the baseline hazard to vary across levels of the stratification covariate. According to the relative size of the number of strata and strata sizes, two stratification regimes are considered. Using partial likelihood and weighting techniques, we obtain consistent estimators of regression parameters. The corresponding asymptotic properties and resulting inferences are provided for the two regimes separately. Data from a breast cancer clinical trial and from a bone marrow transplantation registry illustrate the potential utility of the stratified Fine-Gray model.

18.
Restricted randomization designs in clinical trials.
Simon R. Biometrics 1979, 35(2): 503-512
Though therapeutic clinical trials are often categorized as using either "randomization" or "historical controls" as a basis for treatment evaluation, pure random assignment of treatments is rarely employed. Instead various restricted randomization designs are used. The restrictions include the balancing of treatment assignments over time and the stratification of the assignment with regard to covariates that may affect response. Restricted randomization designs for clinical trials differ from those of other experimental areas because patients arrive sequentially and a balanced design cannot be ensured. The major restricted randomization designs and arguments concerning the proper role of stratification are reviewed here. The effect of randomization restrictions on the validity of significance tests is discussed.
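The simplest and most widely used restricted design of the kind reviewed here is permuted-block randomization, which balances assignments over time by construction. A minimal sketch (block size and function name are illustrative):

```python
import random

def permuted_blocks(n_patients, block_size=4, seed=0):
    """Two-arm permuted-block randomization: within each block, exactly
    half the assignments go to each arm, so the running imbalance between
    arms never exceeds block_size / 2."""
    rng = random.Random(seed)
    seq = []
    while len(seq) < n_patients:
        block = [0] * (block_size // 2) + [1] * (block_size // 2)
        rng.shuffle(block)               # random order within the block
        seq.extend(block)
    return seq[:n_patients]

sequence = permuted_blocks(100, block_size=4, seed=3)
```

The trade-off the review discusses follows directly: small blocks keep assignments balanced over time, but make upcoming assignments partially predictable to an observer who can track the sequence.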

19.
In a typical comparative clinical trial the randomization scheme is fixed at the beginning of the study, and maintained throughout the course of the trial. A number of researchers have championed a randomized trial design referred to as ‘outcome-adaptive randomization.’ In this type of trial, the likelihood of a patient being enrolled to a particular arm of the study increases or decreases as preliminary information becomes available suggesting that treatment may be superior or inferior. While the design merits of outcome-adaptive trials have been debated, little attention has been paid to significant ethical concerns that arise in the conduct of such studies. These include loss of equipoise, lack of processes for adequate informed consent, and inequalities inherent in the research design which could lead to perceptions of injustice that may have negative implications for patients and the research enterprise. This article examines the ethical difficulties inherent in outcome-adaptive trials.

20.
Optimal multivariate matching before randomization
Although blocking or pairing before randomization is a basic principle of experimental design, the principle is almost invariably applied to at most one or two blocking variables. Here, we discuss the use of optimal multivariate matching prior to randomization to improve covariate balance for many variables at the same time, presenting an algorithm and a case-study of its performance. The method is useful when all subjects, or large groups of subjects, are randomized at the same time. Optimal matching divides a single group of 2n subjects into n pairs to minimize covariate differences within pairs (the so-called nonbipartite matching problem); then one subject in each pair is picked at random for treatment, the other being assigned to control. Using the baseline covariate data for 132 patients from an actual, unmatched, randomized experiment, we construct 66 pairs matching for 14 covariates. We then create 10,000 unmatched and 10,000 matched randomized experiments by repeatedly randomizing the 132 patients, and compare the covariate balance with and without matching. By every measure, every one of the 14 covariates was substantially better balanced when randomization was performed within matched pairs. Even after covariance adjustment for chance imbalances in the 14 covariates, matched randomizations provided more accurate estimates than unmatched randomizations, the increase in accuracy being equivalent to, on average, a 7% increase in sample size. In randomization tests of no treatment effect, matched randomizations using the signed rank test had substantially higher power than unmatched randomizations using the rank sum test, even when only 2 of 14 covariates were relevant to a simulated response. Unmatched randomizations experienced rare disasters which were consistently avoided by matched randomizations.
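The workflow above (pair on covariate distance, then randomize within pairs) can be sketched in a few lines. Note the paper uses optimal nonbipartite matching; the greedy pairing below is a simpler stand-in that conveys the idea only, and all names are illustrative.

```python
import random
from itertools import combinations

def greedy_pairs(covariates):
    """Greedily pair 2n subjects by smallest squared Euclidean covariate
    distance. (A stand-in for optimal nonbipartite matching, which
    minimizes the total within-pair distance exactly.)"""
    ranked = sorted(
        (sum((a - b) ** 2 for a, b in zip(covariates[i], covariates[j])), i, j)
        for i, j in combinations(range(len(covariates)), 2)
    )
    used, pairs = set(), []
    for _, i, j in ranked:
        if i not in used and j not in used:
            used.update((i, j))
            pairs.append((i, j))
    return pairs

def randomize_within_pairs(pairs, seed=0):
    """Pick one member of each matched pair at random for treatment."""
    rng = random.Random(seed)
    return {i if rng.random() < 0.5 else j for i, j in pairs}

# Two tight clusters of subjects: matching pairs neighbors, then flips a coin.
covs = [(0.0, 0.0), (0.1, 0.0), (5.0, 5.0), (5.1, 5.0)]
pairs = greedy_pairs(covs)
treated = randomize_within_pairs(pairs, seed=0)
```

Because treatment is assigned by a fair coin within each pair, every covariate used in the matching is balanced by construction, which is exactly the source of the gains the case study reports.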
