Similar Documents
20 similar documents found.
2.
Vaccines with limited ability to prevent HIV infection may positively impact the HIV/AIDS pandemic by preventing secondary transmission and disease in vaccine recipients who become infected. To evaluate the impact of vaccination on secondary transmission and disease, efficacy trials assess vaccine effects on HIV viral load and other surrogate endpoints measured after infection. A standard test that compares the distribution of viral load between the infected subgroups of vaccine and placebo recipients does not assess a causal effect of vaccine, because the comparison groups are selected after randomization. To address this problem, we formulate clinically relevant causal estimands using the principal stratification framework developed by Frangakis and Rubin (2002, Biometrics 58, 21-29), and propose a class of logistic selection bias models whose members identify the estimands. Given a selection model in the class, procedures are developed for testing and estimation of the causal effect of vaccination on viral load in the principal stratum of subjects who would be infected regardless of randomization assignment. We show how the procedures can be used for a sensitivity analysis that quantifies how the causal effect of vaccination varies with the presumed magnitude of selection bias.
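The sensitivity analysis admits a compact illustration. The sketch below is not the authors' implementation; the data, the centering of the weight model, and the grid of sensitivity values are all invented. It reweights infected placebo recipients by a logistic membership model for the always-infected stratum and traces the estimated effect on viral load across the sensitivity parameter; beta = 0 recovers the naive infected-subgroup comparison.

```python
import numpy as np

rng = np.random.default_rng(0)
y_vax = rng.normal(4.2, 0.8, size=120)  # log10 viral loads, infected vaccine recipients
y_plc = rng.normal(4.6, 0.8, size=150)  # log10 viral loads, infected placebo recipients

def always_infected_effect(beta):
    # Logistic selection model: weight infected placebo recipients by their
    # presumed probability of membership in the always-infected stratum.
    w = 1.0 / (1.0 + np.exp(-beta * (y_plc - y_plc.mean())))
    return y_vax.mean() - np.average(y_plc, weights=w)

for beta in (-1.0, -0.5, 0.0, 0.5, 1.0):  # presumed magnitude of selection bias
    print(f"beta = {beta:+.1f}: estimated effect = {always_infected_effect(beta):+.3f}")
```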

3.
In many experiments, researchers would like to compare, between treatments, an outcome that only exists in a subset of participants selected after randomization. For example, in preventive HIV vaccine efficacy trials it is of interest to determine whether randomization to vaccine causes lower HIV viral load, a quantity that only exists in participants who acquire HIV. To make a causal comparison and account for potential selection bias we propose a sensitivity analysis following the principal stratification framework set forth by Frangakis and Rubin (2002, Biometrics 58, 21-29). Our goal is to assess the average causal effect of treatment assignment on viral load at a given baseline covariate level in the always infected principal stratum (those who would have been infected whether they had been assigned to vaccine or placebo). We assume stable unit treatment values (SUTVA), randomization, and that subjects randomized to the vaccine arm who became infected would also have become infected if randomized to the placebo arm (monotonicity). It is not known which of those subjects infected in the placebo arm are in the always infected principal stratum, but this can be modeled conditional on covariates, the observed viral load, and a specified sensitivity parameter. Under parametric regression models for viral load, we obtain maximum likelihood estimates of the average causal effect conditional on covariates and the sensitivity parameter. We apply our methods to the world's first phase III HIV vaccine trial.
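One ingredient of this analysis has a simple closed form under the stated assumptions: by monotonicity, the probability that an infected placebo recipient belongs to the always infected stratum equals the ratio of the two attack rates. A worked example with invented counts:

```python
n_vax, inf_vax = 8000, 120  # vaccine arm: randomized, infected (invented counts)
n_plc, inf_plc = 8000, 150  # placebo arm: randomized, infected (invented counts)

# Under monotonicity, every vaccine-arm infection is an "always infected"
# subject, so the stratum's share of placebo-arm infections is the rate ratio.
p_always = (inf_vax / n_vax) / (inf_plc / n_plc)
print(f"Pr(always infected | infected, placebo) = {p_always:.2f}")  # 0.80
```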

4.
The compliance score in randomized trials is a measure of the effect of randomization on treatment received. It is in principle a group-level pretreatment variable and so can be used where individual-level measures of treatment received can produce misleading inferences. The interpretation of models with the compliance score as a regressor of interest depends on the link function. Using the identity link can lead to valid inference about the effects of treatment received even in the presence of nonrandom noncompliance; such inference is more problematic for nonlinear links. We illustrate these points with data from two randomized trials.
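A toy simulation (sites, compliance scores, and noise levels are all invented) illustrates why the identity link behaves well: each group's intention-to-treat effect is its compliance score times the effect of treatment received, so a through-the-origin identity-link regression of group-level ITT effects on the score recovers that effect.

```python
import numpy as np

rng = np.random.default_rng(1)
n_sites = 20
# Group-level compliance score: effect of randomization on treatment received,
# i.e., Pr(treated | assigned treatment) - Pr(treated | assigned control).
score = rng.uniform(0.4, 0.9, n_sites)
effect_received = 2.0                                        # assumed true effect
itt = score * effect_received + rng.normal(0, 0.1, n_sites)  # site-level ITT effects

# Identity-link regression of ITT effects on the score, through the origin.
slope = (score @ itt) / (score @ score)
print(f"estimated effect of treatment received: {slope:.2f}")  # about 2.0
```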

5.
Many randomized experiments suffer from noncompliance. Some of these experiments, so-called encouragement designs, can be expected to have especially large amounts of noncompliance, because encouragement to take the treatment rather than the treatment itself is randomly assigned to individuals. We present an extended framework for the analysis of data from such experiments with a binary treatment, binary encouragement, and background covariates. There are two key features of this framework: we use an instrumental variables approach to link intention-to-treat effects to treatment effects and we adopt a Bayesian approach for inference and sensitivity analysis. This framework is illustrated in a medical example concerning the effects of inoculation for influenza. In this example, the analyses suggest that positive estimates of the intention-to-treat effect need not be due to the treatment itself, but rather to the encouragement to take the treatment: the intention-to-treat effect for the subpopulation who would be inoculated whether or not encouraged is estimated to be approximately as large as the intention-to-treat effect for the subpopulation whose inoculation status would agree with their (randomized) encouragement status whether or not encouraged. Thus, our methods suggest that global intention-to-treat estimates, although often regarded as conservative, can be too coarse and even misleading when taken as summarizing the evidence in the data for the effects of treatments.
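In the simplest case without covariates, the instrumental-variables link mentioned here reduces to the familiar Wald ratio. The numbers below are invented, and this moment version is only a stand-in for the paper's Bayesian analysis.

```python
# Wald (moment) version of the IV link between intention-to-treat effects and
# the treatment effect for compliers; all numbers are invented.
itt_outcome = 0.8  # effect of encouragement on the outcome
itt_uptake = 0.4   # effect of encouragement on being inoculated
cace = itt_outcome / itt_uptake
print(f"complier average causal effect = {cace:.1f}")  # 2.0
```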

6.
Shepherd BE, Gilbert PB, Dupont CT. Biometrics 2011, 67(3): 1100-1110.
In randomized studies researchers may be interested in the effect of treatment assignment on a time-to-event outcome that only exists in a subset selected after randomization. For example, in preventative HIV vaccine trials, it is of interest to determine whether randomization to vaccine affects the time from infection diagnosis until initiation of antiretroviral therapy. Earlier work assessed the effect of treatment on outcome among the principal stratum of individuals who would have been selected regardless of treatment assignment. These studies assumed monotonicity, that one of the principal strata was empty (e.g., every person infected in the vaccine arm would have been infected if randomized to placebo). Here, we present a sensitivity analysis approach for relaxing monotonicity with a time-to-event outcome. We also consider scenarios where selection is unknown for some subjects because of noninformative censoring (e.g., infection status k years after randomization is unknown for some because of staggered study entry). We illustrate our method using data from an HIV vaccine trial.

7.
Frangakis CE, Baker SG. Biometrics 2001, 57(3): 899-908.
For studies with treatment noncompliance, analyses have been developed recently to better estimate treatment efficacy. However, the advantage and cost of measuring compliance data have implications for the study design that have not been as systematically explored. To estimate treatment efficacy better at lower cost, we propose a new class of compliance subsampling (CSS) designs where, after subjects are assigned treatment, compliance behavior is measured for only subgroups of subjects. The sizes of the subsamples are allowed to relate to the treatment assignment, the assignment probability, the total sample size, the anticipated distributions of outcome and compliance, and the cost parameters of the study. The CSS design methods relate to prior work (i) on two-phase designs in which a covariate is subsampled and (ii) on causal inference because the subsampled postrandomization compliance behavior is not the true covariate of interest. For each CSS design, we develop efficient estimation of treatment efficacy under binary outcome and all-or-none observed compliance. Then we derive a minimal-cost CSS design that achieves a required precision for estimating treatment efficacy. We compare the properties of the CSS design to those of conventional protocols in a study of patient choices for medical care at the end of life.
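The minimal-cost idea can be sketched as a constrained search over the subsampling fraction. The variance-inflation and cost formulas below are stylized placeholders, not the paper's efficient-estimation results, and every number is an assumption.

```python
import numpy as np

n = 2000                              # randomized subjects (invented)
c_outcome, c_compliance = 1.0, 5.0    # per-subject measurement costs (invented)
var_full = 0.25                       # variance with full compliance data (invented)
target = 0.30 / n                     # required precision (invented)

best = None
for f in np.linspace(0.05, 1.0, 96):  # fraction whose compliance is measured
    var = var_full * (1 + 0.4 * (1 - f) / f) / n  # stylized variance inflation
    cost = n * (c_outcome + f * c_compliance)
    if var <= target and (best is None or cost < best[1]):
        best = (round(f, 2), cost)
print("minimal-cost subsampling fraction and cost:", best)  # (0.67, ...)
```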

9.
We consider studies of cohorts of individuals after a critical event, such as an injury, with the following characteristics. First, the studies are designed to measure "input" variables, which describe the period before the critical event, and to characterize the distribution of the input variables in the cohort. Second, the studies are designed to measure "output" variables, primarily mortality after the critical event, and to characterize the predictive (conditional) distribution of mortality given the input variables in the cohort. Such studies often possess the complication that the input data are missing for those who die shortly after the critical event because the data collection takes place after the event. Standard methods of dealing with the missing inputs, such as imputation or weighting methods based on an assumption of ignorable missingness, are known to be generally invalid when the missingness of inputs is nonignorable, that is, when the distribution of the inputs is different between those who die and those who live. To address this issue, we propose a novel design that obtains and uses information on an additional key variable: a treatment or externally controlled variable which, if set at its "effective" level, could have prevented the death of those who died. We show that the new design can be used to draw valid inferences for the marginal distribution of inputs in the entire cohort, and for the conditional distribution of mortality given the inputs, also in the entire cohort, even under nonignorable missingness. The crucial framework that we use is principal stratification based on the potential outcomes, here mortality under both levels of treatment. We also show, using illustrative preliminary injury data, that our approach can reveal results that are more reasonable than the results of standard methods, in relatively dramatic ways. Thus, our approach suggests that the routine collection of data on variables that could be used as possible treatments in such studies of inputs and mortality should become common.

10.
Evaluation of the impact of potential uncontrolled confounding is an important component of causal inference based on observational studies. In this article, we introduce a general framework of sensitivity analysis that is based on inverse probability weighting. We propose a general methodology that allows both non-parametric and parametric analyses, which are driven by two parameters that govern the magnitude of the variation of the multiplicative errors of the propensity score and their correlations with the potential outcomes. We also introduce a specific parametric model that offers a mechanistic view on how the uncontrolled confounding may bias the inference through these parameters. Our method can be readily applied to both binary and continuous outcomes and depends on the covariates only through the propensity score, which can be estimated by any parametric or non-parametric method. We illustrate our method with two medical data sets.
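A rough sketch of the weighting logic follows, with the perturbation scheme as a crude stand-in for the paper's multiplicative-error framework; the data, the error model, and its correlation with the outcome are all invented. The idea is to perturb the fitted propensity scores and track how the weighted contrast moves with the error magnitude.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5000
x = rng.normal(size=n)
ps = 1.0 / (1.0 + np.exp(-x))   # propensity score, taken as correctly fitted
a = rng.binomial(1, ps)         # treatment indicator
y = a + x + rng.normal(size=n)  # outcome; true effect is 1

for lam in (0.0, 0.1, 0.2):     # magnitude of the multiplicative error
    # Multiplicative perturbation correlated with the outcome, standing in for
    # an uncontrolled confounder; lam = 0 is the unperturbed analysis.
    eps = np.exp(lam * (y - y.mean()) / y.std())
    ps_t = np.clip(ps * eps, 0.01, 0.99)
    est = (np.average(y, weights=a / ps_t)
           - np.average(y, weights=(1 - a) / (1 - ps_t)))
    print(f"lambda = {lam:.1f}: IPW estimate = {est:+.3f}")
```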

11.
In a recent article on the efficacy of antihypertensive therapy, Berlowitz et al. (1998, New England Journal of Medicine 339, 1957-1963) introduced an ad hoc method of adjusting for serial confounding assessed via an intensity score, which records cumulative differences over time between therapy actually received and therapy predicted by prior medical history. Outcomes are subsequently regressed on the intensity score and baseline covariates to determine whether intense treatment or exposure predicts a favorable response. We use a structural nested mean model to derive conditions sufficient for interpreting the Berlowitz results causally. We also consider a modified approach that scales the intensity at each time by the inverse expected treatment given prior medical history. This leads to a simple, two-step implementation of G-estimation if we assume a nonstandard but useful structural nested mean model in which subjects less likely to receive treatment are more likely to benefit from it. These modeling assumptions apply, for example, to health services research contexts in which differential access to care is a primary concern. They are also plausible in our analysis of the causal effect of potent antiretroviral therapy on change in CD4 cell count, because men in the sample who are less likely to initiate treatment when baseline CD4 counts are high are more likely to experience large positive changes. We further extend the methods to accommodate repeated outcomes and time-varying effects of time-varying exposures.
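The intensity score itself is easy to construct. In this toy sketch (the history summary, prediction model, and outcome model are invented) the score is the cumulative difference between therapy received and therapy predicted from prior history, and a regression of outcome on the score recovers the effect built into the simulation.

```python
import numpy as np

rng = np.random.default_rng(3)
n, T = 500, 6
hist = rng.normal(size=(n, T))            # prior-history summary at each visit
p_treat = 1.0 / (1.0 + np.exp(-hist))     # therapy predicted by prior history
a = rng.binomial(1, p_treat)              # therapy actually received
intensity = (a - p_treat).sum(axis=1)     # cumulative intensity score
y = 0.5 * intensity + rng.normal(size=n)  # outcome built to improve with intensity

slope = np.polyfit(intensity, y, 1)[0]    # regress outcome on the score
print(f"outcome regressed on intensity score: slope = {slope:.2f}")  # about 0.5
```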

12.
Mendelian randomization methods, which use genetic variants as instrumental variables for exposures of interest to overcome problems of confounding and reverse causality, are becoming widespread for assessing causal relationships in epidemiological studies. The main purpose of this paper is to demonstrate how results can be biased if researchers select genetic variants on the basis of their association with the exposure in their own dataset, as often happens in candidate gene analyses. This can lead to estimates that indicate apparent “causal” relationships, despite there being no true effect of the exposure. In addition, we discuss the potential bias in estimates of magnitudes of effect from Mendelian randomization analyses when the measured exposure is a poor proxy for the true underlying exposure. We illustrate these points with specific reference to tobacco research.
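The selection bias is easy to reproduce in simulation. In the sketch below (sample sizes and effect structure are invented), no variant truly affects the exposure and the exposure has no causal effect on the outcome, yet picking the strongest in-sample exposure association drives the Wald ratio toward the confounded observational association rather than zero.

```python
import numpy as np

rng = np.random.default_rng(4)
n, n_snps, n_sims = 1000, 50, 200
ratios = []
for _ in range(n_sims):
    g = rng.binomial(2, 0.3, size=(n, n_snps))  # variants with no real effect
    u = rng.normal(size=n)                      # unmeasured confounder
    x = u + rng.normal(size=n)                  # exposure: no genetic effect
    y = u + rng.normal(size=n)                  # outcome: no causal exposure effect
    bx = np.array([np.polyfit(g[:, j], x, 1)[0] for j in range(n_snps)])
    top = np.argmax(np.abs(bx))                 # select the in-sample "top hit"
    by = np.polyfit(g[:, top], y, 1)[0]
    ratios.append(by / bx[top])
# The mean drifts toward the confounded observational slope (0.5 here), not 0.
print(f"mean Wald ratio over simulations: {np.mean(ratios):+.2f}")
```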

13.
Zigler CM, Belin TR. Biometrics 2012, 68(3): 922-932.
The literature on potential outcomes has shown that traditional methods for characterizing surrogate endpoints in clinical trials based only on observed quantities can fail to capture causal relationships between treatments, surrogates, and outcomes. Building on the potential-outcomes formulation of a principal surrogate, we introduce a Bayesian method to estimate the causal effect predictiveness (CEP) surface and quantify a candidate surrogate's utility for reliably predicting clinical outcomes. In considering the full joint distribution of all potentially observable quantities, our Bayesian approach has the following features. First, our approach illuminates implicit assumptions embedded in previously used estimation strategies that have been shown to result in poor performance. Second, our approach provides tools for making explicit and scientifically interpretable assumptions regarding associations about which observed data are not informative. Through simulations based on an HIV vaccine trial, we found that the Bayesian approach can produce estimates of the CEP surface with improved performance compared to previous methods. Third, our approach can extend principal-surrogate estimation beyond the previously considered setting of a vaccine trial where the candidate surrogate is constant in one arm of the study. We illustrate this extension through an application to an AIDS therapy trial where the candidate surrogate varies in both treatment arms.

14.
Taylor L, Zhou XH. Biometrics 2009, 65(1): 88-95.
Randomized clinical trials are a powerful tool for investigating causal treatment effects, but in human trials there are oftentimes problems of noncompliance which standard analyses, such as the intention-to-treat or as-treated analysis, either ignore or incorporate in such a way that the resulting estimand is no longer a causal effect. One alternative to these analyses is the complier average causal effect (CACE), which estimates the average causal treatment effect among a subpopulation that would comply under any treatment assigned. We focus on the setting of a randomized clinical trial with crossover treatment noncompliance (e.g., control subjects could receive the intervention and intervention subjects could receive the control) and outcome nonresponse. In this article, we develop estimators for the CACE using multiple imputation methods, which have been successfully applied to a wide variety of missing data problems but have not yet been applied to the potential outcomes setting of causal inference. Using simulated data we investigate the finite sample properties of these estimators as well as of competing procedures in a simple setting. Finally we illustrate our methods using a real randomized encouragement design study on the effectiveness of the influenza vaccine.
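A minimal sketch of combining multiple imputation with a CACE estimate, assuming a simple moment (ITT ratio) estimator in place of the paper's machinery; the strata proportions, outcome model, and missingness rate are invented.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 2000
z = rng.binomial(1, 0.5, n)  # randomized assignment
stratum = rng.choice(["complier", "never", "always"], size=n, p=[0.6, 0.25, 0.15])
d = np.where(stratum == "complier", z, (stratum == "always").astype(int))
y = 2.0 * d + rng.normal(size=n)  # true CACE = 2
miss = rng.random(n) < 0.2        # outcome nonresponse (missing at random)
y_obs = np.where(miss, np.nan, y)

cace_draws = []
for _ in range(10):                    # M = 10 imputations
    y_imp = y_obs.copy()
    for zz in (0, 1):
        for dd in (0, 1):              # impute within (assignment, received) cells
            cell = (z == zz) & (d == dd)
            obs = cell & ~miss
            mu, sd = y_obs[obs].mean(), y_obs[obs].std()
            fill = cell & miss
            y_imp[fill] = rng.normal(mu, sd, fill.sum())
    itt_y = y_imp[z == 1].mean() - y_imp[z == 0].mean()
    itt_d = d[z == 1].mean() - d[z == 0].mean()
    cace_draws.append(itt_y / itt_d)
print(f"pooled CACE estimate: {np.mean(cace_draws):.2f}")  # about 2
```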

15.
Follmann D. Biometrics 2006, 62(4): 1161-1169.
This article introduces methods for use in vaccine clinical trials to help determine whether the immune response to a vaccine is actually causing a reduction in the infection rate. This is not easy because immune response to the (say HIV) vaccine is only observed in the HIV vaccine arm. If we knew what the HIV-specific immune response in placebo recipients would have been, had they been vaccinated, this immune response could be treated essentially like a baseline covariate and an interaction with treatment could be evaluated. Relatedly, the rate of infection by this baseline covariate could be compared between the two groups, and a causative role of immune response would be supported if infection risk decreased with increasing HIV immune response only in the vaccine group. We introduce two methods for inferring this HIV-specific immune response. The first involves vaccinating everyone before baseline with an irrelevant vaccine, for example, rabies. Randomization ensures that the relationship between the immune responses to the rabies and HIV vaccines observed in the vaccine group is the same as what would have been seen in the placebo group. We infer a placebo volunteer's response to the HIV vaccine using their rabies response and a prediction model from the vaccine group. The second method entails vaccinating all uninfected placebo patients at the closeout of the trial with the HIV vaccine and recording immune response. We pretend this immune response at closeout is what they would have had at baseline. We can then infer what the distribution of immune response among placebo infecteds would have been. Such designs may help elucidate the role of immune response in preventing infections. More pointedly, they could be helpful in the decision to improve or abandon an HIV vaccine with mediocre performance in a phase III trial.
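The first design reduces to a prediction problem: fit the relationship between the irrelevant-vaccine and HIV-vaccine responses in the vaccine arm, then apply it to placebo recipients' baseline responses. A hedged toy version, where all data and the linear prediction model are assumptions:

```python
import numpy as np

rng = np.random.default_rng(6)
n = 400
rabies = rng.normal(size=2 * n)                 # baseline rabies response, both arms
hiv = 0.8 * rabies[:n] + rng.normal(0, 0.6, n)  # HIV response, vaccine arm only

slope, intercept = np.polyfit(rabies[:n], hiv, 1)  # prediction model, vaccine arm
pred_placebo = intercept + slope * rabies[n:]      # inferred response, placebo arm
print(f"predicted mean HIV-vaccine response in placebo arm: {pred_placebo.mean():+.2f}")
```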

17.
Gilbert PB, Hudgens MG. Biometrics 2008, 64(4): 1146-1154.
Frangakis and Rubin (2002, Biometrics 58, 21-29) proposed a new definition of a surrogate endpoint (a "principal" surrogate) based on causal effects. We introduce an estimand for evaluating a principal surrogate, the causal effect predictiveness (CEP) surface, which quantifies how well causal treatment effects on the biomarker predict causal treatment effects on the clinical endpoint. Although the CEP surface is not identifiable due to missing potential outcomes, it can be identified by incorporating a baseline covariate(s) that predicts the biomarker. Given case-cohort sampling of such a baseline predictor and the biomarker in a large blinded randomized clinical trial, we develop an estimated likelihood method for estimating the CEP surface. This estimation assesses the "surrogate value" of the biomarker for reliably predicting clinical treatment effects for the same or similar setting as the trial. A CEP surface plot provides a way to compare the surrogate value of multiple biomarkers. The approach is illustrated by the problem of assessing an immune response to a vaccine as a surrogate endpoint for infection.
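Because potential outcomes are fully observable in simulation, a toy can display the CEP idea directly: the surface contrasts infection risks under vaccine and placebo as a function of the vaccine-induced biomarker change. The generative model below is invented and sidesteps the identification problem the paper solves with a baseline predictor.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 20000
s0 = np.zeros(n)                              # biomarker under placebo (constant)
s1 = rng.normal(1.0, 1.0, n)                  # biomarker under vaccine
risk0 = np.full(n, 1.0 / (1.0 + np.exp(1.0))) # infection risk under placebo
risk1 = 1.0 / (1.0 + np.exp(1.0 + 1.2 * s1))  # risk falls with vaccine response
y0 = rng.binomial(1, risk0)                   # potential infection outcomes
y1 = rng.binomial(1, risk1)

diff = s1 - s0                                # vaccine-induced biomarker change
for lo, hi in [(-1, 0), (0, 1), (1, 2), (2, 3)]:  # slices of the CEP surface
    m = (diff >= lo) & (diff < hi)
    print(f"change in [{lo},{hi}): CEP = {y1[m].mean() - y0[m].mean():+.3f}")
```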


19.
Mehrotra DV, Li X, Gilbert PB. Biometrics 2006, 62(3): 893-900.
To support the design of the world's first proof-of-concept (POC) efficacy trial of a cell-mediated immunity-based HIV vaccine, we evaluate eight methods for testing the composite null hypothesis of no vaccine effect on either the incidence of HIV infection or the viral load set point among those infected, relative to placebo. The first two methods use a single test applied to the actual values or ranks of a burden-of-illness (BOI) outcome that combines the infection and viral load endpoints. The other six methods combine separate tests for the two endpoints using unweighted or weighted versions of the two-part z, Simes', and Fisher's methods. Based on extensive simulations that were used to design the landmark POC trial, the BOI methods are shown to have generally low power for rejecting the composite null hypothesis (and hence advancing the vaccine to a subsequent large-scale efficacy trial). The unweighted Simes' and Fisher's combination methods perform best overall. Importantly, this conclusion holds even after the test for the viral load component is adjusted for bias that can be introduced by conditioning on a postrandomization event (HIV infection). The adjustment is derived using a selection bias model based on the principal stratification framework of causal inference.
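For two component endpoints the unweighted combination methods have one-line forms; the p-values below are invented for illustration.

```python
import math
from scipy import stats

p_inf, p_vl = 0.04, 0.20           # component p-values (illustrative)
p1, p2 = sorted((p_inf, p_vl))
p_simes = min(2 * p1, p2)          # unweighted Simes combination for two tests
# Fisher: -2 * sum(log p) is chi-squared with 2k = 4 degrees of freedom.
p_fisher = stats.chi2.sf(-2 * (math.log(p_inf) + math.log(p_vl)), df=4)
print(f"Simes p = {p_simes:.3f}, Fisher p = {p_fisher:.4f}")
```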

20.
Many scientific problems require that treatment comparisons be adjusted for posttreatment variables, but the estimands underlying standard methods are not causal effects. To address this deficiency, we propose a general framework for comparing treatments adjusting for posttreatment variables that yields principal effects based on principal stratification. Principal stratification with respect to a posttreatment variable is a cross-classification of subjects defined by the joint potential values of that posttreatment variable under each of the treatments being compared. Principal effects are causal effects within a principal stratum. The key property of principal strata is that they are not affected by treatment assignment and therefore can be used just as any pretreatment covariate, such as age category. As a result, the central property of our principal effects is that they are always causal effects and do not suffer from the complications of standard posttreatment-adjusted estimands. We discuss briefly that such principal causal effects are the link between three recent applications with adjustment for posttreatment variables: (i) treatment noncompliance, (ii) missing outcomes (dropout) following treatment noncompliance, and (iii) censoring by death. We then attack the problem of surrogate or biomarker endpoints, where we show, using principal causal effects, that all current definitions of surrogacy, even when perfectly true, do not generally have the desired interpretation as causal effects of treatment on outcome. We go on to formulate estimands based on principal stratification and principal causal effects and show their superiority.
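A minimal numerical illustration of why principal strata behave like pretreatment covariates while observed posttreatment groups do not (the stratum proportions are invented): the observed "S = 1" subgroups of the two arms are mixtures of different strata, so comparing them does not compare like with like.

```python
# Joint potential values (S(0), S(1)) of a binary posttreatment variable S
# define four principal strata; the proportions here are made up.
p = {"(0,0)": 0.50, "(0,1)": 0.05, "(1,0)": 0.15, "(1,1)": 0.30}
obs_s1_treated = p["(0,1)"] + p["(1,1)"]  # S = 1 observed under treatment
obs_s1_control = p["(1,0)"] + p["(1,1)"]  # S = 1 observed under control
print(obs_s1_treated, obs_s1_control)     # 0.35 vs 0.45: different stratum mixtures
```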
