Similar Literature
 20 similar documents retrieved (search time: 31 ms)
1.
Gilbert PB, Hudgens MG. Biometrics 2008, 64(4):1146-1154
SUMMARY: Frangakis and Rubin (2002, Biometrics 58, 21-29) proposed a new definition of a surrogate endpoint (a "principal" surrogate) based on causal effects. We introduce an estimand for evaluating a principal surrogate, the causal effect predictiveness (CEP) surface, which quantifies how well causal treatment effects on the biomarker predict causal treatment effects on the clinical endpoint. Although the CEP surface is not identifiable from the observed data alone because of missing potential outcomes, it becomes identifiable by incorporating one or more baseline covariates that predict the biomarker. Given case-cohort sampling of such a baseline predictor and the biomarker in a large blinded randomized clinical trial, we develop an estimated likelihood method for estimating the CEP surface. This estimation assesses the "surrogate value" of the biomarker for reliably predicting clinical treatment effects for the same or similar setting as the trial. A CEP surface plot provides a way to compare the surrogate value of multiple biomarkers. The approach is illustrated by the problem of assessing an immune response to a vaccine as a surrogate endpoint for infection.
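A minimal sketch of the CEP-surface idea, using fully simulated potential outcomes (which are never jointly observed in a real trial): the causal effect predictiveness at a biomarker level is the difference in infection risk under vaccine versus placebo among subjects whose potential biomarker value falls near that level. The data-generating model and all names below are hypothetical, not the authors' estimation method.

```python
# Toy CEP "curve": risk difference Y(1)-Y(0) within bins of the potential biomarker S(1).
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

s1 = rng.normal(1.0, 1.0, n)                        # potential biomarker under vaccine, S(1)
p_y0 = 0.10 * np.ones(n)                            # infection risk under placebo, Y(0)
p_y1 = 0.10 / (1 + np.exp(1.5 * (s1 - 0.5)))        # risk under vaccine falls with S(1)
y0 = rng.binomial(1, p_y0)
y1 = rng.binomial(1, p_y1)

bins = np.linspace(-2, 4, 13)
for lo, hi in zip(bins[:-1], bins[1:]):
    idx = (s1 >= lo) & (s1 < hi)
    if idx.sum() > 500:
        cep = y1[idx].mean() - y0[idx].mean()
        print(f"S(1) in [{lo:+.1f},{hi:+.1f}): CEP = {cep:+.3f}")
```

A biomarker with surrogate value produces a CEP curve that moves further from zero as the biomarker response increases.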

2.
In randomized trials with noncompliance, causal effects cannot be identified without strong assumptions; several authors have therefore considered bounds on the causal effects. Applying an idea of VanderWeele (2008), Chiba (2009) gave bounds on the average causal effects in randomized trials with noncompliance using information on the randomized assignment, the treatment received, and the outcome, under monotonicity assumptions about covariates, but he did not incorporate any observed covariates. If a trial records covariates such as age, gender, and race, we propose new bounds that use this observed covariate information under monotonicity assumptions similar to those of VanderWeele and Chiba. We then compare the three bounds in a real example.

3.
In many experiments, researchers would like to compare treatments on an outcome that only exists in a subset of participants selected after randomization. For example, in preventive HIV vaccine efficacy trials it is of interest to determine whether randomization to vaccine causes lower HIV viral load, a quantity that only exists in participants who acquire HIV. To make a causal comparison and account for potential selection bias we propose a sensitivity analysis following the principal stratification framework set forth by Frangakis and Rubin (2002, Biometrics 58, 21-29). Our goal is to assess the average causal effect of treatment assignment on viral load at a given baseline covariate level in the always infected principal stratum (those who would have been infected whether they had been assigned to vaccine or placebo). We assume stable unit treatment values (SUTVA), randomization, and that subjects randomized to the vaccine arm who became infected would also have become infected if randomized to the placebo arm (monotonicity). It is not known which of the subjects infected in the placebo arm are in the always infected principal stratum, but this membership can be modeled conditional on covariates, the observed viral load, and a specified sensitivity parameter. Under parametric regression models for viral load, we obtain maximum likelihood estimates of the average causal effect conditional on covariates and the sensitivity parameter. We apply our methods to the world's first phase III HIV vaccine trial.
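An illustrative sensitivity analysis in the spirit of this principal-stratification approach, not the authors' likelihood: among placebo-arm infections, membership in the always infected stratum is tilted in proportion to exp(beta * viral load), and the causal comparison is traced over a grid of the sensitivity parameter beta (beta = 0 recovers the naive infected-subgroup comparison). All data and variable names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
vl_vaccine = rng.normal(4.0, 0.8, 60)    # log10 viral load, infected vaccine recipients
vl_placebo = rng.normal(4.5, 0.8, 90)    # log10 viral load, infected placebo recipients

for beta in (-1.0, -0.5, 0.0, 0.5, 1.0):
    w = np.exp(beta * vl_placebo)        # assumed selection weights for "always infected"
    placebo_always = np.sum(w * vl_placebo) / np.sum(w)
    ace = vl_vaccine.mean() - placebo_always
    print(f"beta = {beta:+.1f}: estimated causal effect on viral load = {ace:+.2f}")
```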

4.
Vaccines with limited ability to prevent HIV infection may positively impact the HIV/AIDS pandemic by preventing secondary transmission and disease in vaccine recipients who become infected. To evaluate the impact of vaccination on secondary transmission and disease, efficacy trials assess vaccine effects on HIV viral load and other surrogate endpoints measured after infection. A standard test that compares the distribution of viral load between the infected subgroups of vaccine and placebo recipients does not assess a causal effect of vaccine, because the comparison groups are selected after randomization. To address this problem, we formulate clinically relevant causal estimands using the principal stratification framework developed by Frangakis and Rubin (2002, Biometrics 58, 21-29), and propose a class of logistic selection bias models whose members identify the estimands. Given a selection model in the class, procedures are developed for testing and estimation of the causal effect of vaccination on viral load in the principal stratum of subjects who would be infected regardless of randomization assignment. We show how the procedures can be used for a sensitivity analysis that quantifies how the causal effect of vaccination varies with the presumed magnitude of selection bias.

5.
Summary. We consider methods for estimating the effect of a covariate on a disease onset distribution when the observed data structure consists of right-censored data on diagnosis times and current status data on onset times amongst individuals who have not yet been diagnosed. Dunson and Baird (2001, Biometrics 57, 306–403) approached this problem using maximum likelihood, under the assumption that the ratio of the diagnosis and onset distributions is monotonic nondecreasing. As an alternative, we propose a two-step estimator, an extension of the approach of van der Laan, Jewell, and Petersen (1997, Biometrika 84, 539–554) in the single sample setting, which is computationally much simpler and requires no assumptions on this ratio. A simulation study is performed comparing estimates obtained from these two approaches, as well as that from a standard current status analysis that ignores diagnosis data. Results indicate that the Dunson and Baird estimator outperforms the two-step estimator when the monotonicity assumption holds, but the reverse is true when the assumption fails. The simple current status estimator loses only a small amount of precision in comparison to the two-step procedure but requires monitoring time information for all individuals. In the data that motivated this work, a study of uterine fibroids and chemical exposure to dioxin, the monotonicity assumption is seen to fail. Here, the two-step and current status estimators both show no significant association between the level of dioxin exposure and the hazard for onset of uterine fibroids; the two-step estimator of the relative hazard associated with increasing levels of exposure has the least estimated variance amongst the three estimators considered.

6.
Summary. We develop sample size formulas for studies aiming to test mean differences between a treatment and control group when all-or-none nonadherence (noncompliance) and selection bias are expected. Recent work by Fay, Halloran, and Follmann (2007, Biometrics 63, 465–474) addressed the increased variances within groups defined by treatment assignment when nonadherence occurs, compared to the scenario of full adherence, under the assumption of no selection bias. In this article, we extend the authors' approach to allow selection bias in the form of systematic differences in means and variances among latent adherence subgroups. We illustrate the approach by performing sample size calculations to plan clinical trials with and without pilot adherence data. Sample size formulas and tests for normally distributed outcomes are also developed in a Web Appendix that account for uncertainty of estimates from external or internal pilot data.
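A generic sample-size sketch of the idea, not the authors' exact formulas: under all-or-none nonadherence with selection bias, each assignment group's mean and variance are those of a mixture over latent adherence subgroups, which dilutes the intention-to-treat effect and inflates within-arm variance relative to full adherence. The proportions, means, and SDs below are hypothetical.

```python
from statistics import NormalDist
import math

def mixture(props, means, sds):
    """Mean and variance of a mixture over latent adherence subgroups."""
    m = sum(p * mu for p, mu in zip(props, means))
    v = sum(p * (sd**2 + (mu - m)**2) for p, mu, sd in zip(props, means, sds))
    return m, v

# Treatment arm: 80% adhere (mean 1.0), 20% do not (mean 0.2, different SD = selection bias)
m1, v1 = mixture([0.8, 0.2], [1.0, 0.2], [1.0, 1.3])
# Control arm: everyone untreated
m0, v0 = mixture([1.0], [0.0], [1.0])

alpha, power = 0.05, 0.80
z = NormalDist().inv_cdf(1 - alpha / 2) + NormalDist().inv_cdf(power)
delta_itt = m1 - m0
n_per_arm = math.ceil(z**2 * (v1 + v0) / delta_itt**2)   # standard two-sample formula
print(f"ITT effect = {delta_itt:.2f}, n per arm = {n_per_arm}")
```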

7.
When the true end points (T) are difficult or costly to measure, surrogate markers (S) are often collected in clinical trials to help predict the effect of the treatment (Z). There is great interest in understanding the relationship among S, T, and Z. A principal stratification (PS) framework has been proposed by Frangakis and Rubin (2002) to study their causal associations. In this paper, we extend the framework to a multiple trial setting and propose a Bayesian hierarchical PS model to assess surrogacy. We apply the method to data from a large collection of colon cancer trials in which S and T are binary. We obtain the trial-specific causal measures among S, T, and Z, as well as their overall population-level counterparts that are invariant across trials. The method allows for information sharing across trials and reduces the nonidentifiability problem. We examine the frequentist properties of our model estimates and the impact of the monotonicity assumption using simulations. We also illustrate the challenges in evaluating surrogacy in the counterfactual framework that result from nonidentifiability.

8.
We focus on the problem of generalizing a causal effect estimated on a randomized controlled trial (RCT) to a target population described by a set of covariates from observational data. Available methods such as inverse propensity sampling weighting are not designed to handle missing values, which are however common in both data sources. In addition to coupling the assumptions for causal effect identifiability and for the missing values mechanism and to defining appropriate estimation strategies, one difficulty is to consider the specific structure of the data with two sources and treatment and outcome only available in the RCT. We propose three multiple imputation strategies to handle missing values when generalizing treatment effects, each handling the multisource structure of the problem differently (separate imputation, joint imputation with fixed effect, joint imputation ignoring source information). As an alternative to multiple imputation, we also propose a direct estimation approach that treats incomplete covariates as semidiscrete variables. The multiple imputation strategies and the latter alternative rely on different sets of assumptions concerning the impact of missing values on identifiability. We discuss these assumptions and assess the methods through an extensive simulation study. This work is motivated by the analysis of a large registry of over 20,000 major trauma patients and an RCT studying the effect of tranexamic acid administration on mortality in major trauma patients admitted to intensive care units. The analysis illustrates how the missing values handling can impact the conclusion about the effect generalized from the RCT to the target population.
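A simplified sketch of the "separate imputation then weight" pipeline: missing covariates are filled within each data source (a crude per-source mean imputation stands in here for proper multiple imputation), a sampling-score model for RCT membership is fit on the stacked covariates, and RCT subjects are reweighted toward the target population. All data, columns, and model choices are hypothetical.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)

def make_source(n, x_mean):
    x = rng.normal(x_mean, 1.0, (n, 2))
    x[rng.random((n, 2)) < 0.2] = np.nan                 # ~20% missing covariate values
    return x

x_rct, x_obs = make_source(500, 0.0), make_source(2000, 0.5)
z = rng.integers(0, 2, 500)                              # randomized treatment in the RCT
x0 = np.nan_to_num(x_rct[:, 0])                          # crude fill just to simulate outcomes
y = 1.0 + (0.5 + 0.3 * x0) * z + 0.3 * x0 + rng.normal(0, 1, 500)

# "Separate imputation": fill missing values with source-specific covariate means
for x in (x_rct, x_obs):
    means = np.nanmean(x, axis=0)
    x[np.isnan(x)] = np.take(means, np.where(np.isnan(x))[1])

# Sampling score: probability of belonging to the RCT given covariates
X = np.vstack([x_rct, x_obs])
s = np.r_[np.ones(len(x_rct)), np.zeros(len(x_obs))]
score = LogisticRegression().fit(X, s).predict_proba(x_rct)[:, 1]
w = (1 - score) / score                                  # reweight RCT subjects toward the target

ate = (np.sum(w * z * y) / np.sum(w * z)) - (np.sum(w * (1 - z) * y) / np.sum(w * (1 - z)))
print(f"IPSW-generalized treatment effect = {ate:.2f}")
```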

9.
In epidemiological and clinical research, investigators are frequently interested in estimating the direct effect of a treatment on an outcome that is not mediated by intermediate variables. In 2009, VanderWeele presented marginal structural models (MSMs) for estimating direct effects based on interventions on the mediator. This paper focuses on direct effects based on principal stratification, i.e. principal stratum direct effects (PSDEs), which are causal effects within latent subgroups of subjects where the mediator is constant, regardless of the exposure status. We propose MSMs for estimating PSDEs. We demonstrate that the PSDE can be estimated readily using MSMs under the monotonicity assumption.

10.
In studies based on electronic health records (EHR), the frequency of covariate monitoring can vary by covariate type, across patients, and over time, which can limit the generalizability of inferences about the effects of adaptive treatment strategies. In addition, monitoring is a health intervention in itself with costs and benefits, and stakeholders may be interested in the effect of monitoring when adopting adaptive treatment strategies. This paper demonstrates how to exploit nonsystematic covariate monitoring in EHR-based studies to both improve the generalizability of causal inferences and to evaluate the health impact of monitoring when evaluating adaptive treatment strategies. Using a real world, EHR-based, comparative effectiveness research (CER) study of patients with type II diabetes mellitus, we illustrate how the evaluation of joint dynamic treatment and static monitoring interventions can improve CER evidence and describe two alternate estimation approaches based on inverse probability weighting (IPW). First, we demonstrate the poor performance of the standard estimator of the effects of joint treatment-monitoring interventions, due to a large decrease in data support and concerns over finite-sample bias from near-violations of the positivity assumption (PA) for the monitoring process. Second, we detail an alternate IPW estimator using a no direct effect assumption. We demonstrate that this estimator can improve efficiency but at the potential cost of an increase in bias from violations of the PA for the treatment process.
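A minimal single-time-point sketch of why joint treatment-and-monitoring interventions strain the data: the IPW weight is the inverse of the product of the treatment and monitoring propensities, so targeting both processes shrinks the effective sample size relative to a treatment-only intervention. The data-generating model and names are hypothetical, and the sketch ignores the longitudinal structure of the real analysis.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n = 5000
x = rng.normal(0, 1, (n, 1))
a = rng.binomial(1, 1 / (1 + np.exp(-x[:, 0])))             # treatment depends on covariate
m = rng.binomial(1, 1 / (1 + np.exp(-0.5 - x[:, 0] - a)))   # monitoring depends on x and a

p_a = LogisticRegression().fit(x, a).predict_proba(x)[:, 1]
p_m = LogisticRegression().fit(np.c_[x, a], m).predict_proba(np.c_[x, a])[:, 1]

def ess(w):
    """Effective sample size of a set of weights."""
    w = w / w.sum()
    return 1.0 / np.sum(w**2)

w_treat = (a == 1) / p_a                       # intervention: "always treat"
w_joint = (a == 1) * (m == 1) / (p_a * p_m)    # "always treat AND always monitor"
print(f"ESS, treatment-only weights: {ess(w_treat[w_treat > 0]):.0f}")
print(f"ESS, joint treatment-monitoring weights: {ess(w_joint[w_joint > 0]):.0f}")
```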

11.
Li E, Wang N, Wang NY. Biometrics 2007, 63(4):1068-1078
Summary. Joint models are formulated to investigate the association between a primary endpoint and features of multiple longitudinal processes. In particular, the subject-specific random effects in a multivariate linear random-effects model for multiple longitudinal processes are predictors in a generalized linear model for primary endpoints. Li, Zhang, and Davidian (2004, Biometrics 60, 1–7) proposed an estimation procedure that makes no distributional assumption on the random effects but assumes independent within-subject measurement errors in the longitudinal covariate process. Based on an asymptotic bias analysis, we found that their estimators can be biased when random effects do not fully explain the within-subject correlations among longitudinal covariate measurements. Specifically, the existing procedure is fairly sensitive to the independent measurement error assumption. To overcome this limitation, we propose new estimation procedures that require neither a distributional nor a covariance structure assumption on covariate random effects, nor an independence assumption on within-subject measurement errors. These new procedures are more flexible, readily cover scenarios that have multivariate longitudinal covariate processes, and can be implemented using available software. Through simulations and an analysis of data from a hypertension study, we evaluate and illustrate the numerical performances of the new estimators.

12.
O'Malley AJ, Normand SL. Biometrics 2005, 61(2):325-334
While several new methods that account for noncompliance or missing data in randomized trials have been proposed, the dual effects of noncompliance and nonresponse are rarely dealt with simultaneously. We construct a maximum likelihood estimator (MLE) of the causal effect of treatment assignment for a two-armed randomized trial assuming all-or-none treatment noncompliance and allowing for subsequent nonresponse. The EM algorithm is used for parameter estimation. Our likelihood procedure relies on a latent compliance state covariate that describes the behavior of a subject under all possible treatment assignments and characterizes the missing data mechanism as in Frangakis and Rubin (1999, Biometrika 86, 365-379). Using simulated data, we show that the MLE for normal outcomes compares favorably to the method-of-moments (MOM) and the standard intention-to-treat (ITT) estimators under (1) both normal and non-normal data, and (2) departures from the latent ignorability and compound exclusion restriction assumptions. We illustrate methods using data from a trial to compare the efficacy of two antipsychotics for adults with refractory schizophrenia.
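A simplified EM sketch for all-or-none noncompliance with a normal outcome; nonresponse is omitted and noncompliance is assumed to occur only in the treatment arm, so this is not the authors' full model. Latent classes are compliers and never-takers, compliance is observed in the treatment arm and latent in the control arm, and never-takers share one mean regardless of assignment (exclusion restriction). All data below are simulated.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(4)
n = 2000
z = rng.integers(0, 2, n)                     # randomized assignment
complier = rng.random(n) < 0.6                # latent compliance class
d = z * complier                              # treatment received (all-or-none)
mu_c0, mu_c1, mu_n, sigma = 0.0, 1.0, -0.5, 1.0
y = np.where(complier, np.where(d == 1, mu_c1, mu_c0), mu_n) + rng.normal(0, sigma, n)

# EM for (pi, mu_c0, mu_c1, mu_n, sigma); the class is observed when z == 1
pi, m_c0, m_c1, m_n, s = 0.5, y.mean(), y.mean() + 0.1, y.mean() - 0.1, y.std()
for _ in range(200):
    # E-step: posterior probability of being a complier
    p = np.where(z == 1, d.astype(float),
                 pi * norm.pdf(y, m_c0, s) /
                 (pi * norm.pdf(y, m_c0, s) + (1 - pi) * norm.pdf(y, m_n, s)))
    # M-step
    pi = p.mean()
    m_c1 = np.sum(p * y * (z == 1)) / np.sum(p * (z == 1))
    m_c0 = np.sum(p * y * (z == 0)) / np.sum(p * (z == 0))
    m_n = np.sum((1 - p) * y) / np.sum(1 - p)
    resid2 = p * np.where(z == 1, (y - m_c1) ** 2, (y - m_c0) ** 2) + (1 - p) * (y - m_n) ** 2
    s = np.sqrt(resid2.mean())

print(f"complier proportion = {pi:.2f}, complier causal effect = {m_c1 - m_c0:.2f}")
```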

13.
Summary. Given a randomized treatment Z, a clinical outcome Y, and a biomarker S measured some fixed time after Z is administered, we may be interested in addressing the surrogate endpoint problem by evaluating whether S can be used to reliably predict the effect of Z on Y. Several recent proposals for the statistical evaluation of surrogate value have been based on the framework of principal stratification. In this article, we consider two principal stratification estimands: joint risks and marginal risks. Joint risks measure causal associations (CAs) of treatment effects on S and Y, providing insight into the surrogate value of the biomarker, but are not statistically identifiable from vaccine trial data. Although marginal risks do not measure CAs of treatment effects, they nevertheless provide guidance for future research, and we describe a data collection scheme and assumptions under which the marginal risks are statistically identifiable. We show how different sets of assumptions affect the identifiability of these estimands; in particular, we depart from previous work by considering the consequences of relaxing the assumption of no individual treatment effects on Y before S is measured. Based on algebraic relationships between joint and marginal risks, we propose a sensitivity analysis approach for assessment of surrogate value, and show that in many cases the surrogate value of a biomarker may be hard to establish, even when the sample size is large.

14.
VanderWeele TJ. Biometrics 2008, 64(2):645-649
Summary. In a presentation of various methods for assessing the sensitivity of regression results to unmeasured confounding, Lin, Psaty, and Kronmal (1998, Biometrics 54, 948–963) use a conditional independence assumption to derive algebraic relationships between the true exposure effect and the apparent exposure effect in a reduced model that does not control for the unmeasured confounding variable. However, Hernán and Robins (1999, Biometrics 55, 1316–1317) have noted that if the measured covariates and the unmeasured confounder both affect the exposure of interest then the principal conditional independence assumption that is used to derive these algebraic relationships cannot hold. One particular result of Lin et al. does not rely on the conditional independence assumption but only on assumptions concerning additivity. It can be shown that this assumption is satisfied for an entire family of distributions even if both the measured covariates and the unmeasured confounder affect the exposure of interest. These considerations clarify the appropriate contexts in which relevant sensitivity analysis techniques can be applied.
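A toy illustration of the kind of algebraic relationship these sensitivity analyses rely on, not Lin et al.'s exact formulas: with a linear, additive outcome model, a single binary exposure, and no other covariates, omitting the confounder U biases the exposure effect by gamma times the difference in the mean of U between exposure groups. In practice gamma and that difference are specified sensitivity parameters rather than estimated, since U is unmeasured; here they are known because the data are simulated.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 100_000
u = rng.normal(0, 1, n)                       # unmeasured confounder
x = rng.binomial(1, 1 / (1 + np.exp(-u)))     # exposure depends on U
beta, gamma = 0.5, 0.8
y = beta * x + gamma * u + rng.normal(0, 1, n)

apparent = y[x == 1].mean() - y[x == 0].mean()            # analysis omitting U
delta_u = u[x == 1].mean() - u[x == 0].mean()             # sensitivity quantity
corrected = apparent - gamma * delta_u                    # external adjustment
print(f"apparent effect {apparent:.3f}, corrected effect {corrected:.3f} (true {beta})")
```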

15.
Mattei A, Mealli F. Biometrics 2007, 63(2):437-446
In this article we present an extended framework based on the principal stratification approach (Frangakis and Rubin, 2002, Biometrics 58, 21-29), for the analysis of data from randomized experiments which suffer from treatment noncompliance, missing outcomes following treatment noncompliance, and "truncation by death." We are not aware of any previous work that addresses all these complications jointly. This framework is illustrated in the context of a randomized trial of breast self-examination.

16.
We consider methods for causal inference in randomized trials nested within cohorts of trial-eligible individuals, including those who are not randomized. We show how baseline covariate data from the entire cohort, and treatment and outcome data only from randomized individuals, can be used to identify potential (counterfactual) outcome means and average treatment effects in the target population of all eligible individuals. We review identifiability conditions, propose estimators, and assess the estimators' finite-sample performance in simulation studies. As an illustration, we apply the estimators in a trial nested within a cohort of trial-eligible individuals to compare coronary artery bypass grafting surgery plus medical therapy vs. medical therapy alone for chronic coronary artery disease.
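A sketch of one outcome-model (g-formula) estimator of this general type, not the authors' exact estimators: treatment and outcome are observed only for randomized individuals, but arm-specific outcome models fit in the trial can be standardized over the baseline covariates of the whole eligible cohort. The data-generating model and names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(6)
n_cohort = 10_000
x = rng.normal(0, 1, n_cohort)                              # baseline covariate, whole cohort
in_trial = rng.random(n_cohort) < 1 / (1 + np.exp(-x))      # participation depends on x
a = np.where(in_trial, rng.integers(0, 2, n_cohort), -1)    # randomized only in the trial
y = 1.0 + (0.8 + 0.4 * x) * (a == 1) + 0.5 * x + rng.normal(0, 1, n_cohort)  # used only in trial

def fit_linear(xv, yv):
    X = np.c_[np.ones(len(xv)), xv]
    return np.linalg.lstsq(X, yv, rcond=None)[0]

b1 = fit_linear(x[a == 1], y[a == 1])             # outcome model, treated trial subjects
b0 = fit_linear(x[a == 0], y[a == 0])             # outcome model, control trial subjects

# Standardize predictions over the covariate distribution of ALL eligible individuals
mu1 = np.mean(b1[0] + b1[1] * x)
mu0 = np.mean(b0[0] + b0[1] * x)
print(f"target-population ATE = {mu1 - mu0:.2f} "
      f"(trial-only difference: {y[a == 1].mean() - y[a == 0].mean():.2f})")
```

Because trial participants here have systematically higher covariate values and the treatment effect varies with the covariate, the trial-only contrast differs from the standardized target-population effect.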

17.
We present methods for causally interpretable meta-analyses that combine information from multiple randomized trials to draw causal inferences for a target population of substantive interest. We consider identifiability conditions, derive implications of the conditions for the law of the observed data, and obtain identification results for transporting causal inferences from a collection of independent randomized trials to a new target population in which experimental data may not be available. We propose an estimator for the potential outcome mean in the target population under each treatment studied in the trials. The estimator uses covariate, treatment, and outcome data from the collection of trials, but only covariate data from the target population sample. We show that it is doubly robust in the sense that it is consistent and asymptotically normal when at least one of the models it relies on is correctly specified. We study the finite sample properties of the estimator in simulation studies and demonstrate its implementation using data from a multicenter randomized trial.

18.
Clinical studies are often concerned with assessing whether different raters/methods produce similar values for measuring a quantitative variable. Use of the concordance correlation coefficient as a measure of reproducibility has gained popularity in practice since its introduction by Lin (1989, Biometrics 45, 255-268). Lin's method is applicable for studies evaluating two raters/two methods without replications. Chinchilli et al. (1996, Biometrics 52, 341-353) extended Lin's approach to repeated measures designs by using a weighted concordance correlation coefficient. However, the existing methods cannot easily accommodate covariate adjustment, especially when one needs to model agreement. In this article, we propose a generalized estimating equations (GEE) approach to model the concordance correlation coefficient via three sets of estimating equations. The proposed approach is flexible in that (1) it can accommodate more than two correlated readings and test for the equality of dependent concordance correlation estimates; (2) it can incorporate covariates predictive of the marginal distribution; (3) it can be used to identify covariates predictive of concordance correlation; and (4) it requires minimal distribution assumptions. A simulation study is conducted to evaluate the asymptotic properties of the proposed approach. The method is illustrated with data from two biomedical studies.
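For reference, Lin's (1989) concordance correlation coefficient for two raters, the quantity that the GEE approach above models with covariates, is 2*cov(X,Y) / (var(X) + var(Y) + (mean(X) - mean(Y))^2); it penalizes both poor correlation and systematic disagreement. The GEE machinery itself is not sketched here, and the ratings below are simulated.

```python
import numpy as np

rng = np.random.default_rng(7)
truth = rng.normal(10, 2, 200)
rater1 = truth + rng.normal(0.0, 0.5, 200)
rater2 = truth + rng.normal(0.4, 0.7, 200)          # small systematic and random disagreement

def ccc(x, y):
    """Lin's concordance correlation coefficient for two sets of readings."""
    sxy = np.cov(x, y, bias=True)[0, 1]
    return 2 * sxy / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)

print(f"Pearson r = {np.corrcoef(rater1, rater2)[0, 1]:.3f}, CCC = {ccc(rater1, rater2):.3f}")
```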

19.
Patterns of treatment effects in subsets of patients in clinical trials
We discuss the practice of examining patterns of treatment effects across overlapping patient subpopulations. In particular, we focus on the case in which patient subgroups are defined to contain patients having increasingly larger (or smaller) values of one particular covariate of interest, with the intent of exploring the possible interaction between treatment effect and that covariate. We formalize these subgroup approaches (STEPP: subpopulation treatment effect pattern plots) and implement them when treatment effect is defined as the difference in survival at a fixed time point between two treatment arms. The joint asymptotic distribution of the treatment effect estimates is derived, and used to construct simultaneous confidence bands around the estimates and to test the null hypothesis of no interaction. These methods are illustrated using data from a clinical trial conducted by the International Breast Cancer Study Group, which demonstrates the critical role of estrogen receptor content of the primary breast cancer for selecting appropriate adjuvant therapy. The considerations are also relevant for general subset analysis, since information from the same patients is typically used in the estimation of treatment effects within two or more subgroups of patients defined with respect to different covariates.
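A minimal STEPP-style sketch: overlapping subpopulations are formed by sliding a window along increasing values of the covariate of interest, and the treatment effect is estimated within each window. For simplicity the effect here is the arm difference in the proportion event-free by a fixed time under complete follow-up, rather than a censoring-adjusted survival estimate, and the simultaneous confidence bands and interaction test are not implemented. Data are simulated.

```python
import numpy as np

rng = np.random.default_rng(8)
n = 3000
er = rng.uniform(0, 100, n)                        # covariate of interest (e.g., ER content)
arm = rng.integers(0, 2, n)                        # randomized treatment arm
# Treatment benefit grows with the covariate (the interaction pattern we want to see)
p_event = 1 / (1 + np.exp(arm * (er - 30) / 40))
event_by_t = rng.binomial(1, p_event)

order = np.argsort(er)
window, step = 600, 300                            # overlapping subpopulations
for start in range(0, n - window + 1, step):
    idx = order[start:start + window]
    e, a, ev = er[idx], arm[idx], event_by_t[idx]
    effect = (1 - ev[a == 1].mean()) - (1 - ev[a == 0].mean())
    print(f"covariate median {np.median(e):5.1f}: event-free difference {effect:+.3f}")
```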

20.
Many scientific problems require that treatment comparisons be adjusted for posttreatment variables, but the estimands underlying standard methods are not causal effects. To address this deficiency, we propose a general framework for comparing treatments adjusting for posttreatment variables that yields principal effects based on principal stratification. Principal stratification with respect to a posttreatment variable is a cross-classification of subjects defined by the joint potential values of that posttreatment variable under each of the treatments being compared. Principal effects are causal effects within a principal stratum. The key property of principal strata is that they are not affected by treatment assignment and therefore can be used just as any pretreatment covariate, such as age category. As a result, the central property of our principal effects is that they are always causal effects and do not suffer from the complications of standard posttreatment-adjusted estimands. We discuss briefly that such principal causal effects are the link between three recent applications with adjustment for posttreatment variables: (i) treatment noncompliance, (ii) missing outcomes (dropout) following treatment noncompliance, and (iii) censoring by death. We then attack the problem of surrogate or biomarker endpoints, where we show, using principal causal effects, that all current definitions of surrogacy, even when perfectly true, do not generally have the desired interpretation as causal effects of treatment on outcome. We go on to formulate estimands based on principal stratification and principal causal effects and show their superiority.
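A toy illustration of the definition: with simulated joint potential values of a posttreatment variable S (which are never jointly observed in practice), subjects are cross-classified into strata such as (S(0)=0, S(1)=1), and a principal effect is the treatment effect on Y within one such stratum. Because stratum membership does not depend on which treatment is assigned, each within-stratum contrast of Y(1) versus Y(0) is a causal effect. Names and the data-generating model are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(9)
n = 100_000
s0 = rng.binomial(1, 0.3, n)                       # posttreatment variable under control, S(0)
s1 = np.maximum(s0, rng.binomial(1, 0.4, n))       # and under treatment, S(1) (monotone here)
y0 = rng.normal(0.0 + 0.5 * s0, 1, n)              # potential outcomes
y1 = rng.normal(0.3 + 0.5 * s1, 1, n)

for stratum, label in [((0, 0), "never"), ((0, 1), "only if treated"), ((1, 1), "always")]:
    idx = (s0 == stratum[0]) & (s1 == stratum[1])
    print(f"S(0)={stratum[0]}, S(1)={stratum[1]} ({label}, {idx.mean():.0%} of subjects): "
          f"principal effect on Y = {y1[idx].mean() - y0[idx].mean():+.2f}")
```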
