Similar references
 20 similar records retrieved.
1.
In randomized trials with noncompliance, causal effects cannot be identified without strong assumptions. Therefore, several authors have considered bounds on the causal effects. Applying an idea of VanderWeele (2008), Chiba (2009) gave bounds on the average causal effects in randomized trials with noncompliance, using information on the randomized assignment, the treatment received, and the outcome, under monotonicity assumptions about covariates. However, he did not incorporate any observed covariates. When observed covariates such as age, gender, and race are available in a trial, we propose new bounds that use this covariate information under monotonicity assumptions similar to those of VanderWeele and Chiba, and we compare the three bounds in a real example.

2.
In a clinical trial, statistical reports are typically concerned with the mean difference between two groups. There is now increasing interest in the heterogeneity of the treatment effect, which has important implications for treatment evaluation and selection. The treatment harm rate (THR), defined as the proportion of people who have a worse outcome on the treatment than on the control, has been used to characterize this heterogeneity. Because the THR involves the joint distribution of the two potential outcomes, it cannot be identified without further assumptions, even in randomized trials; only simple bounds can be derived from the observed data, and these are usually too wide. In this paper, we use a secondary outcome that satisfies a monotonicity assumption to tighten the bounds. We show that the resulting bounds can be no wider than the simple bounds. We also conduct simulation studies to assess the finite-sample performance of our bounds; the results show that a secondary outcome more closely related to the primary outcome leads to narrower bounds. Finally, we illustrate the proposed bounds in a randomized clinical trial of whether intensive glycemic control reduces the risk of development or progression of diabetic retinopathy.
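As a small illustration of the simple bounds mentioned above, the sketch below computes the Fréchet-type limits on the THR from the two arm-level success probabilities that a randomized trial identifies; the function name and the numbers are illustrative, not taken from the article.

```python
# Simple (Frechet-type) bounds on the treatment harm rate for a binary outcome,
# THR = P(Y(1) = 0, Y(0) = 1), with Y = 1 denoting success.  Only the marginals
# P(Y(1) = 1) and P(Y(0) = 1) are identified in a randomized trial, so the
# joint probability is only partially identified.

def thr_simple_bounds(p1, p0):
    """p1 = P(Y = 1 | treatment), p0 = P(Y = 1 | control), estimated arm-wise."""
    lower = max(0.0, p0 - p1)      # harm must absorb at least any drop in the success rate
    upper = min(p0, 1.0 - p1)      # harm cannot exceed either marginal constraint
    return lower, upper

# Hypothetical arm-level success proportions, for illustration only.
print(thr_simple_bounds(p1=0.60, p0=0.45))   # -> (0.0, 0.40)
```

The article's contribution is to tighten limits of this kind by exploiting a secondary outcome that satisfies a monotonicity assumption.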

3.
Taylor L, Zhou XH. Biometrics. 2009;65(1):88-95.
Randomized clinical trials are a powerful tool for investigating causal treatment effects, but human trials often suffer from noncompliance, which standard analyses such as the intention-to-treat or as-treated analysis either ignore or incorporate in such a way that the resulting estimand is no longer a causal effect. One alternative is the complier average causal effect (CACE), the average causal treatment effect in the subpopulation that would comply under any treatment assigned. We focus on a randomized clinical trial with crossover treatment noncompliance (e.g., control subjects could receive the intervention and intervention subjects could receive the control) and outcome nonresponse. In this article, we develop estimators for the CACE using multiple imputation methods, which have been applied successfully to a wide variety of missing data problems but not yet to the potential outcomes setting of causal inference. Using simulated data, we investigate the finite-sample properties of these estimators and of competing procedures in a simple setting. Finally, we illustrate our methods using a real randomized encouragement design study on the effectiveness of the influenza vaccine.
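For orientation only, the sketch below shows the standard moment-based (instrumental-variable/Wald) CACE estimator under randomization, the exclusion restriction, and monotonicity, applied to complete cases; it is not the multiple-imputation estimator developed in the article, and the toy data are hypothetical.

```python
import numpy as np

def cace_wald(z, d, y):
    """Moment-based CACE estimate: the ITT effect on the outcome divided by the
    ITT effect on treatment received (the compliance-rate difference).
    z = assignment, d = treatment received, y = outcome; complete data assumed."""
    z, d, y = map(np.asarray, (z, d, y))
    itt_y = y[z == 1].mean() - y[z == 0].mean()
    itt_d = d[z == 1].mean() - d[z == 0].mean()
    return itt_y / itt_d

# Toy crossover-noncompliance data: some controls get treated, some assigned to
# treatment do not take it.
rng = np.random.default_rng(0)
z = rng.integers(0, 2, size=2000)
d = np.where(z == 1, rng.random(2000) < 0.8, rng.random(2000) < 0.2).astype(int)
y = 1.0 * d + rng.normal(size=2000)          # true effect of treatment received is 1
print(cace_wald(z, d, y))                    # should be close to 1
```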

4.
Evaluation of the causal effect of a baseline exposure on a morbidity outcome at a fixed time point is often complicated when study participants die before morbidity outcomes are measured. In this setting, the causal effect is only well defined for the principal stratum of subjects who would live regardless of the exposure. Motivated by gerontologic researchers interested in understanding the causal effect of vision loss on emotional distress in a population with a high mortality rate, we investigate the effect among those who would live both with and without vision loss. Since this subpopulation is not readily identifiable from the data and vision loss is not randomized, we introduce a set of scientifically driven assumptions to identify the causal effect. Since these assumptions are not empirically verifiable, we embed our methodology within a sensitivity analysis framework. We apply our method using the first three rounds of survey data from the Salisbury Eye Evaluation, a population-based cohort study of older adults. We also present a simulation study that validates our method.

5.
We focus on the problem of generalizing a causal effect estimated in a randomized controlled trial (RCT) to a target population described by a set of covariates from observational data. Available methods such as inverse propensity sampling weighting are not designed to handle missing values, which are, however, common in both data sources. Beyond coupling the assumptions for causal-effect identifiability with those for the missing-values mechanism and defining appropriate estimation strategies, one difficulty is the specific structure of the data: two sources, with treatment and outcome available only in the RCT. We propose three multiple imputation strategies to handle missing values when generalizing treatment effects, each handling the multisource structure of the problem differently (separate imputation, joint imputation with a fixed effect, joint imputation ignoring source information). As an alternative to multiple imputation, we also propose a direct estimation approach that treats incomplete covariates as semidiscrete variables. The multiple imputation strategies and this alternative rely on different sets of assumptions about the impact of missing values on identifiability. We discuss these assumptions and assess the methods in an extensive simulation study. This work is motivated by the analysis of a large registry of over 20,000 major trauma patients and an RCT studying the effect of tranexamic acid administration on mortality in major trauma patients admitted to intensive care units. The analysis illustrates how the handling of missing values can affect the conclusions about the effect generalized from the RCT to the target population.
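As a complete-data reference point, here is a minimal inverse-propensity-of-sampling-weighting (IPSW) sketch for generalizing an RCT effect to a target sample described by covariates; the variable names are ours and, as the abstract emphasizes, this estimator by itself does not handle missing covariate values.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def ipsw_generalized_ate(x_rct, a_rct, y_rct, x_target):
    """Generalize the RCT treatment effect to the target population by weighting
    trial subjects by their estimated odds of belonging to the target sample."""
    x_rct, a_rct, y_rct, x_target = map(np.asarray, (x_rct, a_rct, y_rct, x_target))
    x = np.vstack([x_rct, x_target])
    s = np.concatenate([np.ones(len(x_rct)), np.zeros(len(x_target))])
    # P(S = 1 | X) on the stacked sample; S = 1 indicates RCT membership.
    pi = LogisticRegression(max_iter=1000).fit(x, s).predict_proba(x_rct)[:, 1]
    w = (1 - pi) / pi
    w1, w0 = w * (a_rct == 1), w * (a_rct == 0)
    return np.sum(w1 * y_rct) / np.sum(w1) - np.sum(w0 * y_rct) / np.sum(w0)
```

The article's multiple-imputation strategies would fill in the incomplete covariates (separately or jointly across sources) before a weighting step of this kind.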

6.
In non-randomized studies, the assessment of a causal effect of treatment or exposure on outcome is hampered by possible confounding. Applying multiple regression models including the effects of treatment and covariates on outcome is the well-known classical approach to adjust for confounding. In recent years other approaches have been promoted. One of them is based on the propensity score and considers the effect of possible confounders on treatment as a relevant criterion for adjustment. Another proposal is based on using an instrumental variable. Here inference relies on a factor, the instrument, which affects treatment but is thought to be otherwise unrelated to outcome, so that it mimics randomization. Each of these approaches can basically be interpreted as a simple reweighting scheme, designed to address confounding. The procedures will be compared with respect to their fundamental properties, namely, which bias they aim to eliminate, which effect they aim to estimate, and which parameter is modelled. We will expand our overview of methods for analysis of non-randomized studies to methods for analysis of randomized controlled trials and show that analyses of both study types may target different effects and different parameters. The considerations will be illustrated using a breast cancer study with a so-called Comprehensive Cohort Study design, including a randomized controlled trial and a non-randomized study in the same patient population as sub-cohorts. This design offers ideal opportunities to discuss and illustrate the properties of the different approaches.
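To make the "simple reweighting scheme" interpretation concrete, a minimal propensity-score (inverse-probability-of-treatment) weighting sketch follows; the names and model choices are illustrative. The article develops the analogous reweighting view for the instrumental-variable approach as well.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def iptw_ate(x, a, y):
    """Propensity-score reweighting estimate of the average treatment effect in a
    non-randomized study: x = covariates, a = binary treatment, y = outcome."""
    x, a, y = map(np.asarray, (x, a, y))
    e = LogisticRegression(max_iter=1000).fit(x, a).predict_proba(x)[:, 1]
    w = np.where(a == 1, 1.0 / e, 1.0 / (1.0 - e))   # reweight each arm to the full sample
    w1, w0 = w * (a == 1), w * (a == 0)
    return np.sum(w1 * y) / np.sum(w1) - np.sum(w0 * y) / np.sum(w0)
```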

7.
Valid surrogate endpoints S can be used as substitutes for a true outcome of interest T to measure treatment efficacy in a clinical trial. We propose a causal inference approach to validating a surrogate that incorporates longitudinal measurements of the true outcome through a mixed-modeling approach, and we define models and validation quantities that may vary over the study period using principal surrogacy criteria. We consider a surrogate-dependent treatment efficacy curve that allows us to validate the surrogate at different time points. We extend these methods to accommodate a delayed-start treatment design in which all patients eventually receive the treatment. Not all parameters are identified in the general setting, so we apply a Bayesian approach for estimation and inference, using more informative prior distributions for selected parameters. We assess sensitivity to these prior assumptions and to assumptions of independence among certain counterfactual quantities, conditional on pretreatment covariates, that improve identifiability. We also examine the frequentist properties (bias of point and variance estimates, credible interval coverage) of a Bayesian imputation method. Our work is motivated by a clinical trial of a gene therapy in which the functional outcomes are measured repeatedly throughout the trial.

8.
This article is concerned with drawing inference about aspects of the population distribution of ordinal outcome data measured on a cohort of individuals on two occasions, where some subjects are missing their second measurement. We present two complementary approaches for constructing bounds under assumptions on the missing data mechanism considered plausible by scientific experts. We develop our methodology within the context of a randomized trial of the "Good Behavior Game," an intervention designed to reduce aggressive misbehavior among children.
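For comparison with the expert-assumption-based bounds discussed in the abstract, the worst-case (no-assumption) bounds on the mean of a bounded ordinal outcome with missing second measurements can be computed as below; the coding of the scale and the numbers are hypothetical.

```python
import numpy as np

def worst_case_mean_bounds(y_observed, n_missing, y_min, y_max):
    """No-assumption bounds on the population mean of a bounded ordinal outcome
    when n_missing subjects are missing their second measurement."""
    n = len(y_observed) + n_missing
    total = float(np.sum(y_observed))
    lower = (total + n_missing * y_min) / n   # all missing values at the worst category
    upper = (total + n_missing * y_max) / n   # all missing values at the best category
    return lower, upper

# Illustration: outcome coded 1..4, 80 observed second measurements, 20 missing.
rng = np.random.default_rng(1)
print(worst_case_mean_bounds(rng.integers(1, 5, size=80), n_missing=20, y_min=1, y_max=4))
```

Assumptions on the missing-data mechanism, such as those elicited from subject-matter experts in the Good Behavior Game trial, serve to narrow these limits.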

9.
It is widely known that instrumental variable (IV) estimation allows the researcher to estimate causal effects between an exposure and an outcome even in the face of serious uncontrolled confounding. The key requirement for IV estimation is the existence of a variable, the instrument, that affects the outcome only through its effect on the exposure and whose relationship with the outcome is unconfounded. Countless papers have employed such techniques and carefully addressed the validity of this IV assumption. Less appreciated, however, is the fact that IV estimation also depends on a number of distributional assumptions, in particular linearities. In this paper, we propose a novel bounding procedure that bounds the true causal effect relying only on the key IV assumption and not on any distributional assumptions. For the purely binary case (instrument, exposure, and outcome all binary), such bounds were proposed by Balke and Pearl in 1997; we extend them to non-binary settings. In addition, our procedure offers a tuning parameter with which one can move from the traditional IV analysis, which provides a point estimate, to a completely unrestricted bound, and anything in between. Subject-matter knowledge can be used when setting the tuning parameter. To the best of our knowledge, no such methods exist elsewhere. The method is illustrated using a pivotal study that introduced IV estimation to epidemiologists; we demonstrate that the conclusion of that paper indeed hinges on these additional distributional assumptions. R code is provided in the Supporting Information.
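For the purely binary case cited above, the Balke-Pearl bounds can be reproduced numerically by linear programming over the distribution of counterfactual response types, as in the generic sketch below; this is our illustration of the idea, not the authors' extension to non-binary settings or their tuning-parameter procedure.

```python
from itertools import product

import numpy as np
from scipy.optimize import linprog

def iv_ace_bounds(p_obs):
    """Bounds on the ACE E[Y(1)] - E[Y(0)] with binary instrument Z, exposure X,
    and outcome Y, computed by linear programming over the 16 counterfactual
    response types.  p_obs[z][(x, y)] = P(X = x, Y = y | Z = z)."""
    # A type is (x0, x1, y0, y1): exposure under z = 0/1 and outcome under x = 0/1.
    types = list(product([0, 1], repeat=4))
    A_eq, b_eq = [], []
    for z, x, y in product([0, 1], repeat=3):
        row = []
        for x0, x1, y0, y1 in types:
            x_z = x1 if z == 1 else x0            # exposure implied by the type under z
            y_x = y1 if x_z == 1 else y0          # outcome implied by that exposure
            row.append(1.0 if (x_z == x and y_x == y) else 0.0)
        A_eq.append(row)
        b_eq.append(p_obs[z][(x, y)])
    # ACE as a linear function of the type probabilities.
    c = np.array([y1 - y0 for _, _, y0, y1 in types], dtype=float)
    lower = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=(0, 1)).fun
    upper = -linprog(-c, A_eq=A_eq, b_eq=b_eq, bounds=(0, 1)).fun
    return lower, upper
```

With estimated frequencies the equality constraints can be slightly infeasible, so in practice they are relaxed to inequalities with a small tolerance; the tuning parameter described in the abstract moves between such a fully nonparametric program and the point-identifying traditional IV analysis.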

10.
This paper addresses treatment effect heterogeneity (also referred to, more compactly, as 'treatment heterogeneity') in the context of a controlled clinical trial with binary endpoints. Treatment heterogeneity, variation in the true (causal) individual treatment effects, is explored using the concept of the potential outcome. This framework supposes the existence of latent responses for each subject corresponding to each possible treatment. In the context of a binary endpoint, treatment heterogeneity may be represented by the parameter pi2, the probability that an individual would have a failure on the experimental treatment, if received, and a success on control, if received. Previous research derived bounds for pi2 based on matched-pairs data. The present research extends this method to the blocked-data context. Estimates (and their variances) and confidence intervals for the bounds are derived. We apply the new method to data from a renal disease clinical trial. In this example, bounds based on the blocked data are narrower than the corresponding bounds based only on the marginal success proportions. Some remaining challenges (including the possibility of further reducing the bound widths) are discussed.

11.
In many experiments, researchers would like to compare treatments on an outcome that exists only in a subset of participants selected after randomization. For example, in preventive HIV vaccine efficacy trials it is of interest to determine whether randomization to vaccine causes lower HIV viral load, a quantity that exists only in participants who acquire HIV. To make a causal comparison and account for potential selection bias, we propose a sensitivity analysis following the principal stratification framework set forth by Frangakis and Rubin (2002, Biometrics 58, 21-29). Our goal is to assess the average causal effect of treatment assignment on viral load at a given baseline covariate level in the always-infected principal stratum (those who would have been infected whether assigned to vaccine or placebo). We assume the stable unit treatment value assumption (SUTVA), randomization, and that subjects randomized to the vaccine arm who became infected would also have become infected had they been randomized to the placebo arm (monotonicity). It is not known which of the subjects infected in the placebo arm are in the always-infected principal stratum, but this can be modeled conditional on covariates, the observed viral load, and a specified sensitivity parameter. Under parametric regression models for viral load, we obtain maximum likelihood estimates of the average causal effect conditional on covariates and the sensitivity parameter. We apply our methods to the world's first phase III HIV vaccine trial.

12.
We consider methods for causal inference in randomized trials nested within cohorts of trial-eligible individuals, including those who are not randomized. We show how baseline covariate data from the entire cohort, and treatment and outcome data only from randomized individuals, can be used to identify potential (counterfactual) outcome means and average treatment effects in the target population of all eligible individuals. We review identifiability conditions, propose estimators, and assess the estimators' finite-sample performance in simulation studies. As an illustration, we apply the estimators in a trial nested within a cohort of trial-eligible individuals to compare coronary artery bypass grafting surgery plus medical therapy vs. medical therapy alone for chronic coronary artery disease.
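A minimal outcome-model ("g-formula") sketch of the idea: fit arm-specific outcome models in the randomized subset and average their predictions over the covariates of the entire cohort of trial-eligible individuals. The function below is ours for illustration; the article reviews the identifiability conditions and proposes estimators, including weighting-based ones, in detail.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def gformula_cohort_ate(x_rand, a_rand, y_rand, x_cohort):
    """Average treatment effect in the full cohort, using outcome data only from
    the randomized individuals and baseline covariates from everyone."""
    x_rand, a_rand, y_rand, x_cohort = map(np.asarray, (x_rand, a_rand, y_rand, x_cohort))
    means = {}
    for a in (0, 1):
        fit = LinearRegression().fit(x_rand[a_rand == a], y_rand[a_rand == a])
        means[a] = fit.predict(x_cohort).mean()   # marginalize over the cohort covariates
    return means[1] - means[0]
```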

13.
14.
Taylor JM, Wang Y, Thiébaut R. Biometrics. 2005;61(4):1102-1111.
In a randomized clinical trial, a statistic that measures the proportion of the treatment effect on the primary clinical outcome that is explained by the treatment effect on a surrogate outcome is a useful concept. We investigate whether a statistic proposed to estimate this proportion can be given a causal interpretation as defined by models of counterfactual variables. For the situation of binary surrogate and outcome variables, two counterfactual models are considered, both of which include the concept of the proportion of the treatment effect that acts through the surrogate. In general, the statistic does not equal either of the two proportions from the counterfactual models, and can be substantially different. Conditions are given under which the statistic does equal the counterfactual model proportions. A randomized clinical trial with potential surrogate endpoints is undertaken in a scientific context, and this context naturally places constraints on the parameters of the counterfactual model. We conducted a simulation experiment to investigate what impact these constraints have on the relationship between the proportion explained (PE) statistic and the counterfactual model proportions. We found that observable constraints had very little impact on the agreement between the statistic and the counterfactual model proportions, whereas unobservable constraints could lead to more agreement.
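The proportion-explained statistic examined here is commonly computed in the style attributed to Freedman and colleagues: one minus the ratio of the surrogate-adjusted treatment coefficient to the unadjusted one. A minimal sketch for a binary surrogate and outcome follows; the function and coding are ours, and the article's point is precisely that this statistic need not equal either counterfactual proportion.

```python
import numpy as np
import statsmodels.api as sm

def proportion_explained(z, s, y):
    """Freedman-style PE statistic: 1 minus the ratio of the treatment coefficient
    adjusted for the surrogate s to the unadjusted treatment coefficient, from
    logistic regressions of the binary outcome y on randomized treatment z."""
    z, s, y = map(np.asarray, (z, s, y))
    beta_unadj = sm.Logit(y, sm.add_constant(z)).fit(disp=0).params[1]
    beta_adj = sm.Logit(y, sm.add_constant(np.column_stack([z, s]))).fit(disp=0).params[1]
    return 1.0 - beta_adj / beta_unadj
```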

15.
The fraction who benefit from treatment is the proportion of patients whose potential outcome under treatment is better than that under control. Inference on this parameter is challenging since it is only partially identifiable, even in our context of a randomized trial. We propose a new method for constructing a confidence interval for the fraction, when the outcome is ordinal or binary. Our confidence interval procedure is pointwise consistent. It does not require any assumptions about the joint distribution of the potential outcomes, although it has the flexibility to incorporate various user-defined assumptions. Our method is based on a stochastic optimization technique involving a second-order, asymptotic approximation that, to the best of our knowledge, has not been applied to biomedical studies. This approximation leads to statistics that are solutions to quadratic programs, which can be computed efficiently using optimization tools. In simulation, our method attains the nominal coverage probability or higher, and can have narrower average width than competitor methods. We apply it to a trial of a new intervention for stroke.

16.
Li Z, Murphy SA. Biometrika. 2011;98(3):503-518.
Two-stage randomized trials are growing in importance for developing adaptive treatment strategies, i.e., treatment policies or dynamic treatment regimes. Usually, the first stage involves randomization to one of several initial treatments. The second stage of treatment begins when an early nonresponse or response criterion is met; in the second stage, nonresponding subjects are re-randomized among second-stage treatments. Sample size calculations for planning these two-stage randomized trials with failure time outcomes are challenging because the variances of common test statistics depend in a complex manner on the joint distribution of the time to the early nonresponse or response criterion and the primary failure time outcome. We produce simple, albeit conservative, sample size formulae by using upper bounds on the variances. The resulting formulae require only the working assumptions needed to size a standard single-stage randomized trial and, in common settings, are only mildly conservative. These sample size formulae are based on either a weighted Kaplan-Meier estimator of survival probabilities at a fixed time point or a weighted version of the log-rank test.

17.
Adjusting for intermediate variables is a common analytic strategy for estimating a direct effect. Even if the total effect is unconfounded, the direct effect is not identified when unmeasured variables affect both the intermediate and outcome variables. Some researchers have therefore presented bounds on controlled direct effects via linear programming, applying monotonicity assumptions about the treatment and intermediate variables together with a no-interaction assumption to derive narrower bounds. Here, we improve their bounds without using linear programming and derive a bound under a monotonicity assumption about the intermediate variable only. To tighten the bounds further, we introduce a monotonicity assumption about the confounders. Whereas previous studies assumed a binary outcome, we do not make that assumption. The proposed bounds are illustrated using two examples from randomized trials.

18.
Shepherd BE, Gilbert PB, Dupont CT. Biometrics. 2011;67(3):1100-1110.
In randomized studies, researchers may be interested in the effect of treatment assignment on a time-to-event outcome that exists only in a subset selected after randomization. For example, in preventive HIV vaccine trials, it is of interest to determine whether randomization to vaccine affects the time from infection diagnosis until initiation of antiretroviral therapy. Earlier work assessed the effect of treatment on this outcome among the principal stratum of individuals who would have been selected regardless of treatment assignment. These studies assumed monotonicity, that is, that one of the principal strata is empty (e.g., every person infected in the vaccine arm would have been infected if randomized to placebo). Here, we present a sensitivity analysis approach for relaxing monotonicity with a time-to-event outcome. We also consider scenarios where selection is unknown for some subjects because of noninformative censoring (e.g., infection status k years after randomization is unknown for some subjects because of staggered study entry). We illustrate our method using data from an HIV vaccine trial.

19.
Cheng J. Biometrics. 2009;65(1):96-103.
This article considers the analysis of two-arm randomized trials with noncompliance that have a multinomial outcome. We first define the causal effect in these trials as a function of the outcome distributions of compliers with and without treatment (e.g., the complier average causal effect, or the measure of stochastic superiority of treatment over control for compliers), and then estimate the causal effect by maximum likelihood. Next, based on the likelihood-ratio (LR) statistic, we test those functions of, or the equality of, the outcome distributions of compliers with and without treatment. Although the corresponding LR statistic follows a chi-squared (χ2) distribution asymptotically when the true parameter values are in the interior of the parameter space under the null, its asymptotic distribution is not χ2 when the true values lie on the boundary of the parameter space under the null. Therefore, we propose a bootstrap/double bootstrap version of the LR test for the causal effect in these trials. The methods are illustrated by an analysis of data from a randomized trial of an encouragement intervention to improve adherence to prescribed depression treatments among depressed elderly patients in primary care practices.
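A generic sketch of the bootstrap idea used here: when the null places parameters on the boundary and the χ2 approximation fails, compute the LR statistic on the data, refit under the null, and compare with LR statistics recomputed on samples drawn from the fitted null. The simple two-sample multinomial comparison below stands in for the article's complier outcome distributions and is not its likelihood; the counts are hypothetical.

```python
import numpy as np

def lr_stat(n1, n0):
    """LR statistic for equality of two multinomial distributions given the
    count vectors n1 and n0 over the same outcome categories."""
    n1, n0 = np.asarray(n1, float), np.asarray(n0, float)
    pooled = (n1 + n0) / (n1.sum() + n0.sum())

    def loglik(n, p):
        m = n > 0
        return float(np.sum(n[m] * np.log(p[m])))

    p1, p0 = n1 / n1.sum(), n0 / n0.sum()
    return 2 * (loglik(n1, p1) + loglik(n0, p0) - loglik(n1, pooled) - loglik(n0, pooled))

def bootstrap_lr_pvalue(n1, n0, n_boot=2000, seed=0):
    """Parametric bootstrap p-value: resample both arms from the pooled (null)
    distribution and compare the observed LR statistic with its bootstrap law."""
    rng = np.random.default_rng(seed)
    n1, n0 = np.asarray(n1), np.asarray(n0)
    pooled = (n1 + n0) / (n1.sum() + n0.sum())
    observed = lr_stat(n1, n0)
    boot = [lr_stat(rng.multinomial(int(n1.sum()), pooled),
                    rng.multinomial(int(n0.sum()), pooled)) for _ in range(n_boot)]
    return float(np.mean(np.asarray(boot) >= observed))

print(bootstrap_lr_pvalue([30, 50, 20], [45, 40, 15]))
```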

20.
We present methods for causally interpretable meta-analyses that combine information from multiple randomized trials to draw causal inferences for a target population of substantive interest. We consider identifiability conditions, derive implications of the conditions for the law of the observed data, and obtain identification results for transporting causal inferences from a collection of independent randomized trials to a new target population in which experimental data may not be available. We propose an estimator for the potential outcome mean in the target population under each treatment studied in the trials. The estimator uses covariate, treatment, and outcome data from the collection of trials, but only covariate data from the target population sample. We show that it is doubly robust in the sense that it is consistent and asymptotically normal when at least one of the models it relies on is correctly specified. We study the finite sample properties of the estimator in simulation studies and demonstrate its implementation using data from a multicenter randomized trial.
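A heavily simplified sketch of a doubly robust (augmented weighting) estimator of a potential outcome mean in a target population with covariate data only, collapsing the collection of trials into a single trial sample; the exact form, normalization, and multi-trial handling in the article may differ, so treat this as an assumption-laden illustration with hypothetical names.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

def dr_transport_mean(x_trial, a_trial, y_trial, x_target, a):
    """Augmented-IPSW estimate of E[Y(a)] in the target population, using
    covariate, treatment, and outcome data from the trial sample and covariate
    data only from the target sample."""
    x_trial, a_trial, y_trial, x_target = map(np.asarray, (x_trial, a_trial, y_trial, x_target))
    # Outcome model fit in trial arm a.
    g = LinearRegression().fit(x_trial[a_trial == a], y_trial[a_trial == a])
    # Trial-participation model P(S = 1 | X) on the stacked covariates.
    x_all = np.vstack([x_trial, x_target])
    s = np.concatenate([np.ones(len(x_trial)), np.zeros(len(x_target))])
    p = LogisticRegression(max_iter=1000).fit(x_all, s).predict_proba(x_trial)[:, 1]
    e_a = np.mean(a_trial == a)                    # randomization probability in the trial
    resid = (a_trial == a) * (y_trial - g.predict(x_trial))
    augmentation = np.sum((1 - p) / (p * e_a) * resid) / len(x_target)
    return float(np.mean(g.predict(x_target)) + augmentation)
```

The double-robustness property described in the abstract corresponds to consistency holding when either the outcome model or the participation model is correctly specified.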

