Similar Articles
20 similar articles found (search time: 0 ms)
1.
Chen H, Geng Z, Zhou XH. Biometrics 2009, 65(3): 675-682
In this article, we first study parameter identifiability in randomized clinical trials with noncompliance and missing outcomes. We show that under certain conditions the parameters of interest are identifiable even under different types of completely nonignorable missing data, that is, when the missing mechanism depends on the outcome. We then derive their maximum likelihood and moment estimators and evaluate their finite-sample properties in simulation studies in terms of bias, efficiency, and robustness. Our sensitivity analysis shows that the assumed nonignorable missing-data model has an important impact on the estimated complier average causal effect (CACE) parameter. Our method provides new and useful alternative nonignorable missing-data models, beyond the existing latent ignorable model, that guarantee parameter identifiability for estimating the CACE in a randomized clinical trial with noncompliance and missing data.

2.
Doubly robust estimation in missing data and causal inference models (total citations: 3; self-citations: 0; cited by others: 3)
Bang H, Robins JM. Biometrics 2005, 61(4): 962-973
The goal of this article is to construct doubly robust (DR) estimators in ignorable missing data and causal inference models. In a missing data model, an estimator is DR if it remains consistent when either (but not necessarily both) a model for the missingness mechanism or a model for the distribution of the complete data is correctly specified. Because with observational data one can never be sure that either a missingness model or a complete data model is correct, perhaps the best that can be hoped for is to find a DR estimator. DR estimators, in contrast to standard likelihood-based or (nonaugmented) inverse probability-weighted estimators, give the analyst two chances, instead of only one, to make a valid inference. In a causal inference model, an estimator is DR if it remains consistent when either a model for the treatment assignment mechanism or a model for the distribution of the counterfactual data is correctly specified. Because with observational data one can never be sure that a model for the treatment assignment mechanism or a model for the counterfactual data is correct, inference based on DR estimators should improve upon previous approaches. Indeed, we present the results of simulation studies which demonstrate that the finite sample performance of DR estimators is as impressive as theory would predict. The proposed method is applied to a cardiovascular clinical trial.
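The double-robustness property described above can be seen in a small simulation. The following sketch (hypothetical simulated data, not the paper's clinical example) computes a standard augmented inverse-probability-weighted (AIPW) estimator of a mean outcome under ignorable missingness, combining a missingness model and an outcome regression:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20_000

# Hypothetical data: covariate X always observed, outcome Y missing at
# random with probability depending on X (ignorable missingness).
x = rng.normal(size=n)
y = 2.0 + 1.5 * x + rng.normal(size=n)          # true E[Y] = 2.0
pi_true = 1.0 / (1.0 + np.exp(-(0.5 + x)))      # P(Y observed | X)
r = rng.random(n) < pi_true                     # r = True means Y observed

# Working models (both happen to be correct here; DR consistency requires
# only one of the two to be correctly specified).
pi_hat = 1.0 / (1.0 + np.exp(-(0.5 + x)))       # missingness model
b1, b0 = np.polyfit(x[r], y[r], 1)              # outcome regression on completes
m_hat = b0 + b1 * x                             # predicted E[Y | X]

# AIPW (doubly robust) estimator of E[Y].
y_filled = np.where(r, y, 0.0)                  # missing Y never enters the sum
mu_dr = np.mean(r * y_filled / pi_hat - (r - pi_hat) / pi_hat * m_hat)

# Complete-case mean, biased because missingness depends on X.
mu_cc = y[r].mean()
```

The complete-case mean overstates E[Y] here because subjects with large X (and hence large Y) are more likely to be observed, while the AIPW estimate stays near the true value of 2.0.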

3.
Albert PS. Biometrics 2000, 56(2): 602-608
Binary longitudinal data are often collected in clinical trials when interest is in assessing the effect of a treatment over time. Our application is a recent study of opiate addiction that examined the effect of a new treatment on repeated urine tests to assess opiate use over an extended follow-up. Drug addiction is episodic, and a new treatment may affect various features of the opiate-use process such as the proportion of positive urine tests over follow-up and the time to the first occurrence of a positive test. Complications in this trial were the large amounts of dropout and intermittent missing data and the large number of observations on each subject. We develop a transitional model for longitudinal binary data subject to nonignorable missing data and propose an EM algorithm for parameter estimation. We use the transitional model to derive summary measures of the opiate-use process that can be compared across treatment groups to assess treatment effect. Through analyses and simulations, we show the importance of properly accounting for the missing data mechanism when assessing the treatment effect in our example.

4.
Yan W, Hu Y, Geng Z. Biometrics 2012, 68(1): 121-128
We discuss identifiability and estimation of causal effects of a treatment in subgroups defined by a covariate that is sometimes missing due to death, which is different from a problem with outcomes censored by death. Frangakis et al. (2007, Biometrics 63, 641-662) proposed an approach for estimating the causal effects under a strong monotonicity (SM) assumption. In this article, we focus on identifiability of the joint distribution of the covariate, treatment, and potential outcomes, show sufficient conditions for identifiability, and relax the SM assumption to monotonicity (M) and no-interaction (NI) assumptions. We derive expectation-maximization algorithms for finding the maximum likelihood estimates of parameters of the joint distribution under different assumptions. Further, we remove the M and NI assumptions and prove that the signs of the causal effects of a treatment in the subgroups are identifiable, meaning that their bounds do not cover zero. We perform simulations and a sensitivity analysis to evaluate our approaches. Finally, we apply the approaches to the National Study on the Costs and Outcomes of Trauma Centers data, which are also analyzed by Frangakis et al. (2007) and Xie and Murphy (2007, Biometrics 63, 655-658).

5.
Taylor L, Zhou XH. Biometrics 2009, 65(1): 88-95
Randomized clinical trials are a powerful tool for investigating causal treatment effects, but in human trials there are often problems of noncompliance, which standard analyses, such as the intention-to-treat or as-treated analysis, either ignore or incorporate in such a way that the resulting estimand is no longer a causal effect. One alternative to these analyses is the complier average causal effect (CACE), which estimates the average causal treatment effect among the subpopulation that would comply under any treatment assigned. We focus on the setting of a randomized clinical trial with crossover treatment noncompliance (e.g., control subjects could receive the intervention and intervention subjects could receive the control) and outcome nonresponse. In this article, we develop estimators for the CACE using multiple imputation methods, which have been successfully applied to a wide variety of missing data problems but have not yet been applied to the potential outcomes setting of causal inference. Using simulated data, we investigate the finite sample properties of these estimators as well as of competing procedures in a simple setting. Finally, we illustrate our methods using a real randomized encouragement design study on the effectiveness of the influenza vaccine.

6.
We consider studies of cohorts of individuals after a critical event, such as an injury, with the following characteristics. First, the studies are designed to measure "input" variables, which describe the period before the critical event, and to characterize the distribution of the input variables in the cohort. Second, the studies are designed to measure "output" variables, primarily mortality after the critical event, and to characterize the predictive (conditional) distribution of mortality given the input variables in the cohort. Such studies often possess the complication that the input data are missing for those who die shortly after the critical event because the data collection takes place after the event. Standard methods of dealing with the missing inputs, such as imputation or weighting methods based on an assumption of ignorable missingness, are known to be generally invalid when the missingness of inputs is nonignorable, that is, when the distribution of the inputs is different between those who die and those who live. To address this issue, we propose a novel design that obtains and uses information on an additional key variable, a treatment or externally controlled variable, which if set at its "effective" level could have prevented the death of those who died. We show that the new design can be used to draw valid inferences for the marginal distribution of inputs in the entire cohort, and for the conditional distribution of mortality given the inputs, also in the entire cohort, even under nonignorable missingness. The crucial framework that we use is principal stratification based on the potential outcomes, here mortality under both levels of treatment. We also show using illustrative preliminary injury data that our approach can reveal results that are more reasonable than the results of standard methods, in relatively dramatic ways. Thus, our approach suggests that the routine collection of data on variables that could be used as possible treatments in such studies of inputs and mortality should become common.

7.
In many studies comparing a new 'target treatment' with a control target treatment, the received treatment does not always agree with the assigned treatment; that is, compliance is imperfect. An obvious example arises when ethical or practical constraints prevent even the randomized assignment of receipt of the new target treatment but allow the randomized assignment of the encouragement to receive this treatment. In fact, many randomized experiments where compliance is not enforced by the experimenter (e.g., with non-blinded assignment) may be more accurately thought of as randomized encouragement designs. Moreover, often the assignment of encouragement is at the level of clusters (e.g., doctors), where the compliance with the assignment varies across the units (e.g., patients) within clusters. We refer to such studies as 'clustered encouragement designs' (CEDs), and they arise relatively frequently (e.g., Sommer and Zeger, 1991; McDonald et al., 1992; Dexter et al., 1998). Here, we propose Bayesian methodology for causal inference for the effect of the new target treatment versus the control target treatment in the randomized CED with all-or-none compliance at the unit level, which generalizes the approach of Hirano et al. (2000) in important and surprisingly subtle ways to account for the clustering, which is necessary for statistical validity. We illustrate our methods using data from a recent study exploring the role of physician consulting in increasing patients' completion of Advance Directive forms.

8.
We focus on estimation of the causal effect of treatment on the functional status of individuals at a fixed point in time t* after they have experienced a catastrophic event, from observational data with the following features: (i) treatment is imposed shortly after the event and is nonrandomized, (ii) individuals who survive to t* are scheduled to be interviewed, (iii) there is interview nonresponse, (iv) individuals who die prior to t* are missing information on preevent confounders, and (v) medical records are abstracted on all individuals to obtain information on postevent, pretreatment confounding factors. To address the issue of survivor bias, we seek to estimate the survivor average causal effect (SACE), the effect of treatment on functional status among the cohort of individuals who would survive to t* regardless of whether or not assigned to treatment. To estimate this effect from observational data, we need to impose untestable assumptions, which depend on the collection of all confounding factors. Because preevent information is missing on those who die prior to t*, it is unlikely that these data are missing at random. We introduce a sensitivity analysis methodology to evaluate the robustness of SACE inferences to deviations from the missing at random assumption. We apply our methodology to the evaluation of the effect of trauma center care on vitality outcomes using data from the National Study on Costs and Outcomes of Trauma Care.

9.
10.
Little RJ, Long Q, Lin X. Biometrics 2009, 65(2): 640-649
We consider the analysis of clinical trials that involve randomization to an active treatment (T = 1) or a control treatment (T = 0), when the active treatment is subject to all-or-nothing compliance. We compare three approaches to estimating treatment efficacy in this situation: as-treated analysis, per-protocol analysis, and instrumental variable (IV) estimation, where the treatment effect is estimated using the randomization indicator as an IV. Both model-based and method-of-moments IV estimators are considered. The assumptions underlying these estimators are assessed, standard errors and mean squared errors of the estimates are compared, and design implications of the three methods are examined. Extensions of the methods to include observed covariates are then discussed, emphasizing the role of compliance propensity methods and the contrasting role of covariates in these extensions. Methods are illustrated on data from the Women Take Pride study, an assessment of behavioral treatments for women with heart disease.
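The method-of-moments IV estimator mentioned above reduces, in the simplest case, to the Wald ratio of intention-to-treat effects. A minimal sketch on simulated data with hypothetical compliance strata (not the Women Take Pride data):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50_000

# Hypothetical principal strata: compliers, never-takers, always-takers.
stratum = rng.choice(["complier", "never", "always"], size=n, p=[0.6, 0.3, 0.1])
z = rng.integers(0, 2, size=n)                      # randomized assignment
d = np.where(stratum == "always", 1,
             np.where(stratum == "never", 0, z))    # treatment received

# Outcome: receiving treatment raises Y by 1.0 for compliers only, so the
# exclusion restriction holds (Z affects Y only through D).
y = (rng.normal(size=n)
     + 1.0 * d * (stratum == "complier")
     + 0.5 * (stratum == "always"))                 # stratum-specific baseline

# Method-of-moments IV (Wald) estimator: ratio of ITT effects.
itt_y = y[z == 1].mean() - y[z == 0].mean()         # ITT effect on outcome
itt_d = d[z == 1].mean() - d[z == 0].mean()         # ITT effect on receipt
cace_hat = itt_y / itt_d                            # true CACE is 1.0 here
```

Under monotonicity and the exclusion restriction, the ratio estimates the complier average causal effect; the denominator estimates the complier proportion (0.6 in this simulation).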

11.
12.
Maximum likelihood methods for cure rate models with missing covariates (total citations: 1; self-citations: 0; cited by others: 1)
Chen MH, Ibrahim JG. Biometrics 2001, 57(1): 43-52
We propose maximum likelihood methods for parameter estimation for a novel class of semiparametric survival models with a cure fraction, in which the covariates are allowed to be missing. We allow the covariates to be either categorical or continuous and specify a parametric distribution for the covariates that is written as a sequence of one-dimensional conditional distributions. We propose a novel EM algorithm for maximum likelihood estimation and derive standard errors by using Louis's formula (Louis, 1982, Journal of the Royal Statistical Society, Series B 44, 226-233). Computational techniques using the Monte Carlo EM algorithm are discussed and implemented. A real data set involving a melanoma cancer clinical trial is examined in detail to demonstrate the methodology.

13.
In many longitudinal studies, the individual characteristics associated with the repeated measures may be possible covariates of the time to an event of interest, and thus it is desirable to model the time-to-event process and the longitudinal process jointly. Statistical analyses may be further complicated in such studies by missing data such as informative dropouts. This article considers a nonlinear mixed-effects model for the longitudinal process and the Cox proportional hazards model for the time-to-event process. We provide a method for simultaneous likelihood inference on the two models and allow for nonignorably missing data. The approach is illustrated with a recent AIDS study by jointly modeling HIV viral dynamics and time to viral rebound.

14.
15.
Cho M, Schenker N. Biometrics 1999, 55(3): 826-833
Data obtained from studies in the health sciences often have incompletely observed covariates as well as censored outcomes. In this paper, we present methods for fitting the log-F accelerated failure time model with incomplete continuous and/or categorical time-independent covariates using the Gibbs sampler. A general location model that allows different covariance structures across cells is specified for the covariates, and ignorable missingness of the covariates is assumed. Techniques that accommodate standard assumptions of ignorable censoring as well as certain types of nonignorable censoring are developed. We compare our approach to traditional complete-case analysis in an application to data obtained from a study of melanoma. The comparison indicates that substantial gains in efficiency are possible with our approach.

16.
Sternberg MR, Satten GA. Biometrics 1999, 55(2): 514-522
Chain-of-events data are longitudinal observations on a succession of events that can only occur in a prescribed order. One goal in an analysis of this type of data is to determine the distribution of times between the successive events. This is difficult when individuals are observed periodically rather than continuously because the event times are then interval censored. Chain-of-events data may also be subject to truncation when individuals can only be observed if a certain event in the chain (e.g., the final event) has occurred. We provide a nonparametric approach to estimate the distributions of times between successive events in discrete time for data such as these under the semi-Markov assumption that the times between events are independent. This method uses a self-consistency algorithm that extends Turnbull's algorithm (1976, Journal of the Royal Statistical Society, Series B 38, 290-295). The quantities required to carry out the algorithm can be calculated recursively for improved computational efficiency. Two examples using data from studies involving HIV disease are used to illustrate our methods.
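The basic Turnbull-style self-consistency iteration for interval-censored discrete times can be sketched as follows (toy intervals, not the HIV data, and without the recursive speedups the paper describes):

```python
import numpy as np

# Hypothetical interval-censored discrete event times: subject i is known
# only to have had its event somewhere in {l_i, ..., r_i} (periodic visits).
intervals = [(1, 1), (1, 2), (2, 3), (3, 3), (2, 2), (1, 3), (3, 3), (2, 3)]
support = np.array([1, 2, 3])                # possible discrete event times

# Self-consistency (EM) iteration for the event-time pmf p: each E-step
# allocates a subject's unit mass across the support points its interval
# contains, proportionally to the current estimate of p.
p = np.full(len(support), 1.0 / len(support))
for _ in range(500):
    expected = np.zeros_like(p)
    for l, r in intervals:
        mask = (support >= l) & (support <= r)
        expected += np.where(mask, p, 0.0) / p[mask].sum()
    p = expected / len(intervals)            # renormalize to a pmf
```

At convergence, p satisfies the self-consistency equation and is the nonparametric maximum likelihood estimate of the event-time distribution for these intervals.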

17.
In this paper, we consider mean comparisons for paired samples in which a certain portion of the observations are missing. This type of data commonly arises in medical research where outcomes are assessed at two time points after the application of treatments. New methods for statistical inference are proposed by making a finiteness correction based on asymptotic expansions of some intuitive statistics. The comparison methods naturally extend to the two-group case after some suitable manipulations. A simulation study is carried out to demonstrate the numerical accuracy of the proposed methods. Data from a smoking-cessation trial are used to illustrate the application of the methods.

18.
Roy J, Lin X. Biometrics 2005, 61(3): 837-846
We consider estimation in generalized linear mixed models (GLMM) for longitudinal data with informative dropouts. At the time a unit drops out, time-varying covariates are often unobserved in addition to the missing outcome. However, existing informative dropout models typically require covariates to be completely observed. This assumption is not realistic in the presence of time-varying covariates. In this article, we first study the asymptotic bias that would result from applying existing methods, where missing time-varying covariates are handled using naive approaches, which include: (1) using only baseline values; (2) carrying forward the last observation; and (3) assuming the missing data are ignorable. Our asymptotic bias analysis shows that these naive approaches yield inconsistent estimators of model parameters. We next propose a selection/transition model that allows covariates to be missing in addition to the outcome variable at the time of dropout. The EM algorithm is used for inference in the proposed model. Data from a longitudinal study of human immunodeficiency virus (HIV)-infected women are used to illustrate the methodology.

19.
Ibrahim JG, Chen MH, Lipsitz SR. Biometrics 1999, 55(2): 591-596
We propose a method for estimating parameters for general parametric regression models with an arbitrary number of missing covariates. We allow any pattern of missing data and assume that the missing data mechanism is ignorable throughout. When the missing covariates are categorical, a useful technique for obtaining parameter estimates is the EM algorithm by the method of weights proposed in Ibrahim (1990, Journal of the American Statistical Association 85, 765-769). We extend this method to continuous or mixed categorical and continuous covariates, and for arbitrary parametric regression models, by adapting a Monte Carlo version of the EM algorithm as discussed by Wei and Tanner (1990, Journal of the American Statistical Association 85, 699-704). In addition, we discuss the Gibbs sampler for sampling from the conditional distribution of the missing covariates given the observed data and show that the appropriate complete conditionals are log-concave. The log-concavity property of the conditional distributions will facilitate a straightforward implementation of the Gibbs sampler via the adaptive rejection algorithm of Gilks and Wild (1992, Applied Statistics 41, 337-348). We assume the model for the response given the covariates is an arbitrary parametric regression model, such as a generalized linear model, a parametric survival model, or a nonlinear model. We model the marginal distribution of the covariates as a product of one-dimensional conditional distributions. This allows us a great deal of flexibility in modeling the distribution of the covariates and reduces the number of nuisance parameters that are introduced in the E-step. We present examples involving both simulated and real data.
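The "method of weights" E-step can be illustrated in its simplest form: a single binary covariate X with a normal response, where each subject with missing X is represented by one weighted pseudo-record per covariate level. This is a toy sketch under an assumed MCAR mechanism and a known response variance, far simpler than the general Monte Carlo EM developed above:

```python
import math
import numpy as np

rng = np.random.default_rng(2)
n = 5_000

# Hypothetical complete data: binary covariate X, response Y | X ~ N(mu_X, 1).
x = rng.random(n) < 0.4                      # true P(X = 1) = 0.4
y = rng.normal(loc=np.where(x, 2.0, 0.0))    # true mu0 = 0.0, mu1 = 2.0
miss = rng.random(n) < 0.3                   # X missing for ~30% (assumed MCAR)

def phi(y, mu):                              # standard-normal density
    return np.exp(-0.5 * (y - mu) ** 2) / math.sqrt(2.0 * math.pi)

pi1, mu0, mu1 = 0.5, -1.0, 1.0               # crude starting values
for _ in range(200):
    # E-step (method of weights): subjects with missing X get weight
    # P(X = 1 | Y) on the X = 1 pseudo-record; observed X keeps weight 0/1.
    w1 = np.where(miss,
                  pi1 * phi(y, mu1)
                  / (pi1 * phi(y, mu1) + (1 - pi1) * phi(y, mu0)),
                  x.astype(float))
    # M-step: weighted complete-data maximum likelihood updates.
    pi1 = w1.mean()
    mu1 = (w1 * y).sum() / w1.sum()
    mu0 = ((1 - w1) * y).sum() / (1 - w1).sum()
```

With most covariate values observed, the weighted E-step anchors the mixture and the estimates converge near the true parameter values.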

20.

Copyright©北京勤云科技发展有限公司  京ICP备09084417号