Similar Articles
20 similar articles found (search time: 15 ms)
1.
Issues of post-randomization selection bias and truncation-by-death can arise in randomized clinical trials; for example, in a cancer prevention trial, an outcome such as cancer severity is undefined for individuals who do not develop cancer. Restricting analysis to a subpopulation selected after randomization can give rise to biased outcome comparisons. One approach to dealing with such issues is to consider the principal strata effect (PSE, or equivalently, the survivor average causal effect), defined as the effect of treatment on the outcome in the subpopulation that would have been selected under either treatment arm. Unfortunately, the PSE generally cannot be point-identified without strong identifying assumptions; bounds, however, can be derived using a deterministic causal model. In this paper, we propose a number of assumptions for deriving bounds of narrower width. The assumptions and bounds, which differ from those introduced by Zhang and Rubin (2003), are illustrated using data from a randomized prostate cancer prevention trial.

2.
Adjusting for intermediate variables is a common analytic strategy for estimating a direct effect. Even if the total effect is unconfounded, the direct effect is not identified when unmeasured variables affect the intermediate and outcome variables. Several researchers have therefore presented bounds on controlled direct effects via linear programming, applying a monotonicity assumption about the treatment and intermediate variables and a no-interaction assumption to derive narrower bounds. Here, we improve their bounds without using linear programming, and we derive a bound under a monotonicity assumption about the intermediate variable only. To narrow the bounds further, we also introduce a monotonicity assumption about confounders. While previous studies assumed a binary outcome, we do not make that assumption. The proposed bounds are illustrated using two examples from randomized trials.

3.
“Covariate adjustment” in the randomized trial context refers to an estimator of the average treatment effect that adjusts for chance imbalances between study arms in baseline variables (called “covariates”). The baseline variables could include, for example, age, sex, disease severity, and biomarkers. According to two surveys of clinical trial reports, there is confusion about the statistical properties of covariate adjustment. We focus on the analysis of covariance (ANCOVA) estimator, which involves fitting a linear model for the outcome given the treatment arm and baseline variables, and trials that use simple randomization with equal probability of assignment to treatment and control. We prove the following new (to the best of our knowledge) robustness property of ANCOVA to arbitrary model misspecification: not only is the ANCOVA point estimate consistent (as proved by Yang and Tsiatis, 2001), but so is its standard error. This implies that confidence intervals and hypothesis tests conducted as if the linear model were correct are still asymptotically valid even when the linear model is arbitrarily misspecified, for example, when the baseline variables are nonlinearly related to the outcome or there is treatment effect heterogeneity. We also give a simple, robust formula for the variance reduction (equivalently, sample size reduction) from using ANCOVA. By reanalyzing completed randomized trials for mild cognitive impairment, schizophrenia, and depression, we demonstrate how ANCOVA can achieve variance reductions of 4% to 32%.
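The ANCOVA estimator described above can be sketched in a few lines. This is an illustrative simulation, not the authors' code: the outcome is deliberately made a nonlinear function of the baseline covariate, so the working linear model is misspecified, yet the coefficient on treatment still estimates the average treatment effect.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
# Simulated 1:1 randomized trial; baseline covariate x is nonlinearly
# related to the outcome, so the linear working model is misspecified.
a = rng.integers(0, 2, n).astype(float)            # treatment arm
x = rng.normal(size=n)                             # baseline covariate
y = 1.0 * a + np.sin(2 * x) + rng.normal(size=n)   # true effect = 1.0

# Unadjusted estimator: difference in arm means.
unadj = y[a == 1].mean() - y[a == 0].mean()

# ANCOVA: OLS of y on (intercept, arm, covariate); the coefficient on
# the arm indicator remains a consistent estimate of the average
# treatment effect despite the misspecification.
X = np.column_stack([np.ones(n), a, x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
ancova = beta[1]
print(unadj, ancova)
```

Both estimates should land near the true effect of 1.0; the paper's point is that the usual ANCOVA standard error (not shown here) is also asymptotically valid under such misspecification.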

4.
In randomized trials with noncompliance, causal effects cannot be identified without strong assumptions. Several authors have therefore considered bounds on the causal effects. Applying an idea of VanderWeele (2008), Chiba (2009) gave bounds on the average causal effect in randomized trials with noncompliance using information on the randomized assignment, the treatment received, and the outcome, under monotonicity assumptions about covariates; however, he did not incorporate any observed covariates. When observed covariates such as age, gender, and race are available in a trial, we propose new bounds that use the observed covariate information, under monotonicity assumptions similar to those of VanderWeele and Chiba, and we compare the three bounds in a real example.

5.
Shanshan Luo  Wei Li  Yangbo He 《Biometrics》2023,79(1):502-513
It is challenging to evaluate causal effects when the outcomes of interest suffer from truncation-by-death in many clinical studies; that is, outcomes cannot be observed if patients die before the time of measurement. To address this problem, it is common to consider average treatment effects by principal stratification, for which the identifiability results and estimation methods with a binary treatment have been established in previous literature. However, in multiarm studies with more than two treatment options, estimation of causal effects becomes more complicated and requires additional techniques. In this article, we consider identification, estimation, and bounds of causal effects with multivalued ordinal treatments and outcomes subject to truncation-by-death. We define causal parameters of interest in this setting and show that they are identifiable either using an auxiliary variable or based on a linear model assumption. We then propose a semiparametric method for estimating the causal parameters and derive their asymptotic results. When the identification conditions are invalid, we derive sharp bounds of the causal effects by use of covariate adjustment. Simulation studies show good performance of the proposed estimator. We use the estimator to analyze the effects of a four-level chronic toxin on fetal developmental outcomes such as birth weight in rats and mice, with data from a developmental toxicity trial conducted by the National Toxicology Program. Data analyses demonstrate that a high dose of the toxin significantly reduces the weights of pups.

6.
This paper addresses treatment effect heterogeneity (also referred to, more compactly, as 'treatment heterogeneity') in the context of a controlled clinical trial with binary endpoints. Treatment heterogeneity, variation in the true (causal) individual treatment effects, is explored using the concept of the potential outcome. This framework supposes the existence of latent responses for each subject corresponding to each possible treatment. In the context of a binary endpoint, treatment heterogeneity may be represented by the parameter pi2: the probability that an individual would have a failure on the experimental treatment, if received, and would have a success on control, if received. Previous research derived bounds for pi2 based on matched pairs data. The present research extends this method to the blocked data context. Estimates (and their variances) and confidence intervals for the bounds are derived. We apply the new method to data from a renal disease clinical trial. In this example, bounds based on the blocked data are narrower than the corresponding bounds based only on the marginal success proportions. Some remaining challenges (including the possibility of further reducing bound widths) are discussed.
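The "bounds based only on the marginal success proportions" mentioned at the end are the classical Fréchet bounds, which make the baseline that the paper's blocked-data bounds improve upon. A minimal sketch (with invented marginal probabilities, not the paper's renal trial data):

```python
def pi2_bounds(p_fail_exp, p_succ_ctrl):
    """Frechet bounds for pi2 = P(fail on experimental AND succeed on
    control), using only the two marginal probabilities; no pairing or
    blocking information is used, so these are the widest valid bounds."""
    lower = max(0.0, p_fail_exp + p_succ_ctrl - 1.0)
    upper = min(p_fail_exp, p_succ_ctrl)
    return lower, upper

# Invented example: 40% fail on experimental, 70% succeed on control.
lo, hi = pi2_bounds(0.4, 0.7)
print(round(lo, 6), hi)
```

Matched-pair or blocked data constrain the joint distribution further, which is how the paper obtains narrower intervals than these.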

7.
When the individual outcomes within a composite outcome appear to have different treatment effects, either in magnitude or direction, researchers may question the validity or appropriateness of using this composite outcome as a basis for measuring overall treatment effect in a randomized controlled trial. The question remains as to how to distinguish random variation in estimated treatment effects from important heterogeneity within a composite outcome. This paper suggests there may be some utility in directly testing the assumption of homogeneity of treatment effect across the individual outcomes within a composite outcome. We describe a treatment heterogeneity test for composite outcomes based on a class of models used for the analysis of correlated data arising from the measurement of multiple outcomes for the same individuals. Such a test may be useful in planning a trial with a primary composite outcome and at trial end for final analysis and presentation. We demonstrate how to determine the statistical power to detect composite outcome treatment heterogeneity using the POISE Trial data. We then describe how this test may be incorporated into a presentation of trial results with composite outcomes. We conclude that it may be informative for trialists to assess the consistency of treatment effects across the individual outcomes within a composite outcome using a formalized methodology, and the suggested test represents one option.
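The paper's test is built on correlated-data models and is not reproduced here. As a simpler, hypothetical stand-in that ignores within-subject correlation, one could apply a Cochran's Q homogeneity test to the component-wise log odds ratios (all numbers below are invented for illustration):

```python
import math

# Hypothetical log odds ratios and standard errors for the three
# components of a composite outcome; the third component points the
# opposite way, suggesting heterogeneity.
log_or = [-0.45, -0.40, 0.30]
se = [0.15, 0.20, 0.18]

w = [1 / s**2 for s in se]                     # inverse-variance weights
theta_bar = sum(wi * t for wi, t in zip(w, log_or)) / sum(w)
Q = sum(wi * (t - theta_bar) ** 2 for wi, t in zip(w, log_or))

# Under homogeneity, Q ~ chi-square on k-1 = 2 df; for 2 df the
# survival function has the closed form exp(-Q/2).
p_value = math.exp(-Q / 2)
print(Q, p_value)
```

Because the components are measured on the same individuals, their effect estimates are correlated, which is exactly why the paper develops a test within a correlated-outcomes model rather than using this naive version.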

8.
In clinical trials with patients in a critical state, death may preclude measurement of a quantitative endpoint of interest, and even early measurements, for example for intention‐to‐treat analysis, may not be available. For example, a non‐negligible proportion of patients with acute pulmonary embolism will die before 30-day measurements on the efficacy of thrombolysis can be obtained. As excluding such patients may introduce bias, alternative analyses, and corresponding means for sample size calculation, are needed. We specifically consider power analysis in a randomized clinical trial setting in which the goal is to demonstrate noninferiority of a new treatment as compared to a reference treatment. A nonparametric approach may also be needed due to the distribution of the quantitative endpoint of interest. While some approaches have been developed in a composite endpoint setting, our focus is on a continuous endpoint affected by death‐related censoring, for which no approach for noninferiority is available. We propose a solution based on ranking the quantitative outcome and assigning worst rank scores to the patients without a quantitative outcome because of death. Based on this, we derive power formulae for a noninferiority test in the presence of death‐censored observations, considering settings with and without ties. The approach is illustrated for an exemplary clinical trial in pulmonary embolism. The results show a substantial effect of death on power, depending also on differential effects in the two trial arms. Therefore, use of the proposed formulae is advisable whenever death is to be expected before measurement of a quantitative primary outcome of interest.
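The worst-rank idea itself is simple to sketch. This is an illustrative toy (invented data, not the paper's trial): deaths before measurement are assigned a score worse than any observed outcome (here, lower is worse), then a standard rank-based statistic is computed on the pooled sample with deaths sharing a tied worst midrank.

```python
# Invented data: None marks death before the 30-day measurement.
treatment = [55.0, 61.2, None, 70.4, 48.9]
control = [50.1, None, None, 63.0, 44.7]

WORST = float("-inf")  # any value below all observed outcomes works

def worst_rank_scores(values):
    return [WORST if v is None else v for v in values]

def midranks(xs):
    """Ranks 1..n with ties (here, the deaths) sharing their midrank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(xs):
        j = i
        while j + 1 < len(xs) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        mid = (i + j) / 2 + 1  # average of positions i+1 .. j+1
        for k in range(i, j + 1):
            ranks[order[k]] = mid
        i = j + 1
    return ranks

pooled = worst_rank_scores(treatment) + worst_rank_scores(control)
ranks = midranks(pooled)
# Wilcoxon rank-sum statistic for the treatment arm.
W = sum(ranks[: len(treatment)])
print(ranks, W)
```

The paper's contribution is the power formulae for a noninferiority test built on such death-censored rank statistics, with and without ties, which this sketch does not attempt.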

9.
Taylor JM  Wang Y  Thiébaut R 《Biometrics》2005,61(4):1102-1111
In a randomized clinical trial, a statistic that measures the proportion of treatment effect on the primary clinical outcome that is explained by the treatment effect on a surrogate outcome is a useful concept. We investigate whether a statistic proposed to estimate this proportion can be given a causal interpretation as defined by models of counterfactual variables. For the situation of binary surrogate and outcome variables, two counterfactual models are considered, both of which include the concept of the proportion of the treatment effect, which acts through the surrogate. In general, the statistic does not equal either of the two proportions from the counterfactual models, and can be substantially different. Conditions are given for which the statistic does equal the counterfactual model proportions. A randomized clinical trial with potential surrogate endpoints is undertaken in a scientific context; this context will naturally place constraints on the parameters of the counterfactual model. We conducted a simulation experiment to investigate what impact these constraints had on the relationship between the proportion explained (PE) statistic and the counterfactual model proportions. We found that observable constraints had very little impact on the agreement between the statistic and the counterfactual model proportions, whereas unobservable constraints could lead to more agreement.

10.
Miller F 《Biometrics》2005,61(2):355-361
We consider clinical studies with a sample size re-estimation based on the unblinded variance estimate at some interim point of the study. Because the sample size is determined in such a flexible way, the usual variance estimator at the end of the trial is biased. We derive sharp bounds for this bias. These bounds have a quite simple form and can help in deciding whether this bias is negligible for the study at hand or whether a correction should be made. An exact formula for the bias is also provided. We discuss possibilities for removing this bias, or at least reducing it substantially. For this purpose, we propose a certain additive correction of the bias. An example shows that the significance level of the test can be controlled when this additive correction is used.

11.
Chen H  Geng Z  Zhou XH 《Biometrics》2009,65(3):675-682
In this article, we first study parameter identifiability in randomized clinical trials with noncompliance and missing outcomes. We show that under certain conditions the parameters of interest are identifiable even under different types of completely nonignorable missing data, that is, when the missing mechanism depends on the outcome. We then derive their maximum likelihood and moment estimators and evaluate their finite-sample properties in simulation studies in terms of bias, efficiency, and robustness. Our sensitivity analysis shows that the assumed nonignorable missing-data model has an important impact on the estimated complier average causal effect (CACE) parameter. Our method provides useful alternative nonignorable missing-data models, beyond the existing latent ignorable model, that guarantee parameter identifiability for estimating the CACE in a randomized clinical trial with noncompliance and missing data.

12.
Taylor L  Zhou XH 《Biometrics》2009,65(1):88-95
Randomized clinical trials are a powerful tool for investigating causal treatment effects, but in human trials there are often problems of noncompliance which standard analyses, such as the intention-to-treat or as-treated analysis, either ignore or incorporate in such a way that the resulting estimand is no longer a causal effect. One alternative to these analyses is the complier average causal effect (CACE), which estimates the average causal treatment effect among the subpopulation that would comply under any treatment assigned. We focus on the setting of a randomized clinical trial with crossover treatment noncompliance (e.g., control subjects could receive the intervention and intervention subjects could receive the control) and outcome nonresponse. In this article, we develop estimators for the CACE using multiple imputation methods, which have been successfully applied to a wide variety of missing data problems, but have not yet been applied to the potential outcomes setting of causal inference. Using simulated data we investigate the finite sample properties of these estimators as well as of competing procedures in a simple setting. Finally we illustrate our methods using a real randomized encouragement design study on the effectiveness of the influenza vaccine.
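The multiple-imputation machinery of the paper is not reproduced here, but with complete outcome data the CACE in this crossover-noncompliance setting reduces, under the standard instrumental-variable assumptions (randomization, exclusion restriction, monotonicity/no defiers), to the Wald estimator: the ITT effect on the outcome divided by the ITT effect on treatment receipt. A sketch with invented numbers:

```python
def cace_wald(y1_mean, y0_mean, d1_rate, d0_rate):
    """Wald/IV estimator of the CACE: ITT effect on the outcome divided
    by the difference in treatment-receipt rates between arms. Assumes
    randomization, exclusion restriction, no defiers, and (unlike the
    paper's setting) complete outcome data."""
    itt_y = y1_mean - y0_mean
    itt_d = d1_rate - d0_rate
    return itt_y / itt_d

# Invented numbers: 80% of the intervention arm and 10% of the control
# arm actually received the intervention.
est = cace_wald(y1_mean=12.0, y0_mean=10.6, d1_rate=0.8, d0_rate=0.1)
print(est)  # ITT effect 1.4 / receipt difference 0.7
```

The paper's multiple-imputation estimators extend this idea to handle outcome nonresponse, which the simple ratio above cannot.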

13.
We present methods for causally interpretable meta-analyses that combine information from multiple randomized trials to draw causal inferences for a target population of substantive interest. We consider identifiability conditions, derive implications of the conditions for the law of the observed data, and obtain identification results for transporting causal inferences from a collection of independent randomized trials to a new target population in which experimental data may not be available. We propose an estimator for the potential outcome mean in the target population under each treatment studied in the trials. The estimator uses covariate, treatment, and outcome data from the collection of trials, but only covariate data from the target population sample. We show that it is doubly robust in the sense that it is consistent and asymptotically normal when at least one of the models it relies on is correctly specified. We study the finite sample properties of the estimator in simulation studies and demonstrate its implementation using data from a multicenter randomized trial.

14.
In randomized clinical trials where the times to event of two treatment groups are compared under a proportional hazards assumption, it has been established that omitting prognostic factors from the model entails an underestimation of the hazard ratio. Heterogeneity due to unobserved covariates in cancer patient populations is a concern since genomic investigations have revealed molecular and clinical heterogeneity in these populations. In HIV prevention trials, heterogeneity is unavoidable and has been shown to decrease the treatment effect over time. This article assesses the influence of trial duration on the bias of the estimated hazard ratio resulting from omitting covariates from the Cox analysis. The true model is defined by including an unobserved random frailty term in the individual hazard that reflects the omitted covariate. Three frailty distributions are investigated: gamma, log‐normal, and binary, and the asymptotic bias of the hazard ratio estimator is calculated. We show that the attenuation of the treatment effect resulting from unobserved heterogeneity strongly increases with trial duration, especially for continuous frailties that are likely to reflect omitted covariates, as they are often encountered in practice. The possibility of interpreting the long‐term decrease in treatment effects as a bias induced by heterogeneity and trial duration is illustrated by a trial in oncology where adjuvant chemotherapy in stage 1B NSCLC was investigated.
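For the gamma frailty case, the direction of this attenuation can be seen in closed form. With a mean-1 gamma frailty of variance theta and constant conditional hazards lam0 (control) and r*lam0 (treated), the population-averaged hazard in each arm is lam/(1 + theta*Lam(t)), so the marginal hazard ratio drifts from r toward 1 as follow-up lengthens. A sketch with illustrative parameter values (not the paper's calculations):

```python
# Marginal hazard ratio under a mean-1 gamma frailty of variance theta:
# each arm's marginal hazard is lam / (1 + theta * Lam(t)), with
# cumulative hazards Lam(t) = lam0*t (control) and r*lam0*t (treated).
def marginal_hr(t, r=0.5, lam0=0.1, theta=1.0):
    return r * (1 + theta * lam0 * t) / (1 + theta * r * lam0 * t)

# The conditional (true) hazard ratio is r = 0.5 at every t, but the
# marginal ratio attenuates toward 1 with time.
hrs = [marginal_hr(t) for t in (0.0, 5.0, 20.0, 100.0)]
print(hrs)
```

This monotone drift toward the null is the mechanism behind the paper's finding that the bias from omitted covariates grows with trial duration.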

15.
G-estimation of structural nested models (SNMs) plays an important role in estimating the effects of time-varying treatments with appropriate adjustment for time-dependent confounding. As SNMs for a failure time outcome, structural nested accelerated failure time models (SNAFTMs) and structural nested cumulative failure time models have been developed. The latter models are included in the class of structural nested mean models (SNMMs) and do not involve artificial censoring, which induces several difficulties in g-estimation of SNAFTMs. Recently, the restricted mean time lost (RMTL), which corresponds to the area under a distribution function up to a restriction time, has been attracting attention in clinical trial communities as an appropriate summary measure of a failure time outcome. In this study, we propose another SNMM for a failure time outcome, called the structural nested RMTL model (SNRMTLM), and describe randomized and observational g-estimation procedures that use different assumptions about the treatment mechanism in a randomized trial setting. We also provide methods to estimate marginal RMTLs under static treatment regimes using estimated SNRMTLMs. A simulation study evaluates finite-sample performance of the proposed methods compared with the conventional intention-to-treat and per-protocol analyses. We illustrate the proposed methods using data from a randomized controlled trial for cardiovascular disease with treatment changes. G-estimation of SNRMTLMs is a useful tool to estimate the effects of time-varying treatments on a failure time outcome.
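The RMTL itself is just the area under the distribution function F(t) = 1 - S(t) up to the restriction time tau, i.e. tau minus the restricted mean survival time. A minimal numeric check against the exponential closed form (illustrative only, unrelated to the paper's g-estimation procedures):

```python
import math

def rmtl_exponential(lam, tau):
    """Closed form for an exponential survival time: RMTL(tau) = tau - RMST,
    with RMST = (1 - exp(-lam*tau)) / lam."""
    return tau - (1 - math.exp(-lam * tau)) / lam

def rmtl_numeric(surv, tau, n=20000):
    """RMTL as the area under F(t) = 1 - S(t) on [0, tau], trapezoid rule."""
    h = tau / n
    f = [1 - surv(i * h) for i in range(n + 1)]
    return h * (sum(f) - 0.5 * (f[0] + f[-1]))

lam, tau = 0.2, 5.0
closed = rmtl_exponential(lam, tau)
numeric = rmtl_numeric(lambda t: math.exp(-lam * t), tau)
print(closed, numeric)
```

Lower RMTL is better (less time lost to the event before tau), which is why it is attractive as a clinically interpretable summary measure.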

16.
Li Z  Murphy SA 《Biometrika》2011,98(3):503-518
Two-stage randomized trials are growing in importance in developing adaptive treatment strategies, i.e. treatment policies or dynamic treatment regimes. Usually, the first stage involves randomization to one of several initial treatments. The second stage of treatment begins when an early nonresponse criterion or response criterion is met. In the second stage, nonresponding subjects are re-randomized among second-stage treatments. Sample size calculations for planning these two-stage randomized trials with failure time outcomes are challenging because the variances of common test statistics depend in a complex manner on the joint distribution of time to the early nonresponse criterion or response criterion and the primary failure time outcome. We produce simple, albeit conservative, sample size formulae by using upper bounds on the variances. The resulting formulae only require the working assumptions needed to size a standard single-stage randomized trial and, in common settings, are only mildly conservative. These sample size formulae are based on either a weighted Kaplan-Meier estimator of survival probabilities at a fixed time point or a weighted version of the log-rank test.
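The two-stage formulae themselves are not reproduced here, but the "working assumptions needed to size a standard single-stage randomized trial" with a log-rank test are typically those of Schoenfeld's events formula, sketched below as a hypothetical baseline:

```python
import math

# Standard normal quantiles, hard-coded to avoid a scipy dependency:
# z_{0.975} for two-sided alpha = 0.05, z_{0.80} for 80% power.
Z_ALPHA, Z_BETA = 1.959964, 0.841621

def required_events(hr, z_alpha=Z_ALPHA, z_beta=Z_BETA, alloc=0.5):
    """Schoenfeld's approximation: total number of events needed for a
    two-sided log-rank test, with alloc the proportion randomized to
    one arm (0.5 for 1:1 allocation)."""
    return (z_alpha + z_beta) ** 2 / (alloc * (1 - alloc) * math.log(hr) ** 2)

d = required_events(hr=0.7)
print(math.ceil(d))  # events needed to detect HR = 0.7
```

The paper's contribution is that, by bounding the more complicated two-stage variances from above, the resulting sample sizes need little more input than this single-stage calculation.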

17.
Suppose we are interested in the effect of a treatment in a clinical trial. The efficiency of inference may be limited due to small sample size. However, external control data are often available from historical studies. Motivated by an application to Helicobacter pylori infection, we show how to borrow strength from such data to improve efficiency of inference in the clinical trial. Under an exchangeability assumption about the potential outcome mean, we show that the semiparametric efficiency bound for estimating the average treatment effect can be reduced by incorporating both the clinical trial data and external controls. We then derive a doubly robust and locally efficient estimator. The improvement in efficiency is prominent especially when the external control data set has a large sample size and small variability. Our method allows for a relaxed overlap assumption, and we illustrate with the case where the clinical trial only contains a treated group. We also develop doubly robust and locally efficient approaches that extrapolate the causal effect in the clinical trial to the external population and the overall population. Our results also offer a meaningful implication for trial design and data collection. We evaluate the finite-sample performance of the proposed estimators via simulation. In the Helicobacter pylori infection application, our approach shows that the combination treatment has potential efficacy advantages over the triple therapy.

18.
This article is concerned with drawing inference about aspects of the population distribution of ordinal outcome data measured on a cohort of individuals on two occasions, where some subjects are missing their second measurement. We present two complementary approaches for constructing bounds under assumptions on the missing data mechanism considered plausible by scientific experts. We develop our methodology within the context of a randomized trial of the "Good Behavior Game," an intervention designed to reduce aggressive misbehavior among children.

19.
We derive the optimal allocation between two treatments in a clinical trial based on the following optimality criterion: for fixed variance of the test statistic, what allocation minimizes the expected number of treatment failures? A sequential design is described that leads asymptotically to the optimal allocation and is compared with the randomized play‐the‐winner rule, sequential Neyman allocation, and equal allocation at similar power levels. We find that the sequential procedure generally results in fewer treatment failures than the other procedures, particularly when the success probabilities of the treatments are small.
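For binary outcomes, the allocation that minimizes expected failures for a fixed variance of the difference in success proportions is known to put patients on the arms in the ratio of the square roots of their success probabilities. A sketch with invented success probabilities (presented as the standard result of this type, not as a reproduction of the paper's derivation):

```python
import math

def optimal_allocation(p_a, p_b):
    """Allocation ratio n_A / n_B = sqrt(p_A) / sqrt(p_B): the optimal
    allocation minimizing expected failures for fixed variance of the
    difference in success proportions (binary outcomes)."""
    return math.sqrt(p_a / p_b)

def expected_failures(n, rho, p_a, p_b):
    """Expected failures with n subjects and proportion rho assigned to A."""
    return rho * n * (1 - p_a) + (1 - rho) * n * (1 - p_b)

# Invented success probabilities: arm A succeeds 80% of the time, arm B 50%.
r = optimal_allocation(0.8, 0.5)   # > 1: more patients go to the better arm
rho = r / (1 + r)                  # proportion assigned to arm A
print(round(rho, 3))
```

Skewing allocation toward the better arm also yields fewer expected failures than 1:1 allocation at the same total sample size, which is the ethical appeal of such response-adaptive designs.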

20.
For a prospective randomized clinical trial with two groups, the relative risk can be used as a measure of treatment effect and is directly interpretable as the ratio of success probabilities in the new treatment group versus the placebo group. For a prospective study with many covariates and a binary outcome (success or failure), relative risk regression may be of interest. If we model the log of the success probability as a linear function of covariates, the regression coefficients are log-relative risks. However, using such a log-linear model with a Bernoulli likelihood can lead to convergence problems in the Newton-Raphson algorithm. This is likely to occur when the success probabilities are close to one. A constrained likelihood method proposed by Wacholder (1986, American Journal of Epidemiology 123, 174-184), also has convergence problems. We propose a quasi-likelihood method of moments technique in which we naively assume the Bernoulli outcome is Poisson, with the mean (success probability) following a log-linear model. We use the Poisson maximum likelihood equations to estimate the regression coefficients without constraints. Using method of moment ideas, one can show that the estimates using the Poisson likelihood will be consistent and asymptotically normal. We apply these methods to a double-blinded randomized trial in primary biliary cirrhosis of the liver (Markus et al., 1989, New England Journal of Medicine 320, 1709-1713).
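The Poisson working-likelihood trick above can be sketched directly: solve the unconstrained Poisson score equations by Newton-Raphson on simulated binary data and recover the log-relative risk. This is an illustrative simulation of the general technique, not the paper's analysis, and it omits the robust (sandwich) variance that valid standard errors would require:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000
x = rng.integers(0, 2, n).astype(float)            # treatment indicator
X = np.column_stack([np.ones(n), x])
beta_true = np.array([np.log(0.2), np.log(2.0)])   # P(Y=1): 0.2 vs 0.4, RR = 2
y = rng.binomial(1, np.exp(X @ beta_true)).astype(float)

# Poisson "working" likelihood with log link: Newton-Raphson on the
# unconstrained score X'(y - mu), mu = exp(X beta). No constraint keeps
# fitted probabilities below 1, yet the estimates remain consistent.
beta = np.zeros(2)
for _ in range(25):
    mu = np.exp(X @ beta)
    score = X.T @ (y - mu)
    info = X.T @ (X * mu[:, None])                 # Fisher information
    beta = beta + np.linalg.solve(info, score)

rr = np.exp(beta[1])                               # estimated relative risk
print(beta, rr)
```

With a real covariate-rich data set one would fit the same model via a Poisson GLM and report sandwich standard errors, since the Bernoulli outcome is not truly Poisson.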
