20 similar references found
1.
Vaccines with limited ability to prevent HIV infection may positively impact the HIV/AIDS pandemic by preventing secondary transmission and disease in vaccine recipients who become infected. To evaluate the impact of vaccination on secondary transmission and disease, efficacy trials assess vaccine effects on HIV viral load and other surrogate endpoints measured after infection. A standard test that compares the distribution of viral load between the infected subgroups of vaccine and placebo recipients does not assess a causal effect of vaccine, because the comparison groups are selected after randomization. To address this problem, we formulate clinically relevant causal estimands using the principal stratification framework developed by Frangakis and Rubin (2002, Biometrics 58, 21-29), and propose a class of logistic selection bias models whose members identify the estimands. Given a selection model in the class, procedures are developed for testing and estimation of the causal effect of vaccination on viral load in the principal stratum of subjects who would be infected regardless of randomization assignment. We show how the procedures can be used for a sensitivity analysis that quantifies how the causal effect of vaccination varies with the presumed magnitude of selection bias.
2.
The fraction who benefit from treatment is the proportion of patients whose potential outcome under treatment is better than that under control. Inference on this parameter is challenging since it is only partially identifiable, even in our context of a randomized trial. We propose a new method for constructing a confidence interval for the fraction, when the outcome is ordinal or binary. Our confidence interval procedure is pointwise consistent. It does not require any assumptions about the joint distribution of the potential outcomes, although it has the flexibility to incorporate various user-defined assumptions. Our method is based on a stochastic optimization technique involving a second-order, asymptotic approximation that, to the best of our knowledge, has not been applied to biomedical studies. This approximation leads to statistics that are solutions to quadratic programs, which can be computed efficiently using optimization tools. In simulation, our method attains the nominal coverage probability or higher, and can have narrower average width than competitor methods. We apply it to a trial of a new intervention for stroke.
3.
For ordinal outcomes, the average treatment effect is often ill-defined and hard to interpret. Echoing Agresti and Kateri, we argue that the relative treatment effect can be a useful measure, especially for ordinal outcomes, which is defined as $\tau \equiv \mathrm{pr}\{Y_i(1) > Y_i(0)\} - \mathrm{pr}\{Y_i(1) < Y_i(0)\}$, with $Y_i(1)$ and $Y_i(0)$ being the potential outcomes of unit $i$ under treatment and control, respectively. Given the marginal distributions of the potential outcomes, we derive the sharp bounds on $\tau$, which are identifiable parameters based on the observed data. Agresti and Kateri focused on modeling strategies under the assumption of independent potential outcomes, but we allow for arbitrary dependence.
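The bounding idea can be illustrated numerically. The sketch below is not the paper's closed-form expressions: it simply bounds the relative treatment effect by linear programming over all couplings of the potential outcomes compatible with fixed marginals, using made-up marginal distributions p1 and p0.

```python
# Bound tau = P{Y(1) > Y(0)} - P{Y(1) < Y(0)} over all joint distributions of
# the potential outcomes that match fixed marginals. Illustrative sketch only;
# p1 and p0 are invented, not data from the paper.
import numpy as np
from scipy.optimize import linprog

p1 = np.array([0.2, 0.3, 0.5])   # marginal of Y(1) over ordinal levels 0, 1, 2
p0 = np.array([0.4, 0.4, 0.2])   # marginal of Y(0)
K = len(p1)

# Decision variables: pi[j, k] = P{Y(1) = j, Y(0) = k}, flattened row-major.
# Objective coefficients: +1 where j > k, -1 where j < k, 0 on the diagonal.
c = np.array([np.sign(j - k) for j in range(K) for k in range(K)], dtype=float)

# Equality constraints: row sums equal p1, column sums equal p0.
A_eq, b_eq = [], []
for j in range(K):                      # sum_k pi[j, k] = p1[j]
    row = np.zeros(K * K)
    row[j * K:(j + 1) * K] = 1.0
    A_eq.append(row); b_eq.append(p1[j])
for k in range(K):                      # sum_j pi[j, k] = p0[k]
    col = np.zeros(K * K)
    col[k::K] = 1.0
    A_eq.append(col); b_eq.append(p0[k])

lower = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=(0, 1)).fun
upper = -linprog(-c, A_eq=A_eq, b_eq=b_eq, bounds=(0, 1)).fun
print(f"sharp bounds on tau: [{lower:.3f}, {upper:.3f}]")
```

Under Agresti and Kateri's independence assumption, tau would instead be a single point lying strictly inside these bounds.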
4.
Kim Luijken, Rolf H. H. Groenwold, Maarten van Smeden, Susanne Strohmaier, Georg Heinze 《Biometrical Journal. Biometrische Zeitschrift》2024, 66(1): 2100237
A common view in epidemiology is that automated confounder selection methods, such as backward elimination, should be avoided as they can lead to biased effect estimates and underestimation of their variance. Nevertheless, backward elimination remains regularly applied. We investigated if and under which conditions causal effect estimation in observational studies can be improved by using backward elimination on a prespecified set of potential confounders. An expression was derived that quantifies how variable omission relates to bias and variance of effect estimators. Additionally, 3960 scenarios were defined and investigated by simulations comparing bias and mean squared error (MSE) of the conditional log odds ratio, log(cOR), and the marginal log risk ratio, log(mRR), between full models including all prespecified covariates and backward elimination of these covariates. Applying backward elimination resulted in a mean bias of 0.03 for log(cOR) and 0.02 for log(mRR), compared to 0.56 and 0.52 for log(cOR) and log(mRR), respectively, for a model without any covariate adjustment, and no bias for the full model. In less than 3% of the scenarios considered, the MSE of the log(cOR) or log(mRR) was slightly lower (at most 3%) when backward elimination was used compared to the full model. When an initial set of potential confounders can be specified based on background knowledge, there is minimal added value of backward elimination. We advise against its use, and recommend that researchers who do apply it provide ample arguments supporting that choice.
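A rough illustration of the comparison being made (not the authors' 3960-scenario design): the sketch below simulates one confounded data-generating process with arbitrary coefficients, fits the full logistic outcome model, and applies a simple p-value-based backward elimination before reading off the conditional log odds ratio of the exposure.

```python
# Toy comparison of a full logistic model vs. p-value backward elimination
# for the conditional log odds ratio of exposure X on outcome Y.
# The data-generating values below are arbitrary, not the paper's scenarios.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n, true_log_or = 2000, 0.5
Z = rng.normal(size=(n, 4))                                  # candidate confounders
X = rng.binomial(1, 1 / (1 + np.exp(-(0.8 * Z[:, 0] + 0.4 * Z[:, 1]))))
lin = true_log_or * X + 0.8 * Z[:, 0] + 0.4 * Z[:, 1]        # Z[:,2], Z[:,3] are noise
Y = rng.binomial(1, 1 / (1 + np.exp(-lin)))

def fit(cols):
    design = sm.add_constant(np.column_stack([X] + [Z[:, j] for j in cols]))
    return sm.Logit(Y, design).fit(disp=0)

full = fit([0, 1, 2, 3])

# Backward elimination: repeatedly drop the least significant covariate
# (never the exposure) while its p-value exceeds 0.157.
cols = [0, 1, 2, 3]
while cols:
    res = fit(cols)
    pvals = res.pvalues[2:]                                  # skip intercept, exposure
    worst = int(np.argmax(pvals))
    if pvals[worst] <= 0.157:
        break
    cols.pop(worst)
res_be = fit(cols)

print("true log(cOR):", true_log_or)
print("full model   :", round(full.params[1], 3))
print("backward elim:", round(res_be.params[1], 3))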
5.
Stephen R. Cole, Jessie K. Edwards, Daniel Westreich, Catherine R. Lesko, Bryan Lau, Michael J. Mugavero, W. Christopher Mathews, Joseph J. Eron Jr., Sander Greenland, for the CNICS Investigators 《Biometrical Journal. Biometrische Zeitschrift》2018, 60(1): 100-114
Marginal structural models for time-fixed treatments fit using inverse-probability weighted estimating equations are increasingly popular. Nonetheless, the resulting effect estimates are subject to finite-sample bias when data are sparse, as is typical for large-sample procedures. Here we propose a semi-Bayes estimation approach which penalizes or shrinks the estimated model parameters to improve finite-sample performance. This approach uses simple symmetric data-augmentation priors. Limited simulation experiments indicate that the proposed approach reduces finite-sample bias and improves confidence-interval coverage when the true values lie within the central “hill” of the prior distribution. We illustrate the approach with data from a nonexperimental study of HIV treatments.
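The general recipe, inverse-probability weights for a time-fixed treatment with the weighted outcome-model coefficient shrunk toward a prior, can be sketched as follows. This is only a crude stand-in: it uses an L2 (ridge) penalty as a normal-prior-at-zero approximation rather than the authors' symmetric data-augmentation priors, and all data and tuning values are simulated for illustration.

```python
# IPW-fitted marginal structural model for a time-fixed binary treatment, with
# the treatment coefficient shrunk via an L2 penalty as a rough stand-in for a
# normal prior (not the data-augmentation construction). Simulated data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500
L = rng.normal(size=(n, 3))                          # baseline confounders
pA = 1 / (1 + np.exp(-(L @ np.array([0.8, -0.5, 0.3]))))
A = rng.binomial(1, pA)
pY = 1 / (1 + np.exp(-(-1.0 + 0.7 * A + L @ np.array([0.6, 0.4, -0.3]))))
Y = rng.binomial(1, pY)

# Stabilized inverse-probability-of-treatment weights from a (near-unpenalized)
# propensity model.
ps = LogisticRegression(C=1e6).fit(L, A).predict_proba(L)[:, 1]
sw = np.where(A == 1, A.mean() / ps, (1 - A.mean()) / (1 - ps))

# Weighted MSM logit(P[Y^a = 1]) = b0 + b1 * a; C is the inverse prior
# precision, so smaller C means stronger shrinkage of b1 toward zero.
for C in (1e6, 1.0, 0.1):
    msm = LogisticRegression(penalty="l2", C=C).fit(A.reshape(-1, 1), Y,
                                                    sample_weight=sw)
    print(f"C={C:g}: log OR estimate = {msm.coef_[0, 0]:.3f}")
```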
6.
In evolutionary genomics, it is fundamentally important to understand how characteristics of genomic sequences, such as gene expression level, determine the rate of adaptive evolution. While numerous statistical methods, such as the McDonald–Kreitman (MK) test, are available to examine the association between genomic features and the rate of adaptation, we currently lack a statistical approach to disentangle the independent effect of a genomic feature from the effects of other correlated genomic features. To address this problem, I present a novel statistical model, the MK regression, which augments the MK test with a generalized linear model. Analogous to the classical multiple regression model, the MK regression can analyze multiple genomic features simultaneously to infer the independent effect of a genomic feature, holding constant all other genomic features. Using the MK regression, I identify numerous genomic features driving positive selection in chimpanzees. These features include well-known ones, such as local mutation rate, residue exposure level, tissue specificity, and immune genes, as well as new features not previously reported, such as gene expression level and metabolic genes. In particular, I show that highly expressed genes may have a higher adaptation rate than their weakly expressed counterparts, even though a higher expression level may impose stronger negative selection. Also, I show that metabolic genes may have a higher adaptation rate than their nonmetabolic counterparts, possibly due to recent changes in diet in primate evolution. Overall, the MK regression is a powerful approach to elucidate the genomic basis of adaptation.
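For context, the classical MK quantities that the MK regression builds on can be computed directly from polymorphism and divergence counts. The sketch below uses invented counts and shows the neutrality index and the estimated proportion of adaptive substitutions (alpha); the MK regression then replaces this single 2x2 contrast with a generalized linear model across many sites and features.

```python
# Classical McDonald-Kreitman quantities for a single gene.
# Counts are invented for illustration.
Pn, Ps = 12, 40   # nonsynonymous / synonymous polymorphisms within species
Dn, Ds = 30, 50   # nonsynonymous / synonymous fixed differences between species

neutrality_index = (Pn / Ps) / (Dn / Ds)
alpha = 1 - (Ds * Pn) / (Dn * Ps)    # Smith & Eyre-Walker estimator of the
                                     # proportion of adaptive substitutions
print(f"NI = {neutrality_index:.2f}, alpha = {alpha:.2f}")
```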
7.
Frangakis and Rubin (2002, Biometrics 58, 21-29) proposed a new definition of a surrogate endpoint (a “principal” surrogate) based on causal effects. We introduce an estimand for evaluating a principal surrogate, the causal effect predictiveness (CEP) surface, which quantifies how well causal treatment effects on the biomarker predict causal treatment effects on the clinical endpoint. Although the CEP surface is not identifiable due to missing potential outcomes, it can be identified by incorporating a baseline covariate(s) that predicts the biomarker. Given case-cohort sampling of such a baseline predictor and the biomarker in a large blinded randomized clinical trial, we develop an estimated likelihood method for estimating the CEP surface. This estimation assesses the “surrogate value” of the biomarker for reliably predicting clinical treatment effects for the same or similar setting as the trial. A CEP surface plot provides a way to compare the surrogate value of multiple biomarkers. The approach is illustrated by the problem of assessing an immune response to a vaccine as a surrogate endpoint for infection.
8.
Bruno D. Valente, Gota Morota, Francisco Peñagaricano, Daniel Gianola, Kent Weigel, Guilherme J. M. Rosa 《Genetics》2015, 200(2): 483-494
The term “effect” in additive genetic effect suggests a causal meaning. However, inferences of such quantities for selection purposes are typically viewed and conducted as a prediction task. Predictive ability as tested by cross-validation is currently the most acceptable criterion for comparing models and evaluating new methodologies. Nevertheless, it does not directly indicate if predictors reflect causal effects. Such evaluations would require causal inference methods that are not typical in genomic prediction for selection. This suggests that the usual approach to infer genetic effects contradicts the label of the quantity inferred. Here we investigate if genomic predictors for selection should be treated as standard predictors or if they must reflect a causal effect to be useful, requiring causal inference methods. Conducting the analysis as a prediction or as a causal inference task affects, for example, how covariates of the regression model are chosen, which may heavily affect the magnitude of genomic predictors and therefore selection decisions. We demonstrate that selection requires learning causal genetic effects. However, genomic predictors from some models might capture noncausal signal, providing good predictive ability but poorly representing true genetic effects. Simulated examples are used to show that aiming for predictive ability may lead to poor modeling decisions, while causal inference approaches may guide the construction of regression models that better infer the target genetic effect even when they underperform in cross-validation tests. In conclusion, genomic selection models should be constructed to aim primarily for identifiability of causal genetic effects, not for predictive ability.
9.
After variable selection, standard inferential procedures for regression parameters may not be uniformly valid; there is no finite-sample size at which a standard test is guaranteed to approximately attain its nominal size. This problem is exacerbated in high-dimensional settings, where variable selection becomes unavoidable. This has prompted a flurry of activity in developing uniformly valid hypothesis tests for a low-dimensional regression parameter (e.g., the causal effect of an exposure A on an outcome Y) in high-dimensional models. So far there has been limited focus on model misspecification, although this is inevitable in high-dimensional settings. We propose tests of the null that are uniformly valid under sparsity conditions weaker than those typically invoked in the literature, assuming working models for the exposure and outcome are both correctly specified. When one of the models is misspecified, by amending the procedure for estimating the nuisance parameters, our tests continue to be valid; hence, they are doubly robust. Our proposals are straightforward to implement using existing software for penalized maximum likelihood estimation and do not require sample splitting. We illustrate them in simulations and an analysis of data obtained from the Ghent University intensive care unit.
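The flavor of such procedures, penalized nuisance models for both the exposure and the outcome combined so that small errors in either do not drive the inference, can be illustrated with a simple lasso partialling-out estimator. This is a related Neyman-orthogonal sketch on simulated linear-model data, not the authors' doubly robust test.

```python
# Lasso partialling-out estimate of a low-dimensional exposure effect in a
# high-dimensional linear model (a cousin of the uniformly valid tests
# described above; simulated data, illustrative only, no sample splitting).
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(3)
n, p, true_effect = 300, 200, 1.0
X = rng.normal(size=(n, p))                       # high-dimensional covariates
A = X[:, 0] - 0.5 * X[:, 1] + rng.normal(size=n)  # exposure, sparse in X
Y = true_effect * A + 2 * X[:, 0] + X[:, 2] + rng.normal(size=n)

# Residualize outcome and exposure on the covariates with cross-validated lasso.
rY = Y - LassoCV(cv=5).fit(X, Y).predict(X)
rA = A - LassoCV(cv=5).fit(X, A).predict(X)

# Effect estimate from the residual-on-residual regression, with a rough
# large-sample standard error.
beta = np.sum(rA * rY) / np.sum(rA * rA)
se = np.sqrt(np.mean((rY - beta * rA) ** 2) / np.sum(rA ** 2))
print(f"estimate = {beta:.3f} (true {true_effect}), se ~ {se:.3f}")
```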
10.
Shortreed and Ertefaie introduced a clever propensity score variable selection approach for estimating average causal effects, namely, the outcome adaptive lasso (OAL). OAL aims to select desirable covariates, confounders, and predictors of outcome, to build an unbiased and statistically efficient propensity score estimator. Due to its design, a potential limitation of OAL is how it handles the collinearity problem, which is often encountered in high-dimensional data. As seen in Shortreed and Ertefaie, OAL's performance degraded with increased correlation between covariates. In this note, we propose the generalized OAL (GOAL) that combines the strengths of the adaptively weighted L1 penalty and the elastic net to better handle the selection of correlated covariates. Two versions of GOAL, which differ in their algorithm, are proposed. We compared OAL and GOAL in simulation scenarios that mimic those examined by Shortreed and Ertefaie. Although all approaches performed equivalently with independent covariates, we found that both GOAL versions outperformed OAL in low and high dimensions with correlated covariates.
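A simplified rendering of the idea, not the authors' GOAL algorithms: build adaptive penalty weights from an outcome regression, fit an elastic-net propensity model with those weights, and estimate the average treatment effect by inverse-probability weighting. Data, coefficients, and tuning constants below are illustrative, and the column-rescaling trick only approximates the adaptive elastic-net penalty.

```python
# GOAL-flavored sketch: outcome-informed adaptive weights, an elastic-net
# propensity model, then an IPW estimate of the average treatment effect.
# Simplified and illustrative, not the authors' algorithm.
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

rng = np.random.default_rng(7)
n, p = 1000, 20
X = rng.normal(size=(n, p))
X[:, 1] = X[:, 0] + 0.1 * rng.normal(size=n)        # strongly correlated pair
pA = 1 / (1 + np.exp(-(0.6 * X[:, 0] + 0.6 * X[:, 2])))
A = rng.binomial(1, pA)
Y = 1.0 * A + 1.0 * X[:, 0] + 1.0 * X[:, 3] + rng.normal(size=n)

# Adaptive weights from the outcome model: covariates that predict Y get small
# penalties, encouraging their selection into the propensity model.
gamma = 1.0
beta_out = LinearRegression().fit(np.column_stack([A, X]), Y).coef_[1:]
w = 1.0 / np.maximum(np.abs(beta_out) ** gamma, 1e-8)

# Approximate the adaptively weighted penalty by rescaling columns by 1/w
# before fitting a standard elastic-net logistic regression.
Xs = X / w
ps_model = LogisticRegression(penalty="elasticnet", solver="saga",
                              l1_ratio=0.5, C=0.5, max_iter=5000).fit(Xs, A)
ps = ps_model.predict_proba(Xs)[:, 1]

# Horvitz-Thompson style IPW estimate of the average treatment effect.
ate = np.mean(A * Y / ps) - np.mean((1 - A) * Y / (1 - ps))
print(f"IPW ATE estimate: {ate:.3f} (true effect 1.0)")
```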
11.
Causal inference is widely used in fields such as biology, psychology, and economics. In observational studies, balancing the covariates is an important step in estimating the causal effect. This study extends the one-dimensional entropy balancing method to multiple dimensions in order to balance the covariates. Both parametric and nonparametric methods are proposed to estimate the causal effect of multivariate continuous treatments, and theoretical properties of both estimators are provided. Furthermore, simulation results show that the proposed method outperforms competing methods across a range of settings. Finally, the proposed method is applied to analyze the impact of the duration and frequency of smoking on medical expenditure. The results from the parametric method indicate that the frequency of smoking increases medical expenditure while the duration of smoking does not. The results from the nonparametric method indicate that there is a short-term downward trend and then a long-term upward trend in expenditure as the duration and frequency of smoking increase.
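A minimal sketch of the balancing step for a single continuous treatment (the multivariate case stacks more constraint functions): solve the convex dual of the entropy problem so that the weighted treatment–covariate covariances and the weighted means match their targets. Simulated data; not the authors' implementation.

```python
# Entropy balancing weights for a continuous treatment: maximum-entropy weights
# subject to zero weighted treatment-covariate covariance and preserved means.
# The dual is a smooth convex problem in lambda. Simulated, illustrative data.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(11)
n = 800
X = rng.normal(size=(n, 3))                               # covariates
T = 0.7 * X[:, 0] - 0.4 * X[:, 1] + rng.normal(size=n)    # continuous treatment

# Constraint functions g_i with target 0: centered T, centered X, and their
# products (so that weighted cov(T, X_j) = 0 after balancing).
Tc = T - T.mean()
Xc = X - X.mean(axis=0)
G = np.column_stack([Tc, Xc, Tc[:, None] * Xc])           # n x (1 + 3 + 3)

def dual(lam):
    # Log normalizing constant; its gradient is the weighted mean of G,
    # which is driven to zero at the minimum.
    return np.log(np.mean(np.exp(G @ lam)))

lam_hat = minimize(dual, np.zeros(G.shape[1]), method="BFGS").x
w = np.exp(G @ lam_hat)
w /= w.sum()

print("max |weighted mean of g|:", np.abs(w @ G).max())           # ~0 if balanced
print("unweighted |cov(T, X)|  :", np.abs((Tc[:, None] * Xc).mean(axis=0)))
```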
12.
Randomized clinical trials are a powerful tool for investigating causal treatment effects, but in human trials there are often problems of noncompliance which standard analyses, such as the intention-to-treat or as-treated analysis, either ignore or incorporate in such a way that the resulting estimand is no longer a causal effect. One alternative to these analyses is the complier average causal effect (CACE), which estimates the average causal treatment effect among a subpopulation that would comply under any treatment assigned. We focus on the setting of a randomized clinical trial with crossover treatment noncompliance (e.g., control subjects could receive the intervention and intervention subjects could receive the control) and outcome nonresponse. In this article, we develop estimators for the CACE using multiple imputation methods, which have been successfully applied to a wide variety of missing data problems, but have not yet been applied to the potential outcomes setting of causal inference. Using simulated data we investigate the finite sample properties of these estimators as well as of competing procedures in a simple setting. Finally we illustrate our methods using a real randomized encouragement design study on the effectiveness of the influenza vaccine.
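The estimand can be illustrated with the familiar instrumental-variable (Wald) form of the CACE, the ITT effect on the outcome divided by the ITT effect on treatment received; the paper's multiple-imputation machinery is what handles crossover noncompliance together with outcome nonresponse. The sketch below uses simulated complete data only.

```python
# Moment (Wald/IV) estimator of the complier average causal effect in a
# randomized encouragement design: ITT effect on outcome divided by ITT effect
# on treatment received. Simulated, complete-data illustration only.
import numpy as np

rng = np.random.default_rng(5)
n = 5000
Z = rng.binomial(1, 0.5, n)                    # randomized assignment
ctype = rng.choice(["complier", "never", "always"], n, p=[0.6, 0.3, 0.1])
D = np.where(ctype == "always", 1,
     np.where(ctype == "never", 0, Z))         # treatment actually received
Y = 0.4 * D + 0.2 * (ctype == "always") + rng.normal(size=n)

itt_y = Y[Z == 1].mean() - Y[Z == 0].mean()    # ITT effect on the outcome
itt_d = D[Z == 1].mean() - D[Z == 0].mean()    # share of compliers
cace = itt_y / itt_d
print(f"ITT on Y = {itt_y:.3f}, ITT on D = {itt_d:.3f}, CACE = {cace:.3f}")
# True CACE is 0.4 here, because only compliers change D with Z.
```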
13.
Studies of social networks provide unique opportunities to assess the causal effects of interventions that may impact more of the population than just those intervened on directly. Such effects are sometimes called peer or spillover effects, and may exist in the presence of interference, that is, when one individual's treatment affects another individual's outcome. Randomization-based inference (RI) methods provide a theoretical basis for causal inference in randomized studies, even in the presence of interference. In this article, we consider RI of the intervention effect in the eX-FLU trial, a randomized study designed to assess the effect of a social distancing intervention on influenza-like-illness transmission in a connected network of college students. The approach considered enables inference about the effect of the social distancing intervention on the per-contact probability of influenza-like-illness transmission in the observed network. The methods allow for interference between connected individuals and for heterogeneous treatment effects. The proposed methods are evaluated empirically via simulation studies, and then applied to data from the eX-FLU trial.
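The randomization-based logic can be sketched with a simple permutation test under the sharp null of no effect for any unit: re-randomize treatment according to the actual design, recompute the test statistic each time, and compare. The cluster structure, outcome model, and statistic below are generic placeholders, not the eX-FLU analysis or its interference-aware estimands.

```python
# Randomization-inference p-value under the sharp null of no treatment effect:
# re-randomize assignment many times according to the design and compare the
# observed statistic with its randomization distribution. Generic sketch only.
import numpy as np

rng = np.random.default_rng(2024)
n_clusters, m = 40, 10                          # clusters, individuals per cluster
cluster = np.repeat(np.arange(n_clusters), m)

def assign_clusters():
    # Half the clusters treated, half control, assigned at random.
    return rng.permutation(np.repeat([0, 1], n_clusters // 2))

Z_obs = assign_clusters()[cluster]              # cluster-randomized treatment
Y = rng.binomial(1, np.where(Z_obs == 1, 0.15, 0.25))   # illness indicator

def statistic(z):
    return Y[z == 1].mean() - Y[z == 0].mean()

t_obs = statistic(Z_obs)
draws = np.array([statistic(assign_clusters()[cluster]) for _ in range(5000)])
p_value = np.mean(np.abs(draws) >= np.abs(t_obs))
print(f"observed difference = {t_obs:.3f}, randomization p-value = {p_value:.3f}")
```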
14.
In causal mediation studies that decompose an average treatment effect into indirect and direct effects, examples of posttreatment confounding are abundant. In the presence of treatment-by-mediator interactions, past research has generally considered it infeasible to adjust for a posttreatment confounder of the mediator–outcome relationship due to incomplete information: for any given individual, a posttreatment confounder is observed under the actual treatment condition while missing under the counterfactual treatment condition. This paper proposes a new sensitivity analysis strategy for handling posttreatment confounding and incorporates it into weighting-based causal mediation analysis. The key is to obtain the conditional distribution of the posttreatment confounder under the counterfactual treatment as a function of not only pretreatment covariates but also its counterpart under the actual treatment. The sensitivity analysis then generates bounds for the natural indirect effect and the natural direct effect over a plausible range of the conditional correlation between the posttreatment confounder under the actual and that under the counterfactual conditions. Implemented through either imputation or integration, the strategy is suitable for binary as well as continuous measures of posttreatment confounders. Simulation results demonstrate major strengths and potential limitations of this new solution. A reanalysis of the National Evaluation of Welfare-to-Work Strategies (NEWWS) Riverside data reveals that the initial analytic results are sensitive to omitted posttreatment confounding.
15.
The case-crossover design of Maclure is widely used in epidemiology and other fields to study causal effects of transient treatments on acute outcomes. However, its validity and causal interpretation have only been justified under informal conditions. Here, we place the design in a formal counterfactual framework for the first time. Doing so helps to clarify its assumptions and interpretation. In particular, when the treatment effect is nonnull, we identify a previously unnoticed bias arising from strong common causes of the outcome at different person-times. We analyze this bias and demonstrate its potential importance with simulations. We also use our derivation of the limit of the case-crossover estimator to analyze its sensitivity to treatment effect heterogeneity, a violation of one of the informal criteria for validity. The upshot of this work for practitioners is that, while the case-crossover design can be useful for testing the causal null hypothesis in the presence of baseline confounders, extra caution is warranted when using the case-crossover design for point estimation of causal effects.
16.
17.
Richard J. Smith 《American Journal of Physical Anthropology》2019, 169(4): 591-598
The establishment of cause and effect relationships is a fundamental objective of scientific research. Many lines of evidence can be used to make cause–effect inferences. When statistical data are involved, alternative explanations for the statistical relationship need to be ruled out. These include chance (apparent patterns due to random factors), confounding effects (a relationship between two variables because they are each associated with an unmeasured third variable), and sampling bias (effects due to preexisting properties of compared groups). The gold standard for managing these issues is a controlled randomized experiment. In disciplines such as biological anthropology, where controlled experiments are not possible for many research questions, causal inferences are made from observational data. Methods that statisticians recommend for this difficult objective have not been widely adopted in the biological anthropology literature. Issues involved in using statistics to make valid causal inferences from observational data are discussed.
18.
The hazard ratio (HR) is often reported as the main causal effect when studying survival data. Despite its popularity, the HR suffers from an unclear causal interpretation. As already pointed out in the literature, there is a built-in selection bias in the HR, because, similarly to the truncation-by-death problem, the HR conditions on post-treatment survival. A recently proposed alternative, inspired by the Survivor Average Causal Effect, is the causal HR, defined as the ratio between hazards across treatment groups among the study participants that would have survived regardless of their treatment assignment. We discuss the challenge in identifying the causal HR and present a sensitivity analysis identification approach in randomized controlled trials utilizing a working frailty model. We further extend our framework to adjust for potential confounders using inverse probability of treatment weighting. We present a Cox-based and a flexible nonparametric kernel-based estimator under right censoring. We study the finite-sample properties of the proposed estimation methods through simulations. We illustrate the utility of our framework using two real-data examples.
19.
It is widely known that instrumental variable (IV) estimation allows the researcher to estimate causal effects between an exposure and an outcome even in the face of serious uncontrolled confounding. The key requirement for IV estimation is the existence of a variable, the instrument, that affects the outcome only through its effect on the exposure, and whose relationship with the outcome is unconfounded. Countless papers have employed such techniques and carefully addressed the validity of these IV assumptions. Less appreciated, however, is the fact that IV estimation also depends on a number of distributional assumptions, in particular linearity. In this paper, we propose a novel bounding procedure that bounds the true causal effect relying only on the key IV assumptions and not on any distributional assumptions. For the purely binary case (instrument, exposure, and outcome all binary), such bounds were proposed by Balke and Pearl in 1997. We extend these bounds to non-binary settings. In addition, our procedure offers a tuning parameter such that one can move from the traditional IV analysis, which provides a point estimate, to a completely unrestricted bound and anything in between. Subject matter knowledge can be used when setting the tuning parameter. To the best of our knowledge, no such methods exist elsewhere. The method is illustrated using a pivotal study that introduced IV estimation to epidemiologists. We demonstrate that the conclusion of this paper indeed hinges on these additional distributional assumptions. R code is provided in the Supporting Information.
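For the purely binary starting point cited above, Balke-Pearl style bounds can be reproduced by a small linear program over the 16 response-type probabilities. The sketch below is illustrative only: the observed conditional probabilities come from data simulated under a valid IV model (so the constraints are mutually consistent), and the paper's extensions to non-binary settings and its tuning parameter are not reproduced here.

```python
# Balke-Pearl style bounds on the average causal effect with a binary
# instrument Z, exposure X, and outcome Y, via a linear program over the 16
# response types (X(z=0), X(z=1), Y(x=0), Y(x=1)). Simulated data.
import itertools
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(42)
n = 200_000
U = rng.binomial(1, 0.5, n)                              # unmeasured confounder
Z = rng.binomial(1, 0.5, n)                              # instrument
X = rng.binomial(1, 0.1 + 0.5 * Z + 0.3 * U)             # exposure
Y = rng.binomial(1, 0.2 + 0.3 * X + 0.4 * U)             # outcome

types = list(itertools.product([0, 1], repeat=4))        # (x0, x1, y0, y1)
q = len(types)

# Equality constraints: P(X=x, Y=y | Z=z) = sum of q_t over consistent types.
A_eq, b_eq = [], []
for z in (0, 1):
    pz = (Z == z)
    for x in (0, 1):
        for y in (0, 1):
            b_eq.append(np.mean((X == x) & (Y == y) & pz) / np.mean(pz))
            A_eq.append([float(t[z] == x and t[2 + x] == y) for t in types])
A_eq.append([1.0] * q); b_eq.append(1.0)                 # probabilities sum to 1

# ACE = P(Y(x=1)=1) - P(Y(x=0)=1) = sum_t q_t * (y1 - y0).
c = np.array([t[3] - t[2] for t in types], dtype=float)
lo = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=(0, 1), method="highs").fun
hi = -linprog(-c, A_eq=A_eq, b_eq=b_eq, bounds=(0, 1), method="highs").fun
print(f"ACE bounds: [{lo:.3f}, {hi:.3f}]; true ACE by construction = 0.3")
```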
20.
Peer effects in education refer to the influence that the background, behavior, and outcomes of peers in a dormitory, class, grade, or school have on a student's own outcomes or behavior. Research on peer effects in education initially relied on homogeneous models, which assume that total benefit is unchanged however individuals choose their peers, so that peer effects are a zero-sum phenomenon. It later moved to heterogeneous models, which hold that peer effects differ across individuals, so that total benefit can be raised through appropriate peer allocation. Methodologically, the field has progressed from association studies based mainly on ordinary least squares to causal inference studies built on randomized experiments, natural experiments, and quasi-experiments. Research on peer effects provides a basis for correctly understanding and evaluating relevant education policies, for finding the best ways to organize school instruction, and for improving teaching efficiency.