Similar Literature
20 similar records found (search time: 15 ms)
1.
    
In studies based on electronic health records (EHR), the frequency of covariate monitoring can vary by covariate type, across patients, and over time, which can limit the generalizability of inferences about the effects of adaptive treatment strategies. In addition, monitoring is a health intervention in itself, with costs and benefits, and stakeholders may be interested in the effect of monitoring when adopting adaptive treatment strategies. This paper demonstrates how to exploit nonsystematic covariate monitoring in EHR-based studies both to improve the generalizability of causal inferences and to evaluate the health impact of monitoring when evaluating adaptive treatment strategies. Using a real-world, EHR-based comparative effectiveness research (CER) study of patients with type II diabetes mellitus, we illustrate how the evaluation of joint dynamic treatment and static monitoring interventions can improve CER evidence, and we describe two alternative estimation approaches based on inverse probability weighting (IPW). First, we demonstrate the poor performance of the standard estimator of the effects of joint treatment-monitoring interventions, due to a large decrease in data support and concerns over finite-sample bias from near-violations of the positivity assumption (PA) for the monitoring process. Second, we detail an alternative IPW estimator that uses a no-direct-effect assumption. We demonstrate that this estimator can improve efficiency, but at the potential cost of an increase in bias from violations of the PA for the treatment process.
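The core of the standard IPW estimator discussed above can be sketched in a few lines: weight the subjects who actually follow the joint "treat and monitor" intervention by the inverse of the estimated probability of having done so. This is an illustrative sketch only, with hypothetical column names and single-time-point logistic working models rather than the authors' longitudinal specification.

```python
# Minimal sketch: IPW for a joint (treatment, monitoring) intervention at a single
# time point. Columns A (treatment), M (monitoring), L1/L2 (confounders) and Y
# (outcome) are hypothetical placeholders, not the study's variables.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({"L1": rng.normal(size=n), "L2": rng.binomial(1, 0.4, size=n)})
df["M"] = rng.binomial(1, 1 / (1 + np.exp(-(0.2 + 0.8 * df.L1))))           # monitoring indicator
df["A"] = rng.binomial(1, 1 / (1 + np.exp(-(-0.3 + 0.6 * df.L1 + df.L2))))  # treatment indicator
df["Y"] = 1 + df.A + 0.5 * df.L1 + rng.normal(size=n)

# Working propensity models for the treatment and the monitoring processes.
p_a = smf.logit("A ~ L1 + L2", data=df).fit(disp=0).predict(df)
p_m = smf.logit("M ~ L1", data=df).fit(disp=0).predict(df)

# Joint intervention "treat and monitor": followers get weight 1 / (P(A=1|L) * P(M=1|L)).
follow = (df.A == 1) & (df.M == 1)
w = np.where(follow, 1.0 / (p_a * p_m), 0.0)

print("Hajek-type IPW estimate of E[Y] under 'treat and monitor':",
      round(float(np.average(df.Y, weights=w)), 3))
print("max weight (very large values hint at positivity problems):", round(float(w.max()), 1))
```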

2.
    
One of the most common ways researchers compare cancer survival outcomes across treatments from observational data is using Cox regression. This model depends on its underlying assumption of proportional hazards, but in some real-world cases, such as when comparing different classes of cancer therapies, substantial violations may occur. In this situation, researchers have several alternative methods to choose from, including Cox models with time-varying hazard ratios; parametric accelerated failure time models; Kaplan–Meier curves; and pseudo-observations. It is unclear which of these approaches is likely to perform best in practice. To fill this gap in the literature, we perform a neutral comparison study of the candidate approaches. We examine clinically meaningful outcome measures that can be computed and directly compared across each method, namely, survival probability at time T, median survival, and restricted mean survival. To adjust for differences between treatment groups, we use inverse probability of treatment weighting based on the propensity score. We conduct simulation studies under a range of scenarios and determine the biases, coverages, and standard errors of the average treatment effects for each method. We then demonstrate the use of these approaches using two published observational studies of survival after cancer treatment. The first examines chemotherapy in sarcoma, which has a late treatment effect (i.e., similar survival initially, but after 2 years the chemotherapy group shows a benefit). The other study is a comparison of surgical techniques for kidney cancer, where survival differences are attenuated over time.
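For readers unfamiliar with the adjustment step, the sketch below shows propensity-score IPTW weights feeding a weighted Kaplan–Meier curve and a restricted mean survival summary. The simulated data and column names are placeholders rather than the sarcoma or kidney cancer study data.

```python
# Minimal sketch: IPTW-adjusted Kaplan-Meier and restricted mean survival time (RMST).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 3000
df = pd.DataFrame({"x1": rng.normal(size=n), "x2": rng.binomial(1, 0.5, size=n)})
df["treated"] = rng.binomial(1, 1 / (1 + np.exp(-(0.5 * df.x1 - 0.3 * df.x2))))
t_true = rng.exponential(np.exp(0.5 * df.treated - 0.3 * df.x1))
cens = rng.exponential(2.0, size=n)
df["time"], df["event"] = np.minimum(t_true, cens), (t_true <= cens).astype(int)

# Inverse probability of treatment weights from a logistic propensity model.
ps = smf.logit("treated ~ x1 + x2", data=df).fit(disp=0).predict(df)
df["w"] = np.where(df.treated == 1, 1 / ps, 1 / (1 - ps))

def weighted_km(sub, grid):
    """Weighted Kaplan-Meier survival curve evaluated on a time grid."""
    surv = np.ones_like(grid)
    for t in np.sort(sub.time[sub.event == 1].unique()):
        at_risk = sub.w[sub.time >= t].sum()
        events = sub.w[(sub.time == t) & (sub.event == 1)].sum()
        surv[grid >= t] *= 1 - events / at_risk
    return surv

tau, grid = 2.0, np.linspace(0.0, 2.0, 401)
for arm in (0, 1):
    s = weighted_km(df[df.treated == arm], grid)
    rmst = float(np.mean(s) * tau)   # grid average times tau approximates the area under S(t)
    print(f"arm {arm}: S(tau) = {s[-1]:.3f}, RMST(tau) = {rmst:.3f}")
```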

3.
    
Marginal structural models (MSMs) are an increasingly popular tool, particularly in epidemiological applications, for handling the problem of time-varying confounding by intermediate variables when studying the effect of sequences of exposures. Considerable attention has been devoted to the optimal choice of treatment model for propensity score-based methods and, more recently, to variable selection in the treatment model for inverse weighting in MSMs. However, little attention has been paid to the modeling of the outcome of interest, particularly with respect to the best use of purely predictive, non-confounding variables in MSMs. Four modeling approaches are investigated in the context of both static treatment sequences and optimal dynamic treatment rules, with the goal of estimating a marginal effect with the least error in terms of both bias and variability.

4.
    
In recent years there have been a series of advances in the field of dynamic prediction. Among these is the development of methods for dynamic prediction of the cumulative incidence function in a competing risks setting. These models enable predictions to be updated as time progresses and more information becomes available; for example, when a patient comes back for a follow-up visit after completing a year of treatment, the risk of death and of adverse events may have changed since treatment initiation. One approach to modeling the cumulative incidence function in competing risks is direct binomial regression, where right censoring of the event times is handled by inverse probability of censoring weights. We extend this approach by combining it with landmarking to enable dynamic prediction of the cumulative incidence function. The proposed models are very flexible, as they allow the covariates to have complex time-varying effects, and we illustrate how to investigate possible time-varying structures using Wald tests. The models are fitted using generalized estimating equations. The method is applied to bone marrow transplant data, and its performance is investigated in a simulation study.
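The IPCW idea behind the direct binomial approach, combined with a single landmark, can be sketched as a bare Horvitz–Thompson estimate of the cause-1 cumulative incidence at one landmark and horizon. The covariate regression and GEE machinery of the paper are omitted, and all data below are simulated placeholders.

```python
# Minimal sketch: landmark IPCW estimate of a cause-1 cumulative incidence at a
# horizon, conditional on being event-free and uncensored at the landmark time s.
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
n = 4000
t1, t2 = rng.exponential(3.0, n), rng.exponential(5.0, n)     # cause 1 and competing cause 2
cens = rng.exponential(6.0, n)
time = np.minimum.reduce([t1, t2, cens])
cause = np.select([cens < np.minimum(t1, t2), t1 <= t2], [0, 1], default=2)  # 0 = censored
df = pd.DataFrame({"time": time, "cause": cause})

# Kaplan-Meier of the censoring distribution G(t), stored as a step function.
ct = np.sort(df.time[df.cause == 0].unique())
at_risk = np.array([(df.time >= t).sum() for t in ct])
d_cens = np.array([((df.time == t) & (df.cause == 0)).sum() for t in ct])
G_step = np.cumprod(1 - d_cens / at_risk)

def G(t_eval):
    idx = np.searchsorted(ct, np.atleast_1d(t_eval), side="right") - 1
    return np.where(idx < 0, 1.0, G_step[np.clip(idx, 0, None)])

s, horizon = 1.0, 3.0
lm = df[df.time > s]                                   # landmark subset: still under observation at s
ev = ((lm.time <= horizon) & (lm.cause == 1)).to_numpy()
w = np.where(ev, G(s) / G(lm.time.to_numpy()), 0.0)    # censoring weight relative to the landmark
print("landmark IPCW estimate of the cause-1 CIF at the horizon:", round(float(np.mean(w)), 3))
```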

5.
6.
    
Personalized medicine optimizes patient outcome by tailoring treatments to patient-level characteristics. This approach is formalized by dynamic treatment regimes (DTRs): decision rules that take patient information as input and output recommended treatment decisions. The DTR literature has seen the development of increasingly sophisticated causal inference techniques that attempt to address the limitations of our typically observational datasets. Often overlooked, however, is that in practice most patients may be expected to receive optimal or near-optimal treatment, and so the outcome used as part of a typical DTR analysis may provide limited information. In light of this, we propose considering a more standard analysis: ignore the outcome and elicit an optimal DTR by modeling the observed treatment as a function of relevant covariates. This offers a far simpler analysis and, in some settings, improved optimal treatment identification. To distinguish this approach from more traditional DTR analyses, we term it reward ignorant modeling, and also introduce the concept of multimethod analysis, whereby different analysis methods are used in settings with multiple treatment decisions. We demonstrate this concept through a variety of simulation studies, and through analysis of data from the International Warfarin Pharmacogenetics Consortium, which also serve as motivation for this work.
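A minimal sketch of the reward ignorant idea: fit a model for the observed treatment given covariates, then recommend the treatment that similar patients usually received. The variable names and the logistic working model below are hypothetical, not the warfarin consortium's specification.

```python
# Minimal sketch of "reward ignorant" treatment modeling: the outcome is ignored and
# the observed treatment is modeled as a function of covariates.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 1500
df = pd.DataFrame({"age": rng.normal(60, 10, n), "weight": rng.normal(75, 12, n)})
# Suppose clinicians already treat mostly optimally, so the observed treatment tracks covariates.
df["treated"] = rng.binomial(1, 1 / (1 + np.exp(-(-0.05 * (df.age - 60) + 0.04 * (df.weight - 75)))))

fit = smf.logit("treated ~ age + weight", data=df).fit(disp=0)
new_patient = pd.DataFrame({"age": [52.0], "weight": [90.0]})
recommend = int(fit.predict(new_patient).iloc[0] > 0.5)   # treatment most often given to similar patients
print("recommended treatment for the new patient:", recommend)
```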

7.
    
In this paper, we investigate K-group comparisons of survival endpoints for observational studies. In clinical databases for observational studies, treatments for patients are chosen with probabilities that vary depending on their baseline characteristics. This often results in noncomparable treatment groups because of imbalance in the baseline characteristics of patients among treatment groups. To overcome this issue, we conduct a propensity analysis and either match subjects with similar propensity scores across treatment groups or compare weighted group means (or weighted survival curves for censored outcome variables) using inverse probability weighting (IPW). To this end, multinomial logistic regression has been a popular propensity analysis method for estimating the weights. We propose using the decision tree method as an alternative propensity analysis approach because of its simplicity and robustness. We also propose IPW rank statistics, called the Dunnett-type test and the ANOVA-type test, to compare three or more treatment groups on survival endpoints. Using simulations, we evaluate the finite-sample performance of the weighted rank statistics combined with these propensity analysis methods. We demonstrate the methods with a real data example. The IPW approach also allows unbiased estimation of the population parameters of each treatment group. In this paper, we limit our discussion to survival outcomes, but all the methods can easily be modified for any type of outcome, such as binary or continuous variables.
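The propensity step can be sketched as follows: fit a classification tree for the three-level treatment, read off the estimated probability of the treatment actually received, and invert it to form the IPW weights. The simulated data and the continuous outcome below are stand-ins; the paper's weighted rank tests for survival endpoints are not reproduced here.

```python
# Minimal sketch: decision-tree propensity scores for three treatment groups and the
# resulting inverse probability weights.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(4)
n = 3000
X = rng.normal(size=(n, 2))
logits = np.c_[0.0 * X[:, 0], 0.8 * X[:, 0], 0.8 * X[:, 1]]
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
group = np.array([rng.choice(3, p=p) for p in probs])        # assignment depends on covariates
y = 1.0 + 0.5 * group + X[:, 0] + rng.normal(size=n)          # outcome confounded through X[:, 0]

tree = DecisionTreeClassifier(max_depth=3, min_samples_leaf=100).fit(X, group)
ps = tree.predict_proba(X)                                    # columns follow tree.classes_ = [0, 1, 2]
w = 1.0 / ps[np.arange(n), group]                             # inverse probability of the received treatment

for g in range(3):
    m = group == g
    print(f"group {g}: raw mean {y[m].mean():.2f}, IPW-adjusted mean {np.average(y[m], weights=w[m]):.2f}")
```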

8.
    
In many randomized clinical trials of therapeutics for COVID-19, the primary outcome is an ordinal categorical variable, and interest focuses on the odds ratio (OR; active agent vs control) under the assumption of a proportional odds model. Although at the final analysis the outcome will be determined for all subjects, at an interim analysis, the status of some participants may not yet be determined, for example, because ascertainment of the outcome may not be possible until some prespecified follow-up time. Accordingly, the outcome from these subjects can be viewed as censored. A valid interim analysis can be based on data only from those subjects with full follow-up; however, this approach is inefficient, as it does not exploit additional information that may be available on those for whom the outcome is not yet available at the time of the interim analysis. Appealing to the theory of semiparametrics, we propose an estimator for the OR in a proportional odds model with censored, time-lagged categorical outcome that incorporates additional baseline and time-dependent covariate information and demonstrate that it can result in considerable gains in efficiency relative to simpler approaches. A byproduct of the approach is a covariate-adjusted estimator for the OR based on the full data that would be available at a final analysis.
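The proportional odds assumption behind the target OR can be illustrated numerically: multiplying every cumulative odds of the control arm by a single odds ratio yields the active-arm category probabilities, so the same OR is recovered at every cutpoint. The category probabilities below are hypothetical, not trial data.

```python
# Numerical sketch of the proportional odds model relating two arms of an ordinal outcome.
import numpy as np

p_control = np.array([0.10, 0.20, 0.30, 0.25, 0.15])   # ordinal categories, worst to best
cum_c = np.cumsum(p_control)[:-1]                       # P(Y <= k) at each interior cutpoint
odds_c = cum_c / (1 - cum_c)

OR = 0.6                                                # active vs control odds of a worse category
odds_a = OR * odds_c
cum_a = odds_a / (1 + odds_a)
p_active = np.diff(np.concatenate([[0.0], cum_a, [1.0]]))

print("implied active-arm category probabilities:", np.round(p_active, 3))
# Proportionality: the cutpoint-specific odds ratios are all equal to OR.
print("cutpoint-specific odds ratios:", np.round((cum_a / (1 - cum_a)) / odds_c, 3))
```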

9.
    
Inverse-probability-of-treatment weighted (IPTW) estimation has been widely used to consistently estimate the causal parameters in marginal structural models, with time-dependent confounding effects adjusted for. As with other causal inference methods, the validity of IPTW estimation typically requires the crucial condition that all variables are precisely measured. However, this condition is often violated in practice for various reasons. It has been well documented that ignoring measurement error often leads to biased inference results. In this paper, we consider IPTW estimation of the causal parameters in marginal structural models in the presence of error-contaminated and time-dependent confounders. We explore several methods to correct for the effects of measurement error on the estimation of the causal parameters. Numerical studies are reported to assess the finite-sample performance of the proposed methods.

10.
    
This paper introduces a novel approach to estimating censored quantile regression using inverse probability of censoring weighted (IPCW) methodology, specifically tailored for data sets featuring partially interval-censored data. Such data sets, often encountered in HIV/AIDS and cancer biomedical research, may include doubly censored (DC) and partly interval-censored (PIC) endpoints. DC responses involve either left-censoring or right-censoring alongside some exact failure time observations, while PIC responses are subject to interval-censoring. Despite the existence of complex estimating techniques for interval-censored quantile regression, we propose a simple and intuitive IPCW-based method, easily implementable by assigning suitable inverse-probability weights to subjects with exact failure time observations. The resulting estimator exhibits asymptotic properties, such as uniform consistency and weak convergence, and we explore an augmented-IPCW (AIPCW) approach to enhance efficiency. In addition, our method can be adapted for multivariate partially interval-censored data. Simulation studies demonstrate the new procedure's strong finite-sample performance. We illustrate the practical application of our approach through an analysis of progression-free survival endpoints in a phase III clinical trial focusing on metastatic colorectal cancer.
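The IPCW mechanism is easiest to see in the plain right-censored special case: each exact failure time receives a weight equal to the inverse of the Kaplan–Meier censoring survival, and a weighted check loss is minimized. The sketch below does exactly that for a single covariate and the median; it is not the partially interval-censored estimator of the paper, and all names are hypothetical.

```python
# Minimal sketch: IPCW median regression with right-censored data.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)
n = 1500
x = rng.normal(size=n)
log_t = 1.0 + 0.5 * x + rng.normal(scale=0.5, size=n)     # true median of log(T) is 1 + 0.5 x
t, c = np.exp(log_t), rng.exponential(8.0, n)
y, delta = np.minimum(t, c), (t <= c).astype(float)        # observed time and event indicator

# Kaplan-Meier of the censoring distribution evaluated just before each observed time.
order = np.argsort(y)
G = np.ones(n)
surv = 1.0
for i in order:
    G[i] = surv
    if delta[i] == 0:
        surv *= 1 - 1.0 / (y >= y[i]).sum()
w = np.where(delta == 1, 1.0 / G, 0.0)                     # censored subjects get weight 0

tau = 0.5                                                  # target quantile (the median)
def loss(beta):
    r = np.log(y) - (beta[0] + beta[1] * x)
    return np.sum(w * r * (tau - (r < 0)))                 # weighted check (pinball) loss

start = np.array([np.median(np.log(y[delta == 1])), 0.0])
fit = minimize(loss, x0=start, method="Nelder-Mead")
print("IPCW median regression estimates (truth roughly [1.0, 0.5]):", np.round(fit.x, 3))
```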

11.
A mathematical model is presented for the dynamics of a spatially heterogeneous predator-prey population system; a prototype is the Syamozero lake fish community. We show that the invasion of an intermediate predator can evoke chaotic oscillations in the population densities. We also show that different dynamic regimes (stationary, nonchaotic oscillatory, and chaotic) can coexist. The "choice" of a particular regime depends on the initial invader density. Analysis of the model solutions shows that invasion of an alien species is successful only in the absence of competition between the juvenile invaders and the native species.
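For readers who want to experiment with the qualitative phenomenon, the sketch below integrates a generic Hastings–Powell-type tri-trophic food chain (not the Syamozero model itself) from two different initial densities of the intermediate predator and compares the late-time behavior. The parameters are illustrative only.

```python
# Generic three-species chain: resource x, intermediate predator y (the "invader"), top predator z.
import numpy as np
from scipy.integrate import solve_ivp

def chain(t, u, a1=5.0, b1=3.0, a2=0.1, b2=2.0, d1=0.4, d2=0.01):
    x, y, z = u
    f1 = a1 * x / (1 + b1 * x)          # Holling type-II functional responses
    f2 = a2 * y / (1 + b2 * y)
    return [x * (1 - x) - f1 * y, f1 * y - f2 * z - d1 * y, f2 * z - d2 * z]

t_eval = np.linspace(1500, 2000, 2000)   # look only at the late-time (post-transient) behavior
for y0 in (0.01, 0.4):                   # two initial densities of the invading predator
    sol = solve_ivp(chain, (0, 2000), [0.8, y0, 8.0], t_eval=t_eval, rtol=1e-8, atol=1e-10)
    prey = sol.y[0]
    print(f"invader started at {y0}: late-time prey density range "
          f"[{prey.min():.3f}, {prey.max():.3f}]")
```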

12.
    
In observational cohort studies with complex sampling schemes, truncation arises when the time to event of interest is observed only when it falls below or exceeds another random time, that is, the truncation time. In more complex settings, observation may require a particular ordering of event times; we refer to this as sequential truncation. Estimators of the event time distribution have been developed for simple left-truncated or right-truncated data. However, these estimators may be inconsistent under sequential truncation. We propose nonparametric and semiparametric maximum likelihood estimators for the distribution of the event time of interest in the presence of sequential truncation, under two truncation models. We show the equivalence of an inverse probability weighted estimator and a product limit estimator under one of these models. We study the large sample properties of the proposed estimators and derive their asymptotic variance estimators. We evaluate the proposed methods through simulation studies and apply the methods to an Alzheimer's disease study. We have developed an R package, seqTrun, for implementation of our method.

13.
    
Estimating the degree of sexual dimorphism is difficult in fossil species because most specimens lack indicators of sex. We present a procedure that estimates sexual dimorphism in samples of unknown sex using the method of moments. We assume that the distribution of a metric trait is composed of two underlying normal distributions, one for males and one for females. We use three moments around the mean of the combined-sex distribution to estimate the means and the common standard deviation of the two underlying distributions. This procedure has advantages over previous methods: it is relatively simple to use, specimens need not be assigned to sex a priori, no reference to living species analogs is required, and the method provides conservative estimates of dimorphism under a variety of conditions. The method performs best when the male and female distributions overlap minimally but also works well when overlap is substantial. Simulations indicate that this relatively simple method is more accurate and reliable than previous methods for estimating dimorphism.
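Under the simplifying assumptions of a 50:50 sex ratio and of matching the second and fourth central moments (the paper's exact moment equations may differ in detail), the mixture parameters have a closed form that is easy to verify by simulation.

```python
# Minimal sketch: recover sex-specific means and a common SD from combined-sample moments,
# assuming a 50:50 mixture of two normals. Trait values below are simulated placeholders.
import numpy as np

rng = np.random.default_rng(6)
males = rng.normal(55.0, 3.0, 500)
females = rng.normal(47.0, 3.0, 500)
x = np.concatenate([males, females])

mu = x.mean()
m2 = np.mean((x - mu) ** 2)            # second central moment
m4 = np.mean((x - mu) ** 4)            # fourth central moment

# For a 50:50 mixture of N(mu - d/2, s^2) and N(mu + d/2, s^2):
#   m2 = s^2 + d^2/4   and   m4 = 3 s^4 + 1.5 s^2 d^2 + d^4/16,
# so d^4 = 8 (3 m2^2 - m4) and s^2 = m2 - d^2/4 (requires the sample to be platykurtic).
d = (8.0 * (3.0 * m2 ** 2 - m4)) ** 0.25
s = np.sqrt(m2 - d ** 2 / 4)

print(f"estimated sex means: {mu - d / 2:.2f} and {mu + d / 2:.2f} (truth: 47 and 55)")
print(f"estimated common SD: {s:.2f} (truth: 3.0); mean ratio: {(mu + d / 2) / (mu - d / 2):.3f}")
```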

14.
Qu, Annie; Lindsay, Bruce G.; Li, Bing. Biometrika 2000, 87(4): 823–836.

15.
    
Personalized intervention strategies, in particular those that modify treatment based on a participant's own response, are a core component of precision medicine approaches. Sequential multiple assignment randomized trials (SMARTs) are growing in popularity and are specifically designed to facilitate the evaluation of sequential adaptive strategies, in particular those embedded within the SMART. Advances in efficient estimation approaches that are able to incorporate machine learning while retaining valid inference can allow for more precise estimates of the effectiveness of these embedded regimes. However, to the best of our knowledge, such approaches have not yet been applied as the primary analysis in SMART trials. In this paper, we present a robust and efficient approach using targeted maximum likelihood estimation (TMLE) for estimating and contrasting expected outcomes under the dynamic regimes embedded in a SMART, together with generating simultaneous confidence intervals for the resulting estimates. We contrast this method with two alternatives (G-computation and inverse probability weighting estimators). The precision gains and robust inference achievable through the use of TMLE to evaluate the effects of embedded regimes are illustrated using both outcome-blind simulations and a real-data analysis from the Adaptive Strategies for Preventing and Treating Lapses of Retention in Human Immunodeficiency Virus (HIV) Care (ADAPT-R) trial (NCT02338739), a SMART with a primary aim of identifying strategies to improve retention in HIV care among people living with HIV in sub-Saharan Africa.
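The simplest of the three estimators contrasted above, the IPW estimator of the mean outcome under one embedded regime, can be written in a few lines when the SMART randomization probabilities are known. The two-stage design, the 50:50 randomization probabilities, and the variable names below are hypothetical, not the ADAPT-R design.

```python
# Minimal sketch: IPW estimate of E[Y] under the embedded regime "give a1 = 1; if no
# response, switch to a2 = 1" in a two-stage SMART with known randomization probabilities.
import numpy as np
import pandas as pd

rng = np.random.default_rng(7)
n = 4000
df = pd.DataFrame({"a1": rng.binomial(1, 0.5, n)})
df["resp"] = rng.binomial(1, np.where(df.a1 == 1, 0.6, 0.4))       # response to stage 1
df["a2"] = np.where(df.resp == 1, -1, rng.binomial(1, 0.5, n))     # only non-responders are re-randomized
df["y"] = (0.5 * df.a1 + 0.3 * (df.resp == 0) * (df.a2 == 1)
           + df.resp + rng.normal(size=n))

# Indicator of being consistent with the embedded regime, and the probability of that history.
follows = (df.a1 == 1) & ((df.resp == 1) | (df.a2 == 1))
p_follow = 0.5 * np.where(df.resp == 1, 1.0, 0.5)
w = follows / p_follow

print("IPW estimate of the embedded-regime mean outcome:",
      round(float(np.average(df.y, weights=w)), 3))
```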

16.
    

17.
    
In this paper, we consider the estimation of prediction errors for state occupation probabilities and transition probabilities for multistate time-to-event data. We study prediction errors based on the Brier score and on the Kullback–Leibler score and prove their properness. In the presence of right-censored data, two classes of estimators, based on inverse probability weighting and pseudo-values, respectively, are proposed, and consistency properties of the proposed estimators are investigated. The second part of the paper is devoted to the estimation of dynamic prediction errors for state occupation probabilities for multistate models, conditional on being alive, and for transition probabilities. Cross-validated versions are proposed. Our methods are illustrated on the CSL1 randomized clinical trial comparing prednisone versus placebo for liver cirrhosis patients.
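As a single-event special case of the prediction errors studied here, the IPCW-weighted Brier score at a fixed time point can be computed as below. The prediction being scored and the data are simulated placeholders, not the CSL1 trial.

```python
# Minimal sketch: IPCW-weighted Brier score for a survival prediction at time t0.
import numpy as np

rng = np.random.default_rng(8)
n = 2000
t_true = rng.exponential(2.0, n)
c = rng.exponential(4.0, n)
y, delta = np.minimum(t_true, c), (t_true <= c).astype(float)

# Kaplan-Meier of the censoring distribution, stored as a step function.
ct = np.sort(y[delta == 0])
at_risk = np.array([(y >= u).sum() for u in ct])
G_step = np.cumprod(1 - 1.0 / at_risk)

def G(t_eval):
    idx = np.searchsorted(ct, np.atleast_1d(t_eval), side="right") - 1
    return np.where(idx < 0, 1.0, G_step[np.clip(idx, 0, None)])

t0 = 1.5
s_hat = np.full(n, np.exp(-t0 / 2.0))   # stand-in model prediction of S(t0 | x); here the true marginal

# Weights: events before t0 get 1/G(T), subjects still at risk at t0 get 1/G(t0),
# subjects censored before t0 contribute 0 (the left-limit refinement of G is ignored here).
w = np.where((y <= t0) & (delta == 1), 1.0 / G(y), 0.0) + np.where(y > t0, 1.0 / G(t0), 0.0)
brier = float(np.mean(w * ((y > t0).astype(float) - s_hat) ** 2))
print(f"IPCW Brier score at t0 = {t0}: {brier:.3f}")
```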

18.
    
Dynamic treatment regimes (DTRs) aim to formalize personalized medicine by tailoring treatment decisions to individual patient characteristics. G-estimation for DTR identification targets the parameters of a structural nested mean model, known as the blip function, from which the optimal DTR is derived. Despite its potential, G-estimation has not seen widespread use in the literature, owing in part to its often complex presentation and implementation, but also to the necessity of correctly specifying the blip function. Using a quadratic approximation approach inspired by iteratively reweighted least squares, we derive a quasi-likelihood function for G-estimation within the DTR framework and show how it can be used to form an information criterion for blip model selection. We outline the theoretical properties of this model selection criterion and demonstrate its application in a variety of simulation studies as well as in data from the Sequenced Treatment Alternatives to Relieve Depression study.

19.
    
International bodies such as the International Commission on Non-Ionizing Radiation Protection (ICNIRP) and the Institute of Electrical and Electronics Engineers (IEEE) make provision for human exposure assessment based on SAR calculations (or measurements) and basic restrictions. In the case of base station exposure, this is mostly applicable to occupational exposure scenarios in the very near field of these antennas, where the conservative reference-level criteria could be unnecessarily restrictive. This study presents a variety of critical aspects that need to be considered when calculating SAR in a human body close to a mobile phone base station antenna. A hybrid FEM/MoM technique is proposed as a suitable numerical method for obtaining accurate results. The verification of the FEM/MoM implementation has been presented in a previous publication; the focus of this study is an investigation into the detail that must be included in a numerical model of the antenna to accurately represent the real-world scenario. This is accomplished by comparing numerical results to measurements for a generic GSM base station antenna and appropriate, representative canonical and human phantoms. The results show that it is critical to take the disturbance effect of the human phantom (a large conductive body) on the base station antenna into account when the antenna-phantom spacing is less than 300 mm. For these small spacings, the antenna structure must be modeled in detail. The conclusion is that it is feasible to calculate, using the proposed techniques and methodology, accurate occupational compliance zones around base station antennas based on a SAR profile and basic restriction guidelines.

20.
    
This article develops hypothesis testing procedures for the stratified mark-specific proportional hazards model with missing covariates, where the baseline functions may vary with strata. The mark-specific proportional hazards model has been studied to evaluate mark-specific relative risks, where the mark is the genetic distance of an infecting HIV sequence to an HIV sequence represented inside the vaccine. This research is motivated by analysis of the RV144 phase 3 HIV vaccine efficacy trial, to understand associations of immune response biomarkers with the mark-specific hazard of HIV infection, where the biomarkers are sampled via a two-phase sampling nested case-control design. We test whether the mark-specific relative risks are unity and how they change with the mark. The developed procedures enable assessment of whether the risk of HIV infection with HIV variants close to or far from the vaccine sequence is modified by immune responses induced by the HIV vaccine; this question is interesting because vaccine protection occurs through immune responses directed at specific HIV sequences. The test statistics are constructed based on augmented inverse probability weighted complete-case estimators. The asymptotic properties and finite-sample performance of the testing procedures are investigated, demonstrating double robustness and the effectiveness of the predictive auxiliaries in recovering efficiency. The finite-sample performance of the proposed tests is examined through a comprehensive simulation study. The methods are applied to the RV144 trial.

