Similar Literature
20 similar records found (search time: 15 ms)
1.
Data-driven methods for personalizing treatment assignment have garnered much attention from clinicians and researchers. Dynamic treatment regimes formalize this through a sequence of decision rules that map individual patient characteristics to a recommended treatment. Observational studies are commonly used for estimating dynamic treatment regimes due to the potentially prohibitive costs of conducting sequential multiple assignment randomized trials. However, estimating a dynamic treatment regime from observational data can lead to bias in the estimated regime due to unmeasured confounding. Sensitivity analyses are useful for assessing how robust the conclusions of the study are to a potential unmeasured confounder. A Monte Carlo sensitivity analysis is a probabilistic approach that involves positing and sampling from distributions for the parameters governing the bias. We propose a method for performing a Monte Carlo sensitivity analysis of the bias due to unmeasured confounding in the estimation of dynamic treatment regimes. We demonstrate the performance of the proposed procedure with a simulation study and apply it to an observational study examining tailoring the use of antidepressant medication for reducing symptoms of depression using data from Kaiser Permanente Washington.
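The core idea of a Monte Carlo sensitivity analysis can be sketched in a few lines. This is a minimal single-estimate illustration, not the authors' procedure for dynamic treatment regimes: a bias parameter is drawn from a posited prior, combined with sampling uncertainty, and the resulting distribution of bias-corrected estimates is summarized. All numeric values are hypothetical.

```python
import numpy as np

def mc_sensitivity(est, se, bias_mean, bias_sd, n_draws=10000, seed=0):
    """Monte Carlo sensitivity analysis for a single effect estimate.

    For each draw, sample a confounding-bias term from its posited
    distribution and a sampling-error term, then report the resulting
    distribution of the bias-corrected estimate."""
    rng = np.random.default_rng(seed)
    bias = rng.normal(bias_mean, bias_sd, n_draws)   # posited unmeasured-confounding bias
    sampling = rng.normal(0.0, se, n_draws)          # ordinary sampling uncertainty
    corrected = est - bias + sampling
    lo, hi = np.percentile(corrected, [2.5, 97.5])
    return corrected.mean(), (lo, hi)

# Hypothetical naive estimate 0.8 (SE 0.1), with a posited bias of 0.3 +/- 0.1.
mean_c, (lo, hi) = mc_sensitivity(est=0.8, se=0.1, bias_mean=0.3, bias_sd=0.1)
```

If the resulting interval still excludes zero, the conclusion is robust to a bias of the posited magnitude.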

2.
Dynamic treatment regimes (DTRs) aim to formalize personalized medicine by tailoring treatment decisions to individual patient characteristics. G-estimation for DTR identification targets the parameters of a structural nested mean model, known as the blip function, from which the optimal DTR is derived. Despite its potential, G-estimation has not seen widespread use in the literature, owing in part to its often complex presentation and implementation, but also due to the necessity for correct specification of the blip. Using a quadratic approximation approach inspired by iteratively reweighted least squares, we derive a quasi-likelihood function for G-estimation within the DTR framework, and show how it can be used to form an information criterion for blip model selection. We outline the theoretical properties of this model selection criterion and demonstrate its application in a variety of simulation studies as well as in data from the Sequenced Treatment Alternatives to Relieve Depression study.
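G-estimation of a linear blip can be illustrated in a simplified one-stage form. The sketch below uses the residualized least-squares form of g-estimation (not the quasi-likelihood construction the abstract describes): the outcome residual is regressed on the treatment residual times the blip design, which is consistent when the treatment model is correct. The data-generating model and all values are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20000
x = rng.normal(size=n)
a = rng.binomial(1, 1 / (1 + np.exp(-0.5 * x)))       # treatment depends on x
y = x + a * (1.0 + 0.5 * x) + rng.normal(size=n)      # true blip: gamma(x) = 1 + 0.5x

X = np.column_stack([np.ones(n), x])

def fit_logistic(design, target, iters=25):
    """Plain Newton (IRLS) logistic regression."""
    b = np.zeros(design.shape[1])
    for _ in range(iters):
        p = 1 / (1 + np.exp(-design @ b))
        w = p * (1 - p)
        b += np.linalg.solve((design * w[:, None]).T @ design, design.T @ (target - p))
    return b

pi_hat = 1 / (1 + np.exp(-X @ fit_logistic(X, a)))    # treatment (propensity) model
m_hat = X @ np.linalg.lstsq(X, y, rcond=None)[0]      # crude outcome working model
W = (a - pi_hat)[:, None] * X                         # residualized blip design
psi = np.linalg.lstsq(W, y - m_hat, rcond=None)[0]    # blip estimates (psi0, psi1)
```

With a correctly specified treatment model, `psi` recovers (1, 0.5) even though the outcome working model is deliberately rough; this is the double robustness the abstract alludes to.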

3.
Personalized medicine optimizes patient outcome by tailoring treatments to patient-level characteristics. This approach is formalized by dynamic treatment regimes (DTRs): decision rules that take patient information as input and output recommended treatment decisions. The DTR literature has seen the development of increasingly sophisticated causal inference techniques that attempt to address the limitations of our typically observational datasets. Often overlooked, however, is that in practice most patients may be expected to receive optimal or near-optimal treatment, and so the outcome used as part of a typical DTR analysis may provide limited information. In light of this, we propose considering a more standard analysis: ignore the outcome and elicit an optimal DTR by modeling the observed treatment as a function of relevant covariates. This offers a far simpler analysis and, in some settings, improved optimal treatment identification. To distinguish this approach from more traditional DTR analyses, we term it reward ignorant modeling, and also introduce the concept of multimethod analysis, whereby different analysis methods are used in settings with multiple treatment decisions. We demonstrate this concept through a variety of simulation studies, and through analysis of data from the International Warfarin Pharmacogenetics Consortium, which also serve as motivation for this work.
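The reward ignorant idea, modeling the observed treatment as a function of covariates and reading the recommendation straight off that model, can be sketched as follows. The simulated 85% concordance between clinical practice and the optimal rule is a hypothetical assumption standing in for the "most patients already receive near-optimal treatment" premise.

```python
import numpy as np

# Simulated data: clinicians mostly assign the better treatment already,
# so the treatment model itself recovers the (near-)optimal rule.
rng = np.random.default_rng(2)
n = 5000
x = rng.normal(size=n)
optimal = (x > 0).astype(int)                              # hypothetical true optimal rule
a = np.where(rng.random(n) < 0.85, optimal, 1 - optimal)   # 85% concordant practice

X = np.column_stack([np.ones(n), x])
b = np.zeros(2)
for _ in range(25):                                        # Newton steps for the logistic fit
    p = 1 / (1 + np.exp(-X @ b))
    b += np.linalg.solve((X * (p * (1 - p))[:, None]).T @ X, X.T @ (a - p))

recommend = (X @ b > 0).astype(int)                        # modal treatment under the model
agreement = (recommend == optimal).mean()
```

No outcome variable appears anywhere in the fit, yet `agreement` with the optimal rule is high, which is exactly the setting in which the abstract argues this analysis can compete.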

4.
5.
The field of precision medicine aims to tailor treatment based on patient-specific factors in a reproducible way. To this end, estimating an optimal individualized treatment regime (ITR) that recommends treatment decisions based on patient characteristics to maximize the mean of a prespecified outcome is of particular interest. Several methods have been proposed for estimating an optimal ITR from clinical trial data in the parallel group setting where each subject is randomized to a single intervention. However, little work has been done in the area of estimating the optimal ITR from crossover study designs. Such designs naturally lend themselves to precision medicine since they allow for observing the response to multiple treatments for each patient. In this paper, we introduce a method for estimating the optimal ITR using data from a 2 × 2 crossover study with or without carryover effects. The proposed method is similar to policy search methods such as outcome weighted learning; however, we take advantage of the crossover design by using the difference in responses under each treatment as the observed reward. We establish Fisher and global consistency, present numerical experiments, and analyze data from a feeding trial to demonstrate the improved performance of the proposed method compared to standard methods for a parallel study design.
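A rough sketch of how the within-subject difference in responses serves as the reward: the sign of the difference becomes a classification label and its magnitude the weight, in the spirit of outcome weighted learning. A weighted least-squares surrogate stands in for the authors' policy search method, and the simulated design (no carryover) is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 4000
x = rng.normal(size=(n, 2))
# Hypothetical crossover data: each subject's response is observed under both
# treatments, so the within-subject difference is directly available.
delta = 1.5 * x[:, 0] + rng.normal(scale=0.5, size=n)    # Y(trt1) - Y(trt0)

X = np.column_stack([np.ones(n), x])
# Weighted-classification surrogate: label = sign of the difference,
# weight = |difference|, solved here by weighted least squares.
w = np.abs(delta)
z = np.sign(delta)
sw = np.sqrt(w)
beta = np.linalg.lstsq(X * sw[:, None], z * sw, rcond=None)[0]

rule = (X @ beta > 0).astype(int)          # recommend trt1 when the score is positive
optimal = (1.5 * x[:, 0] > 0).astype(int)  # known optimal rule in this simulation
acc = (rule == optimal).mean()
```

Subjects with large benefit differences dominate the fit, so the learned rule concentrates on the covariate that actually drives the treatment contrast.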

6.
7.
8.
In estimating optimal adaptive treatment strategies, the tailoring variables used to define patient profiles are typically hand-picked by experts. However, these variables may not yield an estimated optimal dynamic regime that is close to the optimal regime which uses all variables. The question of selecting tailoring variables has not yet been answered satisfactorily, though promising new approaches have been proposed. We compare the use of reducts, a variable selection tool from computer science, to the S-score criterion proposed by Gunter and colleagues in 2007 for suggesting collections of useful variables for treatment regime tailoring. Although the reducts-based approach promised several advantages, such as the ability to account for correlation among tailoring variables, it proved to have several undesirable properties. The S-score performed better, though it too exhibited some disappointing qualities.

9.
Causal inference methods have been developed for longitudinal observational study designs where confounding is thought to occur over time. In particular, one may estimate and contrast the population mean counterfactual outcome under specific exposure patterns. In such contexts, confounders of the longitudinal treatment-outcome association are generally identified using domain-specific knowledge. However, this may leave an analyst with a large set of potential confounders that may hinder estimation. Previous approaches to data-adaptive model selection for this type of causal parameter were limited to the single time-point setting. We develop a longitudinal extension of a collaborative targeted minimum loss-based estimation (C-TMLE) algorithm that can be applied to perform variable selection in the models for the probability of treatment with the goal of improving the estimation of the population mean counterfactual outcome under a fixed exposure pattern. We investigate the properties of this method through a simulation study, comparing it to G-Computation and inverse probability of treatment weighting. We then apply the method in a real-data example to evaluate the safety of trimester-specific exposure to inhaled corticosteroids during pregnancy in women with mild asthma. The data for this study were obtained from the linkage of electronic health databases in the province of Quebec, Canada. The C-TMLE covariate selection approach allowed for a reduction of the set of potential confounders, which included baseline and longitudinal variables.

10.
This paper focuses on the problems of estimation and variable selection in the functional linear regression model (FLM) with functional response and scalar covariates. To this end, two different types of regularization (L1 and L2) are considered in this paper. On the one hand, a sample approach for functional LASSO in terms of basis representation of the sample values of the response variable is proposed. On the other hand, we propose a penalized version of the FLM by introducing a P-spline penalty in the least squares fitting criterion. Our aim is to propose P-splines as a powerful tool for simultaneous variable selection and estimation of the functional parameters. In that sense, the importance of smoothing the response variable before fitting the model is also studied. In summary, penalized (L1 and L2) and nonpenalized regression are combined with a presmoothing of the response variable sample curves, based on regression splines or P-splines, providing a total of six approaches to be compared in two simulation schemes. Finally, the most competitive approach is applied to a real data set on graft-versus-host disease, which is one of the most frequent complications (30%-50%) in allogeneic hematopoietic stem-cell transplantation.

11.
In many settings, including oncology, increasing the dose of treatment results in both increased efficacy and toxicity. With the increasing availability of validated biomarkers and prediction models, there is the potential for individualized dosing based on patient-specific factors. We consider the setting where there is an existing dataset of patients treated with heterogeneous doses and including binary efficacy and toxicity outcomes and patient factors such as clinical features and biomarkers. The goal is to analyze the data to estimate an optimal dose for each (future) patient based on their clinical features and biomarkers. We propose an optimal individualized dose finding rule by maximizing utility functions for individual patients while limiting the rate of toxicity. The utility is defined as a weighted combination of efficacy and toxicity probabilities. This approach maximizes overall efficacy at a prespecified constraint on overall toxicity. We model the binary efficacy and toxicity outcomes using logistic regression with dose, biomarkers and dose-biomarker interactions. To incorporate the large number of potential parameters, we use the LASSO method. We additionally constrain the dose effect to be non-negative for both efficacy and toxicity for all patients. Simulation studies show that the utility approach combined with any of the modeling methods can improve efficacy without increasing toxicity relative to fixed dosing. The proposed methods are illustrated using a dataset of patients with lung cancer treated with radiation therapy.
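The utility-with-constraint construction can be sketched directly: compute the fitted efficacy and toxicity probabilities over a dose grid, form the weighted utility, mask out doses violating the toxicity cap, and take the argmax. The logistic coefficients below are illustrative stand-ins for fitted (e.g. LASSO-selected) model parameters, not values from the study.

```python
import numpy as np

def sigmoid(t):
    return 1 / (1 + np.exp(-t))

def best_dose(biomarker, doses, w_eff=1.0, w_tox=1.0, tox_cap=0.3):
    """Pick the dose maximizing utility = w_eff*P(eff) - w_tox*P(tox),
    subject to P(tox) <= tox_cap.  Coefficients are illustrative stand-ins
    for a fitted logistic model with a dose-biomarker interaction."""
    p_eff = sigmoid(-2.0 + 1.5 * doses + 0.8 * biomarker * doses)
    p_tox = sigmoid(-4.0 + 2.0 * doses)          # toxicity rises with dose only
    utility = w_eff * p_eff - w_tox * p_tox
    utility[p_tox > tox_cap] = -np.inf           # enforce the toxicity constraint
    return doses[np.argmax(utility)]

doses = np.linspace(0.0, 2.0, 21)
d_low = best_dose(biomarker=-1.0, doses=doses)   # biomarker-low patient
d_high = best_dose(biomarker=1.0, doses=doses)   # biomarker-high patient
```

A biomarker-high patient, for whom dose boosts efficacy more steeply, is pushed toward the largest dose the toxicity cap allows, while the biomarker-low patient settles at a much lower dose.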

12.
Sequential Randomized Controlled Trials (SRCTs) are rapidly becoming essential tools in the search for optimized treatment regimes in ongoing treatment settings. Analyzing data for multiple time-point treatments with a view toward optimal treatment regimes is of interest in many types of afflictions: HIV infection, Attention Deficit Hyperactivity Disorder in children, leukemia, prostate cancer, renal failure, and many others. Methods for analyzing data from SRCTs exist but they are either inefficient or suffer from the drawbacks of estimating equation methodology. We describe an estimation procedure, targeted maximum likelihood estimation (TMLE), which has been fully developed and implemented in point treatment settings, including time to event outcomes, binary outcomes and continuous outcomes. Here we develop and implement TMLE in the SRCT setting. As in the former settings, the TMLE procedure is targeted toward a pre-specified parameter of the distribution of the observed data, and thereby achieves important bias reduction in estimation of that parameter. As with the so-called Augmented Inverse Probability of Censoring Weight (A-IPCW) estimator, TMLE is double-robust and locally efficient. We report simulation results corresponding to two data-generating distributions from a longitudinal data structure.
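TMLE in the point treatment setting mentioned above can be sketched compactly for a binary outcome and the average treatment effect: fit an initial outcome regression and a propensity model, then fluctuate the initial fit along the "clever covariate" before taking the plug-in average. The simulated data and working models are assumptions for illustration; the SRCT extension the abstract develops is not shown.

```python
import numpy as np

def expit(t):
    return 1 / (1 + np.exp(-t))

def fit_logistic(design, target, iters=30):
    b = np.zeros(design.shape[1])
    for _ in range(iters):
        p = expit(design @ b)
        b += np.linalg.solve((design * (p * (1 - p))[:, None]).T @ design,
                             design.T @ (target - p))
    return b

rng = np.random.default_rng(4)
n = 20000
x = rng.normal(size=n)
a = rng.binomial(1, expit(0.4 * x))
y = rng.binomial(1, expit(-0.5 + a + 0.5 * x))

# Step 1: initial outcome regression Q(A, X) and propensity g(X).
Xq = np.column_stack([np.ones(n), a, x])
bq = fit_logistic(Xq, y)
g = expit(np.column_stack([np.ones(n), x]) @ fit_logistic(np.column_stack([np.ones(n), x]), a))

# Step 2: one-dimensional fluctuation along the clever covariate H = A/g - (1-A)/(1-g).
h = a / g - (1 - a) / (1 - g)
offset = Xq @ bq
eps = 0.0
for _ in range(20):                          # Newton steps for the fluctuation parameter
    p = expit(offset + eps * h)
    eps += (h @ (y - p)) / (h**2 @ (p * (1 - p)))

# Step 3: targeted plug-in estimate of the average treatment effect.
q1 = expit(bq[0] + bq[1] + bq[2] * x + eps / g)
q0 = expit(bq[0] + bq[2] * x - eps / (1 - g))
ate = np.mean(q1 - q0)
```

The fluctuation step is what "targets" the fit toward the pre-specified parameter: it zeroes the efficient-influence-function equation, which is the source of the bias reduction the abstract describes.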

13.
14.
A dynamic treatment regime (DTR) is a sequence of decision rules that provide guidance on how to treat individuals based on their static and time-varying status. Existing observational data are often used to generate hypotheses about effective DTRs. A common challenge with observational data, however, is the need for analysts to consider "restrictions" on the treatment sequences. Such restrictions may be necessary for settings where (1) one or more treatment sequences that were offered to individuals when the data were collected are no longer considered viable in practice, (2) specific treatment sequences are no longer available, or (3) the scientific focus of the analysis concerns a specific type of treatment sequences (eg, "stepped-up" treatments). To address this challenge, we propose a restricted tree-based reinforcement learning (RT-RL) method that searches for an interpretable DTR with the maximum expected outcome, given a (set of) user-specified restriction(s), which specifies treatment options (at each stage) that ought not to be considered as part of the estimated tree-based DTR. In simulations, we evaluate the performance of RT-RL versus the standard approach of ignoring the partial data for individuals not following the (set of) restriction(s). The method is illustrated using an observational data set to estimate a two-stage stepped-up DTR for guiding the level of care placement for adolescents with substance use disorder.

15.
In health services and outcome research, count outcomes are frequently encountered and often have a large proportion of zeros. The zero-inflated negative binomial (ZINB) regression model has important applications for this type of data. With many possible candidate risk factors, this paper proposes new variable selection methods for the ZINB model. We consider maximum likelihood function plus a penalty including the least absolute shrinkage and selection operator (LASSO), smoothly clipped absolute deviation (SCAD), and minimax concave penalty (MCP). An EM (expectation-maximization) algorithm is proposed for estimating the model parameters and conducting variable selection simultaneously. This algorithm consists of estimating penalized weighted negative binomial models and penalized logistic models via the coordinated descent algorithm. Furthermore, statistical properties including the standard error formulae are provided. A simulation study shows that the new algorithm not only has more accurate or at least comparable estimation, but also is more robust than the traditional stepwise variable selection. The proposed methods are applied to analyze the health care demand in Germany using the open-source R package mpath.
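The ZINB mixture and the E-step weight that drives the EM algorithm can be written down directly. This sketch shows only the likelihood and the responsibility that an observed zero came from the inflation component, the quantity that weights the penalized logistic and negative binomial fits in the M-step; the penalized fits themselves are omitted, and all parameter values are illustrative.

```python
import numpy as np
from math import lgamma

def zinb_logpmf(y, mu, theta, pi):
    """log P(Y = y) under a zero-inflated negative binomial:
    pi * 1{y=0} + (1 - pi) * NB(y; mu, theta), NB parameterized by mean mu
    and size (dispersion) theta."""
    nb = (lgamma(y + theta) - lgamma(theta) - lgamma(y + 1)
          + theta * np.log(theta / (theta + mu)) + y * np.log(mu / (theta + mu)))
    if y == 0:
        return np.log(pi + (1 - pi) * np.exp(nb))
    return np.log(1 - pi) + nb

def e_step_zero_weight(mu, theta, pi):
    """E-step responsibility that an observed zero came from the
    inflation component."""
    p_nb0 = np.exp(theta * np.log(theta / (theta + mu)))   # NB mass at zero
    return pi / (pi + (1 - pi) * p_nb0)

w0 = e_step_zero_weight(mu=2.0, theta=1.0, pi=0.3)
```

In the EM iteration, each zero contributes `w0` of its weight to the logistic (inflation) model and `1 - w0` to the negative binomial model, which is why the algorithm reduces to alternating penalized weighted fits.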

16.
In many clinical trials and evaluations using medical care administrative databases it is of interest to estimate not only the survival time of a given treatment modality but also the total associated cost. The most widely used estimator for data subject to censoring is the Kaplan-Meier (KM) or product-limit (PL) estimator. The optimality properties of this estimator applied to time-to-event data (consistency, etc.) under the assumptions of random censorship have been established. However, whenever the relationship between cost and survival time includes an error term to account for random differences among patients' costs, the dependency between cumulative treatment cost at the time of censoring and at the survival time results in KM giving biased estimates. A similar phenomenon has previously been noted in the context of estimating quality-adjusted survival time. We propose an estimator for mean cost which exploits the underlying relationship between total treatment cost and survival time. The proposed method utilizes either parametric or nonparametric regression to estimate this relationship and is consistent when this relationship is consistently estimated. We then present simulation results which illustrate the gain in finite-sample efficiency when compared with another recently proposed estimator. The methods are then applied to the estimation of mean cost for two studies where right-censoring was present. The first is the heart failure clinical trial Studies of Left Ventricular Dysfunction (SOLVD). The second is a Health Maintenance Organization (HMO) database study of the cost of ulcer treatment.
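The regression-based idea, estimate E[cost | survival time] and average it over the Kaplan-Meier distribution of survival time, can be sketched as follows. The simulated data, the linear cost model, and Efron's convention of assigning the leftover KM mass to the last event time are all simplifying assumptions; a naive complete-case average is shown for contrast.

```python
import numpy as np

def km_masses(time, event):
    """Kaplan-Meier probability masses at the observed event times; the mass
    remaining after the last event is assigned to that event time."""
    order = np.argsort(time)
    t, d = time[order], np.asarray(event)[order]
    n = len(t)
    surv, times, masses = 1.0, [], []
    for i in range(n):
        if d[i]:
            drop = surv / (n - i)          # S(t-) * d_i / n_at_risk, no ties here
            times.append(t[i])
            masses.append(drop)
            surv -= drop
    masses = np.array(masses)
    masses[-1] += surv                     # Efron tail convention
    return np.array(times), masses

rng = np.random.default_rng(5)
n = 5000
t_true = rng.exponential(1.0, n)
c = rng.exponential(1.0 / 0.3, n)          # independent censoring
time = np.minimum(t_true, c)
event = t_true <= c
cost = 10 + 5 * t_true + rng.normal(0, 2, n)   # total cost, observed only at death

# Regress cost on survival time among the uncensored, then average the fitted
# relationship over the KM distribution of survival time.
A = np.column_stack([np.ones(event.sum()), time[event]])
coef = np.linalg.lstsq(A, cost[event], rcond=None)[0]
times, masses = km_masses(time, event)
mean_cost = masses @ (coef[0] + coef[1] * times)
naive = cost[event].mean()                 # complete-case mean, biased low
```

The complete-case mean is pulled down because cheap, short-surviving patients are over-represented among the uncensored; averaging the regression fit over the KM distribution removes that bias.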

17.
Recent statistical methodology for precision medicine has focused on either identification of subgroups with enhanced treatment effects or estimating optimal treatment decision rules so that treatment is allocated in a way that maximizes, on average, predefined patient outcomes. Less attention has been given to subgroup testing, which involves evaluation of whether at least a subgroup of the population benefits from an investigative treatment, compared to some control or standard of care. In this work, we propose a general framework for testing for the existence of a subgroup with enhanced treatment effects based on the difference of the estimated value functions under an estimated optimal treatment regime and a fixed regime that assigns everyone to the same treatment. Our proposed test does not require specification of the parametric form of the subgroup and allows heterogeneous treatment effects within the subgroup. The test applies to cases when the outcome of interest is either a time-to-event or an (uncensored) scalar, and is valid at the exceptional law. To demonstrate the empirical performance of the proposed test, we study the type I error and power of the test statistics in simulations and also apply our test to data from a Phase III trial in patients with hematological malignancies.
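The building block of such a test, the difference between the estimated value of an estimated optimal regime and that of the best fixed regime, can be sketched with inverse-probability-weighted value estimates. This shows only the contrast, not a calibrated test: a real procedure needs a reference distribution valid at the exceptional law, and reusing the same data to estimate and evaluate the regime (as here) is optimistic. Data and models are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 10000
x = rng.normal(size=n)
a = rng.binomial(1, 0.5, n)                # randomized treatment, p = 0.5
y = 1.0 + a * x + rng.normal(0, 1, n)      # only the x > 0 subgroup benefits

# Working model for the optimal rule: OLS with a treatment-covariate interaction.
X = np.column_stack([np.ones(n), a, x, a * x])
b = np.linalg.lstsq(X, y, rcond=None)[0]
d_hat = (b[1] + b[3] * x > 0).astype(int)

def ipw_value(y, a, rule, p=0.5):
    """Hajek-style IPW estimate of the mean outcome had everyone followed `rule`."""
    w = (a == rule) / np.where(rule == 1, p, 1 - p)
    return (w @ y) / w.sum()

# Contrast: value under the estimated regime minus the best fixed regime.
contrast = ipw_value(y, a, d_hat) - max(ipw_value(y, a, np.ones(n, dtype=int)),
                                        ipw_value(y, a, np.zeros(n, dtype=int)))
```

A clearly positive contrast is the signal that some subgroup benefits, even though the average treatment effect in this simulation is zero.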

18.
A dynamic model of leaf photosynthesis for C3 plants has been developed for examination of the role of the dynamic properties of the photosynthetic apparatus in regulating CO2 assimilation in variable light regimes. The model is modified from the Farquhar-von Caemmerer-Berry model by explicitly including metabolite pools and the effects of light activation and deactivation of Calvin cycle enzymes. It is coupled to a dynamic stomatal conductance model, with the assimilation rate at any time being determined by the joint effects of the dynamic biochemical model and the stomatal conductance model on the intercellular CO2 pressure. When parametrized for each species, the model was shown to exhibit responses to step changes in photon flux density that agreed closely with the observed responses for both the understory plant Alocasia macrorrhiza and the crop plant Glycine max. Comparisons of measured and simulated photosynthesis under simulated light regimes having natural patterns of lightfleck frequencies and durations showed that the simulated total for Alocasia was within ±4% of the measured total assimilation, but that both were 12-50% less than the predictions from a steady-state solution of the model. Agreement was within ±10% for Glycine max, and only small differences were apparent between the dynamic and steady-state predictions. The model may therefore be parametrized for quite different species, and is shown to reflect more accurately the dynamics of photosynthesis than earlier dynamic models.
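The qualitative behavior, assimilation lagging a step change in light because of finite activation kinetics, so that a dynamic model predicts less total assimilation under lightflecks than a steady-state solution, can be mimicked by a toy first-order induction model. This is not the authors' coupled biochemical-stomatal model; the hyperbolic light response and every parameter value are invented for illustration.

```python
import numpy as np

def simulate_induction(pfd, a_max=20.0, k_light=500.0, tau=300.0, dt=1.0):
    """Toy first-order induction model: assimilation relaxes toward its
    steady-state, light-dependent target with time constant tau (seconds).
    Parameter values are illustrative, not fitted to any species."""
    a = np.zeros(len(pfd))
    for i in range(1, len(pfd)):
        target = a_max * pfd[i] / (pfd[i] + k_light)   # steady-state light response
        a[i] = a[i - 1] + dt * (target - a[i - 1]) / tau
    return a

# Deep shade, then a 600-s "lightfleck" at 1000 umol m-2 s-1, then shade again.
pfd = np.full(1200, 50.0)
pfd[300:900] = 1000.0
a = simulate_induction(pfd)
```

Because assimilation never fully catches up to the steady-state target during the fleck, the time-integrated dynamic assimilation falls below the steady-state prediction, the same direction of discrepancy the abstract reports.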

19.
In biomedical science, analyzing treatment effect heterogeneity plays an essential role in assisting personalized medicine. The main goals of analyzing treatment effect heterogeneity include estimating treatment effects in clinically relevant subgroups and predicting whether a patient subpopulation might benefit from a particular treatment. Conventional approaches often evaluate the subgroup treatment effects via parametric modeling and can thus be susceptible to model mis-specifications. In this paper, we take a model-free semiparametric perspective and aim to efficiently evaluate the heterogeneous treatment effects of multiple subgroups simultaneously under the one-step targeted maximum-likelihood estimation (TMLE) framework. When the number of subgroups is large, we further expand this path of research by looking at a variation of the one-step TMLE that is robust to the presence of small estimated propensity scores in finite samples. From our simulations, our method demonstrates substantial finite sample improvements compared to conventional methods. In a case study, our method unveils the potential treatment effect heterogeneity of rs12916-T allele (a proxy for statin usage) in decreasing Alzheimer's disease risk.

20.
Molecular markers have proved extremely useful in resolving mating patterns within individual populations of a number of species, but little is known about how genetic mating systems might vary geographically within a species. Here we use microsatellite markers to compare patterns of sneaked fertilization and mating success in two populations of sand goby (Pomatoschistus minutus) that differ dramatically with respect to nest-site density and the documented nature and intensity of sexual selection. At the Tvärminne site in the Baltic Sea, the microsatellite genotypes of 17 nest-tending males and mean samples of more than 50 progeny per nest indicated that approximately 35% of the nests contained eggs that had been fertilized by sneaker males. Successful nest holders mated with an average of 3.0 females, and the distribution of mate numbers for these males did not differ significantly from the Poisson expectation. These genetically deduced mating-system parameters in the Tvärminne population are remarkably similar to those in sand gobies at a distant site adjoining the North Sea. Thus, pronounced differences in the ecological setting and sexual selection regimes in these two populations have not translated into evident differences in cuckoldry rates or other monitored patterns of male mating success. In this case, the ecological setting appears not to be predictive of alternative male mating strategies, a finding of relevance to sexual selection theory.


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号