Similar Documents
 20 similar documents found (search time: 125 ms)
1.
Many estimators of the average effect of a treatment on an outcome require estimation of the propensity score, the outcome regression, or both. It is often beneficial to use flexible techniques, such as semiparametric regression or machine learning, to estimate these quantities. However, optimal estimation of these regressions does not necessarily lead to optimal estimation of the average treatment effect, particularly in settings with strong instrumental variables. A recent proposal addressed these issues via the outcome-adaptive lasso, a penalized regression technique for estimating the propensity score that seeks to minimize the impact of instrumental variables on treatment effect estimators. A notable limitation of this approach, however, is that it is restricted to parametric models. We propose a more flexible alternative that we call the outcome highly adaptive lasso. We discuss the large-sample theory for this estimator and propose closed-form confidence intervals based on it. We show via simulation that our method offers benefits over several popular approaches.
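The generic estimator that such propensity-score methods feed into can be sketched in a few lines. The following is a minimal, illustrative Hájek-type IPW estimator of the average treatment effect, assuming the propensity scores have already been estimated; it is not the outcome highly adaptive lasso itself, and the data are made up.

```python
# Hájek-style inverse probability weighting (IPW) estimator of the
# average treatment effect (ATE).  Propensity-score methods such as the
# outcome(-highly)-adaptive lasso supply the `ps` argument; everything
# below is the generic downstream calculation.

def ipw_ate(y, a, ps):
    """y: outcomes, a: 0/1 treatment indicators, ps: estimated propensity scores."""
    w1 = [ai / e for ai, e in zip(a, ps)]                # weights for treated units
    w0 = [(1 - ai) / (1 - e) for ai, e in zip(a, ps)]    # weights for controls
    mu1 = sum(wi * yi for wi, yi in zip(w1, y)) / sum(w1)  # weighted treated mean
    mu0 = sum(wi * yi for wi, yi in zip(w0, y)) / sum(w0)  # weighted control mean
    return mu1 - mu0

# Toy example with a known propensity score of 0.5 (randomized setting):
print(ipw_ate([3.0, 1.0, 4.0, 2.0], [1, 0, 1, 0], [0.5] * 4))  # 2.0
```

With constant propensity 0.5 the estimator reduces to the difference of group means, (3+4)/2 - (1+2)/2 = 2.0, which is a quick sanity check.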

2.
In this paper, we investigate K-group comparisons of survival endpoints in observational studies. In clinical databases for observational studies, treatments are chosen for patients with probabilities that vary depending on their baseline characteristics. This often results in noncomparable treatment groups because of imbalances in patients' baseline characteristics across groups. To overcome this issue, we conduct a propensity analysis and either match subjects with similar propensity scores across treatment groups or compare weighted group means (or weighted survival curves for censored outcome variables) using inverse probability weighting (IPW). To this end, multinomial logistic regression has been a popular propensity analysis method for estimating the weights. We propose the decision tree method as an alternative propensity analysis because of its simplicity and robustness. We also propose IPW rank statistics, called the Dunnett-type and ANOVA-type tests, to compare three or more treatment groups on survival endpoints. Using simulations, we evaluate the finite-sample performance of the weighted rank statistics combined with these propensity analysis methods, and we demonstrate the methods with a real data example. The IPW method also allows unbiased estimation of the population parameters of each treatment group. In this paper, we limit our discussion to survival outcomes, but all the methods can easily be modified for other types of outcomes, such as binary or continuous variables.
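For the uncensored-mean case, the weighted K-group comparison described above is a Horvitz-Thompson/Hájek-type calculation. A minimal sketch, taking the fitted K-group propensities (from multinomial logistic regression or a decision tree, as the paper proposes) as given; the data are illustrative.

```python
# IPW-weighted group means for K treatment groups: each subject in group
# k is weighted by 1 / P(A = k | x), so every group is reweighted toward
# the full population.

def ipw_group_means(y, group, probs):
    """probs[i][k] = fitted P(subject i receives treatment k)."""
    K = len(probs[0])
    means = []
    for k in range(K):
        w = [1.0 / probs[i][k] if group[i] == k else 0.0 for i in range(len(y))]
        means.append(sum(wi * yi for wi, yi in zip(w, y)) / sum(w))
    return means

# Two groups, constant propensities 0.5 (no confounding):
print(ipw_group_means([1.0, 2.0, 3.0, 4.0], [0, 0, 1, 1],
                      [[0.5, 0.5]] * 4))  # [1.5, 3.5]
```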

3.
Chen PY, Tsiatis AA. Biometrics 2001;57(4):1030-1038
When comparing survival times between two treatment groups, it may be more appropriate to compare the restricted mean lifetime, i.e., the expectation of lifetime restricted to a time L, rather than the mean lifetime, in order to accommodate censoring. When treatments are not assigned to patients randomly, as in observational studies, we also need to account for treatment imbalances in confounding factors. In this article, we propose estimators for the difference in restricted mean lifetime between two groups that account for treatment imbalances in prognostic factors, assuming a proportional hazards relationship. Large-sample properties of our estimators, based on martingale theory for counting processes, are also derived. Simulation studies were conducted to compare these estimators and to assess the adequacy of the large-sample approximations. Our methods are also applied to an observational database of acute coronary syndrome patients from Duke University Medical Center to estimate the treatment effect on the restricted mean lifetime over 5 years.
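The restricted mean lifetime E[min(T, L)] equals the area under the survival curve up to L. A minimal one-sample Kaplan-Meier sketch of this quantity follows; the paper's estimators additionally adjust for confounders under a proportional hazards model, which this toy version does not attempt.

```python
# Kaplan-Meier estimate of the restricted mean lifetime: integrate the
# step survival function from 0 to L.

def km_restricted_mean(times, events, L):
    """times: observed times, events: 1 = death, 0 = censored."""
    data = sorted(zip(times, events))
    n_risk, surv, area, prev_t = len(data), 1.0, 0.0, 0.0
    for t, d in data:
        area += surv * (min(t, L) - prev_t)  # S(t) is constant between jumps
        prev_t = min(t, L)
        if t > L:
            return area                      # nothing past L contributes
        if d == 1:
            surv *= (n_risk - 1) / n_risk    # KM step at an event time
        n_risk -= 1
    return area + surv * (L - prev_t)        # flat tail if follow-up ends before L

# All subjects die at 1, 2, 3; restricted mean at L=3 is (1+2+3)/3 = 2.0:
print(km_restricted_mean([1, 2, 3], [1, 1, 1], 3))  # 2.0
```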

4.
Cong XJ, Yin G, Shen Y. Biometrics 2007;63(3):663-672
We consider modeling correlated survival data when cluster sizes may be informative about the outcome of interest, based on a within-cluster resampling (WCR) approach and a weighted score function (WSF) method. We derive the large-sample properties of the WCR estimators under the Cox proportional hazards model. We establish consistency and asymptotic normality of the regression coefficient estimators, and the weak convergence of the estimated baseline cumulative hazard function. The WSF method incorporates the inverse of the cluster sizes as weights in the score function. We conduct simulation studies to assess and compare the finite-sample behavior of the estimators, and apply the proposed methods to a dental study as an illustration.

5.
The methodological development of this paper is motivated by a common problem in econometrics where we are interested in estimating the difference in average expenditures between two populations, say with and without a disease, as a function of covariates. For example, let Y(1) and Y(2) be two non-negative random variables denoting the health expenditures for cases and controls. Smooth Quantile Ratio Estimation (SQUARE) is a novel approach for estimating Delta = E[Y(1)] - E[Y(2)] by smoothing across percentiles the log-transformed ratio of the two quantile functions. Dominici et al. (2005) have shown that SQUARE defines a large class of estimators of Delta, is more efficient than common parametric and nonparametric estimators of Delta, and is consistent and asymptotically normal. However, in applications it is often desirable to estimate Delta(x) = E[Y(1)|x] - E[Y(2)|x], that is, the difference in means as a function of x. In this paper we extend SQUARE to a regression model and introduce a two-part regression SQUARE for estimating Delta(x) as a function of x. We use the first part of the model to estimate the probability of incurring any costs, and the second part to estimate the mean difference in health expenditures given that a nonzero cost is observed. In the second part of the model, we apply the basic definition of SQUARE for positive costs to compare expenditures for cases and controls having 'similar' covariate profiles, where strata of cases and controls with 'similar' covariate profiles are determined by propensity score matching. We then apply two-part regression SQUARE to the 1987 National Medicare Expenditure Survey to estimate the difference Delta(x) between persons suffering from smoking-attributable diseases and persons without these diseases, as a function of the propensity of getting the disease. Using a simulation study, we compare frequentist properties of two-part regression SQUARE with maximum likelihood estimators for the log-transformed expenditures.
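The core SQUARE construction (smooth the log quantile ratio across percentiles, then rebuild the mean difference) can be illustrated with a deliberately crude smoother. This is a toy sketch of the idea for positive costs, not the estimator studied in the paper; the grid size and 3-point moving average are arbitrary choices.

```python
# Toy SQUARE: empirical quantiles on a percentile grid, a crude moving-
# average smoother of the log quantile ratio, and the rebuilt estimate
# of Delta = E[Y1] - E[Y2].  Requires strictly positive costs.
import math

def square_delta(y1, y2, n_grid=99):
    q = lambda ys, p: sorted(ys)[min(int(p * len(ys)), len(ys) - 1)]
    grid = [(i + 0.5) / n_grid for i in range(n_grid)]
    logratio = [math.log(q(y1, p) / q(y2, p)) for p in grid]
    # 3-point moving-average smoother of the log quantile ratio
    sm = [(logratio[max(i - 1, 0)] + logratio[i] + logratio[min(i + 1, n_grid - 1)]) / 3
          for i in range(n_grid)]
    q2 = [q(y2, p) for p in grid]
    return sum(q2i * math.exp(s) - q2i for q2i, s in zip(q2, sm)) / n_grid

# Identical samples: the smoothed log-ratio is 0 everywhere, so Delta is 0.
print(square_delta([1.0, 2.0, 3.0], [1.0, 2.0, 3.0]))  # 0.0
```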

6.
Datta S  Satten GA 《Biometrics》2002,58(4):792-802
We propose nonparametric estimators of the stage occupation probabilities and transition hazards for a multistage system that is not necessarily Markovian, using data that are subject to dependent right censoring. We assume that the hazard of being censored at a given instant depends on a possibly time-dependent covariate process, as opposed to assuming a fixed censoring hazard (independent censoring). The estimator of the integrated transition hazard matrix has a Nelson-Aalen form in which both the counting processes counting the number of transitions between states and the risk sets for leaving each stage have an IPCW (inverse probability of censoring weighted) form. We estimate these weights using Aalen's linear hazard model. Finally, the stage occupation probabilities are obtained from the estimated integrated transition hazard matrix via product integration. Consistency of these estimators under the general paradigm of non-Markov models is established, and asymptotic variance formulas are provided. Simulation results show satisfactory performance of these estimators. An analysis of data on graft-versus-host disease for bone marrow transplant patients is used as an illustration.
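The weighted Nelson-Aalen form mentioned above can be sketched directly for a single transition: with unit weights this is the usual Nelson-Aalen estimator of the cumulative hazard, and the paper's IPCW weights would be passed in the `weights` slot. Illustrative only.

```python
# Nelson-Aalen estimate of the cumulative hazard with optional
# per-subject weights (IPCW weights go here in the paper's setting).

def nelson_aalen(times, events, weights=None):
    """Returns a list of (event time, cumulative hazard) pairs."""
    w = weights or [1.0] * len(times)
    pts = sorted(set(t for t, d in zip(times, events) if d == 1))
    H, out = 0.0, []
    for s in pts:
        d_w = sum(wi for t, d, wi in zip(times, events, w) if t == s and d == 1)
        r_w = sum(wi for t, wi in zip(times, w) if t >= s)   # weighted risk set
        H += d_w / r_w                                       # hazard increment
        out.append((s, H))
    return out

# Three deaths at times 1, 2, 3, unit weights: H = 1/3, 1/3+1/2, 1/3+1/2+1.
print(nelson_aalen([1, 2, 3], [1, 1, 1]))
```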

7.
8.
In this article we construct and study estimators of the causal effect of a time-dependent treatment on survival in longitudinal studies. We employ a particular marginal structural model (MSM) proposed by Robins (2000) and follow a general methodology for constructing estimating functions in censored data models. The inverse probability of treatment weighted (IPTW) estimator of Robins et al. (2000) is used as an initial estimator and forms the basis for an improved, one-step estimator that is consistent and asymptotically linear when the treatment mechanism is consistently estimated. We extend these methods to handle informative censoring. The proposed methodology is employed to estimate the causal effect of exercise on mortality in a longitudinal study of seniors in Sonoma County. A simulation study demonstrates the bias of naive estimators in the presence of time-dependent confounders and also shows the efficiency gain of the IPTW estimator even in the absence of such confounding. The efficiency gain of the improved, one-step estimator is demonstrated through simulation.
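The IPTW weights for a marginal structural model are products over time of fitted treatment probabilities. A minimal sketch of the stabilized-weight construction, taking the fitted numerator and denominator probabilities as given (estimating those models is the substantive work, omitted here):

```python
# Stabilized IPTW weights for a marginal structural model:
# w_i = prod_t P(A_t | past treatment) / P(A_t | past treatment, confounders),
# where each probability is evaluated at the treatment actually received.

def stabilized_weights(num_probs, den_probs):
    """num_probs[i][t], den_probs[i][t]: fitted probabilities of the
    treatment subject i actually received at time t."""
    weights = []
    for nums, dens in zip(num_probs, den_probs):
        w = 1.0
        for pn, pd in zip(nums, dens):
            w *= pn / pd
        weights.append(w)
    return weights

# One subject, two periods; confounding only matters in period 1:
print(stabilized_weights([[0.5, 0.5]], [[0.25, 0.5]]))  # [2.0]
```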

9.
Huang Y, Leroux B. Biometrics 2011;67(3):843-851
Williamson, Datta, and Satten's (2003, Biometrics 59, 36–42) cluster-weighted generalized estimating equations (CWGEEs) are effective in adjusting for bias due to informative cluster sizes for cluster-level covariates. We show, however, that CWGEE may not perform well for covariates that can take different values within a cluster if the numbers of observations at each covariate level are informative. On the other hand, inverse probability of treatment weighting accounts for informative treatment propensity but not for informative cluster size. Motivated by evaluating the effect of a binary exposure in the presence of both types of informativeness, we propose several weighted GEE estimators, with weights related to the size of a cluster as well as the distribution of the binary exposure within the cluster. The choice of weights depends on the population of interest and the nature of the exposure. Through simulation studies, we demonstrate the superior performance of the new estimators compared with existing estimators based on GEE, CWGEE, and inverse probability of treatment-weighted GEE. We demonstrate the use of our method with an example examining covariate effects on the risk of dental caries among small children.

10.
Analysts often estimate treatment effects in observational studies using propensity score matching techniques. When there are missing covariate values, analysts can multiply impute the missing data to create m completed data sets, estimate propensity scores on each completed data set, and use these to estimate treatment effects. However, relatively little attention has been paid to developing imputation models for the additional problem of missing treatment indicators, perhaps because of the risk of generating implausible imputations. Yet simply ignoring the missing treatment values, akin to a complete case analysis, can also lead to problems when estimating treatment effects. We propose a latent class model to multiply impute missing treatment indicators. We illustrate its performance through simulations and with data from a study on determinants of children's cognitive development. This approach is seen to yield treatment effect estimates closer to the true treatment effect than conventional imputation procedures or a complete case analysis.
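Once treatment indicators have been multiply imputed and a treatment effect estimated on each of the m completed data sets, the per-data-set estimates are combined with Rubin's standard rules; the numbers below are illustrative.

```python
# Rubin's rules for pooling m completed-data estimates: the pooled
# variance is the within-imputation average plus a finite-m-corrected
# between-imputation component.

def rubin_pool(estimates, variances):
    m = len(estimates)
    qbar = sum(estimates) / m                                # pooled estimate
    ubar = sum(variances) / m                                # within-imputation var
    b = sum((q - qbar) ** 2 for q in estimates) / (m - 1)    # between-imputation var
    return qbar, ubar + (1 + 1 / m) * b                      # pooled (est, variance)

# Two imputations disagreeing by 2 units:
print(rubin_pool([1.0, 3.0], [0.5, 0.5]))  # (2.0, 3.5)
```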

11.
Liu D, Zhou XH. Biometrics 2011;67(3):906-916
Covariate-specific receiver operating characteristic (ROC) curves are often used to evaluate the classification accuracy of a medical diagnostic test or biomarker when the accuracy of the test is associated with certain covariates. In many large-scale screening tests, the gold standard is subject to missingness due to high cost or harmfulness to the patient. In this article, we propose a semiparametric estimator of the covariate-specific ROC curves with a partially missing gold standard. A location-scale model is constructed for the test result to model the covariates' effect, but the residual distributions are left unspecified; thus both the baseline and link functions of the ROC curve have flexible shapes. Under the assumption that the gold standard is missing at random (MAR), we consider weighted estimating equations for the location-scale parameters and weighted kernel estimating equations for the residual distributions. Three ROC curve estimators are proposed and compared: imputation-based, inverse probability weighted, and doubly robust. We derive the asymptotic normality of the estimated ROC curve, as well as the analytical form of the standard error estimator. The proposed method is motivated by and applied to data from an Alzheimer's disease study.
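With a complete (non-missing) gold standard, the empirical ROC summary reduces to a Mann-Whitney-type calculation; the imputation, IPW, and doubly robust estimators above correct this building block for a partially missing gold standard. A minimal AUC sketch on made-up data:

```python
# Empirical AUC for a continuous test score against a binary gold
# standard: the probability that a random diseased score exceeds a
# random healthy one, counting ties as 1/2 (Mann-Whitney form).

def empirical_auc(scores, labels):
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Perfectly separating test:
print(empirical_auc([0.9, 0.8, 0.3, 0.1], [1, 1, 0, 0]))  # 1.0
```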

12.
For randomized clinical trials where the endpoint of interest is a time-to-event subject to censoring, estimation of the treatment effect has mostly focused on the hazard ratio from the Cox proportional hazards model. Since the model's proportional hazards assumption is not always satisfied, a useful alternative, the so-called additive hazards model, may instead be used to estimate a treatment effect as the difference of hazard functions. Still, the hazard difference may be difficult to grasp intuitively, particularly in clinical settings such as patient counseling or resource planning. In this paper, we study the quantiles of a covariate's conditional survival function in the additive hazards model. Specifically, we estimate the residual time quantiles, i.e., the quantiles of the survival times remaining at a given time t, conditional on survival beyond t, for a specific covariate in the additive hazards model. We use the estimates to translate the hazard difference into a difference in residual time quantiles, which allows a more direct clinical interpretation. We determine the asymptotic properties, assess the performance via Monte Carlo simulations, and demonstrate the use of residual time quantiles in two real randomized clinical trials.
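The residual time quantile at time t is the smallest s with S(t + s)/S(t) <= 1 - p. A direct numerical sketch, reading the quantile off a given step survival function; the paper estimates this under an additive hazards model with covariates, which this toy version takes as already done.

```python
# p-th residual time quantile from a right-continuous step survival
# function given as (jump times, survival values after each jump),
# with jump times in increasing order.

def residual_quantile(surv_times, surv_probs, t, p):
    def S(x):  # step-function evaluation
        s = 1.0
        for ti, si in zip(surv_times, surv_probs):
            if ti <= x:
                s = si
        return s
    St = S(t)
    for ti, si in zip(surv_times, surv_probs):
        if ti > t and si / St <= 1 - p:
            return ti - t                 # smallest s with S(t+s)/S(t) <= 1-p
    return float("inf")                   # quantile beyond observed follow-up

# Median residual life at t = 1 for S dropping 0.8, 0.6, 0.4, 0.2:
print(residual_quantile([1, 2, 3, 4], [0.8, 0.6, 0.4, 0.2], 1, 0.5))  # 2
```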

13.
Huang Y. Biometrics 1999;55(4):1108-1113
Induced dependent censorship is a general phenomenon in health service evaluation studies in which a measure such as quality-adjusted survival time or lifetime medical cost is of interest. We investigate the two-sample problem and propose two classes of nonparametric tests. Based on consistent estimation of the survival function for each sample, the two classes of test statistics examine the cumulative weighted difference in hazard functions and in survival functions. We derive a unified asymptotic null distribution theory and inference procedure. The tests are applied to Trial V of the International Breast Cancer Study Group and show that long-duration chemotherapy significantly improves time without symptoms of disease and toxicity of treatment compared with the short-duration treatment. Simulation studies demonstrate that the proposed tests, with a wide range of weight choices, perform well at moderate sample sizes.

14.
Analysis of time-to-event data in clinical and epidemiological studies often encounters missing covariate values, and the missing at random assumption is commonly adopted, which assumes that missingness depends on the observed data, including the observed outcome, which is the minimum of the survival and censoring times. However, it is conceivable that in certain settings missingness of covariate values is related to the survival time but not to the censoring time. This is especially so when covariate missingness is related to an unmeasured variable affected by the patient's illness and prognostic factors at baseline. If this is the case, the covariate missingness is not at random whenever the survival time is censored, which creates a challenge for data analysis. In this article, we propose an approach to deal with such survival-time-dependent covariate missingness based on the well-known Cox proportional hazards model. Our method is based on inverse propensity weighting, with the propensity estimated by nonparametric kernel regression. Our estimators are consistent and asymptotically normal, and their finite-sample performance is examined through simulation. An application to a real-data example is included for illustration.
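The nonparametric kernel regression used for the propensity can be sketched as a Nadaraya-Watson smoother; the fitted values would then enter the inverse weights. A minimal Gaussian-kernel version (the bandwidth h is a tuning choice, and the data are illustrative):

```python
# Nadaraya-Watson kernel estimate of E[R | X = x0], e.g. the probability
# that a covariate is observed (R = 1) given X = x0.
import math

def kernel_smooth(x_obs, r_obs, x0, h):
    k = [math.exp(-0.5 * ((xi - x0) / h) ** 2) for xi in x_obs]  # Gaussian kernel
    return sum(ki * ri for ki, ri in zip(k, r_obs)) / sum(k)

# If every subject is observed (all R = 1), the estimate is 1 everywhere:
print(kernel_smooth([0.0, 1.0, 2.0], [1.0, 1.0, 1.0], 1.0, 1.0))  # 1.0
```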

15.
Causal inference has become increasingly reliant on observational studies with rich covariate information. To build tractable causal procedures, such as doubly robust estimators, it is imperative to first extract important features from high- or even ultra-high-dimensional data. In this paper, we propose causal ball screening for confounder selection from modern ultra-high-dimensional data sets. Unlike the familiar task of variable selection for prediction modeling, our confounder selection procedure aims to control for confounding while improving efficiency of the resulting causal effect estimate. Previous empirical and theoretical studies suggest excluding causes of the treatment that are not confounders. Motivated by these results, our goal is to keep all predictors of the outcome in both the propensity score and outcome regression models. A distinctive feature of our proposal is that we use an outcome-model-free procedure for propensity score model selection, thereby maintaining double robustness of the resulting causal effect estimator. Our theoretical analyses show that the proposed procedure enjoys a number of properties, including model selection consistency and pointwise normality. Synthetic and real data analyses show that our proposal performs favorably compared with existing methods in a range of realistic settings. Data used in preparation of this paper were obtained from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database.

16.

Objectives

A previous population-based study reported an increased risk of stroke after the occurrence of adhesive capsulitis of the shoulder (ACS), but there were substantial imbalances in the distribution of age and pre-existing vascular risk factors between subjects with and without ACS, which might have confounded the association between ACS and stroke. The purpose of the present large-scale, propensity score-matched, population-based follow-up study was to clarify whether there is an increased stroke risk after ACS.

Methods

We used a logistic regression model that includes age, sex, pre-existing comorbidities, and socioeconomic status as covariates to compute the propensity score. A total of 22,025 subjects with at least two ambulatory visits with a principal diagnosis of ACS in 2001 were enrolled in the ACS group. The non-ACS group consisted of 22,025 propensity score-matched subjects without ACS. The stroke-free survival curves for these two groups were compared using the Kaplan-Meier method. Stratified Cox proportional hazards regression with patients matched on propensity score was used to estimate the effect of ACS on the occurrence of stroke.

Results

During the two-year follow-up period, 657 subjects in the ACS group (2.98%) and 687 in the non-ACS group (3.12%) developed stroke. The hazard ratio (HR) of stroke for the ACS group relative to the non-ACS group was 0.93 (95% confidence interval [CI], 0.83–1.04; P = 0.1778). There was no statistically significant difference in stroke subtype distribution between the two groups (P = 0.2114).
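As a quick arithmetic check of the crude figures reported above (the HR of 0.93 itself comes from the stratified Cox model, not from this calculation):

```python
# Crude two-year cumulative incidences and crude risk ratio from the
# reported counts: 657/22025 (ACS) vs 687/22025 (matched non-ACS).
acs_events, non_acs_events, n_per_group = 657, 687, 22025
inc_acs = acs_events / n_per_group
inc_non = non_acs_events / n_per_group
print(round(100 * inc_acs, 2), round(100 * inc_non, 2))  # 2.98 3.12
print(round(inc_acs / inc_non, 2))                       # 0.96 (crude risk ratio)
```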

Conclusions

These findings indicate that ACS itself is not associated with an increased risk of subsequent stroke.

17.
In clinical settings, the necessity of treatment is often measured in terms of the patient's prognosis in the absence of treatment. Along these lines, it is often of interest to compare subgroups of patients (e.g., based on underlying diagnosis) with respect to pre-treatment survival. Such comparisons may be complicated by at least two important issues. First, mortality contrasts by subgroup may differ over follow-up time, as opposed to being constant, and may follow a form that is difficult to model parametrically. Moreover, in settings where the proportional hazards assumption fails, investigators tend to be more interested in cumulative (as opposed to instantaneous) effects on mortality. Second, pre-treatment death is censored by the receipt of treatment, and in settings where treatment assignment depends on time-dependent factors that also affect mortality, such censoring is likely to be informative. We propose semiparametric methods for contrasting subgroup-specific cumulative mortality in the presence of dependent censoring. The proposed estimators are based on the cumulative hazard function, with pre-treatment mortality assumed to follow a stratified Cox model. No functional form is assumed for the nature of the non-proportionality. Asymptotic properties of the proposed estimators are derived, and simulation studies show that the proposed methods are applicable at practical sample sizes. The methods are then applied to contrast pre-transplant mortality for acute versus chronic end-stage liver disease patients.

18.
Guo Y, Manatunga AK. Biometrics 2009;65(1):125-134
Assessing agreement is often of interest in clinical studies to evaluate the similarity of measurements produced by different raters or methods on the same subjects. We present a modified weighted kappa coefficient to measure agreement between bivariate discrete survival times. The proposed kappa coefficient accommodates censoring by redistributing the mass of censored observations within the grid where the unobserved events may potentially happen. A generalized modified weighted kappa is proposed for multivariate discrete survival times. We estimate the modified kappa coefficients nonparametrically through a multivariate survival function estimator. The asymptotic properties of the kappa estimators are established, and the performance of the estimators is examined through simulation studies of bivariate and trivariate survival times. We illustrate the application of the modified kappa coefficient in the presence of censored observations with data from a prostate cancer study.
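The uncensored building block, a standard weighted kappa for discrete paired ratings with linear weights, can be computed as follows; the paper's contribution is the modification that redistributes the mass of censored observations, which this sketch does not include.

```python
# Weighted kappa for paired discrete outcomes on K ordered categories
# (K >= 2), with linear agreement weights w_ij = 1 - |i - j| / (K - 1).

def weighted_kappa(x, y, K):
    n = len(x)
    w = [[1 - abs(i - j) / (K - 1) for j in range(K)] for i in range(K)]
    po = sum(w[a][b] for a, b in zip(x, y)) / n              # observed agreement
    px = [x.count(k) / n for k in range(K)]                  # marginals, rater 1
    py = [y.count(k) / n for k in range(K)]                  # marginals, rater 2
    pe = sum(w[i][j] * px[i] * py[j] for i in range(K) for j in range(K))
    return (po - pe) / (1 - pe)                              # chance-corrected

# Perfect agreement gives kappa = 1:
print(weighted_kappa([0, 1, 2], [0, 1, 2], 3))  # 1.0
```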

19.
Background
In both observational and randomized studies, associations with overall survival are by and large assessed on a multiplicative scale using the Cox model. However, clinicians and clinical researchers have an ardent interest in assessing the absolute benefit associated with treatments. In older patients, some studies have reported lower relative treatment effects, which might translate into similar or even greater absolute treatment effects given these patients' high baseline hazard for clinical events.
Methods
The effect of treatment and the effect modification of treatment were assessed using a multiplicative and an additive hazard model, respectively, in an analysis adjusted for propensity score in the context of coronary surgery.
Results
The multiplicative model yielded a lower relative hazard reduction with bilateral internal thoracic artery grafting in older patients (hazard ratio for interaction/year = 1.03, 95% CI: 1.00 to 1.06, p = 0.05), whereas the additive model reported a similar absolute hazard reduction with increasing age (delta for interaction/year = 0.10, 95% CI: -0.27 to 0.46, p = 0.61). The number needed to treat derived from the propensity score-adjusted multiplicative model was remarkably similar at the end of follow-up in patients aged <=60 and in patients >70.
Conclusions
The present example demonstrates that a lower treatment effect in older patients on a relative scale can translate into a similar treatment effect on an additive scale because of large baseline hazard differences. Importantly, absolute risk reduction, either crude or adjusted, can be calculated from multiplicative survival models. We advocate a wider use of the absolute scale, especially using additive hazard models, to assess treatment effect and treatment effect modification.
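The abstract's central point, that the same relative effect implies different absolute effects (and numbers needed to treat) at different baseline risks, in one illustrative calculation. The numbers are made up, and the approximation ARR ~ baseline risk * (1 - HR) is ours for illustration only (it treats the hazard ratio as a risk ratio, which is only roughly true for low risks).

```python
# Same relative hazard reduction, very different absolute benefit:
hr = 0.8                              # common relative effect in both age groups
for baseline_risk in (0.05, 0.20):    # e.g. younger vs older patients
    arr = baseline_risk * (1 - hr)    # approximate absolute risk reduction
    nnt = 1 / arr                     # number needed to treat
    print(baseline_risk, round(arr, 3), round(nnt, 1))
# 0.05 0.01 100.0
# 0.2 0.04 25.0
```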

20.
Recently, meta-analysis has been widely used to combine information across multiple studies to evaluate a common effect. Integrating data from similar studies is particularly useful in genomic studies, where the individual study sample sizes are not large relative to the number of parameters of interest. In this article, we are interested in developing robust prognostic rules for the prediction of t-year survival based on multiple studies. We propose to construct a composite score for prediction by fitting a stratified semiparametric transformation model that allows the studies to have related but not identical outcomes. To evaluate the accuracy of the resulting score, we provide point and interval estimators for commonly used accuracy measures, including time-specific receiver operating characteristic curves and positive and negative predictive values. We apply the proposed procedures to develop prognostic rules for the 5-year survival of breast cancer patients based on five breast cancer genomic studies.
