Similar Articles
20 similar articles found (search time: 15 ms)
1.
In observational studies, subjects are often nested within clusters. In medical studies, patients are often treated by doctors and are therefore regarded as nested or clustered within doctors. A concern that arises with clustered data is that cluster-level characteristics (e.g., characteristics of the doctor) are associated with both treatment selection and patient outcomes, resulting in cluster-level confounding. Measuring and modeling cluster attributes can be difficult, but statistical methods exist to control for all unmeasured cluster characteristics. An assumption of these methods, however, is that characteristics of the cluster and the effects of those characteristics on the outcome (as well as the probability of treatment assignment when using covariate balancing methods) are constant over time. In this paper, we consider methods that relax this assumption and allow for estimation of treatment effects in the presence of unmeasured time-dependent cluster confounding. The methods are based on matching with the propensity score and incorporate unmeasured time-specific cluster effects by performing matching within clusters or by using fixed- or random-cluster effects in the propensity score model. The methods are illustrated using data comparing the effectiveness of two total hip devices with respect to survival of the device, and a simulation study is performed that compares the proposed methods. One method that was found to perform well is matching within surgeon clusters partitioned by time. Considerations in implementing the proposed methods are discussed.
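A minimal sketch of the within-cluster matching idea described above: fit a propensity model with cluster (surgeon) fixed effects, then form 1:1 matches only within clusters. The data, column names, and thresholds are hypothetical, and the time-partitioning of clusters is omitted.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, n_clusters = 2000, 40
df = pd.DataFrame({
    "cluster": rng.integers(0, n_clusters, n),   # e.g., surgeon ID (hypothetical)
    "x1": rng.normal(size=n),                    # patient-level covariates
    "x2": rng.normal(size=n),
})
cluster_effect = rng.normal(size=n_clusters)[df["cluster"]]
df["treated"] = rng.binomial(1, 1 / (1 + np.exp(-(0.5 * df["x1"] + cluster_effect))))

# Propensity model with cluster fixed effects (one dummy indicator per cluster).
X = pd.get_dummies(df[["x1", "x2", "cluster"]], columns=["cluster"], drop_first=True)
df["ps"] = LogisticRegression(max_iter=2000).fit(X, df["treated"]).predict_proba(X)[:, 1]

# Greedy 1:1 nearest-neighbour matching on the propensity score, within clusters,
# so cluster-level confounders (measured or not) are held fixed within each pair.
pairs = []
for _, g in df.groupby("cluster"):
    treated = g[g["treated"] == 1]
    controls = g[g["treated"] == 0].copy()
    for i, row in treated.iterrows():
        if controls.empty:
            break
        j = (controls["ps"] - row["ps"]).abs().idxmin()  # closest unused control
        pairs.append((i, j))
        controls = controls.drop(index=j)                # match without replacement

matched = df.loc[[idx for pair in pairs for idx in pair]]
print(f"{len(pairs)} within-cluster matched pairs")
```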

2.
Ming K, Rosenbaum PR. Biometrics 2000, 56(1):118-124
In observational studies that match several controls to each treated subject, substantially greater bias reduction is possible if the number of controls is not fixed but rather is allowed to vary from one matched set to another. In certain cases, matching with a fixed number of controls may remove only 50% of the bias in a covariate, whereas matching with a variable number of controls may remove 90% of the bias, even though both control groups have the same number of controls in total. An example of matching in a study of surgical mortality is discussed in detail.
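A toy illustration of the variable-ratio idea: each treated subject is matched to however many unused controls fall within a propensity-score caliper, so matched-set sizes vary with local control availability. This greedy sketch is not the optimal algorithm studied in the paper; the data and caliper are made up.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n = 1000
x = rng.normal(size=n)
ps = 1 / (1 + np.exp(-x))              # true score used as a stand-in for an estimate
treated = rng.binomial(1, ps)
df = pd.DataFrame({"treated": treated, "ps": ps})

caliper = 0.2 * df["ps"].std()
controls = df[df["treated"] == 0].copy()
matched_sets = {}
for i, row in df[df["treated"] == 1].iterrows():
    close = controls[(controls["ps"] - row["ps"]).abs() <= caliper]
    if close.empty:
        continue                                  # treated subject left unmatched
    matched_sets[i] = list(close.index)           # set size varies from set to set
    controls = controls.drop(index=close.index)   # each control used at most once

sizes = pd.Series({k: len(v) for k, v in matched_sets.items()})
print("matched sets:", len(sizes), "| controls per set:", sizes.value_counts().to_dict())
# Within-set treated-minus-control contrasts would then be combined with weights
# reflecting the varying set sizes.
```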

3.
4.
Propensity score methods are used to estimate a treatment effect with observational data. This paper considers the formation of propensity score subclasses by investigating different methods for determining subclass boundaries and the number of subclasses used. We compare several methods, including balancing a summary of the observed information matrix and using equal-frequency subclasses. Subclasses that balance the inverse variance of the treatment effect reduce the mean squared error of the estimates and maximize the number of usable subclasses.
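A sketch of the simplest boundary rule mentioned above, equal-frequency subclasses, together with a size-weighted stratified effect estimate. Data and column names are hypothetical; the information-matrix-balancing rule is not implemented.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 5000
x = rng.normal(size=(n, 3))
treated = rng.binomial(1, 1 / (1 + np.exp(-x[:, 0])))
y = x[:, 0] + 0.5 * treated + rng.normal(size=n)          # true effect = 0.5
df = pd.DataFrame(x, columns=["x1", "x2", "x3"]).assign(treated=treated, y=y)

covs = ["x1", "x2", "x3"]
df["ps"] = LogisticRegression(max_iter=1000).fit(df[covs], df["treated"]).predict_proba(df[covs])[:, 1]

# Equal-frequency boundaries: quintiles of the estimated propensity score.
df["subclass"] = pd.qcut(df["ps"], q=5, labels=False)

# A subclass is "usable" only if it contains both treated and control subjects.
effects, weights = [], []
for _, g in df.groupby("subclass"):
    if g["treated"].nunique() < 2:
        continue
    diff = g.loc[g["treated"] == 1, "y"].mean() - g.loc[g["treated"] == 0, "y"].mean()
    effects.append(diff)
    weights.append(len(g))                                 # weight by subclass size

print(f"stratified treatment-effect estimate: {np.average(effects, weights=weights):.3f}")
```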

5.
Observational studies are frequently conducted to compare the long-term effects of treatments. Without randomization, patients receiving one treatment are not guaranteed to be prognostically comparable to those receiving another treatment. Furthermore, the response of interest may be right-censored because of incomplete follow-up. Statistical methods that do not account for censoring and confounding may lead to biased estimates. This article presents a method for estimating treatment effects in nonrandomized studies with right-censored responses. We review the assumptions required to estimate average causal effects and derive an estimator for comparing two treatments by applying inverse weights to the complete cases. The weights are determined according to the estimated probability of receiving treatment conditional on covariates and the estimated treatment-specific censoring distribution. By utilizing martingale representations, the estimator is shown to be asymptotically normal, and an estimator for the asymptotic variance is derived. Simulation results are presented to evaluate the properties of the estimator. These methods are applied to an observational data set of acute coronary syndrome patients from Duke University Medical Center to estimate the effect of a treatment strategy on the mean 5-year medical cost.
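A rough sketch of the inverse-weighted complete-case idea: uncensored subjects are weighted by one over the product of an estimated propensity score and a treatment-specific Kaplan-Meier estimate of the censoring distribution. The simulated "cost" outcome and all names are hypothetical, and the martingale-based variance estimator is not shown.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from lifelines import KaplanMeierFitter

rng = np.random.default_rng(3)
n = 3000
x = rng.normal(size=n)
a = rng.binomial(1, 1 / (1 + np.exp(-x)))                    # treatment indicator
event_time = rng.exponential(scale=np.exp(0.3 * x - 0.2 * a), size=n)
censor_time = rng.exponential(scale=2.0, size=n)
t = np.minimum(event_time, censor_time)
delta = (event_time <= censor_time).astype(int)              # 1 = complete case
cost = 10 + 3 * x + 2 * a + rng.normal(size=n)               # hypothetical cost outcome
df = pd.DataFrame({"x": x, "a": a, "t": t, "delta": delta, "cost": cost})

# Treatment weights: estimated propensity of the received treatment.
ps = LogisticRegression(max_iter=1000).fit(df[["x"]], df["a"]).predict_proba(df[["x"]])[:, 1]
df["p_a"] = np.where(df["a"] == 1, ps, 1 - ps)

# Censoring weights: treatment-specific Kaplan-Meier of the *censoring* time
# (event indicator flipped), evaluated at each subject's observed time.
df["k_c"] = np.nan
for _, g in df.groupby("a"):
    kmf = KaplanMeierFitter().fit(g["t"], event_observed=1 - g["delta"])
    df.loc[g.index, "k_c"] = kmf.survival_function_at_times(g["t"]).to_numpy()

cc = df[df["delta"] == 1]                                    # complete cases only
w = 1.0 / (cc["p_a"] * cc["k_c"].clip(lower=0.05))           # truncate small weights
mu1 = np.average(cc.loc[cc["a"] == 1, "cost"], weights=w[cc["a"] == 1])
mu0 = np.average(cc.loc[cc["a"] == 0, "cost"], weights=w[cc["a"] == 0])
print(f"weighted treatment-effect estimate: {mu1 - mu0:.3f}")
```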

6.
This paper deals with a Cox proportional hazards regression model in which some covariates of interest are randomly right-censored. While methods for censored outcomes have become ubiquitous in the literature, methods for censored covariates have thus far received little attention and, for the most part, dealt with the issue of limit-of-detection. For randomly censored covariates, an often-used method is the inefficient complete-case analysis (CCA), which consists of deleting censored observations from the data analysis. When censoring is not completely independent, the CCA leads to biased and spurious results. Methods for missing covariate data, including type I and type II covariate censoring as well as limit-of-detection, do not readily apply because of the fundamentally different nature of randomly censored covariates. We develop a novel method for censored covariates using a conditional mean imputation based on either Kaplan–Meier estimates or a Cox proportional hazards model to estimate the effects of these covariates on a time-to-event outcome. We evaluate the performance of the proposed method through simulation studies and show that it provides good bias reduction and statistical efficiency. Finally, we illustrate the method using data from the Framingham Heart Study to assess the relationship between offspring and parental age of onset of cardiovascular events.
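A sketch of the Kaplan-Meier flavor of conditional mean imputation: censored covariate values are replaced by E[X | X > c] computed from the KM step function, and the imputed covariate then enters a Cox model. Variable names and data are hypothetical; the Cox-model-based imputation variant and the Framingham application are not reproduced.

```python
import numpy as np
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

rng = np.random.default_rng(4)
n = 2000
x_true = 40 + rng.weibull(1.5, size=n) * 30      # e.g., parental age of onset (hypothetical)
cens = rng.uniform(40, 110, size=n)              # random right-censoring of the covariate
x_obs = np.minimum(x_true, cens)
observed = (x_true <= cens).astype(int)

# Kaplan-Meier estimate of the covariate's survival function S_X.
kmf = KaplanMeierFitter().fit(x_obs, event_observed=observed)
grid = kmf.survival_function_.index.to_numpy()
surv = kmf.survival_function_.iloc[:, 0].to_numpy()

def cond_mean_beyond(c):
    """E[X | X > c] = c + (integral of S_X from c onward) / S_X(c), via the KM steps."""
    s_c = float(kmf.survival_function_at_times([c]).iloc[0])
    keep = grid >= c
    if s_c <= 0 or keep.sum() < 2:
        return c                                  # no estimable mass beyond c
    pts, vals = grid[keep], surv[keep]
    tail = s_c * (pts[0] - c) + np.sum(vals[:-1] * np.diff(pts))
    return c + tail / s_c

x_imp = np.array([xo if d == 1 else cond_mean_beyond(xo) for xo, d in zip(x_obs, observed)])

# Time-to-event outcome generated from the true covariate, analyzed with the imputed one.
event_time = rng.exponential(scale=np.exp(3.0 - 0.02 * x_true))
censor_time = rng.exponential(scale=8.0, size=n)
df = pd.DataFrame({"T": np.minimum(event_time, censor_time),
                   "E": (event_time <= censor_time).astype(int),
                   "x_imputed": x_imp})
CoxPHFitter().fit(df, duration_col="T", event_col="E").print_summary()
```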

7.
Cluster randomization trials with relatively few clusters have been widely used in recent years for the evaluation of health-care strategies. On average, randomized treatment assignment achieves balance in both known and unknown confounding factors between treatment groups; in practice, however, investigators can introduce only a small amount of stratification and cannot balance on all the important variables simultaneously. The limitation arises especially when there are many confounding variables in small studies. Such is the case in the INSTINCT trial, designed to investigate the effectiveness of an education program in enhancing tPA use in stroke patients. In this article, we introduce a new randomization design, the balance match weighted (BMW) design, which applies the technique of optimal matching with constraints to a prospective randomized design and aims to minimize the mean squared error (MSE) of the treatment effect estimator. A simulation study shows that, under various confounding scenarios, the BMW design can yield substantial reductions in the MSE for the treatment effect estimator compared to a completely randomized or matched-pair design. The BMW design is also compared with a model-based approach adjusting for the estimated propensity score and with the Robins-Mark-Newey E-estimation procedure in terms of efficiency and robustness of the treatment effect estimator. These investigations suggest that the BMW design is more robust and usually, although not always, more efficient than either of these approaches. The design is also seen to be robust against heterogeneous error. We illustrate these methods by proposing a design for the INSTINCT trial.
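A loose sketch of the matched-pair randomization idea behind designs of this kind: pair units by minimum covariate distance (here a simple bipartite assignment as a stand-in for optimal matching with constraints), then randomize treatment within pairs. All quantities are made up; this is not the BMW algorithm itself.

```python
import numpy as np
from scipy.spatial.distance import cdist
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(5)
n_pairs = 20                               # small trial: 2 * n_pairs units
x = rng.normal(size=(2 * n_pairs, 4))      # baseline covariates (hypothetical)

# Split units into two pools and find the minimum-distance pairing between them,
# a crude stand-in for optimal (non-bipartite) matching on all units.
pool_a, pool_b = x[:n_pairs], x[n_pairs:]
cost = cdist(pool_a, pool_b)               # Euclidean covariate distance
rows, cols = linear_sum_assignment(cost)   # optimal bipartite pairing

assignments = np.empty(2 * n_pairs, dtype=int)
for i, j in zip(rows, cols):
    coin = int(rng.integers(0, 2))         # randomize treatment within the pair
    assignments[i] = coin
    assignments[n_pairs + j] = 1 - coin

print("treated per arm:", assignments.sum(), (1 - assignments).sum())
```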

8.
A method for reducing bias in observational studies, proposed by Rosenbaum and Rubin (1983, 1984), is discussed with a view to applications in studies designed to compare two treatments. The data are stratified on a function of covariates, called the propensity score. The propensity score is the conditional probability of receiving a specific treatment given a set of observed covariates. Some insight into how this kind of stratification works in theory is given. Within strata, the treatment groups are comparable with respect to the distribution of covariates incorporated into the score; hence, a corresponding stratified analysis can be considered. The method differs from other strategies in that the subclasses are not intended to comprise patients with a similar prognosis. In practice, estimated grouped scores are used. Problems concerning the interpretation of the proposed stratified approach are illustrated by an application in oncology, and the results are compared to those from an analysis in a standard regression model.
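A sketch of propensity-score stratification as described above: form quintile strata from an estimated score, check within-stratum covariate balance with standardized mean differences, and pool the within-stratum contrasts. The data and column names are hypothetical.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(6)
n = 4000
x = pd.DataFrame(rng.normal(size=(n, 2)), columns=["age", "stage"])
treated = rng.binomial(1, 1 / (1 + np.exp(-(0.8 * x["age"] + 0.4 * x["stage"]))))
y = rng.binomial(1, 1 / (1 + np.exp(-(0.5 * x["age"] - 0.3 * treated))))   # binary outcome
df = x.assign(treated=treated, y=y)

df["ps"] = LogisticRegression(max_iter=1000).fit(x, treated).predict_proba(x)[:, 1]
df["stratum"] = pd.qcut(df["ps"], q=5, labels=False)      # quintile strata

def smd(g, col):
    """Standardized mean difference between treated and controls within a stratum."""
    t, c = g.loc[g["treated"] == 1, col], g.loc[g["treated"] == 0, col]
    return (t.mean() - c.mean()) / np.sqrt((t.var() + c.var()) / 2)

for s, g in df.groupby("stratum"):
    print(f"stratum {s}: SMD age={smd(g, 'age'):+.2f}, stage={smd(g, 'stage'):+.2f}")

# Stratified (size-weighted) risk-difference estimate across strata.
rd = [(g.loc[g["treated"] == 1, "y"].mean() - g.loc[g["treated"] == 0, "y"].mean(), len(g))
      for _, g in df.groupby("stratum")]
print(f"stratified risk difference: {np.average([d for d, _ in rd], weights=[w for _, w in rd]):.3f}")
```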

9.
Zhao and Tsiatis (1997) consider the problem of estimation of the distribution of the quality-adjusted lifetime when the chronological survival time is subject to right censoring. The quality-adjusted lifetime is typically defined as a weighted sum of the times spent in certain states up until death or some other failure time. They propose an estimator and establish the relevant asymptotics under the assumption of independent censoring. In this paper we extend the data structure with a covariate process observed until the end of follow-up and identify the optimal estimation problem. Because of the curse of dimensionality, no globally efficient nonparametric estimators, which have a good practical performance at moderate sample sizes, exist. Given a correctly specified model for the hazard of censoring conditional on the observed quality-of-life and covariate processes, we propose a closed-form one-step estimator of the distribution of the quality-adjusted lifetime whose asymptotic variance attains the efficiency bound if we can correctly specify a lower-dimensional working model for the conditional distribution of quality-adjusted lifetime given the observed quality-of-life and covariate processes. The estimator remains consistent and asymptotically normal even if this latter submodel is misspecified. The practical performance of the estimators is illustrated with a simulation study. We also extend our proposed one-step estimator to the case where treatment assignment is confounded by observed risk factors so that this estimator can be used to test a treatment effect in an observational study.
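A heavily simplified sketch of an inverse-probability-of-censoring-weighted estimate of the quality-adjusted-lifetime distribution (constant utility per subject, Kaplan-Meier censoring weights). The efficient one-step estimator and the covariate processes discussed in the paper are not reproduced; all data are simulated.

```python
import numpy as np
from lifelines import KaplanMeierFitter

rng = np.random.default_rng(7)
n = 2000
death = rng.exponential(scale=5.0, size=n)
censor = rng.exponential(scale=8.0, size=n)
t = np.minimum(death, censor)
delta = (death <= censor).astype(int)
utility = rng.uniform(0.5, 1.0, size=n)          # average quality-of-life weight (simplified)
qal = utility * t                                # QAL = utility-weighted survival time

# Censoring survival function K(t): Kaplan-Meier with (1 - delta) as the "event".
kmf = KaplanMeierFitter().fit(t, event_observed=1 - delta)
k_at_t = kmf.survival_function_at_times(t).to_numpy().clip(min=0.05)

def qal_survival(q):
    """IPCW estimate of P(QAL > q): only uncensored subjects contribute, reweighted by 1/K(T)."""
    return np.mean(delta * (qal > q) / k_at_t)

for q in [1, 2, 4]:
    print(f"P(QAL > {q}) ~ {qal_survival(q):.3f}")
```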

10.

Background

Long-acting beta-agonists are among the first-choice bronchodilator agents for stable chronic obstructive pulmonary disease, but their impact on mortality has not been well investigated.

Methods

Data were obtained from the National Emphysema Treatment Trial. Patients with severe and very severe stable chronic obstructive pulmonary disease who were eligible for volume reduction surgery were recruited at 17 clinical centers in the United States during 1988–2002. We used the 6–10 year follow-up data of patients randomized to non-surgical treatment. Hazard ratios for death associated with long-acting beta-agonist use were estimated with three models, using Cox proportional hazards analysis and propensity score matching.

Results

The pre-matching cohort comprised 591 patients (50.6% were administered long-acting beta-agonists; age 66.6 ± 5.3 years; 35.4% female; forced expiratory volume in one second 26.7 ± 7.1% of predicted; mortality during follow-up 70.2%). The hazard ratio from a multivariate Cox model in the pre-matching cohort was 0.77 (P = 0.010). Propensity score matching was conducted (C-statistic 0.62; no parameter differed between the groups). The propensity-matched cohort comprised 492 patients (50.0% were administered long-acting beta-agonists; age 66.8 ± 5.1 years; 34.8% female; forced expiratory volume in one second 26.5 ± 6.8% of predicted; mortality during follow-up 69.1%). The hazard ratio from a univariate Cox model in the propensity-matched cohort was 0.77 (P = 0.017), and from a multivariate Cox model it was 0.76 (P = 0.011).

Conclusions

Long-acting beta-agonists reduce the mortality of patients with severe and very severe chronic obstructive pulmonary disease.
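A sketch of the analysis pattern reported in this entry, run on simulated stand-in data only (no NETT data): estimate a propensity score for LABA use, form 1:1 caliper matches, and fit a Cox model for death in the matched cohort. Column names, effect sizes, and sample values are hypothetical.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from lifelines import CoxPHFitter

rng = np.random.default_rng(8)
n = 600
df = pd.DataFrame({
    "age": rng.normal(67, 5, n),
    "fev1_pct": rng.normal(27, 7, n),        # FEV1, % of predicted (hypothetical)
    "female": rng.binomial(1, 0.35, n),
})
lin = 0.02 * (df["age"] - 67) - 0.03 * (df["fev1_pct"] - 27)
df["laba"] = rng.binomial(1, 1 / (1 + np.exp(-lin)))

# Survival times with a protective LABA effect (hazard ratio ~0.77, for illustration).
lp = np.log(0.77) * df["laba"] + 0.02 * (df["age"] - 67)
death_time = rng.exponential(scale=6 * np.exp(-lp))
censor_time = rng.uniform(4, 10, n)
df["T"] = np.minimum(death_time, censor_time)
df["E"] = (death_time <= censor_time).astype(int)

# Propensity score for LABA use and greedy 1:1 matching within a caliper.
covs = ["age", "fev1_pct", "female"]
df["ps"] = LogisticRegression(max_iter=1000).fit(df[covs], df["laba"]).predict_proba(df[covs])[:, 1]
caliper = 0.2 * df["ps"].std()
controls, keep = df[df["laba"] == 0].copy(), []
for i, row in df[df["laba"] == 1].iterrows():
    dist = (controls["ps"] - row["ps"]).abs()
    if dist.empty or dist.min() > caliper:
        continue
    j = dist.idxmin()
    keep += [i, j]
    controls = controls.drop(index=j)

matched = df.loc[keep]
cph = CoxPHFitter().fit(matched[["T", "E", "laba"]], duration_col="T", event_col="E")
print(cph.hazard_ratios_)                    # univariate HR for LABA in the matched cohort
```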

11.
12.
13.
14.
15.
Kaitlyn Cook, Wenbin Lu, Rui Wang. Biometrics 2023, 79(3):1670-1685
The Botswana Combination Prevention Project was a cluster-randomized HIV prevention trial whose follow-up period coincided with Botswana's national adoption of a universal test and treat strategy for HIV management. Of interest is whether, and to what extent, this change in policy modified the preventative effects of the study intervention. To address such questions, we adopt a stratified proportional hazards model for clustered interval-censored data with time-dependent covariates and develop a composite expectation maximization algorithm that facilitates estimation of model parameters without placing parametric assumptions on either the baseline hazard functions or the within-cluster dependence structure. We show that the resulting estimators for the regression parameters are consistent and asymptotically normal. We also propose and provide theoretical justification for the use of the profile composite likelihood function to construct a robust sandwich estimator for the variance. We characterize the finite-sample performance and robustness of these estimators through extensive simulation studies. Finally, we conclude by applying this stratified proportional hazards model to a re-analysis of the Botswana Combination Prevention Project, with the national adoption of a universal test and treat strategy now modeled as a time-dependent covariate.
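The composite EM for clustered interval-censored data is well beyond a short sketch. As a simplified stand-in, the snippet below shows only how a national policy change, reaching subjects at different follow-up times, can be coded as a time-dependent covariate (with an arm-by-policy interaction) in a counting-process Cox fit using lifelines. All data and column names are hypothetical; interval censoring and cluster stratification are not handled.

```python
import numpy as np
import pandas as pd
from lifelines import CoxTimeVaryingFitter

rng = np.random.default_rng(9)
n = 500
policy_calendar = 2.0                        # calendar time of the policy change
entry = rng.uniform(0.0, 3.0, n)             # calendar time of enrollment
arm = rng.binomial(1, 0.5, n)                # randomized intervention arm
event_time = rng.exponential(scale=np.exp(0.4 * arm) * 4)
follow_up = np.minimum(event_time, 5.0)      # administrative censoring at 5 years
event = (event_time <= 5.0).astype(int)
switch = np.clip(policy_calendar - entry, 0.0, None)   # follow-up time when policy starts

rows = []
for i in range(n):
    if switch[i] <= 0:                       # enrolled after the policy change
        rows.append((i, 0.0, follow_up[i], arm[i], 1, arm[i], event[i]))
    elif follow_up[i] <= switch[i]:          # follow-up ends before the change
        rows.append((i, 0.0, follow_up[i], arm[i], 0, 0, event[i]))
    else:                                    # split the record at the change point
        rows.append((i, 0.0, switch[i], arm[i], 0, 0, 0))
        rows.append((i, switch[i], follow_up[i], arm[i], 1, arm[i], event[i]))

long_df = pd.DataFrame(rows, columns=["id", "start", "stop", "arm", "policy",
                                      "arm_x_policy", "event"])
ctv = CoxTimeVaryingFitter().fit(long_df, id_col="id", event_col="event",
                                 start_col="start", stop_col="stop")
ctv.print_summary()                          # interaction term: did the policy modify the effect?
```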

16.
17.
18.
Multivariable model building for propensity score approaches is challenging. A common approach is exposure-driven propensity score matching, for which the best model-selection strategy is still unclear. In particular, the situation may require variable selection, yet it remains unclear whether variables included in the propensity score should be associated with both the exposure and the outcome, with either the exposure or the outcome, with at least the exposure, or with at least the outcome. Unmeasured confounders, complex correlation structures, and non-normal covariate distributions further complicate matters. We consider the performance of different modeling strategies in a simulation design with a complex but realistic structure and effects on a binary outcome. We compare the strategies in terms of bias and variance in estimated marginal exposure effects. Considering the bias in estimated marginal exposure effects, the most reliable results for estimating the propensity score are obtained by selecting variables related to the exposure. On average, this results in the least bias and does not greatly increase variances. Although our results cannot be generalized, this provides a counterexample to existing recommendations in the literature that are based on simple simulation settings. This highlights that recommendations obtained in simple simulation settings cannot always be generalized to more complex but realistic settings, and that more complex simulation studies are needed.
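A much-simplified simulation sketch of the question studied above: build the propensity model from exposure-associated versus outcome-associated covariate sets and compare the resulting marginal exposure-effect estimates (here via inverse-probability weighting rather than matching, for brevity). It illustrates the mechanics only and does not reproduce the paper's design or conclusions.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(10)
n = 5000
x_conf = rng.normal(size=n)      # true confounder (affects exposure and outcome)
x_exp = rng.normal(size=n)       # instrument-like: affects exposure only
x_out = rng.normal(size=n)       # affects outcome only
expo = rng.binomial(1, 1 / (1 + np.exp(-(0.8 * x_conf + 0.8 * x_exp))))
y = 1.0 * x_conf + 0.8 * x_out + 0.5 * expo + rng.normal(size=n)   # true effect = 0.5
df = pd.DataFrame({"x_conf": x_conf, "x_exp": x_exp, "x_out": x_out, "expo": expo, "y": y})

def ipw_estimate(ps_covariates):
    """Marginal exposure effect with a propensity model built from the given covariate set."""
    ps = (LogisticRegression(max_iter=1000)
          .fit(df[ps_covariates], df["expo"]).predict_proba(df[ps_covariates])[:, 1])
    w = np.where(df["expo"] == 1, 1 / ps, 1 / (1 - ps))
    mu1 = np.average(df.loc[df["expo"] == 1, "y"], weights=w[df["expo"] == 1])
    mu0 = np.average(df.loc[df["expo"] == 0, "y"], weights=w[df["expo"] == 0])
    return mu1 - mu0

print("exposure-associated set :", round(ipw_estimate(["x_conf", "x_exp"]), 3))
print("outcome-associated set  :", round(ipw_estimate(["x_conf", "x_out"]), 3))
```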

19.
Chen J, Chatterjee N. Biometrics 2006, 62(1):28-35
Genetic epidemiologic studies often collect genotype data at multiple loci within a genomic region of interest from a sample of unrelated individuals. One popular method for analyzing such data is to assess whether haplotypes, i.e., the arrangements of alleles along individual chromosomes, are associated with the disease phenotype or not. For many study subjects, however, the exact haplotype configuration on the pair of homologous chromosomes cannot be derived with certainty from the available locus-specific genotype data (phase ambiguity). In this article, we consider estimating haplotype-specific association parameters in the Cox proportional hazards model, using genotype, environmental exposure, and the disease endpoint data collected from cohort or nested case-control studies. We study alternative Expectation-Maximization algorithms for estimating haplotype frequencies from cohort and nested case-control studies. Based on a hazard function of the disease derived from the observed genotype data, we then propose a semiparametric method for joint estimation of relative-risk parameters and the cumulative baseline hazard function. The method is greatly simplified under a rare disease assumption, for which an asymptotic variance estimator is also proposed. The performance of the proposed estimators is assessed via simulation studies. An application of the proposed method is presented, using data from the Alpha-Tocopherol, Beta-Carotene Cancer Prevention Study.
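A sketch of the phase-ambiguity step only: a textbook EM algorithm for two-locus haplotype frequencies from unphased genotypes under Hardy-Weinberg equilibrium. The joint Cox relative-risk estimation and the case-control sampling corrections in the paper are not shown; data are simulated.

```python
import numpy as np

rng = np.random.default_rng(11)
true_freq = {"00": 0.4, "01": 0.1, "10": 0.2, "11": 0.3}   # haplotypes over two biallelic loci
haps = list(true_freq)
n = 2000
draws = rng.choice(haps, size=(n, 2), p=list(true_freq.values()))
# Unphased genotype = per-locus allele counts summed over the two haplotypes.
geno = np.array([[int(a[0]) + int(b[0]), int(a[1]) + int(b[1])] for a, b in draws])

freq = {h: 0.25 for h in haps}                              # initial estimate
for _ in range(100):
    counts = {h: 0.0 for h in haps}
    for g1, g2 in geno:
        if (g1, g2) == (1, 1):                              # double heterozygote: phase ambiguous
            w_coupling = freq["11"] * freq["00"]            # phase 11 / 00
            w_repulsion = freq["10"] * freq["01"]           # phase 10 / 01
            denom = w_coupling + w_repulsion
            p_coupling = 0.5 if denom == 0 else w_coupling / denom
            for h, w in [("11", p_coupling), ("00", p_coupling),
                         ("10", 1 - p_coupling), ("01", 1 - p_coupling)]:
                counts[h] += w                              # E-step: expected haplotype counts
        else:                                               # phase is fully determined
            h1 = f"{1 if g1 >= 1 else 0}{1 if g2 >= 1 else 0}"
            h2 = f"{1 if g1 == 2 else 0}{1 if g2 == 2 else 0}"
            counts[h1] += 1
            counts[h2] += 1
    freq = {h: counts[h] / (2 * n) for h in haps}           # M-step: relative frequencies

print({h: round(f, 3) for h, f in freq.items()})            # compare with true_freq
```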

20.
Many flexible extensions of the Cox proportional hazards model incorporate time-dependent (TD) and/or nonlinear (NL) effects of time-invariant covariates. In contrast, little attention has been given to the assessment of such effects for continuous time-varying covariates (TVCs). We propose a flexible regression B-spline–based model for TD and NL effects of a TVC. To account for sparse TVC measurements, we added to this model the effect of time elapsed since last observation (TEL), which acts as an effect modifier. TD, NL, and TEL effects are estimated with the iterative alternative conditional estimation algorithm. Furthermore, a simulation extrapolation (SIMEX)-like procedure was adapted to correct the estimated effects for random measurement errors in the observed TVC values. In simulations, TD and NL estimates were unbiased if the TVC was measured with a high frequency. With sparse measurements, the strength of the effects was underestimated, but the TEL estimate helped reduce the bias, whereas SIMEX helped further to correct for bias toward the null due to “white noise” measurement errors. We reassessed the effects of systolic blood pressure (SBP) and total cholesterol, measured at two-year intervals, on cardiovascular risks in women participating in the Framingham Heart Study. Accounting for TD effects of SBP, cholesterol, and age, the NL effect of cholesterol, and the TEL effect of SBP substantially improved the model's fit to the data. Flexible estimates yielded clinically important insights regarding the role of these risk factors. These results illustrate the advantages of flexible modeling of TVC effects.
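A sketch of only the nonlinear-effect building block: expand a covariate in a B-spline basis (scikit-learn's SplineTransformer, which assumes scikit-learn >= 1.0) and let the basis columns enter a Cox model linearly, so their weighted sum is a flexible log-hazard curve. Time-dependent effects, sparse TVC measurements, the TEL term, and SIMEX are not shown; data and names are hypothetical.

```python
import numpy as np
import pandas as pd
from sklearn.preprocessing import SplineTransformer
from lifelines import CoxPHFitter

rng = np.random.default_rng(12)
n = 3000
sbp = rng.normal(135, 20, n)                         # e.g., systolic blood pressure (baseline)
lp = 0.0004 * (sbp - 130) ** 2                       # true U-shaped (nonlinear) log-hazard
event_time = rng.exponential(scale=10 * np.exp(-lp))
censor_time = rng.uniform(2, 15, n)
t = np.minimum(event_time, censor_time)
e = (event_time <= censor_time).astype(int)

# Cubic B-spline basis for SBP; a small ridge penalty stabilizes the fit.
spline = SplineTransformer(n_knots=5, degree=3, include_bias=False)
basis = spline.fit_transform(sbp.reshape(-1, 1))
cols = [f"sbp_bs{i}" for i in range(basis.shape[1])]
df = pd.DataFrame(basis, columns=cols).assign(T=t, E=e)

cph = CoxPHFitter(penalizer=0.01).fit(df, duration_col="T", event_col="E")

# Recover the fitted (centered) log-hazard curve over a grid of SBP values.
grid = np.linspace(90, 190, 50).reshape(-1, 1)
curve = spline.transform(grid) @ cph.params_[cols].to_numpy()
print(pd.Series(curve - curve.mean(), index=grid.ravel()).round(2).head())
```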
