Similar Articles (20 results)
1.
Sun L  Kim YJ  Sun J 《Biometrics》2004,60(3):637-643
Doubly censored failure time data arise when the survival time of interest is the elapsed time between two related events and observations on the occurrences of both events may be censored. Regression analysis of doubly censored data has recently attracted considerable attention, and several methods have been proposed (Kim et al., 1993, Biometrics 49, 13-22; Sun et al., 1999, Biometrics 55, 909-914; Pan, 2001, Biometrics 57, 1245-1250). However, all of these methods are based on the proportional hazards model, which does not always fit failure time data well. This article investigates regression analysis of such data using the additive hazards model, and an estimating equation approach is proposed for inference about the regression parameters of interest. The proposed method is easily implemented, and the properties of the proposed estimates of the regression parameters are established. The method is applied to a set of doubly censored data from an AIDS cohort study.
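The contrast between the two model forms can be sketched numerically. This is a minimal illustration, not the paper's estimating-equation method: the constant baseline hazard and the coefficient value below are arbitrary choices of mine.

```python
import math

def h0(t):
    """Illustrative baseline hazard; a constant 0.2 is an arbitrary choice."""
    return 0.2

def additive_hazard(t, x, beta):
    """Additive hazards model: the covariate shifts the baseline hazard,
    h(t | x) = h0(t) + beta * x."""
    return h0(t) + beta * x

def proportional_hazard(t, x, beta):
    """Proportional hazards model: the covariate scales the baseline hazard,
    h(t | x) = h0(t) * exp(beta * x)."""
    return h0(t) * math.exp(beta * x)

# Under the additive model the hazard *difference* between x = 1 and x = 0 is
# constant in t; under the proportional model the hazard *ratio* is constant.
diff = additive_hazard(5.0, 1.0, 0.1) - additive_hazard(5.0, 0.0, 0.1)
ratio = proportional_hazard(5.0, 1.0, 0.1) / proportional_hazard(5.0, 0.0, 0.1)
```

The regression parameter thus answers a different question in each model: an absolute excess hazard in the additive model, a relative hazard in the proportional one.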

2.
Sinha D  Chen MH  Ghosh SK 《Biometrics》1999,55(2):585-590
Interval-censored data occur in survival analysis when the survival time of each patient is known only to lie within an interval, and these censoring intervals differ from patient to patient. For such data, we present Bayesian discretized semiparametric models incorporating proportional and nonproportional hazards structures, along with associated statistical analyses and tools for model selection using sampling-based methods. The scope of these methodologies is illustrated through a reanalysis of a breast cancer data set (Finkelstein, 1986, Biometrics 42, 845-854) to test whether the effect of a covariate on survival changes over time.

3.
Gustafson P 《Biometrics》2007,63(1):69-77
Yin and Ibrahim (2005a, Biometrics 61, 208-216) use a Box-Cox transformed hazard model to acknowledge uncertainty about how a linear predictor acts upon the hazard function of a failure-time response. In particular, the additive and proportional hazards models arise at particular values of the transformation parameter. As is often the case, however, this added model flexibility comes at the cost of reduced parameter interpretability: the interpretation of the coefficients in the linear predictor is intertwined with the value of the transformation parameter, and some data sets contain very little information about this parameter. To shed light on the situation, we consider average effects, obtained by averaging (over the joint distribution of the explanatory variables and the failure-time response) the partial derivatives of the hazard, or the log-hazard, with respect to the explanatory variables. First, we consider fitting models that do assume a particular form of covariate effect, for example, proportional hazards or additive hazards. In some such circumstances, average effects are seen to be inferential targets that are robust to misspecification of the effect form. Second, we consider average effects as targets of inference when using the transformed hazard model. We show that, in addition to being more interpretable inferential targets, average effects can sometimes be estimated more efficiently than the corresponding regression coefficients.

4.
Right-truncated data arise when observations are ascertained retrospectively, and only subjects who experience the event of interest by the time of sampling are selected. Such a selection scheme, without adjustment, leads to biased estimation of covariate effects in the Cox proportional hazards model. The existing methods for fitting the Cox model to right-truncated data, which are based on the maximization of the likelihood or solving estimating equations with respect to both the baseline hazard function and the covariate effects, are numerically challenging. We consider two alternative simple methods based on inverse probability weighting (IPW) estimating equations, which allow consistent estimation of covariate effects under a positivity assumption and avoid estimation of baseline hazards. We discuss problems of identifiability and consistency that arise when positivity does not hold and show that although the partial tests for null effects based on these IPW methods can be used in some settings even in the absence of positivity, they are not valid in general. We propose adjusted estimating equations that incorporate the probability of observation when it is known from external sources, which results in consistent estimation. We compare the methods in simulations and apply them to the analyses of human immunodeficiency virus latency.

5.
Odds ratios approximate risk ratios when the outcome under consideration is rare but can diverge substantially from risk ratios when the outcome is common. In this paper, we derive optimal analytic conversions of odds ratios and hazard ratios to risk ratios that are minimax for the bias ratio when outcome probabilities are specified to fall in any fixed interval. The results for hazard ratios are derived under a proportional hazard assumption for the exposure. For outcome probabilities specified to lie in symmetric intervals centered around 0.5, it is shown that the square-root transformation of the odds ratio is the optimal minimax conversion for the risk ratio. General results for any nonsymmetric interval are given both for odds ratio and for hazard ratio conversions. The results are principally useful when odds ratios or hazard ratios are reported in papers, and the reader does not have access to the data or to information about the overall outcome prevalence.
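The quantities involved are easy to check numerically. A minimal sketch (the function names are mine, and for simplicity only the unexposed outcome probability p0 is varied over a symmetric interval): the exact risk ratio implied by an odds ratio at unexposed risk p0 is RR = OR / (1 - p0 + p0·OR), and the square-root conversion lands between the extreme true risk ratios in this example.

```python
import math

def rr_from_or(odds_ratio, p0):
    """Exact risk ratio implied by an odds ratio when the unexposed
    outcome probability is p0: RR = OR / (1 - p0 + p0 * OR)."""
    return odds_ratio / (1.0 - p0 + p0 * odds_ratio)

def sqrt_conversion(odds_ratio):
    """Square-root conversion of the odds ratio, the minimax choice when
    outcome probabilities lie in a symmetric interval centered at 0.5."""
    return math.sqrt(odds_ratio)

or_ = 4.0
approx = sqrt_conversion(or_)  # 2.0
# True risk ratios as the unexposed risk ranges over the symmetric
# interval [0.3, 0.7]:
true_rrs = [rr_from_or(or_, p0) for p0 in (0.3, 0.4, 0.5, 0.6, 0.7)]
```

Note how far the odds ratio itself (4.0) sits from every true risk ratio here, while the square-root conversion stays within their range.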

6.
G Heller  J S Simonoff 《Biometrics》1992,48(1):101-115
Although the analysis of censored survival data using the proportional hazards and linear regression models is common, there has been little work examining the ability of these estimators to predict time to failure. This is unfortunate, since a predictive plot illustrating the relationship between time to failure and a continuous covariate can be far more informative regarding the risk associated with the covariate than a Kaplan-Meier plot obtained by discretizing the variable. In this paper the predictive power of the Cox (1972, Journal of the Royal Statistical Society, Series B 34, 187-202) proportional hazards estimator and the Buckley-James (1979, Biometrika 66, 429-436) censored regression estimator are compared. Using computer simulations and heuristic arguments, it is shown that the choice of method depends on the censoring proportion, strength of the regression, the form of the censoring distribution, and the form of the failure distribution. Several examples are provided to illustrate the usefulness of the methods.
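The Kaplan-Meier estimator referred to above takes only a few lines to compute. A minimal sketch on toy data (ties are handled with the usual convention that events at a time point occur before censorings at the same time):

```python
def kaplan_meier(times, events):
    """Product-limit estimate of the survival function S(t).
    `events[i]` is 1 for an observed failure, 0 for right censoring.
    Returns a list of (event_time, S(event_time)) pairs."""
    data = sorted(zip(times, events))
    at_risk = len(data)
    surv = 1.0
    curve = []
    i = 0
    while i < len(data):
        t = data[i][0]
        d = n = 0
        # Count failures (d) and all removals (n) tied at time t.
        while i < len(data) and data[i][0] == t:
            d += data[i][1]
            n += 1
            i += 1
        if d:
            surv *= 1.0 - d / at_risk
            curve.append((t, surv))
        at_risk -= n
    return curve

km = kaplan_meier([1, 2, 2, 3, 4], [1, 1, 0, 1, 0])
# Steps at the event times 1, 2, 3; the censored times only shrink the risk set.
```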

7.
Xu R  Harrington DP 《Biometrics》2001,57(3):875-885
A semiparametric estimate of an average regression effect with right-censored failure time data has recently been proposed under the Cox-type model where the regression effect beta(t) is allowed to vary with time. In this article, we derive a simple algebraic relationship between this average regression effect and a measurement of group differences in k-sample transformation models when the random error belongs to the G(rho) family of Harrington and Fleming (1982, Biometrika 69, 553-566), the latter being equivalent to the conditional regression effect in a gamma frailty model. The models considered here are suitable for the attenuating hazard ratios that often arise in practice. The results reveal an interesting connection among the above three classes of models as alternatives to the proportional hazards assumption and add to our understanding of the behavior of the partial likelihood estimate under nonproportional hazards. The algebraic relationship provides a simple estimator under the transformation model. We develop a variance estimator based on the empirical influence function that is much easier to compute than the previously suggested resampling methods. When there is truncation in the right tail of the failure times, we propose a method of bias correction to improve the coverage properties of the confidence intervals. The estimate, its estimated variance, and the bias correction term can all be calculated with minor modifications to standard software for proportional hazards regression.

8.
This paper extends the work of Kodlin (1967), who proposed a method for analyzing patient survival data in which the hazard rate is linearly related to the survival time. The present paper extends Kodlin's model to permit maximum likelihood estimation of the parameters, so that covariate effects are included and the slope and intercept parameters are allowed to change over fixed intervals of the time domain of study. An illustration of the method using multiple myeloma data is given, and the results are compared with those of Kodlin's model, the Feigl-Zelen and Zippin-Armitage models, the exponential model, and Cox's proportional hazards model.

9.
This paper develops methodology for estimation of the effect of a binary time-varying covariate on failure times when the change time of the covariate is interval censored. The motivating example is a study of cytomegalovirus (CMV) disease in patients with human immunodeficiency virus (HIV) disease. We are interested in determining whether CMV shedding predicts an increased hazard for developing active CMV disease. Since a clinical screening test is needed to detect CMV shedding, the time that shedding begins is only known to lie in an interval bounded by the patient's last negative and first positive tests. In a Cox proportional hazards model with a time-varying covariate for CMV shedding, the partial likelihood depends on the covariate status of every individual in the risk set at each failure time. Due to interval censoring, this is not always known. To solve this problem, we use a Monte Carlo EM algorithm with a Gibbs sampler embedded in the E-step. We generate multiple completed data sets by drawing imputed exact shedding times based on the joint likelihood of the shedding times and event times under the Cox model. The method is evaluated using a simulation study and is applied to the data set described above.

10.
Goetghebeur E  Ryan L 《Biometrics》2000,56(4):1139-1144
We propose a semiparametric approach to the proportional hazards regression analysis of interval-censored data. An EM algorithm based on an approximate likelihood leads to an M-step that involves maximizing a standard Cox partial likelihood to estimate regression coefficients and then using the Breslow estimator for the unknown baseline hazards. The E-step takes a particularly simple form because all incomplete data appear as linear terms in the complete-data log likelihood. The algorithm of Turnbull (1976, Journal of the Royal Statistical Society, Series B 38, 290-295) is used to determine times at which the hazard can take positive mass. We found multiple imputation to yield an easily computed variance estimate that appears to be more reliable than asymptotic methods with small to moderately sized data sets. In the right-censored survival setting, the approach reduces to the standard Cox proportional hazards analysis, while the algorithm reduces to the one suggested by Clayton and Cuzick (1985, Applied Statistics 34, 148-156). The method is illustrated on data from the breast cancer cosmetics trial, previously analyzed by Finkelstein (1986, Biometrics 42, 845-854) and several subsequent authors.

11.
The standard estimator for the cause-specific cumulative incidence function in a competing risks setting with left truncated and/or right censored data can be written in two alternative forms. One is a weighted empirical cumulative distribution function and the other a product-limit estimator. This equivalence suggests an alternative view of the analysis of time-to-event data with left truncation and right censoring: individuals who are still at risk or experienced an earlier competing event receive weights from the censoring and truncation mechanisms. As a consequence, inference on the cumulative scale can be performed using weighted versions of standard procedures. This holds for estimation of the cause-specific cumulative incidence function as well as for estimation of the regression parameters in the Fine and Gray proportional subdistribution hazards model. We show that, with the appropriate filtration, a martingale property holds that allows deriving asymptotic results for the proportional subdistribution hazards model in the same way as for the standard Cox proportional hazards model. Estimation of the cause-specific cumulative incidence function and regression on the subdistribution hazard can be performed using standard software for survival analysis if the software allows for inclusion of time-dependent weights. We show the implementation in the R statistical package. The proportional subdistribution hazards model is used to investigate the effect of calendar period, as a deterministic external time-varying covariate that can be seen as a special case of left truncation, on AIDS-related and non-AIDS-related cumulative mortality.
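The weighted-ECDF form is easy to verify in a toy computation. A minimal sketch of the standard cause-specific cumulative incidence estimator (cause 0 denotes censoring; with no censoring, the estimate reduces to the empirical proportion failing from each cause, i.e., an ECDF):

```python
def cumulative_incidence(times, causes):
    """Cause-specific cumulative incidence with right censoring.
    `causes[i]` is 0 for censoring, otherwise a failure-cause label.
    Returns {cause: [(t, CIF(t)), ...]} using CIF_k(t) =
    sum over event times s <= t of S(s-) * d_k(s) / n(s)."""
    data = sorted(zip(times, causes))
    at_risk = len(data)
    surv = 1.0           # overall survival just before the current time
    cif, curves = {}, {}
    i = 0
    while i < len(data):
        t = data[i][0]
        counts, m = {}, 0
        while i < len(data) and data[i][0] == t:
            c = data[i][1]
            counts[c] = counts.get(c, 0) + 1
            m += 1
            i += 1
        d_total = sum(d for c, d in counts.items() if c != 0)
        for c, d in counts.items():
            if c == 0:
                continue
            cif[c] = cif.get(c, 0.0) + surv * d / at_risk
            curves.setdefault(c, []).append((t, cif[c]))
        if d_total:
            surv *= 1.0 - d_total / at_risk
        at_risk -= m
    return curves

out = cumulative_incidence([1, 2, 3, 4], [1, 2, 1, 1])
# No censoring here, so the final CIFs equal the empirical cause proportions.
```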

12.
For randomized clinical trials where the endpoint of interest is a time-to-event subject to censoring, estimating the treatment effect has mostly focused on the hazard ratio from the Cox proportional hazards model. Since the model’s proportional hazards assumption is not always satisfied, a useful alternative, the so-called additive hazards model, may instead be used to estimate a treatment effect on the difference of hazard functions. Still, the hazards difference may be difficult to grasp intuitively, particularly in a clinical setting of, e.g., patient counseling, or resource planning. In this paper, we study the quantiles of a covariate’s conditional survival function in the additive hazards model. Specifically, we estimate the residual time quantiles, i.e., the quantiles of survival times remaining at a given time t, conditional on the survival times greater than t, for a specific covariate in the additive hazards model. We use the estimates to translate the hazards difference into the difference in residual time quantiles, which allows a more direct clinical interpretation. We determine the asymptotic properties, assess the performance via Monte-Carlo simulations, and demonstrate the use of residual time quantiles in two real randomized clinical trials.
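The residual time quantile itself is simple to compute once a conditional survival function is in hand. A minimal numeric sketch, not the paper's additive-hazards estimator: under an exponential survival function the median residual life is the same at every t (memorylessness), which provides a convenient sanity check for a grid-search implementation.

```python
import math

def residual_time_quantile(surv, t, q, u_max=20.0, step=1e-4):
    """q-th quantile of the remaining life at time t: the smallest u with
    S(t + u) / S(t) <= 1 - q, located by grid search with spacing `step`."""
    st = surv(t)
    u = step
    while u <= u_max:
        if surv(t + u) / st <= 1.0 - q:
            return u
        u += step
    return None  # quantile beyond the search horizon

lam = 0.5
S = lambda t: math.exp(-lam * t)   # exponential survival, rate 0.5
med0 = residual_time_quantile(S, 0.0, 0.5)  # median residual life at t = 0
med3 = residual_time_quantile(S, 3.0, 0.5)  # median residual life at t = 3
# Both should be close to log(2) / lam, by memorylessness.
```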

13.
Li Z 《Biometrics》1999,55(1):277-283
A method of interim monitoring is described for survival trials in which the proportional hazards assumption may not hold. This method extends the test statistics based on the cumulative weighted difference in the Kaplan-Meier estimates (Pepe and Fleming, 1989, Biometrics 45, 497-507) to the sequential setting. Therefore, it provides a useful alternative to the group sequential linear rank tests. With an appropriate weight function, the test statistic itself provides an estimator for the cumulative weighted difference in survival probabilities, which is an interpretable measure for the treatment difference, especially when the proportional hazards model fails. The method is illustrated based on the design of a real trial. The operating characteristics are studied through a small simulation.
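With unit weights, the cumulative difference statistic is just the integrated area between two Kaplan-Meier curves up to a horizon tau. A minimal sketch with toy step curves (the curve representation and the Riemann approximation are my choices):

```python
def step_value(curve, t):
    """Value at t of a right-continuous step function that starts at 1.0;
    `curve` is a sorted list of (jump_time, new_value) pairs."""
    v = 1.0
    for ti, vi in curve:
        if ti <= t:
            v = vi
        else:
            break
    return v

def km_difference(curve1, curve2, tau, step=1e-3):
    """Riemann approximation to integral_0^tau (S1(t) - S2(t)) dt,
    the unit-weight version of the cumulative weighted difference."""
    n = int(round(tau / step))
    total = 0.0
    for i in range(n):
        t = i * step
        total += (step_value(curve1, t) - step_value(curve2, t)) * step
    return total

# Group 1 drops to 0.5 at t = 1, group 2 at t = 2: the curves differ by
# -0.5 on [1, 2) and agree elsewhere, so the integral over [0, 3] is -0.5.
val = km_difference([(1.0, 0.5)], [(2.0, 0.5)], 3.0)
```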

14.
Tian L  Lagakos S 《Biometrics》2006,62(3):821-828
We develop methods for assessing the association between a binary time-dependent covariate process and a failure time endpoint when the former is observed only at a single time point and the latter is right censored, and when the observations are subject to truncation and competing causes of failure. Using a proportional hazards model for the effect of the covariate process on the failure time of interest, we develop an approach utilizing EM algorithm and profile likelihood for estimating the relative risk parameter and cause-specific hazards for failure. The methods are extended to account for other covariates that can influence the time-dependent covariate process and cause-specific risks of failure. We illustrate the methods with data from a recent study on the association between loss of hepatitis B e antigen and the development of hepatocellular carcinoma in a population of chronic carriers of hepatitis B.

15.
In many longitudinal studies, it is of interest to characterize the relationship between a time-to-event (e.g. survival) and several time-dependent and time-independent covariates. Time-dependent covariates are generally observed intermittently and with error. For a single time-dependent covariate, a popular approach is to assume a joint longitudinal data-survival model, where the time-dependent covariate follows a linear mixed effects model and the hazard of failure depends on random effects and time-independent covariates via a proportional hazards relationship. Regression calibration and likelihood or Bayesian methods have been advocated for implementation; however, generalization to more than one time-dependent covariate may become prohibitive. For a single time-dependent covariate, Tsiatis and Davidian (2001) have proposed an approach that is easily implemented and does not require an assumption on the distribution of the random effects. This technique may be generalized to multiple, possibly correlated, time-dependent covariates, as we demonstrate. We illustrate the approach via simulation and by application to data from an HIV clinical trial.

16.
Estimation in a Cox proportional hazards cure model
Sy JP  Taylor JM 《Biometrics》2000,56(1):227-236
Some failure time data come from a population that consists of some subjects who are susceptible to and others who are nonsusceptible to the event of interest. The data typically have heavy censoring at the end of the follow-up period, and a standard survival analysis would not always be appropriate. In such situations where there is good scientific or empirical evidence of a nonsusceptible population, the mixture or cure model can be used (Farewell, 1982, Biometrics 38, 1041-1046). It assumes a binary distribution to model the incidence probability and a parametric failure time distribution to model the latency. Kuk and Chen (1992, Biometrika 79, 531-541) extended the model by using Cox's proportional hazards regression for the latency. We develop maximum likelihood techniques for the joint estimation of the incidence and latency regression parameters in this model using the nonparametric form of the likelihood and an EM algorithm. A zero-tail constraint is used to reduce the near nonidentifiability of the problem. The inverse of the observed information matrix is used to compute the standard errors. A simulation study shows that the methods are competitive to the parametric methods under ideal conditions and are generally better when censoring from loss to follow-up is heavy. The methods are applied to a data set of tonsil cancer patients treated with radiation therapy.
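The mixture structure can be illustrated directly: the population survival curve plateaus at the cure fraction, which is what produces the heavy censoring at the end of follow-up. A minimal parametric sketch (the exponential latency is my simplification; the latency model in the paper is a semiparametric Cox model):

```python
import math

def population_survival(t, cure_prob, lam):
    """Mixture cure model: S_pop(t) = pi + (1 - pi) * S_u(t), where pi is
    the cure (nonsusceptible) probability and the susceptible latency is
    taken here, for illustration, as exponential: S_u(t) = exp(-lam * t)."""
    return cure_prob + (1.0 - cure_prob) * math.exp(-lam * t)

s0 = population_survival(0.0, 0.3, 1.0)    # everyone alive at t = 0
s_late = population_survival(50.0, 0.3, 1.0)  # plateau near the cure fraction
```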

17.
Li E  Wang N  Wang NY 《Biometrics》2007,63(4):1068-1078
Joint models are formulated to investigate the association between a primary endpoint and features of multiple longitudinal processes. In particular, the subject-specific random effects in a multivariate linear random-effects model for multiple longitudinal processes are predictors in a generalized linear model for primary endpoints. Li, Zhang, and Davidian (2004, Biometrics 60, 1-7) proposed an estimation procedure that makes no distributional assumption on the random effects but assumes independent within-subject measurement errors in the longitudinal covariate process. Based on an asymptotic bias analysis, we found that their estimators can be biased when random effects do not fully explain the within-subject correlations among longitudinal covariate measurements. Specifically, the existing procedure is fairly sensitive to the independent measurement error assumption. To overcome this limitation, we propose new estimation procedures that require neither a distributional nor a covariance structural assumption on covariate random effects, nor an independence assumption on within-subject measurement errors. These new procedures are more flexible, readily cover scenarios with multivariate longitudinal covariate processes, and can be implemented using available software. Through simulations and an analysis of data from a hypertension study, we evaluate and illustrate the numerical performance of the new estimators.

18.
Model misspecification in proportional hazards regression
The proportional hazards model is frequently used to evaluate the effect of treatment on failure time events in randomised clinical trials. Concomitant variables are usually available and may be considered for use in the primary analyses under the assumption that incorporating them may reduce bias or improve efficiency. In this paper we consider two approaches to including covariate information: regression modelling and stratification. We focus on the setting where covariate effects are nonproportional, and we compare the bias, efficiency and coverage properties of these approaches. These results indicate that our intuition based on linear model analysis of covariance is misleading. Covariate adjustment in proportional hazards models has little effect on the variance but may significantly improve the accuracy of the treatment effect estimator.

19.
O'Brien's logit-rank procedure (1978, Biometrics 34, 243-250) is shown to arise as a score test based on the partial likelihood for a proportional hazards model provided the covariate structure is suitably defined. Within this framework the asymptotic properties claimed by O'Brien can be readily deduced and can be seen to be valid under a more general model of censoring than that considered in his paper. More important, perhaps, it is now possible to make a more natural and interpretable generalization to the multiple regression problem than that suggested by O'Brien as a means of accounting for the effects of nuisance covariates. This can be achieved either by modelling or stratification. The proportional hazards framework is also helpful in that it enables us to recognize the logit-rank procedure as being one member of a class of contending procedures. One consequence of this is that the relative efficiencies of any two procedures can be readily evaluated using the results of Lagakos (1988, Biometrika 75, 156-160). Our own evaluations suggest that, for non-time-dependent covariates, a simplification of the logit-rank procedure, leading to considerable reduction in computational complexity, is to be preferred to the procedure originally outlined by O'Brien.

20.
Xuan Mao C  You N 《Biometrics》2009,65(2):547-553
A mixture model is a natural choice to deal with individual heterogeneity in capture-recapture studies. Pledger (2000, Biometrics 56, 434-442; 2005, Biometrics 61, 868-876) advertised the use of the two-point mixture model. Dorazio and Royle (2003, Biometrics 59, 351-364; 2005, Biometrics 61, 874-876) suggested that the beta-binomial model has advantages. The controversy is related to the nonidentifiability of the population size (Link, 2003, Biometrics 59, 1123-1130) and certain boundary problems. The total bias is decomposed into an intrinsic bias, an approximation bias, and an estimation bias. We propose to assess the approximation bias, the estimation bias, and the variance, with the intrinsic bias excluded, when comparing different estimators. The boundary problems in both models and their impacts are investigated. Real epidemiological and ecological examples are analyzed.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (京ICP备09084417号)