Similar Articles
20 similar articles found.
1.
D P Byar, N Mantel. Biometrics 1975, 31(4):943-947.
Interrelationships among three response-time models which incorporate covariate information are explored. The most general of these models is the logistic-exponential, in which the log odds of the probability of responding in a fixed interval is assumed to be a linear function of the covariates; this model includes a parameter W for the width of the discrete time intervals in which responses occur. As W tends to 0, this model is equivalent to a continuous-time exponential model in which the log hazard is linear in the covariates. As W tends to infinity, it is equivalent to a continuous-time exponential model in which the hazard itself is a linear function of the covariates. This second model was fitted to the data used in an earlier publication describing the logistic-exponential model, and very close agreement of the estimates of the regression coefficients is demonstrated.
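A minimal sketch of this discrete-time formulation (illustrative, not the authors' code): expand each subject into person-interval records of width W and fit a logistic regression, so the log odds of responding within an interval is linear in the covariates; as W shrinks, the fitted slope approaches the continuous-time log-hazard coefficient. All data and names below are simulated/hypothetical.

```python
# Sketch only: discrete-time logistic model on person-interval data of width W.
# As W -> 0 this approaches the continuous-time model with log hazard linear in x.
import numpy as np
import pandas as pd
import statsmodels.api as sm

def person_period(df, W, t_max):
    """One record per interval of width W while the subject remains at risk."""
    rows = []
    for _, r in df.iterrows():
        k = 0
        while k * W < min(r["time"], t_max):
            end = (k + 1) * W
            # response is 1 only in the interval where the event occurs
            rows.append({"x": r["x"], "y": int(r["event"] and r["time"] <= end)})
            if r["time"] <= end:
                break
            k += 1
    return pd.DataFrame(rows)

rng = np.random.default_rng(0)
x = rng.normal(size=300)
time = rng.exponential(1 / np.exp(-1 + 0.5 * x))          # log hazard linear in x
df = pd.DataFrame({"x": x, "time": time, "event": 1})

pp = person_period(df, W=0.1, t_max=5.0)
fit = sm.Logit(pp["y"], sm.add_constant(pp[["x"]])).fit(disp=0)
print(fit.params)  # slope on x should sit near the true log-hazard coefficient 0.5
```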

2.
Ross EA, Moore D. Biometrics 1999, 55(3):813-819.
We have developed methods for modeling discrete or grouped time, right-censored survival data collected from correlated groups or clusters. We assume that the marginal hazard of failure for individual items within a cluster is specified by a linear log odds survival model and the dependence structure is based on a gamma frailty model. The dependence can be modeled as a function of cluster-level covariates. Likelihood equations for estimating the model parameters are provided. Generalized estimating equations for the marginal hazard regression parameters and pseudolikelihood methods for estimating the dependence parameters are also described. Data from two clinical trials are used for illustration purposes.

3.
Huang J, Harrington D. Biometrics 2002, 58(4):781-791.
The Cox proportional hazards model is often used for estimating the association between covariates and a potentially censored failure time, and the corresponding partial likelihood estimators are used for the estimation and prediction of relative risk of failure. However, partial likelihood estimators are unstable and have large variance when collinearity exists among the explanatory variables or when the number of failures is not much greater than the number of covariates of interest. A penalized (log) partial likelihood is proposed to give more accurate relative risk estimators. We show that asymptotically there always exists a penalty parameter for the penalized partial likelihood that reduces the mean squared estimation error for log relative risk, and we propose a resampling method to choose the penalty parameter. Simulations and an example show that the bootstrap-selected penalized partial likelihood estimators can, in some instances, have smaller bias than the partial likelihood estimators and have smaller mean squared estimation and prediction errors of log relative risk. These methods are illustrated with a multiple myeloma data set from the Eastern Cooperative Oncology Group.
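The ridge-penalized fit itself is available off the shelf. The sketch below (an illustration, not the authors' code) uses the `penalizer` argument of lifelines' CoxPHFitter and selects the penalty by cross-validated partial likelihood, a simple stand-in for the paper's bootstrap-based choice; `load_rossi` is a stand-in data set, not the myeloma trial.

```python
# Hedged sketch: ridge-penalized Cox with the penalty weight chosen by
# cross-validated out-of-fold log partial likelihood (higher is better).
import numpy as np
from lifelines import CoxPHFitter
from lifelines.datasets import load_rossi
from lifelines.utils import k_fold_cross_validation

df = load_rossi()  # stand-in data set
scores = {}
for pen in [0.0, 0.01, 0.1, 1.0]:
    cph = CoxPHFitter(penalizer=pen)
    scores[pen] = np.mean(k_fold_cross_validation(
        cph, df, duration_col="week", event_col="arrest", k=5))
best = max(scores, key=scores.get)
CoxPHFitter(penalizer=best).fit(df, "week", "arrest").print_summary()
```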

4.
Du P, Jiang Y, Wang Y. Biometrics 2011, 67(4):1330-1339.
Gap-time hazard estimation is of particular interest in recurrent event data. This article proposes a fully nonparametric approach for estimating the gap-time hazard. Smoothing spline analysis of variance (ANOVA) decompositions are used to model the log gap-time hazard as a joint function of gap time and covariates, and a general frailty is introduced to account for between-subject heterogeneity and within-subject correlation. We estimate the nonparametric gap-time hazard function and the parameters of the frailty distribution using a combination of the Newton-Raphson procedure, the stochastic approximation algorithm (SAA), and the Markov chain Monte Carlo (MCMC) method. The convergence of the algorithm is guaranteed by decreasing the step size of the parameter update and/or increasing the MCMC sample size along iterations. A model selection procedure is also developed to identify negligible components in a functional ANOVA decomposition of the log gap-time hazard. We evaluate the proposed methods with simulation studies and illustrate their use through the analysis of bladder tumor data.
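For orientation, gap-time analysis starts by transforming calendar event times into gaps between successive events. A small pandas sketch (column names hypothetical):

```python
# Minimal sketch of the gap-time representation used in recurrent-event
# analysis: convert calendar event times per subject into gaps between
# successive events.
import pandas as pd

events = pd.DataFrame({
    "id":   [1, 1, 1, 2, 2],
    "time": [3.0, 7.5, 12.0, 5.0, 6.2],   # calendar times of recurrences
})
events = events.sort_values(["id", "time"])
events["gap"] = events.groupby("id")["time"].diff().fillna(events["time"])
print(events)  # "gap" is the time since the previous event (or since entry)
```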

5.
Liang Li, Bo Hu, Tom Greene. Biometrics 2009, 65(3):737-745.
In many longitudinal clinical studies, the level and progression rate of repeatedly measured biomarkers on each subject quantify the severity of the disease and that subject's susceptibility to progression of the disease. It is of scientific and clinical interest to relate such quantities to a later time-to-event clinical endpoint such as patient survival. This is usually done with a shared parameter model. In such models, the longitudinal biomarker data and the survival outcome of each subject are assumed to be conditionally independent given subject-level severity or susceptibility (also called frailty in statistical terms). In this article, we study the case where the conditional distribution of the longitudinal data is modeled by a linear mixed-effects model, and the conditional distribution of the survival data is given by a Cox proportional hazards model. We allow unknown regression coefficients and time-dependent covariates in both models. The proposed estimators are maximizers of an exact correction to the joint log likelihood with the frailties eliminated as nuisance parameters, an idea that originated from the correction of covariate measurement error in measurement error models. The corrected joint log likelihood is shown to be asymptotically concave and leads to consistent and asymptotically normal estimators. Unlike most published methods for joint modeling, the proposed estimation procedure does not rely on distributional assumptions of the frailties. The proposed method was studied in simulations and applied to a data set from the Hemodialysis Study.
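For contrast with the paper's corrected-likelihood estimator, the naive two-stage plug-in it improves upon can be sketched as follows (hypothetical column names; plugging estimated frailties into the Cox model is known to be biased, which is exactly what the exact correction addresses):

```python
# Sketch of the naive two-stage approach, NOT the paper's method:
# stage 1 fits a linear mixed model to the biomarker; stage 2 plugs the
# estimated subject-level intercepts/slopes (frailties) into a Cox model.
import pandas as pd
import statsmodels.formula.api as smf
from lifelines import CoxPHFitter

def two_stage(long_df, surv_df):
    # long_df: columns id, t, y (repeated biomarker); surv_df: id, T, E
    lmm = smf.mixedlm("y ~ t", long_df, groups=long_df["id"],
                      re_formula="~t").fit()
    re = pd.DataFrame(lmm.random_effects).T    # per-subject BLUPs
    re.columns = ["b_intercept", "b_slope"]
    surv = surv_df.join(re, on="id")
    cph = CoxPHFitter()
    cph.fit(surv[["T", "E", "b_intercept", "b_slope"]],
            duration_col="T", event_col="E")
    return cph
```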

6.
We propose new resampling-based approaches to construct asymptotically valid time-simultaneous confidence bands for cumulative hazard functions in multistate Cox models. In particular, we exemplify the methodology in detail for the simple Cox model with time-dependent covariates, where the data may be subject to independent right-censoring or left-truncation. We use simulations to investigate their finite-sample behavior. Finally, the methods are utilized to analyze two empirical examples with survival and competing risks data.
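A rough illustration of the idea (an ordinary subject-level bootstrap as a simpler stand-in for the paper's multiplier resampling; simulated data): compute the sup-distance between resampled and original Nelson-Aalen estimates on a grid and invert its quantile into a simultaneous band.

```python
# Sketch: sup-statistic bootstrap band for the Nelson-Aalen cumulative hazard.
import numpy as np
from lifelines import NelsonAalenFitter

def na_on_grid(T, E, grid):
    naf = NelsonAalenFitter().fit(T, event_observed=E)
    H = naf.cumulative_hazard_.iloc[:, 0]
    # step-function evaluation of the estimate on the grid
    idx = np.searchsorted(H.index.values, grid, side="right") - 1
    return np.where(idx >= 0, H.values[np.maximum(idx, 0)], 0.0)

rng = np.random.default_rng(1)
T = rng.exponential(1.0, 200)
E = rng.uniform(size=200) < 0.8                 # ~20% random censoring
grid = np.linspace(0.1, 2.0, 50)
H_hat = na_on_grid(T, E, grid)

sups = []
for _ in range(500):
    i = rng.integers(0, len(T), len(T))         # resample subjects
    sups.append(np.max(np.abs(na_on_grid(T[i], E[i], grid) - H_hat)))
c = np.quantile(sups, 0.95)
band = (H_hat - c, H_hat + c)                   # simultaneous 95% band
```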

7.
This paper deals with hazard regression models for survival data with time-dependent covariates consisting of updated quantitative measurements. The main emphasis is on the Cox proportional hazards model, but additive hazards models are also discussed. Attenuation of regression coefficients caused by infrequent updating of covariates is evaluated using simulated data mimicking our main example, the CSL1 liver cirrhosis trial. We conclude that the degree of attenuation depends on the type of stochastic process describing the time-dependent covariate and that attenuation may be substantial for an Ornstein-Uhlenbeck process. Trends in the covariate combined with non-synchronous updating may also cause attenuation. Simple methods to adjust for infrequent updating of covariates are proposed and compared to existing techniques using both simulations and the CSL1 data. The comparison shows that while existing, more complicated methods may work well with frequent updating of covariates, the simpler techniques may have advantages in larger data sets with infrequent updating.

8.

Background

Randomized controlled trials almost invariably use the hazard ratio (HR) calculated with a Cox proportional hazards model as a treatment efficacy measure. Despite the widespread adoption of HRs, these provide a limited understanding of the treatment effect and may even provide a biased estimate when the assumption of proportional hazards in the Cox model is not verified by the trial data. Additional treatment effect measures on the survival probability or the time scale may be used to supplement HRs, but a framework for the simultaneous generation of these measures is lacking.

Methods

By splitting follow-up time at the nodes of a Gauss-Lobatto numerical quadrature rule, techniques for Poisson generalized additive models (PGAMs) can be adopted for flexible hazard modeling (a minimal sketch of this follow-up-splitting idea follows this abstract). Straightforward post-estimation simulation transforms PGAM estimates of the log hazard into estimates of the survival function. These in turn are used to calculate relative and absolute risks, or even differences in restricted mean survival time, between treatment arms. We illustrate our approach with extensive simulations and in two trials: IPASS (in which the proportionality of hazards was violated) and HEMO, a long-duration study conducted under evolving standards of care on a heterogeneous patient population.

Findings

PGAMs can generate estimates of the survival function and the hazard ratio that are essentially identical to those obtained by Kaplan-Meier curve analysis and the Cox model. PGAMs can simultaneously provide multiple measures of treatment efficacy after a single data pass. Furthermore, they supported not only unadjusted (overall treatment effect) analyses but also subgroup and adjusted analyses, while incorporating multiple time scales and accounting for non-proportional hazards in survival data.

Conclusions

By augmenting the HR conventionally reported, PGAMs have the potential to support the inferential goals of multiple stakeholders involved in the evaluation and appraisal of clinical trial results under proportional and non-proportional hazards.
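As promised after the Methods section, here is a minimal sketch of the follow-up-splitting idea, with equal-width intervals and a B-spline of time standing in for the paper's Gauss-Lobatto nodes and PGAM machinery (data simulated; names hypothetical):

```python
# Sketch of the Poisson trick behind PGAMs: split each subject's follow-up
# into intervals, then fit a Poisson GLM for the per-interval event indicator
# with log(exposure) as offset; a spline in time gives a flexible log hazard.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

def split_followup(df, cuts):
    rows = []
    for _, r in df.iterrows():
        for lo, hi in zip(cuts[:-1], cuts[1:]):
            if r["T"] <= lo:
                break
            stop = min(r["T"], hi)
            rows.append({"tmid": (lo + stop) / 2, "treat": r["treat"],
                         "exposure": stop - lo,
                         "d": int(r["E"] and r["T"] <= hi)})
    return pd.DataFrame(rows)

rng = np.random.default_rng(2)
n = 400
treat = rng.integers(0, 2, n)
T = rng.weibull(1.5, n) / np.exp(0.4 * treat)     # non-constant baseline hazard
df = pd.DataFrame({"T": np.minimum(T, 2.0), "E": (T <= 2.0).astype(int),
                   "treat": treat})

long = split_followup(df, cuts=np.linspace(0, 2.0, 21))
fit = smf.glm("d ~ bs(tmid, df=4) + treat", data=long,
              family=sm.families.Poisson(),
              offset=np.log(long["exposure"])).fit()
print(np.exp(fit.params["treat"]))                # fitted treatment hazard ratio
```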

9.
For multicenter randomized trials or multilevel observational studies, the Cox regression model has long been the primary approach to study the effects of covariates on time-to-event outcomes. A critical assumption of the Cox model is the proportionality of the hazard functions for modeled covariates, violations of which can result in ambiguous interpretations of the hazard ratio estimates. To address this issue, the restricted mean survival time (RMST), defined as the mean survival time up to a fixed time in a target population, has been recommended as a model-free target parameter. In this article, we generalize the RMST regression model to clustered data by directly modeling the RMST as a continuous function of restriction times with covariates while properly accounting for within-cluster correlations to achieve valid inference. The proposed method estimates regression coefficients via weighted generalized estimating equations, coupled with a cluster-robust sandwich variance estimator to achieve asymptotically valid inference with a sufficient number of clusters. In small-sample scenarios where a limited number of clusters are available, however, the proposed sandwich variance estimator can exhibit negative bias in capturing the variability of regression coefficient estimates. To overcome this limitation, we further propose and examine bias-corrected sandwich variance estimators to reduce the negative bias of the cluster-robust sandwich variance estimator. We study the finite-sample operating characteristics of proposed methods through simulations and reanalyze two multicenter randomized trials.
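The model-free estimand itself is easy to compute from Kaplan-Meier fits; a sketch with lifelines (`load_waltons` is a stand-in data set, not the trials reanalyzed in the paper):

```python
# Sketch: model-free RMST contrast between two arms via Kaplan-Meier,
# the quantity the paper's regression model extends to clustered data.
from lifelines import KaplanMeierFitter
from lifelines.datasets import load_waltons
from lifelines.utils import restricted_mean_survival_time

df = load_waltons()   # columns: T, E, group
tau = 50.0            # restriction time
rmst = {}
for g, sub in df.groupby("group"):
    km = KaplanMeierFitter().fit(sub["T"], sub["E"])
    rmst[g] = restricted_mean_survival_time(km, t=tau)
print(rmst)           # RMST difference = area between KM curves up to tau
```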

10.
Hazard rate models with covariates.
Many problems, particularly in medical research, concern the relationship between certain covariates and the time to occurrence of an event. The hazard or failure rate function provides a conceptually simple representation of time to occurrence data that readily adapts to include such generalizations as competing risks and covariates that vary with time. Two partially parametric models for the hazard function are considered. These are the proportional hazards model of Cox (1972) and the class of log-linear or accelerated failure time models. A synthesis of the literature on estimation from these models under prospective sampling indicates that, although important advances have occurred during the past decade, further effort is warranted on such topics as distribution theory, tests of fit, robustness, and the full utilization of a methodology that permits non-standard features. It is further argued that a good deal of fruitful research could be done on applying the same models under a variety of other sampling schemes. A discussion of estimation from case-control studies illustrates this point.
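A side-by-side sketch of the two model classes the review considers, using lifelines (`load_rossi` is a stand-in prospective data set):

```python
# Sketch contrasting the two partially parametric model classes discussed:
# Cox proportional hazards versus a parametric accelerated failure time fit.
from lifelines import CoxPHFitter, WeibullAFTFitter
from lifelines.datasets import load_rossi

df = load_rossi()
CoxPHFitter().fit(df, "week", "arrest").print_summary()       # effects on log hazard
WeibullAFTFitter().fit(df, "week", "arrest").print_summary()  # effects on log time
```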

11.
In epidemiologic studies, measurement error in dietary variables often attenuates the association between dietary intake and disease occurrence. To adjust for the attenuation caused by error in dietary intake, regression calibration is commonly used. To apply regression calibration, unbiased reference measurements are required. Short-term reference measurements for foods that are not consumed daily contain excess zeroes that pose challenges in the calibration model. We adapted a two-part regression calibration model, initially developed for multiple replicates of reference measurements per individual, to a single-replicate setting. We showed how to handle excess zero reference measurements with a two-step modeling approach, how to explore heteroscedasticity in the consumed amount with a variance-mean graph, how to explore nonlinearity with generalized additive modeling (GAM) and the empirical logit approaches, and how to select covariates in the calibration model. The performance of the two-part calibration model was compared with its one-part counterpart. We used vegetable intake and mortality data from the European Prospective Investigation into Cancer and Nutrition (EPIC) study. In EPIC, reference measurements were taken with 24-hour recalls. For each of the three vegetable subgroups assessed separately, correcting for error with an appropriately specified two-part calibration model resulted in about a threefold increase in the strength of association with all-cause mortality, as measured by the log hazard ratio. We further found that the standard way of including covariates in the calibration model can lead to overfitting the two-part calibration model. Moreover, the extent of adjustment for error is influenced by the number and forms of covariates in the calibration model. For episodically consumed foods, we advise researchers to pay special attention to the response distribution, nonlinearity, and covariate inclusion in specifying the calibration model.
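A minimal sketch of the two-part idea (hypothetical variable names, not the EPIC data): part 1 models the probability of any consumption on a short-term reference day, part 2 the log amount among consumers; the calibrated intake is the product of the two predictions.

```python
# Sketch only: two-part calibration for an episodically consumed food.
import numpy as np
import statsmodels.formula.api as smf

def two_part_calibration(df):
    # df columns (hypothetical): r24 = 24-hour recall (often zero),
    # ffq = error-prone questionnaire intake, age = example covariate
    df = df.assign(consumed=(df["r24"] > 0).astype(int))
    part1 = smf.logit("consumed ~ ffq + age", df).fit(disp=0)  # any consumption?
    part2 = smf.ols("np.log(r24) ~ ffq + age", df[df["r24"] > 0]).fit()
    p = part1.predict(df)
    # lognormal back-transformation: add half the residual variance
    amount = np.exp(part2.predict(df) + part2.mse_resid / 2)
    return p * amount   # calibrated usual intake, to be used in the Cox model
```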

12.
The focus of many medical applications is to model the impact of several factors on time to an event. A standard approach for such analyses is the Cox proportional hazards model. It assumes that the factors act linearly on the log hazard function (linearity assumption) and that their effects are constant over time (proportional hazards (PH) assumption). Variable selection is often required to specify a more parsimonious model, aiming to include only variables with an influence on the outcome. As follow-up increases, the effect of a variable often gets weaker, which means that it varies in time. However, spurious time-varying effects may also be introduced by mismodelling other parts of the multivariable model, such as omission of an important covariate or an incorrect functional form of a continuous covariate. These issues interact. To check whether the effect of a variable varies in time, several tests for non-PH have been proposed. However, they are not sufficient to derive a model, as appropriate modelling of the shape of time-varying effects is required. In three examples we will compare five recently published strategies to assess whether and how the effects of covariates from a multivariable model vary in time. For practical use we will give some recommendations.
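One of the standard non-PH checks that such strategies build on is the Schoenfeld-residual test, available directly in lifelines (a sketch; `load_rossi` is a stand-in data set):

```python
# Sketch: Schoenfeld-residual-based check for time-varying effects.
from lifelines import CoxPHFitter
from lifelines.datasets import load_rossi
from lifelines.statistics import proportional_hazard_test

df = load_rossi()
cph = CoxPHFitter().fit(df, "week", "arrest")
# tests H0 "effect constant in time" per covariate, on a rank time scale
results = proportional_hazard_test(cph, df, time_transform="rank")
results.print_summary()
```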

13.
In longitudinal studies where time to a final event is the ultimate outcome, information is often available about intermediate events that individuals may experience during the observation period. Even though many extensions of the Cox proportional hazards model have been proposed to model such multivariate time-to-event data, these approaches are still very rarely applied to real datasets. The aim of this paper is to illustrate the application of extended Cox models for multiple time-to-event data and to show their implementation in popular statistical software packages. We demonstrate a systematic way of jointly modelling similar or repeated transitions in follow-up data by analysing an event-history dataset consisting of 270 breast cancer patients who were followed up for different clinical events during treatment of metastatic disease. First, we show how this methodology can also be applied to non-Markovian stochastic processes by representing these processes as "conditional" Markov processes. Secondly, we compare the application of different Cox-related approaches to the breast cancer data by varying their key model components (i.e., analysis time scale, risk set, and baseline hazard function). Our study showed that extended Cox models are a powerful tool for analysing complex event-history datasets, since the approach can address many dynamic data features such as multiple time scales, dynamic risk sets, time-varying covariates, transition-by-covariate interactions, autoregressive dependence, or intra-subject correlation.
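A sketch of the counting-process (start, stop] layout these extended Cox models rely on, fitted with lifelines on simulated recurrent-event data with the number of prior events as a time-varying covariate (an Andersen-Gill-style analysis; all names are illustrative):

```python
# Sketch: (start, stop] intervals per subject; the simulated event rate
# increases with the number of prior events, which the fit should recover.
import numpy as np
import pandas as pd
from lifelines import CoxTimeVaryingFitter

rng = np.random.default_rng(6)
rows = []
for i in range(100):
    t, n = 0.0, 0
    while t < 10.0:
        gap = rng.exponential(2.0 / (1 + 0.3 * n))   # hazard grows with n
        stop = min(t + gap, 10.0)
        rows.append({"id": i, "start": t, "stop": stop,
                     "n_prior_events": n, "event": int(t + gap <= 10.0)})
        t, n = stop, n + 1
long = pd.DataFrame(rows)

ctv = CoxTimeVaryingFitter()
ctv.fit(long, id_col="id", event_col="event", start_col="start", stop_col="stop")
ctv.print_summary()   # coefficient on n_prior_events should be positive
```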

14.
The Cox proportional hazards model has become the standard in biomedical studies, particularly for settings in which the estimation of covariate effects (as opposed to prediction) is the primary objective. In spite of the obvious flexibility of this approach and its wide applicability, the model is usually chosen not for its fit to the data, but by convention and for reasons of convenience. It is quite possible that the covariates add to, rather than multiply, the baseline hazard, making an additive hazards model a more suitable choice. Typically, proportionality is assumed, with the potential for additive covariate effects not evaluated or even seriously considered. Contributing to this phenomenon is the fact that many popular software packages (e.g., SAS, S-PLUS/R) have standard procedures to fit the Cox model (e.g., proc phreg, coxph), but as yet no analogous procedures to fit its additive analog, the Lin and Ying (1994) semiparametric additive hazards model. In this article, we establish the connections between the Lin and Ying (1994) model and both Cox and least squares regression. We demonstrate how SAS's phreg and reg procedures may be used to fit the additive hazards model after some straightforward data manipulations. We then apply the additive hazards model to examine the relationship between Model for End-stage Liver Disease (MELD) score and mortality among patients wait-listed for liver transplantation.
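Beyond SAS, the Lin and Ying estimator has a closed form that is easy to code directly. A numpy sketch under the simplest assumptions (time-fixed covariates, right censoring; an illustration, not the article's SAS procedure):

```python
# Sketch: closed-form Lin-Ying (1994) estimator for the additive hazards
# model lambda(t|Z) = lambda0(t) + beta'Z, solving A beta = b from the
# estimating equation with risk-set-centered covariates.
import numpy as np

def lin_ying(time, event, Z, tau=None):
    time, event = np.asarray(time, float), np.asarray(event, bool)
    Z = np.asarray(Z, float)
    if Z.ndim == 1:
        Z = Z[:, None]
    tau = time.max() if tau is None else tau
    grid = np.unique(np.concatenate(([0.0], time[time <= tau], [tau])))
    p = Z.shape[1]
    A, b = np.zeros((p, p)), np.zeros(p)
    for left, right in zip(grid[:-1], grid[1:]):
        risk = time >= right           # risk set is constant on (left, right]
        if not risk.any():
            break
        zbar = Z[risk].mean(axis=0)
        Zc = Z[risk] - zbar
        A += (right - left) * Zc.T @ Zc
        d = event & (time == right)    # events at the right endpoint
        b += (Z[d] - zbar).sum(axis=0)
    return np.linalg.solve(A, b)

rng = np.random.default_rng(3)
x = rng.integers(0, 2, 500).astype(float)
t = rng.exponential(1 / (0.5 + 0.8 * x))      # additive hazard 0.5 + 0.8*x
c = rng.exponential(2.0, 500)                 # independent censoring
print(lin_ying(np.minimum(t, c), t <= c, x))  # estimate should be near 0.8
```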

15.
16.
The semiparametric Cox proportional hazards model is routinely adopted to model time-to-event data. Proportionality is a strong assumption, especially when follow-up time, or study duration, is long. Zeng and Lin (J. R. Stat. Soc., Ser. B, 69:1–30, 2007) proposed a useful generalisation through a family of transformation models which allow hazard ratios to vary over time. In this paper we explore a variety of tests for the need for transformation, arguing that the Cox model is so ubiquitous that it should be considered as the default model, to be discarded only if there is good evidence against the model assumptions. Since fitting an alternative transformation model is more complicated than fitting the Cox model, especially as procedures are not yet incorporated in standard software, we focus mainly on tests which require a Cox fit only. A score test is derived, and we also consider performance of omnibus goodness-of-fit tests based on Schoenfeld residuals. These tests can be extended to compare different transformation models. In addition we explore the consequences of fitting a misspecified Cox model to data generated under a true transformation model. Data on survival of 1043 leukaemia patients are used for illustration.

17.
This article investigates an augmented inverse selection probability weighted estimator for Cox regression parameter estimation when covariates are incomplete. The estimator extends the Horvitz and Thompson (1952, Journal of the American Statistical Association 47, 663-685) weighted estimator. It is doubly robust because it is consistent as long as either the selection probability model or the joint distribution of covariates is correctly specified. The augmentation term of the estimating equation depends on the baseline cumulative hazard and on a conditional distribution that can be implemented using an EM-type algorithm. The method is compared with some previously proposed estimators via simulation studies and applied to a real example.
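A sketch of the non-augmented building block (IPW only; the paper's augmentation term, which confers double robustness, is omitted here; column names hypothetical):

```python
# Sketch: inverse selection probability weighting for Cox regression with
# an incomplete covariate; only the IPW part, without augmentation.
import statsmodels.formula.api as smf
from lifelines import CoxPHFitter

def ipw_cox(df):
    # df columns (hypothetical): T, E, x1 (always observed), x2 (may be NaN)
    df = df.copy()
    df["obs"] = df["x2"].notna().astype(int)
    # model the complete-case (selection) probability from observed data
    sel = smf.logit("obs ~ x1 + E", df).fit(disp=0)
    df["w"] = 1.0 / sel.predict(df)
    cc = df[df["obs"] == 1]                      # complete cases, reweighted
    cph = CoxPHFitter()
    cph.fit(cc[["T", "E", "x1", "x2", "w"]], duration_col="T",
            event_col="E", weights_col="w", robust=True)
    return cph
```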

18.
In randomized clinical trials where the times to event of two treatment groups are compared under a proportional hazards assumption, it has been established that omitting prognostic factors from the model entails an underestimation of the hazard ratio. Heterogeneity due to unobserved covariates in cancer patient populations is a concern, since genomic investigations have revealed molecular and clinical heterogeneity in these populations. In HIV prevention trials, heterogeneity is unavoidable and has been shown to decrease the treatment effect over time. This article assesses the influence of trial duration on the bias of the estimated hazard ratio resulting from omitting covariates from the Cox analysis. The true model is defined by including an unobserved random frailty term in the individual hazard that reflects the omitted covariate. Three frailty distributions are investigated: gamma, log-normal, and binary, and the asymptotic bias of the hazard ratio estimator is calculated. We show that the attenuation of the treatment effect resulting from unobserved heterogeneity increases strongly with trial duration, especially for continuous frailties, which are likely to reflect omitted covariates as they are often encountered in practice. The possibility of interpreting the long-term decrease in treatment effects as a bias induced by heterogeneity and trial duration is illustrated by a trial in oncology in which adjuvant chemotherapy in stage 1B NSCLC was investigated.
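The mechanism is easy to reproduce in a small simulation: generate a gamma frailty, omit it from the Cox fit, and watch the estimated hazard ratio drift toward 1 as the censoring horizon grows (illustrative numbers only, not the paper's asymptotic calculations):

```python
# Simulation sketch: attenuation of the marginal hazard ratio under an
# omitted gamma frailty, growing with trial duration.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(4)
n = 4000
arm = rng.integers(0, 2, n)
frailty = rng.gamma(shape=2.0, scale=0.5, size=n)     # mean 1; omitted below
t = rng.exponential(1 / (frailty * np.exp(np.log(0.7) * arm)))  # true HR 0.7

for horizon in (0.5, 2.0, 8.0):                       # administrative censoring
    df = pd.DataFrame({"T": np.minimum(t, horizon),
                       "E": (t <= horizon).astype(int), "arm": arm})
    cph = CoxPHFitter().fit(df, "T", "E")
    print(horizon, float(np.exp(cph.params_["arm"])))  # drifts from 0.7 toward 1
```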

19.
Parzen M, Lipsitz SR. Biometrics 1999, 55(2):580-584.
In this paper, a global goodness-of-fit test statistic for a Cox regression model, which has an approximate chi-squared distribution when the model has been correctly specified, is proposed. Our goodness-of-fit statistic is global and has power to detect whether interactions or higher-order powers of covariates in the model are needed. The proposed statistic is similar to the Hosmer and Lemeshow (1980, Communications in Statistics A10, 1043-1069) goodness-of-fit statistic for binary data as well as Schoenfeld's (1980, Biometrika 67, 145-153) statistic for the Cox model. The methods are illustrated using data from a Mayo Clinic trial in primary biliary cirrhosis of the liver (Fleming and Harrington, 1991, Counting Processes and Survival Analysis), in which the outcome is the time until liver transplantation or death. There are 17 possible covariates. Two Cox proportional hazards models are fit to the data, and the proposed goodness-of-fit statistic is applied to the fitted models.
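A sketch in the spirit of the proposal (not the authors' exact statistic): group subjects by deciles of the Cox risk score and compare observed event counts with model-based expected counts, i.e., sums of estimated cumulative hazards (`load_rossi` stands in for the PBC data):

```python
# Sketch: Hosmer-Lemeshow-style grouping of observed vs expected events
# for a fitted Cox model.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.datasets import load_rossi

df = load_rossi()
cph = CoxPHFitter().fit(df, "week", "arrest")

H0 = cph.baseline_cumulative_hazard_.iloc[:, 0]            # Breslow estimate
H0_at_T = np.interp(df["week"], H0.index.values, H0.values)
expected_i = H0_at_T * np.exp(cph.predict_log_partial_hazard(df))

g = pd.qcut(cph.predict_log_partial_hazard(df), 10, labels=False)
O = df["arrest"].groupby(g).sum()                          # observed events
Ex = pd.Series(expected_i).groupby(g).sum()                # expected events
chi2 = float(((O - Ex) ** 2 / Ex).sum())                   # informal chi-square
print(chi2)
```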

20.
Fully Bayesian methods for Cox models specify a model for the baseline hazard function. Parametric approaches generally provide monotone estimates. Semi-parametric choices allow for more flexible patterns, but they can suffer from overfitting and instability. Regularization methods through prior distributions with correlated structures usually give reasonable answers to these types of situations. We discuss Bayesian regularization for Cox survival models defined via flexible baseline hazards specified by a mixture of piecewise constant functions and by a cubic B-spline function. For those "semi-parametric" proposals, different prior scenarios ranging from prior independence to particular correlated structures are discussed in a real study with microvirulence data and in an extensive simulation scenario that includes different sample sizes and time-axis partition sizes in order to capture risk variations. The posterior distribution of the parameters was approximated using Markov chain Monte Carlo methods. Model selection was performed in accordance with the deviance information criterion and the log pseudo-marginal likelihood. The results obtained reveal that, in general, Cox models present great robustness in covariate effects and survival estimates independent of the baseline hazard specification. In relation to the "semi-parametric" baseline hazard specification, the B-spline hazard function is less dependent on the regularization process than the piecewise specification, because it demands a smaller time-axis partition to estimate a similar behavior of the risk.
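The regularization idea has a simple frequentist analogue: the posterior mode under a Gaussian random-walk prior on adjacent log-hazard pieces is a penalized piecewise-exponential fit. A scipy sketch of that MAP analogue (the paper itself uses full MCMC; names and numbers are illustrative):

```python
# Sketch: MAP estimate of a piecewise-constant hazard with a random-walk
# penalty on adjacent log-hazard pieces (the correlated-prior idea).
import numpy as np
from scipy.optimize import minimize

def map_piecewise_hazard(time, event, cuts, sigma=1.0):
    # d[k]: events in piece k; R[k]: total time at risk in piece k
    d = np.histogram(time[event == 1], bins=cuts)[0]
    R = np.array([np.maximum(np.minimum(time, hi) - lo, 0).sum()
                  for lo, hi in zip(cuts[:-1], cuts[1:])])

    def neg_log_post(theta):                 # theta = log hazard per piece
        loglik = np.sum(d * theta - R * np.exp(theta))
        penalty = np.sum(np.diff(theta) ** 2) / (2 * sigma ** 2)
        return -loglik + penalty

    res = minimize(neg_log_post, x0=np.zeros(len(d)))
    return np.exp(res.x)                     # regularized hazard per piece

rng = np.random.default_rng(5)
t = rng.weibull(1.5, 300)                    # increasing true hazard
print(map_piecewise_hazard(t, np.ones(300, int), cuts=np.linspace(0, 2, 11)))
```

Smaller `sigma` corresponds to a tighter prior, pulling adjacent pieces together, which mirrors the stabilizing effect the paper reports for the piecewise specification.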
