Similar Documents
20 similar documents found (search time: 0 ms)

2.
This paper deals with hazard regression models for survival data with time-dependent covariates consisting of updated quantitative measurements. The main emphasis is on the Cox proportional hazards model, but additive hazards models are also discussed. Attenuation of regression coefficients caused by infrequent updating of covariates is evaluated using simulated data mimicking our main example, the CSL1 liver cirrhosis trial. We conclude that the degree of attenuation depends on the type of stochastic process describing the time-dependent covariate and that attenuation may be substantial for an Ornstein-Uhlenbeck process. Trends in the covariate combined with non-synchronous updating may also cause attenuation. Simple methods to adjust for infrequent updating of covariates are proposed and compared to existing techniques using both simulations and the CSL1 data. The comparison shows that while existing, more complicated methods may work well with frequent updating of covariates, the simpler techniques may have advantages in larger data sets with infrequent updating.
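A minimal sketch of how such updated covariates enter a fit in practice: measurements are carried forward between visits in (start, stop] counting-process format and passed to lifelines' CoxTimeVaryingFitter. The Ornstein-Uhlenbeck simulation, visit schedule, and column names are illustrative assumptions, not the CSL1 data or the paper's adjustment methods.

```python
# Sketch: Cox fit with an infrequently updated time-dependent covariate.
import numpy as np
import pandas as pd
from lifelines import CoxTimeVaryingFitter

rng = np.random.default_rng(1)

def ou_path(n_steps, dt=0.1, theta=1.0, sigma=1.0):
    """Simulate an Ornstein-Uhlenbeck covariate path on a fine grid."""
    x = np.zeros(n_steps)
    for i in range(1, n_steps):
        x[i] = x[i - 1] - theta * x[i - 1] * dt + sigma * np.sqrt(dt) * rng.normal()
    return x

rows = []
for subj in range(200):
    x = ou_path(100)
    event_time = rng.exponential(scale=np.exp(-0.5 * x[0]) * 5)  # toy hazard
    visits = np.arange(0.0, 10.0, 2.5)       # infrequent updating at visits
    for start, stop in zip(visits, np.append(visits[1:], 10.0)):
        if start >= event_time:
            break
        rows.append({"id": subj, "start": start,
                     "stop": min(stop, event_time),
                     "x": x[int(start / 0.1)],                # carried forward
                     "event": int(start < event_time <= stop)})

long_df = pd.DataFrame(rows)
ctv = CoxTimeVaryingFitter()
ctv.fit(long_df, id_col="id", start_col="start", stop_col="stop",
        event_col="event")
ctv.print_summary()
```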

3.
    
This paper focuses on the methodology developed for analyzing a multivariate interval-censored data set from an AIDS observational study. A purpose of the study was to determine the natural history of the opportunistic infection cytomegalovirus (CMV) in HIV-infected individuals. For this observational study, laboratory tests were performed at scheduled clinic visits to test for the presence of the CMV virus in the blood and in the urine (called CMV shedding in the blood and urine). The study investigators were interested in determining whether the stage of HIV disease at study entry was predictive of an increased risk of CMV shedding in either the blood or the urine. If all patients had made each clinic visit, the data would be multivariate grouped failure time data and published methods could be used. However, many patients missed several visits, and when they returned, their lab tests indicated a change in their blood and/or urine CMV shedding status, resulting in interval-censored failure time data. This paper outlines a method for applying the proportional hazards model to the analysis of multivariate interval-censored failure time data from a study of CMV in HIV-infected patients.
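lifelines has no semiparametric PH routine for multivariate interval-censored data, so the hedged sketch below fits a parametric Weibull AFT model to single-outcome, visit-based interval-censored data instead; all columns and data are hypothetical.

```python
# Sketch: visit-based interval censoring with a parametric stand-in model.
import numpy as np
import pandas as pd
from lifelines import WeibullAFTFitter

rng = np.random.default_rng(7)
n = 200
x = rng.integers(0, 2, n)                        # e.g. late-stage HIV at entry
t = 5 * rng.weibull(1.5, n) * np.exp(-0.5 * x)   # unobserved shedding onset
visits = np.arange(0.0, 12.0, 2.0)               # scheduled clinic visits

# Onset is only known to lie between the last negative and first positive
# visit; onsets after the last visit are right-censored (upper bound inf).
left = np.array([visits[visits < ti].max(initial=0.0) for ti in t])
right = np.array([visits[visits >= ti].min(initial=np.inf) for ti in t])
df = pd.DataFrame({"left": np.clip(left, 0.01, None), "right": right,
                   "late_stage": x})

aft = WeibullAFTFitter()
aft.fit_interval_censoring(df, lower_bound_col="left", upper_bound_col="right")
aft.print_summary()
```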

5.
    
The estimation of the unknown parameters in the stratified Cox proportional hazards model is a typical example of the trade-off between bias and precision. The stratified partial likelihood estimator is unbiased when the number of strata is large but suffers from being unstable when many strata are non-informative about the unknown parameters. The estimator obtained by ignoring the heterogeneity among strata, on the other hand, increases the precision of the estimates, although it pays the price of being biased. An estimating procedure, based on the asymptotic properties of the above two estimators and serving to compromise between bias and precision, is proposed. Two examples from a study of radiosurgery for brain metastases provide an interesting demonstration of such applications.
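The two extremes being compromised between can be reproduced directly in lifelines: a stratified partial likelihood fit versus a pooled fit that ignores the strata. The sketch below uses the bundled Rossi recidivism data with 'wexp' as a stand-in stratum; the paper's compromise estimator itself is not implemented here.

```python
# Sketch: stratified vs pooled Cox fits, the bias/precision extremes.
from lifelines import CoxPHFitter
from lifelines.datasets import load_rossi

df = load_rossi()                  # stand-in data; 'wexp' as a fake stratum

# (a) stratified partial likelihood: unbiased, but can be unstable when
#     many strata are nearly uninformative about the coefficients
strat = CoxPHFitter().fit(df, duration_col="week", event_col="arrest",
                          strata=["wexp"])
# (b) heterogeneity ignored: a single pooled fit, more precise but biased
#     if the strata really differ
pooled = CoxPHFitter().fit(df, duration_col="week", event_col="arrest")

print(strat.summary[["coef", "se(coef)"]])
print(pooled.summary[["coef", "se(coef)"]])
```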

6.
The Cox regression model is one of the most widely used models for incorporating covariates. The frequently used partial likelihood estimator of the regression parameter has to be computed iteratively. In this paper we propose a noniterative estimator for the regression parameter and show that under certain conditions it dominates another noniterative estimator derived by Kalbfleisch and Prentice. The new estimator is demonstrated on lifetime data from rats that had been subjected to insult with a carcinogen.
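To make "noniterative" concrete, the sketch below computes a generic one-step estimator: a single Newton step from beta = 0 on the Cox partial likelihood, which has the closed form U(0)/I(0) for one covariate without ties. This is neither the paper's estimator nor Kalbfleisch and Prentice's; it only illustrates how a regression estimate can be obtained without iteration.

```python
# Sketch: a closed-form, noniterative Cox estimate via one Newton step.
import numpy as np

def one_step_cox(time, event, x):
    """One Newton step from beta=0 on the Cox partial likelihood:
    beta_hat = U(0) / I(0). Single covariate, no tied event times."""
    order = np.argsort(time)
    event, x = event[order], x[order]
    score, info = 0.0, 0.0
    n = len(x)
    for i in np.flatnonzero(event):
        xr = x[i:n]                       # risk set at the i-th event time
        xbar = xr.mean()                  # equal weights at beta = 0
        score += x[i] - xbar              # score contribution U(0)
        info += np.mean((xr - xbar) ** 2) # information contribution I(0)
    return score / info

rng = np.random.default_rng(3)
x = rng.normal(size=300)
t = rng.exponential(np.exp(-0.7 * x))     # true log hazard ratio 0.7
e = np.ones(300, dtype=bool)
print("noniterative (one-step) estimate:", round(one_step_cox(t, e, x), 3))
```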


8.
    
Clustered interval-censored data commonly arise in many biomedical studies where the failure time of interest is subject to interval-censoring and subjects are correlated through membership in the same cluster. A new semiparametric frailty probit regression model is proposed to study covariate effects on the failure time while accounting for the intracluster dependence. Under the proposed normal frailty probit model, the marginal distribution of the failure time is a semiparametric probit model, the regression parameters can be interpreted both as conditional covariate effects given the frailty and as marginal covariate effects up to a multiplicative constant, and the intracluster association can be summarized by two nonparametric measures in simple and explicit form. A fully Bayesian estimation approach is developed based on the use of monotone splines for the unknown nondecreasing function and data augmentation with normal latent variables. The proposed Gibbs sampler is straightforward to implement since all unknowns have standard forms in their full conditional distributions. The proposed method performs very well in estimating the regression parameters as well as the intracluster association, and it is robust to frailty distribution misspecification, as shown in our simulation studies. Two real-life data sets are analyzed for illustration.
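A stripped-down illustration of the data augmentation at work: the Gibbs sampler below fits a binary probit model with a normal cluster frailty via Albert-Chib latent normals, so every full conditional has standard form. The paper's model additionally handles interval-censored failure times through monotone splines, which this sketch omits; priors and data are arbitrary assumptions.

```python
# Sketch: Gibbs sampler for a frailty probit model (binary outcome only).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_clus, per = 50, 8
cluster = np.repeat(np.arange(n_clus), per)
X = np.column_stack([np.ones(n_clus * per), rng.normal(size=n_clus * per)])
y = X @ [-0.3, 0.6] + rng.normal(0, 0.8, n_clus)[cluster] \
    + rng.normal(size=n_clus * per) > 0          # simulated binary outcome

beta, b, sig2 = np.zeros(2), np.zeros(n_clus), 1.0
XtX_inv = np.linalg.inv(X.T @ X)
for it in range(2000):
    # 1) latent z | rest: truncated normal whose sign is fixed by y
    m = X @ beta + b[cluster]
    lo = np.where(y, -m, -np.inf)                # standardized bounds
    hi = np.where(y, np.inf, -m)
    z = stats.truncnorm.rvs(lo, hi, loc=m, scale=1.0, random_state=rng)
    # 2) beta | z, b: conjugate normal (flat prior, unit error variance)
    beta = rng.multivariate_normal(XtX_inv @ X.T @ (z - b[cluster]), XtX_inv)
    # 3) frailties b_i | rest: closed-form normal
    resid = z - X @ beta
    for i in range(n_clus):
        r = resid[cluster == i]
        prec = len(r) + 1.0 / sig2
        b[i] = rng.normal(r.sum() / prec, np.sqrt(1.0 / prec))
    # 4) frailty variance | b: inverse gamma with Inv-Gamma(1, 1) prior
    sig2 = 1.0 / rng.gamma(1 + n_clus / 2, 1.0 / (1 + 0.5 * (b ** 2).sum()))

print("last draw -- beta:", beta.round(2), "frailty variance:", round(sig2, 2))
```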

9.
Nested case-control sampling is designed to reduce the costs of large cohort studies. It is important to estimate the parameters of interest as efficiently as possible. We present a new maximum likelihood estimator (MLE) for nested case-control sampling in the context of Cox's proportional hazards model. The MLE is computed by the EM algorithm, which is easy to implement in the proportional hazards setting. Standard errors are estimated by a numerical profile likelihood approach based on EM-aided differentiation. The work was motivated by a nested case-control study that hypothesized that insulin-like growth factor I was associated with ischemic heart disease. The study was based on a population of 3784 Danes with 231 cases of ischemic heart disease, where controls were matched on age and gender. We illustrate the use of the MLE for these data and show how the maximum likelihood framework can be used to obtain information in addition to the relative risk estimates of covariates.
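A sketch of the sampling design itself, assuming hypothetical cohort columns: for each case, matched controls are drawn from the subjects still at risk at the case's event time. The EM-based MLE of the paper is not implemented here.

```python
# Sketch: nested case-control sampling with matching on age and gender.
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
n = 3784
cohort = pd.DataFrame({
    "id": np.arange(n),
    "age": rng.integers(40, 75, n),
    "gender": rng.integers(0, 2, n),
    "time": rng.exponential(20, n),         # follow-up / event time
    "event": rng.random(n) < 0.06,          # ~6% become cases
})

def sample_ncc(cohort, m_controls=2):
    """For each case, draw controls from the matched risk set at its event
    time (still under follow-up, same gender, age within 5 years)."""
    matched_sets = []
    for _, case in cohort[cohort.event].iterrows():
        at_risk = cohort[(cohort.time >= case.time) &
                         (cohort.id != case.id) &
                         (cohort.gender == case.gender) &
                         ((cohort.age - case.age).abs() <= 5)]
        if len(at_risk) >= m_controls:
            ctrl = at_risk.sample(m_controls, random_state=rng)
            matched_sets.append((case.id, tuple(ctrl.id)))
    return matched_sets

sets = sample_ncc(cohort)
print(len(sets), "matched sets; first:", sets[0])
```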

10.
    
The stratified Cox proportional hazards model is introduced to incorporate covariates and a nonproportional treatment effect between two groups into the analysis; confidence interval estimators for the difference in the median survival times of the two treatments under the stratified Cox model are then proposed. One estimator is based on the baseline survival functions of the two groups, the other on their average survival functions. I illustrate the proposed methods with an example from a study conducted by the Radiation Therapy Oncology Group on cancer of the mouth and throat. Simulations are carried out to investigate the small-sample properties of the proposed methods in terms of coverage rates.
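As a rough stand-in for these interval estimators, the sketch below bootstraps the difference in Kaplan-Meier median survival times between two simulated arms; the paper's estimators based on baseline and average survival functions under the stratified Cox model are not reproduced.

```python
# Sketch: naive bootstrap CI for a difference in median survival times.
import numpy as np
import pandas as pd
from lifelines import KaplanMeierFitter

rng = np.random.default_rng(5)
df = pd.DataFrame({
    "time": np.concatenate([rng.exponential(10, 150),
                            rng.exponential(14, 150)]),
    "event": rng.random(300) < 0.8,
    "arm": np.repeat([0, 1], 150),
})

def median_diff(d):
    """Difference of the two arms' Kaplan-Meier median survival times."""
    meds = []
    for a in (0, 1):
        g = d[d.arm == a]
        kmf = KaplanMeierFitter().fit(g.time, g.event)
        meds.append(kmf.median_survival_time_)
    return meds[1] - meds[0]

boot = [median_diff(df.sample(len(df), replace=True, random_state=rng))
        for _ in range(500)]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"diff in medians: {median_diff(df):.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
```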

12.
    
In longitudinal studies where time to a final event is the ultimate outcome, information is often available about intermediate events that individuals may experience during the observation period. Even though many extensions of the Cox proportional hazards model have been proposed to model such multivariate time-to-event data, these approaches are still rarely applied to real datasets. The aim of this paper is to illustrate the application of extended Cox models for multiple time-to-event data and to show their implementation in popular statistical software packages. We demonstrate a systematic way of jointly modelling similar or repeated transitions in follow-up data by analysing an event-history dataset consisting of 270 breast cancer patients who were followed up for different clinical events during treatment of metastatic disease. First, we show how this methodology can also be applied to non-Markovian stochastic processes by representing these processes as "conditional" Markov processes. Secondly, we compare the application of different Cox-related approaches to the breast cancer data by varying their key model components (i.e., analysis time scale, risk set, and baseline hazard function). Our study shows that extended Cox models are a powerful tool for analysing complex event-history datasets, since the approach can address many dynamic data features such as multiple time scales, dynamic risk sets, time-varying covariates, transition-by-covariate interactions, autoregressive dependence, and intra-subject correlation.
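One common implementation pattern, sketched below with lifelines: stack all at-risk episodes in (start, stop] form with a transition-type covariate and fit a single extended Cox model. The episode table is schematic, not the breast cancer dataset, and treating each (subject, transition) pair as its own unit ignores the intra-subject correlation the paper discusses.

```python
# Sketch: stacked at-risk episodes for multiple transitions in one Cox fit.
import pandas as pd
from lifelines import CoxTimeVaryingFitter

# One row per at-risk interval per transition: progression ("prog") and
# death ("dth"). Schematic data, not the 270-patient dataset.
episodes = pd.DataFrame([
    (1, 0.0,  6.0, 1, "prog"), (1, 6.0, 14.0, 1, "dth"),
    (2, 0.0,  9.0, 0, "prog"), (2, 0.0,  9.0, 0, "dth"),
    (3, 0.0,  3.0, 1, "prog"), (3, 3.0,  5.0, 1, "dth"),
    (4, 0.0, 11.0, 1, "prog"), (4, 11.0, 12.0, 0, "dth"),
], columns=["id", "start", "stop", "event", "transition"])

# Treat each (subject, transition) pair as its own at-risk unit; the
# intra-subject correlation this ignores is what robust/frailty variants
# of the extended Cox model address.
episodes["unit"] = episodes.id.astype(str) + "_" + episodes.transition
X = pd.get_dummies(episodes.drop(columns=["id"]),
                   columns=["transition"], drop_first=True)

ctv = CoxTimeVaryingFitter()
ctv.fit(X, id_col="unit", start_col="start", stop_col="stop",
        event_col="event")
ctv.print_summary()
```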

13.
    
We study bias-reduced estimators of exponentially transformed parameters in generalized linear models (GLMs) and show how they can be used to obtain bias-reduced conditional (or unconditional) odds ratios in matched case-control studies. Two options are considered and compared: the explicit approach and the implicit approach. The implicit approach is based on the modified score function, where bias-reduced estimates are obtained by using iterative procedures to solve the modified score equations. The explicit approach is shown to be a one-step approximation of this iterative procedure. To apply these approaches to the conditional analysis of matched case-control studies, with potentially unmatched confounding and with several exposures, we utilize the relation between the conditional likelihood and the likelihood of the unconditional logit binomial GLM for matched pairs, and the Cox partial likelihood for matched sets with appropriately set-up data. The properties of the estimators are evaluated using a large Monte Carlo simulation study, and an illustration with a real dataset is shown. Researchers reporting results on the exponentiated scale should use bias-reduced estimators, since otherwise the effects can be under- or overestimated, and the magnitude of the bias is especially large in studies with smaller sample sizes.
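The implicit approach can be made concrete for the unconditional case: the sketch below solves Firth's modified score equations for an ordinary logistic regression, from which bias-reduced odds ratios follow by exponentiation. The conditional, matched-set machinery of the paper is not reproduced; data are simulated.

```python
# Sketch: Firth's modified-score (bias-reduced) logistic regression.
import numpy as np

def firth_logistic(X, y, n_iter=50, tol=1e-8):
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        W = p * (1.0 - p)
        XtWX = X.T @ (W[:, None] * X)
        # hat-matrix diagonal h_i of sqrt(W) X (X'WX)^{-1} X' sqrt(W)
        h = np.einsum("ij,jk,ik->i", X, np.linalg.inv(XtWX), X) * W
        # modified score: U*(beta) = X'(y - p + h (1/2 - p))
        U = X.T @ (y - p + h * (0.5 - p))
        step = np.linalg.solve(XtWX, U)    # Newton-type update
        beta += step
        if np.max(np.abs(step)) < tol:
            break
    return beta

rng = np.random.default_rng(9)
X = np.column_stack([np.ones(40), rng.normal(size=40)])
y = (rng.random(40) < 1 / (1 + np.exp(-(0.2 + 1.0 * X[:, 1])))).astype(float)
beta_br = firth_logistic(X, y)
print("bias-reduced beta:", beta_br.round(3))
print("bias-reduced odds ratio:", round(np.exp(beta_br[1]), 3))
```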

14.
In survivorship modelling using the proportional hazards model of Cox (1972, Journal of the Royal Statistical Society, Series B, 34, 187–220), it is often desired to test a subset of the vector of unknown regression parameters β in the expression for the hazard rate at time t. The likelihood ratio test statistic is well behaved in most situations but may be expensive to calculate. The Wald (1943, Transactions of the American Mathematical Society, 54, 426–482) test statistic is easier to calculate but has some drawbacks. In testing a single parameter in a binomial logit model, Hauck and Donner (1977, Journal of the American Statistical Association, 72, 851–853) show that the Wald statistic decreases to zero the further the parameter estimate is from the null and that the asymptotic power of the test decreases to the significance level. The Wald statistic is used extensively in statistical software packages for survivorship modelling, and it is therefore important to understand its behavior. The present work examines empirically the behavior of the Wald statistic under various departures from the null hypothesis and in the presence of Type I censoring and covariates in the model. It is shown via examples that the Wald statistic's behavior is not as aberrant as found for the logistic model. For the single-parameter case, the asymptotic non-null distribution of the Wald statistic is examined.
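For a single Cox coefficient, the two statistics compared here are easy to compute side by side, as in the hedged sketch below: the Wald statistic (coef/se)^2 from a lifelines fit versus the likelihood ratio statistic, at increasingly strong departures from the null. Data and effect sizes are illustrative assumptions.

```python
# Sketch: Wald vs likelihood ratio statistics for one Cox coefficient.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(11)
for beta_true in (0.5, 1.5, 3.0):            # growing departure from null
    x = rng.normal(size=400)
    df = pd.DataFrame({"T": rng.exponential(np.exp(-beta_true * x)),
                       "E": 1, "x": x})
    fit = CoxPHFitter().fit(df, duration_col="T", event_col="E")
    wald = (fit.params_["x"] / fit.standard_errors_["x"]) ** 2
    lrt = fit.log_likelihood_ratio_test().test_statistic
    print(f"true beta={beta_true}: Wald={wald:.1f}  LRT={lrt:.1f}")
```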

15.
(Cited by 2: 0 self-citations, 2 by others)
Huang Y, Dagne G. Biometrics 2012, 68(3): 943-953
Summary: It is common practice to analyze complex longitudinal data using semiparametric nonlinear mixed-effects (SNLME) models with a normal distribution. However, the normality assumption for model errors may unrealistically obscure important features of subject variation. To partially explain between- and within-subject variation, covariates are usually introduced into such models, but some covariates may be measured with substantial error. Moreover, the responses may be missing, and the missingness may be nonignorable. Inferential procedures become dramatically more complicated when the observed data exhibit skewness, missing values, and measurement error. In the literature, there has been considerable interest in accommodating skewness, incompleteness, or covariate measurement error individually in such models, but relatively little work has addressed all three features simultaneously. In this article, our objective is to address the simultaneous impact of skewness, missingness, and covariate measurement error by jointly modeling the response and covariate processes based on a flexible Bayesian SNLME model. The method is illustrated using a real AIDS data set to compare potential models under various scenarios and different distribution specifications.
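A minimal Bayesian sketch of just one ingredient, assuming PyMC is available: a linear mixed-effects model whose errors are skew-normal rather than normal. The nonlinear structure, covariate measurement error, and nonignorable-missingness components of the paper's joint model are omitted, and all names and data are hypothetical.

```python
# Sketch: mixed-effects regression with skew-normal errors in PyMC.
import numpy as np
import pymc as pm

rng = np.random.default_rng(2)
n_subj, n_obs = 30, 6
subj = np.repeat(np.arange(n_subj), n_obs)
t = np.tile(np.linspace(0, 1, n_obs), n_subj)
y_obs = (2.0 - 1.5 * t + rng.normal(0, 0.3, n_subj)[subj]
         + rng.gamma(2, 0.2, n_subj * n_obs))    # right-skewed errors

with pm.Model() as snlme_lite:
    beta = pm.Normal("beta", 0.0, 5.0, shape=2)   # fixed effects
    u = pm.Normal("u", 0.0, 1.0, shape=n_subj)    # random intercepts
    sigma = pm.HalfNormal("sigma", 1.0)
    alpha = pm.Normal("alpha", 0.0, 5.0)          # skewness parameter
    mu = beta[0] + beta[1] * t + u[subj]
    pm.SkewNormal("y", mu=mu, sigma=sigma, alpha=alpha, observed=y_obs)
    idata = pm.sample(500, tune=500, chains=2, progressbar=False)
```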

17.
    
In functional data analysis for longitudinal data, the observation process is typically assumed to be noninformative, an assumption that is often violated in real applications. Thus, methods that fail to account for the dependence between observation times and longitudinal outcomes may result in biased estimation. For longitudinal data with informative observation times, we find that under a general class of shared random effect models, one commonly used functional data method may lead to inconsistent model estimation, while another results in consistent and even rate-optimal estimation. Indeed, we show that the mean function can be estimated appropriately via penalized splines and that the covariance function can be estimated appropriately via penalized tensor-product splines, both with specific choices of parameters. Theoretical results are provided for the proposed method, and simulation studies and a real data analysis are conducted to demonstrate its performance.
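A sketch of the mean-function estimator named here, under simplifying assumptions: pooled observations smoothed by a penalized B-spline (P-spline) with a second-difference penalty and a hand-picked smoothing parameter, rather than the specific parameter choices the paper's theory prescribes.

```python
# Sketch: penalized-spline estimate of a mean function from pooled data.
import numpy as np
from scipy.interpolate import BSpline

rng = np.random.default_rng(8)
t = np.sort(rng.uniform(0, 1, 400))            # pooled observation times
y = np.sin(2 * np.pi * t) + rng.normal(0, 0.3, 400)

k = 3                                          # cubic B-splines
inner = np.linspace(0, 1, 20)
knots = np.r_[[0.0] * k, inner, [1.0] * k]     # full boundary multiplicity
B = BSpline.design_matrix(t, knots, k).toarray()

D = np.diff(np.eye(B.shape[1]), n=2, axis=0)   # second-difference penalty
lam = 1.0                                      # smoothing parameter (fixed)
coef = np.linalg.solve(B.T @ B + lam * D.T @ D, B.T @ y)
mean_hat = B @ coef
print("fitted mean at t=0.25:", round(mean_hat[np.searchsorted(t, 0.25)], 3))
```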

18.
    
Chang CC, Weissfeld LA. Biometrics 1999, 55(4): 1114-1119
We discuss two diagnostic methods for assessing the accuracy of the normal-approximation confidence region relative to the likelihood-based confidence region for the Cox proportional hazards model with censored data. The proposed diagnostic methods are extensions of the contour measures of Hodges (1987, Journal of the American Statistical Association, 82, 149-154) and Cook and Tsai (1990, Journal of the American Statistical Association, 85, 770-777) and the curvature measures of Jennings (1986, Journal of the American Statistical Association, 81, 471-476) and Cook and Tsai (1990). The methods are illustrated in a study of hepatocyte growth factor in patients with lung cancer and in a Mayo Clinic randomized study of participants with primary biliary cirrhosis.
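A one-dimensional illustration of what these diagnostics guard against: for a single-covariate Cox model, the sketch below compares the Wald (normal-approximation) interval with the interval obtained directly from the partial likelihood ratio; disagreement signals a poor normal approximation. This is a toy comparison, not the contour or curvature measures themselves.

```python
# Sketch: Wald vs partial-likelihood-ratio intervals for one coefficient.
import numpy as np
from scipy.stats import chi2

def cox_logpl(beta, time, event, x):
    """Log partial likelihood, single covariate, no tied event times."""
    order = np.argsort(time)
    event, x = event[order], x[order]
    eta = beta * x
    risk = np.cumsum(np.exp(eta)[::-1])[::-1]   # sum over each risk set
    return np.sum((eta - np.log(risk))[event])

rng = np.random.default_rng(4)
x = rng.normal(size=150)
t = rng.exponential(np.exp(-1.2 * x))           # true coefficient 1.2
e = np.ones(150, dtype=bool)

grid = np.linspace(-0.5, 3.5, 801)
ll = np.array([cox_logpl(b, t, e, x) for b in grid])
i = ll.argmax()
bhat = grid[i]

# Likelihood-ratio interval: betas whose log-PL is within the chi2 cutoff
inside = 2 * (ll[i] - ll) <= chi2.ppf(0.95, 1)
lr_ci = (grid[inside].min(), grid[inside].max())
# Wald interval from the curvature (observed information) at the maximum
h = grid[1] - grid[0]
se = 1.0 / np.sqrt(-(ll[i + 1] - 2 * ll[i] + ll[i - 1]) / h ** 2)
print(f"MPLE {bhat:.2f}; LR CI ({lr_ci[0]:.2f}, {lr_ci[1]:.2f}); "
      f"Wald CI ({bhat - 1.96 * se:.2f}, {bhat + 1.96 * se:.2f})")
```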

19.
The Cox model, which remains the first choice for analyzing time-to-event data even for large data sets, relies on the proportional hazards (PH) assumption. When survival data arrive sequentially in chunks, a fast and minimally storage-intensive approach to testing the PH assumption is desirable. We propose an online updating approach that updates the standard test statistic as each new block of data becomes available, greatly lightening the computational burden. Under the null hypothesis of PH, the proposed statistic is shown to have the same asymptotic distribution as the standard version computed on the entire data stream with the data blocks pooled into one data set. In simulation studies, the test and its variant based on the most recent data blocks maintain their sizes when the PH assumption holds and have substantial power to detect various violations of the PH assumption. We also show in simulation that our approach can be used successfully with “big data” that exceed a single computer's computational resources. The approach is illustrated with a survival analysis of patients with lymphoma from the Surveillance, Epidemiology, and End Results Program. The proposed test promptly identified deviation from the PH assumption, which was not captured by the test based on the entire data set.
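A naive streaming stand-in, assuming the lifelines PH test: each arriving block is tested with the standard Schoenfeld-residual statistic and block p-values are pooled by Fisher's method, so only a running sum is stored. The authors' statistic, which recovers the pooled-data test exactly, is not reproduced here.

```python
# Sketch: block-wise PH testing with O(1) storage via Fisher pooling.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.statistics import proportional_hazard_test
from scipy.stats import chi2

rng = np.random.default_rng(6)
fisher_sum, n_blocks = 0.0, 0
for block in range(10):                       # data arrive in chunks
    x = rng.normal(size=500)
    df = pd.DataFrame({"T": rng.exponential(np.exp(-0.8 * x)),
                       "E": 1, "x": x})       # simulated under PH
    cph = CoxPHFitter().fit(df, duration_col="T", event_col="E")
    res = proportional_hazard_test(cph, df, time_transform="rank")
    p = float(np.ravel(res.p_value)[0])       # single covariate
    fisher_sum += -2 * np.log(p)              # only a running sum is kept
    n_blocks += 1

# Fisher's method: -2 sum(log p) ~ chi2 with 2k df under the null
p_pooled = chi2.sf(fisher_sum, df=2 * n_blocks)
print("pooled PH-test p-value:", round(p_pooled, 3))
```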
