Similar Literature
Found 20 similar articles (search time: 15 ms)
1.
This paper discusses multivariate interval-censored failure time data that occur when there exist several correlated survival times of interest and only interval-censored data are available for each survival time. Such data occur in many fields. One example is tumorigenicity experiments, which usually concern different types of tumors, tumors occurring in different locations of an animal, or both. For regression analysis of such data, we develop a marginal inference approach using the additive hazards model and apply it to a set of bivariate interval-censored data arising from a tumorigenicity experiment. Simulation studies are conducted for the evaluation of the presented approach and suggest that the approach performs well for practical situations.

2.
Clustered interval-censored failure time data occur when the failure times of interest are clustered into small groups and known only to lie in certain intervals. A number of methods have been proposed for regression analysis of clustered failure time data, but most of them apply only to clustered right-censored data. In this paper, a sieve estimation procedure is proposed for fitting a Cox frailty model to clustered interval-censored failure time data. In particular, a two-step algorithm for parameter estimation is developed and the asymptotic properties of the resulting sieve maximum likelihood estimators are established. The finite sample properties of the proposed estimators are investigated through a simulation study and the method is illustrated by the data arising from a lymphatic filariasis study.

3.
In this paper, we consider incomplete survival data: partly interval-censored failure time data where observed data include both exact and interval-censored observations on the survival time of interest. We present a class of generalized log-rank tests for this type of survival data and establish their asymptotic properties. The method is evaluated using simulation studies and illustrated by a set of real data from a diabetes study.

4.
In this article, we propose a positive stable shared frailty Cox model for clustered failure time data where the frailty distribution varies with cluster-level covariates. The proposed model accounts for covariate-dependent intracluster correlation and permits both conditional and marginal inferences. We obtain marginal inference directly from a marginal model, then use a stratified Cox-type pseudo-partial likelihood approach to estimate the regression coefficient for the frailty parameter. The proposed estimators are consistent and asymptotically normal and a consistent estimator of the covariance matrix is provided. Simulation studies show that the proposed estimation procedure is appropriate for practical use with a realistic number of clusters. Finally, we present an application of the proposed method to kidney transplantation data from the Scientific Registry of Transplant Recipients.

5.
Left-, right-, and interval-censored response time data arise in a variety of settings, including the analyses of data from laboratory animal carcinogenicity experiments, clinical trials, and longitudinal studies. For such incomplete data, the usual regression techniques such as the Cox (1972, Journal of the Royal Statistical Society, Series B 34, 187-220) proportional hazards model are inapplicable. In this paper, we present a method for regression analysis which accommodates interval-censored data. We present applications of this methodology to data sets from a study of breast cancer patients who were followed for cosmetic response to therapy, a small animal tumorigenicity study, and a clinical trial.

6.

Interval-censored failure times arise when the status with respect to an event of interest is only determined at intermittent examination times. In settings where there exists a sub-population of individuals who are not susceptible to the event of interest, latent variable models accommodating a mixture of susceptible and nonsusceptible individuals are useful. We consider such models for the analysis of bivariate interval-censored failure time data with a model for bivariate binary susceptibility indicators and a copula model for correlated failure times given joint susceptibility. We develop likelihood, composite likelihood, and estimating function methods for model fitting and inference, and assess asymptotic-relative efficiency and finite sample performance. Extensions dealing with higher-dimensional responses and current status data are also described.


7.
The accelerated failure time regression model is most commonly used with right-censored survival data. This report studies the use of a Weibull-based accelerated failure time regression model when left- and interval-censored data are also observed. Two alternative methods of analysis are considered. First, the maximum likelihood estimates (MLEs) for the observed censoring pattern are computed. These are compared with estimates where midpoints are substituted for left- and interval-censored data (midpoint estimator, or MDE). Simulation studies indicate that for relatively large samples there are many instances when the MLE is superior to the MDE. For samples where the hazard rate is flat or nearly so, or where the percentage of interval-censored data is small, the MDE is adequate. An example using Framingham Heart Study data is discussed.
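The MLE-versus-midpoint contrast described in this abstract can be sketched numerically. The following is a minimal illustration under assumed settings (a two-parameter Weibull, a 0.5-wide inspection grid, and simulated data), not the paper's actual analysis: the MLE maximizes the interval-censored likelihood, while the MDE simply fits the interval midpoints as if they were exact failure times.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import weibull_min

rng = np.random.default_rng(0)

# Simulate failure times from Weibull(shape=1.5, scale=2.0), then record
# only the 0.5-wide inspection interval each failure falls into.
t = weibull_min.rvs(1.5, scale=2.0, size=500, random_state=rng)
left = np.floor(t / 0.5) * 0.5
right = left + 0.5

def neg_loglik(log_params):
    shape, scale = np.exp(log_params)  # log-parametrize to keep both positive
    # Interval-censored likelihood contribution: F(right) - F(left)
    p = (weibull_min.cdf(right, shape, scale=scale)
         - weibull_min.cdf(left, shape, scale=scale))
    return -np.sum(np.log(np.clip(p, 1e-300, None)))

mle_shape, mle_scale = np.exp(minimize(neg_loglik, x0=[0.0, 0.0]).x)

# Midpoint estimator (MDE): pretend the interval midpoint is the exact time.
mde_shape, _, mde_scale = weibull_min.fit(0.5 * (left + right), floc=0)
```

With a fine inspection grid the two estimators agree closely; the MLE's advantage grows as the intervals widen, which is consistent with the simulation findings summarized above.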

8.
Pan W. Biometrics 2000, 56(1): 199-203.
We propose a general semiparametric method based on multiple imputation for Cox regression with interval-censored data. The method consists of iterating the following two steps. First, from finite-interval-censored (but not right-censored) data, exact failure times are imputed using Wei and Tanner's poor man's or asymptotic normal data augmentation scheme based on the current estimates of the regression coefficient and the baseline survival curve. Second, a standard statistical procedure for right-censored data, such as the Cox partial likelihood method, is applied to imputed data to update the estimates. Through simulation, we demonstrate that the resulting estimate of the regression coefficient and its associated standard error provide a promising alternative to the nonparametric maximum likelihood estimate. Our proposal is easily implemented by taking advantage of existing computer programs for right-censored data.
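The two-step iteration can be illustrated with a stripped-down toy version. The sketch below is a deliberate simplification, not the paper's PMDA/ANDA scheme: it drops covariates entirely and replaces the model-based conditional draw with a naive resample-within-interval step, keeping only the overall shape of the algorithm (impute exact times inside each censoring interval from the current estimate, re-estimate, repeat).

```python
import numpy as np

rng = np.random.default_rng(1)

# Interval-censored observations of an Exp(1) failure time on a 0.5-grid.
t = rng.exponential(1.0, size=400)
left = np.floor(t / 0.5) * 0.5
right = left + 0.5

# Step 0: crude initial imputation, uniform within each interval.
imputed = rng.uniform(left, right)

for _ in range(5):
    # Step 1 (estimation): here the "current estimate" is simply the
    # empirical distribution of the currently imputed exact times.
    pool = np.sort(imputed)
    # Step 2 (imputation): redraw each failure time from the current
    # estimate restricted to its own censoring interval.
    new = np.empty_like(imputed)
    for i in range(t.size):
        cand = pool[(pool >= left[i]) & (pool < right[i])]
        new[i] = rng.choice(cand) if cand.size else 0.5 * (left[i] + right[i])
    imputed = new
```

In the real method, Step 1 fits a Cox model to the imputed right-censored-style data and Step 2 draws from the estimated conditional survival distribution within each interval; standard right-censored software handles Step 1, which is the practical appeal noted in the abstract.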

9.
Clustered interval-censored data commonly arise in many studies of biomedical research where the failure time of interest is subject to interval-censoring and subjects are correlated for being in the same cluster. A new semiparametric frailty probit regression model is proposed to study covariate effects on the failure time by accounting for the intracluster dependence. Under the proposed normal frailty probit model, the marginal distribution of the failure time is a semiparametric probit model, the regression parameters can be interpreted as both the conditional covariate effects given frailty and the marginal covariate effects up to a multiplicative constant, and the intracluster association can be summarized by two nonparametric measures in simple and explicit form. A fully Bayesian estimation approach is developed based on the use of monotone splines for the unknown nondecreasing function and a data augmentation using normal latent variables. The proposed Gibbs sampler is straightforward to implement since all unknowns have standard form in their full conditional distributions. The proposed method performs very well in estimating the regression parameters as well as the intracluster association, and the method is robust to frailty distribution misspecifications as shown in our simulation studies. Two real-life data sets are analyzed for illustration.

10.
This paper focuses on the methodology developed for analyzing a multivariate interval-censored data set from an AIDS observational study. A purpose of the study was to determine the natural history of the opportunistic infection cytomegalovirus (CMV) in an HIV-infected individual. For this observational study, laboratory tests were performed at scheduled clinic visits to test for the presence of the CMV virus in the blood and in the urine (called CMV shedding in the blood and urine). The study investigators were interested in determining whether the stage of HIV disease at study entry was predictive of an increased risk for CMV shedding in either the blood or the urine. If all patients had made each clinic visit, the data would be multivariate grouped failure time data and published methods could be used. However, many patients missed several visits, and when they returned, their lab tests indicated a change in their blood and/or urine CMV shedding status, resulting in interval-censored failure time data. This paper outlines a method for applying the proportional hazards model to the analysis of multivariate interval-censored failure time data from a study of CMV in HIV-infected patients.

11.
Pan W, Chappell R. Biometrics 2002, 58(1): 64-70.
We show that the nonparametric maximum likelihood estimate (NPMLE) of the regression coefficient from the joint likelihood (of the regression coefficient and the baseline survival) works well for the Cox proportional hazards model with left-truncated and interval-censored data, but the NPMLE may underestimate the baseline survival. Two alternatives are also considered: first, the marginal likelihood approach by extending Satten (1996, Biometrika 83, 355-370) to truncated data, where the baseline distribution is eliminated as a nuisance parameter; and second, the monotone maximum likelihood estimate that maximizes the joint likelihood by assuming that the baseline distribution has a nondecreasing hazard function, which was originally proposed to overcome the underestimation of the survival from the NPMLE for left-truncated data without covariates (Tsai, 1988, Biometrika 75, 319-324). The bootstrap is proposed to draw inference. Simulations were conducted to assess their performance. The methods are applied to the Massachusetts Health Care Panel Study data set to compare the probabilities of losing functional independence for male and female seniors.

12.
Fully Bayesian methods for Cox models specify a model for the baseline hazard function. Parametric approaches generally provide monotone estimations. Semi-parametric choices allow for more flexible patterns but they can suffer from overfitting and instability. Regularization methods through prior distributions with correlated structures usually give reasonable answers to these types of situations. We discuss Bayesian regularization for Cox survival models defined via flexible baseline hazards specified by a mixture of piecewise constant functions and by a cubic B-spline function. For those "semi-parametric" proposals, different prior scenarios ranging from prior independence to particular correlated structures are discussed in a real study with microvirulence data and in an extensive simulation scenario that includes different data sample and time axis partition sizes in order to capture risk variations. The posterior distribution of the parameters was approximated using Markov chain Monte Carlo methods. Model selection was performed in accordance with the deviance information criteria and the log pseudo-marginal likelihood. The results obtained reveal that, in general, Cox models present great robustness in covariate effects and survival estimates independent of the baseline hazard specification. In relation to the "semi-parametric" baseline hazard specification, the B-splines hazard function is less dependent on the regularization process than the piecewise specification because it demands a smaller time axis partition to estimate a similar behavior of the risk.

13.
Owing to its robustness properties, marginal interpretations, and ease of implementation, the pseudo-partial likelihood method proposed in the seminal papers of Pepe and Cai and Lin et al. has become the default approach for analyzing recurrent event data with Cox-type proportional rate models. However, the construction of the pseudo-partial score function ignores the dependency among recurrent events and thus can be inefficient. An attempt to investigate the asymptotic efficiency of weighted pseudo-partial likelihood estimation found that the optimal weight function involves the unknown variance–covariance process of the recurrent event process and may not have closed-form expression. Thus, instead of deriving the optimal weights, we propose to combine a system of pre-specified weighted pseudo-partial score equations via the generalized method of moments and empirical likelihood estimation. We show that a substantial efficiency gain can be easily achieved without imposing additional model assumptions. More importantly, the proposed estimation procedures can be implemented with existing software. Theoretical and numerical analyses show that the empirical likelihood estimator is more appealing than the generalized method of moments estimator when the sample size is sufficiently large. An analysis of readmission risk in colorectal cancer patients is presented to illustrate the proposed methodology.

14.
Fleming TR, Lin DY. Biometrics 2000, 56(4): 971-983.
The field of survival analysis emerged in the 20th century and experienced tremendous growth during the latter half of the century. The developments in this field that have had the most profound impact on clinical trials are the Kaplan-Meier (1958, Journal of the American Statistical Association 53, 457-481) method for estimating the survival function, the log-rank statistic (Mantel, 1966, Cancer Chemotherapy Report 50, 163-170) for comparing two survival distributions, and the Cox (1972, Journal of the Royal Statistical Society, Series B 34, 187-220) proportional hazards model for quantifying the effects of covariates on the survival time. The counting-process martingale theory pioneered by Aalen (1975, Statistical inference for a family of counting processes, Ph.D. dissertation, University of California, Berkeley) provides a unified framework for studying the small- and large-sample properties of survival analysis statistics. Significant progress has been achieved and further developments are expected in many other areas, including the accelerated failure time model, multivariate failure time data, interval-censored data, dependent censoring, dynamic treatment regimes and causal inference, joint modeling of failure time and longitudinal data, and Bayesian methods.
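As a concrete anchor for the first method named above, here is a minimal product-limit (Kaplan-Meier) estimator for right-censored data, written directly from the textbook definition rather than taken from any of the cited papers; the toy data are an assumption for illustration.

```python
import numpy as np

def kaplan_meier(time, event):
    """Product-limit estimate of S(t) at each distinct observed event time.

    time  : observed times (failure or censoring)
    event : 1 if the failure was observed, 0 if right-censored
    """
    time = np.asarray(time, dtype=float)
    event = np.asarray(event, dtype=int)
    uniq = np.unique(time[event == 1])   # distinct failure times
    surv, s = [], 1.0
    for tt in uniq:
        at_risk = np.sum(time >= tt)                     # still under observation
        deaths = np.sum((time == tt) & (event == 1))     # failures at tt
        s *= 1.0 - deaths / at_risk                      # product-limit step
        surv.append(s)
    return uniq, np.array(surv)

# Five subjects: failures at 2, 3, 5; censored at 3 and 8.
times, surv = kaplan_meier([2, 3, 3, 5, 8], [1, 1, 0, 1, 0])
# surv -> [0.8, 0.6, 0.3]
```

Each factor 1 - d/n only uses subjects still at risk, which is how censored observations contribute information without being treated as failures.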

15.
Clustered data frequently arise in biomedical studies, where observations, or subunits, measured within a cluster are associated. The cluster size is said to be informative, if the outcome variable is associated with the number of subunits in a cluster. In most existing work, the informative cluster size issue is handled by marginal approaches based on within-cluster resampling, or cluster-weighted generalized estimating equations. Although these approaches yield consistent estimation of the marginal models, they do not allow estimation of within-cluster associations and are generally inefficient. In this paper, we propose a semiparametric joint model for clustered interval-censored event time data with informative cluster size. We use a random effect to account for the association among event times of the same cluster as well as the association between event times and the cluster size. For estimation, we propose a sieve maximum likelihood approach and devise a computationally-efficient expectation-maximization algorithm for implementation. The estimators are shown to be strongly consistent, with the Euclidean components being asymptotically normal and achieving semiparametric efficiency. Extensive simulation studies are conducted to evaluate the finite-sample performance, efficiency and robustness of the proposed method. We also illustrate our method via application to a motivating periodontal disease dataset.

16.
Sun T, Cheng Y, Ding Y. Biometrics 2023, 79(3): 1713-1725.
Copula is a popular method for modeling the dependence among marginal distributions in multivariate censored data. As many copula models are available, it is essential to check if the chosen copula model fits the data well for analysis. Existing approaches to testing the fitness of copula models are mainly for complete or right-censored data. No formal goodness-of-fit (GOF) test exists for interval-censored or recurrent events data. We develop a general GOF test for copula-based survival models using the information ratio (IR) to address this research gap. It can be applied to any copula family with a parametric form, such as the frequently used Archimedean, Gaussian, and D-vine families. The test statistic is easy to calculate, and the test procedure is straightforward to implement. We establish the asymptotic properties of the test statistic. The simulation results show that the proposed test controls the type-I error well and achieves adequate power when the dependence strength is moderate to high. Finally, we apply our method to test various copula models in analyzing multiple real datasets. Our method consistently separates different copula models for all these datasets in terms of model fitness.
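The information-ratio test itself is not reproduced here, but the kind of dependence such copula models encode is easy to sketch. The following generates from a Clayton copula (one of the Archimedean families mentioned) by conditional inversion and checks the closed-form relation tau = theta / (theta + 2) between the copula parameter and Kendall's tau; the choice of theta and the sample size are arbitrary assumptions.

```python
import numpy as np
from scipy.stats import kendalltau

rng = np.random.default_rng(2)
theta = 2.0   # Clayton dependence parameter; implies Kendall's tau = 0.5
n = 4000

# Conditional-inversion sampler: draw U ~ U(0,1) and W ~ U(0,1),
# then solve C_{V|U}(v | u) = w for v under the Clayton copula
# C(u, v) = (u^-theta + v^-theta - 1)^(-1/theta).
u = rng.uniform(size=n)
w = rng.uniform(size=n)
v = (u ** (-theta) * (w ** (-theta / (1.0 + theta)) - 1.0) + 1.0) ** (-1.0 / theta)

tau, _ = kendalltau(u, v)   # sample tau should be close to theta/(theta+2) = 0.5
```

In survival applications, (u, v) would then be mapped through the marginal survival functions to obtain dependent failure-time pairs; GOF testing asks whether the assumed copula family could have produced the observed dependence.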

17.
Clegg LX, Cai J, Sen PK. Biometrics 1999, 55(3): 805-812.
In multivariate failure time data analysis, a marginal regression modeling approach is often preferred to avoid assumptions on the dependence structure among correlated failure times. In this paper, a marginal mixed baseline hazards model is introduced. Estimating equations are proposed for the estimation of the marginal hazard ratio parameters. The proposed estimators are shown to be consistent and asymptotically Gaussian with a robust covariance matrix that can be consistently estimated. Simulation studies indicate the adequacy of the proposed methodology for practical sample sizes. The methodology is illustrated with a data set from the Framingham Heart Study.

18.
The semiparametric Cox proportional hazards model is routinely adopted to model time-to-event data. Proportionality is a strong assumption, especially when follow-up time, or study duration, is long. Zeng and Lin (J. R. Stat. Soc., Ser. B, 69:1–30, 2007) proposed a useful generalisation through a family of transformation models which allow hazard ratios to vary over time. In this paper we explore a variety of tests for the need for transformation, arguing that the Cox model is so ubiquitous that it should be considered as the default model, to be discarded only if there is good evidence against the model assumptions. Since fitting an alternative transformation model is more complicated than fitting the Cox model, especially as procedures are not yet incorporated in standard software, we focus mainly on tests which require a Cox fit only. A score test is derived, and we also consider performance of omnibus goodness-of-fit tests based on Schoenfeld residuals. These tests can be extended to compare different transformation models. In addition we explore the consequences of fitting a misspecified Cox model to data generated under a true transformation model. Data on survival of 1043 leukaemia patients are used for illustration.

19.
We propose a joint analysis of recurrent and nonrecurrent event data subject to general types of interval censoring. The proposed analysis allows for general semiparametric models, including the Box–Cox transformation and inverse Box–Cox transformation models for the recurrent and nonrecurrent events, respectively. A frailty variable is used to account for the potential dependence between the recurrent and nonrecurrent event processes, while leaving the distribution of the frailty unspecified. We apply the pseudolikelihood for interval-censored recurrent event data, usually termed as panel count data, and the sufficient likelihood for interval-censored nonrecurrent event data by conditioning on the sufficient statistic for the frailty and using the working assumption of independence over examination times. Large sample theory and a computation procedure for the proposed analysis are established. We illustrate the proposed methodology by a joint analysis of the numbers of occurrences of basal cell carcinoma over time and time to the first recurrence of squamous cell carcinoma based on a skin cancer dataset, as well as a joint analysis of the numbers of adverse events and time to premature withdrawal from study medication based on a scleroderma lung disease dataset.

20.
Case–cohort sampling is a commonly used and efficient method for studying large cohorts. Most existing methods of analysis for case–cohort data have concerned the analysis of univariate failure time data. However, clustered failure time data are commonly encountered in public health studies. For example, patients treated at the same center are unlikely to be independent. In this article, we consider methods based on estimating equations for case–cohort designs for clustered failure time data. We assume a marginal hazards model, with a common baseline hazard and common regression coefficient across clusters. The proposed estimators of the regression parameter and cumulative baseline hazard are shown to be consistent and asymptotically normal, and consistent estimators of the asymptotic covariance matrices are derived. The regression parameter estimator is easily computed using any standard Cox regression software that allows for offset terms. The proposed estimators are investigated in simulation studies, and demonstrated empirically to have increased efficiency relative to some existing methods. The proposed methods are applied to a study of mortality among Canadian dialysis patients.

