Similar Documents (20 results)
1.
Song R, Kosorok MR, Cai J. Biometrics 2008, 64(3): 741–750
Recurrent events data are frequently encountered in clinical trials. This article develops robust covariate-adjusted log-rank statistics for recurrent events data with arbitrary numbers of events under independent censoring, together with the corresponding sample size formula. The proposed log-rank tests are robust with respect to different data-generating processes and are adjusted for predictive covariates; in the case of a single event, the test reduces to the setting of Kong and Slud (1997, Biometrika 84, 847–862). The sample size formula is derived from the asymptotic normality of the covariate-adjusted log-rank statistics under certain local alternatives and a working model for baseline covariates in the recurrent event context. When the effect size is small and the baseline covariates contain little information about the event times, the formula reduces to the same form as that of Schoenfeld (1983, Biometrics 39, 499–503) for a single event or independent event times within a subject. We carry out simulations to study type I error control and to compare the power of several methods in finite samples. The proposed sample size formula is illustrated using data from an rhDNase study.
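In the simple two-arm case the abstract refers to, the Schoenfeld-type required number of events depends only on the log hazard ratio, the allocation proportion, and two normal quantiles. A minimal sketch of that classical formula (function name and defaults are mine, not from the paper):

```python
import math
from scipy.stats import norm

def schoenfeld_events(hazard_ratio, alpha=0.05, power=0.80, alloc=0.5):
    """Required number of events for a two-sided two-arm log-rank test
    (classical Schoenfeld-type formula; names/defaults are illustrative)."""
    z_alpha = norm.ppf(1 - alpha / 2)          # two-sided critical value
    z_beta = norm.ppf(power)                   # power quantile
    log_hr = abs(math.log(hazard_ratio))       # effect size on the log scale
    return (z_alpha + z_beta) ** 2 / (alloc * (1 - alloc) * log_hr ** 2)

print(round(schoenfeld_events(0.7)))           # roughly 247 events for HR = 0.7
```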

2.
Cai T, Huang J, Tian L. Biometrics 2009, 65(2): 394–404
In the presence of high-dimensional predictors, it is challenging to develop reliable regression models that accurately predict future outcomes. Further complications arise when the outcome of interest is an event time, which is often not fully observed due to censoring. In this article, we develop robust prediction models for event time outcomes by regularizing the Gehan estimator for the accelerated failure time (AFT) model (Tsiatis, 1990, Annals of Statistics 18, 354–372) with the least absolute shrinkage and selection operator (LASSO) penalty. Unlike existing methods based on inverse probability weighting and the Buckley–James estimator (Buckley and James, 1979, Biometrika 66, 429–436), the proposed approach requires no additional assumptions about the censoring and always yields a convergent solution. Furthermore, the proposed estimator leads to a stable regression model for prediction even if the AFT model fails to hold. To facilitate adaptive selection of the tuning parameter, we detail an efficient numerical algorithm for obtaining the entire regularization path. The procedures are applied to a breast cancer dataset to derive a reliable regression model for predicting patient survival from clinical prognostic factors and gene signatures, and their finite-sample performance is evaluated through a simulation study.
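The Gehan objective is a convex pairwise loss on the log-scale residuals, so an L1-penalized version can be written down compactly. A hedged sketch of such a penalized objective (illustrative scaling and optimizer; the paper's regularization-path algorithm is far more efficient):

```python
import numpy as np
from scipy.optimize import minimize

def gehan_lasso_loss(beta, logT, X, delta, lam):
    """Gehan pairwise loss on log-scale residuals plus an L1 penalty.
    Illustrative scaling and optimizer only -- the paper develops an
    efficient algorithm for the whole regularization path instead."""
    e = logT - X @ beta                        # residuals e_i(beta)
    diff = e[:, None] - e[None, :]             # e_i - e_j over all pairs
    loss = np.sum(delta[:, None] * np.maximum(-diff, 0.0)) / len(e) ** 2
    return loss + lam * np.abs(beta).sum()

rng = np.random.default_rng(0)
n, p = 100, 5
X = rng.normal(size=(n, p))
logT = X @ np.array([1.0, -0.5, 0.0, 0.0, 0.0]) + rng.normal(size=n)
delta = rng.binomial(1, 0.7, size=n)           # 1 = event observed
fit = minimize(gehan_lasso_loss, np.zeros(p),
               args=(logT, X, delta, 0.05), method="Powell")
print(fit.x.round(2))                          # small coefficients shrink toward 0
```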

3.
In many instances, a subject can experience both a nonterminal and a terminal event, where the terminal event (e.g., death) censors the nonterminal event (e.g., relapse) but not vice versa. Typically, the two events are correlated. This situation has been termed semicompeting risks (e.g., Fine, Jiang, and Chappell, 2001, Biometrika 88, 907–919; Wang, 2003, Journal of the Royal Statistical Society, Series B 65, 257–273), and analysis has been based on a joint survival function of the two event times over the positive quadrant, with observation restricted to the upper wedge. Implicitly, this approach entertains the idea of latent failure times and leads to discussion of a marginal distribution of the nonterminal event that is not grounded in reality. We argue that, as with models for competing risks, latent failure times should generally be avoided in modeling such data. We note that semicompeting risks have more classically been described as an illness–death model, a formulation that avoids any reference to latent times. We consider an illness–death model with shared frailty, which in its most restrictive form is identical to the semicompeting risks model that has been proposed and analyzed, but which allows for many generalizations and the simple incorporation of covariates. Nonparametric maximum likelihood estimation is used for inference, and the resulting estimates of the correlation parameter are compared with other proposed approaches. Asymptotic properties, simulation studies, and an application to a randomized clinical trial in nasopharyngeal cancer evaluate and illustrate the methods. A simple and fast algorithm is developed for the numerical implementation.

4.
We consider methods for estimating the effect of a covariate on a disease onset distribution when the observed data structure consists of right-censored data on diagnosis times and current status data on onset times among individuals who have not yet been diagnosed. Dunson and Baird (2001, Biometrics 57, 396–403) approached this problem using maximum likelihood, under the assumption that the ratio of the diagnosis and onset distributions is monotonic nondecreasing. As an alternative, we propose a two-step estimator, an extension of the approach of van der Laan, Jewell, and Petersen (1997, Biometrika 84, 539–554) in the single-sample setting, which is computationally much simpler and requires no assumptions on this ratio. A simulation study compares estimates obtained from these two approaches, as well as from a standard current status analysis that ignores diagnosis data. Results indicate that the Dunson and Baird estimator outperforms the two-step estimator when the monotonicity assumption holds, but the reverse is true when the assumption fails. The simple current status estimator loses only a small amount of precision relative to the two-step procedure but requires monitoring time information for all individuals. In the data that motivated this work, a study of uterine fibroids and chemical exposure to dioxin, the monotonicity assumption is seen to fail. Here, the two-step and current status estimators both show no significant association between the level of dioxin exposure and the hazard for onset of uterine fibroids; the two-step estimator of the relative hazard associated with increasing exposure has the smallest estimated variance among the three estimators considered.

5.
Standard prospective logistic regression analysis of case–control data often leads to very imprecise estimates of gene-environment interactions due to small numbers of cases or controls in cells of crossing genotype and exposure. In contrast, under the assumption of gene-environment independence, modern "retrospective" methods, including the "case-only" approach, can estimate the interaction parameters much more precisely, but they can be seriously biased when the underlying independence assumption is violated. In this article, we propose a novel empirical Bayes-type shrinkage estimator for case–control data that relaxes the gene-environment independence assumption in a data-adaptive fashion. In the special case of a binary gene and a binary exposure, the method leads to an estimator of the interaction log odds ratio in a simple closed form that corresponds to a weighted average of the standard case-only and case–control estimators. We also describe a general approach for deriving the new shrinkage estimator and its variance within the retrospective maximum-likelihood framework developed by Chatterjee and Carroll (2005, Biometrika 92, 399–418). Both simulated and real data examples suggest that the proposed estimator strikes a balance between bias and efficiency depending on the true nature of the gene-environment association and the sample size of a given study.
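The closed form described above is a data-adaptive weighted average, with the weight driven by the estimated gene-environment association. A plausible sketch of that idea (the weight below is my illustration of the mechanism, not the paper's exact closed form; theta_ge is assumed to be the estimated gene-environment log odds ratio among controls):

```python
def eb_shrinkage(beta_cc, var_cc, beta_co, theta_ge):
    """Data-adaptive weighted average of the case-control (unbiased) and
    case-only (efficient) interaction estimates.  theta_ge is the estimated
    gene-environment log odds ratio among controls.  The weight below is a
    plausible illustration, not the paper's exact closed form."""
    w = theta_ge ** 2 / (theta_ge ** 2 + var_cc)   # -> 1 when independence fails badly
    return w * beta_cc + (1 - w) * beta_co

# Independence nearly holds: the estimate shrinks toward the case-only value.
print(eb_shrinkage(beta_cc=0.40, var_cc=0.09, beta_co=0.30, theta_ge=0.05))
```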

6.
In this article, we consider the setting where the event of interest can occur repeatedly for the same subject (i.e., a recurrent event, e.g., hospitalization) and may be stopped permanently by a terminating event (e.g., death). Among the different ways to model recurrent/terminal event data, the marginal mean (i.e., averaging over the survival distribution) is of primary interest from a public health or health economics perspective. Often, the difference between treatment-specific recurrent event means will not be constant over time, particularly when treatment-specific differences in survival exist. In such cases, it makes more sense to quantify the treatment effect by the cumulative difference in the recurrent event means rather than the instantaneous difference in the rates. We propose a method that compares treatments by separately estimating the survival probabilities and the recurrent event rates given survival, then integrating to obtain the mean number of events. The proposed method combines an additive model for the conditional recurrent event rate with a proportional hazards model for the terminating event hazard. The treatment effects on survival and on the recurrent event rate among survivors, estimated in constructing our measure, explain the mechanism generating the difference under study. The example motivating this research is the repeated occurrence of hospitalization among kidney transplant recipients, where the effect of expanded criteria donor (ECD) versus non-ECD kidney transplantation on the mean number of hospitalizations is of interest.
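The construction integrates the survival curve against the conditional event rate, mu(t) = ∫₀ᵗ S(u) r(u) du. A minimal numerical sketch of that integration step (toy exponential survival curves and constant rates, not the paper's model fits):

```python
import numpy as np

def mean_events(grid, surv, rate):
    """mu(t) = integral_0^t S(u) r(u) du by trapezoidal integration:
    survival probabilities times the recurrent-event rate among survivors.
    Illustrative numerics, not the paper's semiparametric estimator."""
    integrand = surv * rate
    steps = 0.5 * (integrand[1:] + integrand[:-1]) * np.diff(grid)
    return np.concatenate([[0.0], np.cumsum(steps)])

grid = np.linspace(0, 5, 501)
mu_trt = mean_events(grid, np.exp(-0.10 * grid), np.full_like(grid, 1.2))
mu_ctl = mean_events(grid, np.exp(-0.15 * grid), np.full_like(grid, 1.5))
print(mu_ctl[-1] - mu_trt[-1])   # cumulative difference in mean events by t = 5
```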

7.
Recurrent event data arise in longitudinal follow-up studies, where each subject may experience the same type of event repeatedly. The work in this article is motivated by data from a study of repeated peritonitis in patients on peritoneal dialysis. For medical and cost reasons, the peritonitis cases were classified into two types: Gram-positive and non-Gram-positive peritonitis. Further, since death and hemodialysis therapy preclude the occurrence of further recurrent events, we face multivariate recurrent event data with a dependent terminal event. We propose a flexible marginal model with three characteristics: first, we assume marginal proportional hazards and proportional rates models for the terminal event time and the recurrent event processes, respectively; second, the dependence among recurrences and the correlation between the multivariate recurrent event processes and the terminal event time are modeled through three multiplicative frailties corresponding to the specified marginal models; third, the rate model with frailties for recurrent events is specified only on the time before the terminal event. We propose a two-stage procedure for estimating the unknown parameters and establish the consistency of the two-stage estimator. Simulation studies show that the proposed approach is appropriate for practical use. The methodology is applied to the peritonitis cohort data that motivated this study.

8.
In longitudinal studies of disease, patients may experience several events over the follow-up period. In these studies, the sequentially ordered events are often of interest and lead to problems that have received much attention recently. Issues of interest include the estimation of bivariate survival, marginal distributions, and the conditional distribution of gap times. In this work, we consider the estimation of the survival function conditional on a previous event. Different nonparametric approaches are considered for estimating these quantities, all based on the Kaplan–Meier estimator of the survival function. We explore the finite-sample behavior of the estimators through simulations. The methods proposed in this article are applied to a dataset from a German breast cancer study, where they are used to obtain predictors of the conditional survival probabilities and to study the influence of recurrence on overall survival.
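For the simplest conditional quantity, survival beyond t given survival to s, the Kaplan–Meier estimator gives S(t)/S(s) directly. A minimal sketch with lifelines on invented toy data (not the article's gap-time estimators, which condition on a previous event rather than on survival alone):

```python
import numpy as np
from lifelines import KaplanMeierFitter

rng = np.random.default_rng(1)
time = rng.exponential(10, size=200)
event = rng.binomial(1, 0.8, size=200)      # 1 = event observed

# Conditional survival P(T > t | T > s) from the Kaplan-Meier curve: S(t)/S(s).
kmf = KaplanMeierFitter().fit(time, event)
s, t = 5.0, 15.0
cond = (kmf.survival_function_at_times(t).iloc[0]
        / kmf.survival_function_at_times(s).iloc[0])
print(f"P(T > {t} | T > {s}) ~= {cond:.3f}")
```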

9.
In this article, we propose a class of semiparametric transformation rate models for recurrent event data subject to right censoring and potentially stopped by a terminating event (e.g., death). These transformation models include both the additive rates model and the proportional rates model as special cases. Respecting the property that no recurrent events can occur after the terminating event, we model the conditional recurrent event rate given survival. Weighted estimating equations are constructed to estimate the regression coefficients and the baseline rate function; in particular, the baseline rate function is approximated by wavelet functions. Asymptotic properties of the proposed estimators are derived, and a data-dependent criterion is proposed for selecting the most suitable transformation. Simulation studies show that the proposed estimators perform well for practical sample sizes. The proposed methods are applied to two real-data examples: a randomized trial of rhDNase and a community trial of vitamin A.

10.
Many biological or medical experiments aim to estimate the survival function of a specified population of subjects when the time to the event of interest may be censored due to loss to follow-up, the occurrence of another event that precludes the event of interest, or termination of the study before the event occurs. This paper suggests an improvement of the Kaplan-Meier product-limit estimator when the censoring mechanism is random. The proposed estimator treats the uncensored observations nonparametrically and uses a parametric model only for the censored observations. One version of this proposed estimator always has smaller bias and mean squared error than the product-limit estimator. An example estimating the survival function of patients enrolled in the Ohio State University Bone Marrow Transplant Program is presented.
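One way to read "parametric only for the censored observations" is that a subject censored at c contributes its parametric conditional tail probability P(T > t | T > c). A speculative sketch of such a hybrid, assuming an exponential tail model (the paper's estimator and its bias corrections differ in detail):

```python
import numpy as np

def hybrid_survival(t, time, delta, rate):
    """Uncensored subjects contribute empirically; a subject censored at c
    contributes the exponential tail P(T > t | T > c) = exp(-rate*(t - c)).
    Speculative reading of the hybrid idea -- the paper's estimator and its
    bias corrections differ in detail."""
    uncens = (delta == 1) & (time > t)                       # events past t
    tail = np.exp(-rate * np.maximum(t - time[delta == 0], 0.0))
    return (uncens.sum() + tail.sum()) / len(time)

time = np.array([2.0, 5.0, 3.0, 8.0, 1.0])
delta = np.array([1, 0, 1, 0, 1])              # 0 = censored
print(hybrid_survival(4.0, time, delta, rate=0.2))
```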

11.
In many clinical trials and evaluations using medical care administrative databases, it is of interest to estimate not only the survival time under a given treatment modality but also the total associated cost. The most widely used estimator for data subject to censoring is the Kaplan-Meier (KM) or product-limit (PL) estimator, whose optimality properties for time-to-event data (consistency, etc.) under random censorship have been established. However, whenever the relationship between cost and survival time includes an error term to account for random differences among patients' costs, the dependence between cumulative treatment cost at the time of censoring and at the survival time causes the KM estimator to give biased estimates. A similar phenomenon has previously been noted in the context of estimating quality-adjusted survival time. We propose an estimator for mean cost that exploits the underlying relationship between total treatment cost and survival time. The proposed method uses either parametric or nonparametric regression to estimate this relationship and is consistent when the relationship is consistently estimated. We present simulation results illustrating the gain in finite-sample efficiency over another recently proposed estimator. The methods are then applied to the estimation of mean cost in two studies with right-censoring: the heart failure clinical trial Studies of Left Ventricular Dysfunction (SOLVD) and a Health Maintenance Organization (HMO) database study of the cost of ulcer treatment.

12.
The Cox hazards model (Cox, 1972, Journal of the Royal Statistical Society, Series B 34, 187–220) for survival data is routinely used in many applied fields, sometimes, however, with too little emphasis on the fit of the model. A useful alternative to the Cox model is the Aalen additive hazards model (Aalen, 1980, in Lecture Notes in Statistics 2, 1–25), which easily accommodates time-changing covariate effects. It is of interest to decide which of the two models is more appropriate in a given application. This is a nontrivial problem, as the two classes of models are nonnested except in special cases. In this article we explore the Mizon–Richard encompassing test for this particular problem. It turns out to correspond to fitting the Aalen model to the martingale residuals obtained from the Cox regression analysis. We also consider a variant of this method, which relates to the proportional excess model (Martinussen and Scheike, 2002, Biometrika 89, 283–298). Large sample properties of the suggested methods under the two rival models are derived. The finite-sample properties of the proposed procedures are assessed through a simulation study, and the methods are further applied to the well-known primary biliary cirrhosis data set.

13.
In follow-up studies, the disease event time can be subject to left truncation and right censoring. Furthermore, medical advancements have made it possible for patients to be cured of certain types of diseases. In this article, we consider a semiparametric mixture cure model for the regression analysis of left-truncated and right-censored data. The model combines a logistic regression for the probability of event occurrence with the class of transformation models for the time of occurrence. We investigate two techniques for estimating the model parameters. The first approach is based on martingale estimating equations (EEs); the second is based on the conditional likelihood function given the truncation variables. The asymptotic properties of both proposed estimators are established. Simulation studies indicate that the conditional maximum-likelihood estimator (cMLE) performs well, while the estimator based on EEs is very unstable even though it is shown to be consistent. This is a special and intriguing phenomenon for the EE approach under the cure model. We provide insights into this issue and find that the EE approach can be improved significantly by assigning appropriate weights to the censored observations in the EEs. This finding is useful for overcoming the instability of the EE approach in more complicated situations where the likelihood approach is not feasible. We illustrate the proposed estimation procedures by analyzing the age at onset of the occiput-wall distance event for patients with ankylosing spondylitis.
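The mixture cure structure itself is compact: a logistic model decides whether the event ever occurs, and a latency distribution governs when. A minimal sketch of the population survival function it implies (parameter names and the toy latency are mine; the paper pairs this with transformation models and truncation-aware likelihoods):

```python
import numpy as np

def mixture_cure_survival(t, x, gamma, latency_surv):
    """Population survival under a mixture cure model:
    S_pop(t | x) = (1 - p(x)) + p(x) * S_u(t), with p(x) the logistic
    probability of being susceptible and S_u the latency survival.
    Parameter names and the toy latency below are illustrative."""
    p = 1.0 / (1.0 + np.exp(-(x @ gamma)))     # P(event ever occurs | x)
    return (1.0 - p) + p * latency_surv(t)

latency = lambda t: np.exp(-0.3 * t)           # toy latency distribution
print(mixture_cure_survival(5.0, np.array([1.0, 0.5]),
                            np.array([-0.2, 0.8]), latency))
```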

14.
Jiang H, Fine JP, Chappell R. Biometrics 2005, 61(2): 567–575
Studies of chronic life-threatening diseases often involve both mortality and morbidity. In observational studies, the data may also be subject to administrative left truncation and right censoring. Because mortality and morbidity may be correlated and mortality may censor morbidity, the Lynden-Bell estimator for left-truncated and right-censored data may be biased for estimating the marginal survival function of the non-terminal event. We propose a semiparametric estimator for this survival function based on a joint model for the two time-to-event variables, which utilizes the gamma frailty specification in the region of the observable data. First, we develop a novel estimator for the gamma frailty parameter under left truncation. Using this estimator, we then derive a closed-form estimator for the marginal distribution of the non-terminal event. The large sample properties of the estimators are established via asymptotic theory. The methodology performs well with moderate sample sizes, both in simulations and in an analysis of data from a diabetes registry.

15.
We consider variable selection in the Cox regression model (Cox, 1975, Biometrika 62, 269–276) with covariates missing at random. We investigate the smoothly clipped absolute deviation (SCAD) penalty and the adaptive least absolute shrinkage and selection operator (LASSO) penalty, and propose a unified model selection and estimation procedure. A computationally attractive algorithm is developed, which simultaneously optimizes the penalized likelihood function and the penalty parameters. We also optimize a model selection criterion, the ICQ statistic (Ibrahim, Zhu, and Tang, 2008, Journal of the American Statistical Association 103, 1648–1658), to estimate the penalty parameters, and show that it consistently selects all important covariates. Simulations are performed to evaluate the finite-sample performance of the penalized estimates, and two lung cancer data sets are analyzed to demonstrate the proposed methodology.
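As a rough stand-in for the penalized Cox fits described above, an L1-penalized Cox model can be fit directly in lifelines (SCAD and the adaptive LASSO are not implemented there, and this sketch ignores the missing-covariate machinery entirely):

```python
from lifelines import CoxPHFitter
from lifelines.datasets import load_rossi

# Pure-L1 penalized Cox regression on a bundled example dataset.
df = load_rossi()
cph = CoxPHFitter(penalizer=0.1, l1_ratio=1.0)   # l1_ratio=1.0 -> LASSO penalty
cph.fit(df, duration_col="week", event_col="arrest")
print(cph.params_.round(3))    # coefficients shrunk to ~0 suggest exclusion
```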

16.
Multivariate recurrent event data are often encountered in clinical and longitudinal studies in which each study subject may experience multiple recurrent events. Most existing approaches for analyzing such data assume that the censoring times are noninformative, which may not be true, especially when the observation of recurrent events is terminated by a failure event. In this article, we consider regression analysis of multivariate recurrent event data with both time-dependent and time-independent covariates, where the censoring times and the recurrent event process are allowed to be correlated via a frailty. The proposed joint model is flexible in that the distributions of both the censoring and frailty variables are left unspecified. We propose a pairwise pseudolikelihood approach and an estimating equation-based approach for estimating the coefficients of time-dependent and time-independent covariates, respectively. The large-sample properties of the proposed estimates are established, while the finite-sample properties are demonstrated by simulation studies. The proposed methods are applied to the analysis of a set of bivariate recurrent event data from a study of platelet transfusion reactions.

17.
Stare J, Perme MP, Henderson R. Biometrics 2011, 67(3): 750–759
There is no shortage of proposed measures of the prognostic value of survival models in the statistical literature. They come under different names, including explained variation, correlation, explained randomness, and information gain, but their goal is common: to define something analogous to the coefficient of determination R² in linear regression. None, however, has been uniformly accepted, none has been extended to general event history data, including recurrent events, and many cannot incorporate time-varying effects or covariates. We present here a measure specifically tailored for use with general dynamic event history regression models. The measure is applicable and interpretable in discrete or continuous time; with tied data or otherwise; with time-varying, time-fixed, or dynamic covariates; with time-varying or time-constant effects; with single or multiple event times; with parametric or semiparametric models; and under general independent censoring/observation. For single-event survival data with neither censoring nor time dependency, it reduces to the concordance index. We give expressions for its population value and the variance of the estimator, and explore its use in simulations and applications. A web link to R software is provided.
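The no-censoring, single-event special case the abstract mentions, the concordance index, is easy to compute directly; a toy check with lifelines (the data here are invented for illustration):

```python
import numpy as np
from lifelines.utils import concordance_index

event_time = np.array([5.0, 3.0, 9.0, 2.0, 7.0])
risk_score = np.array([0.8, 1.2, 0.3, 1.5, 0.6])   # higher = worse prognosis
# Negate so that a higher score corresponds to a shorter predicted survival.
c = concordance_index(event_time, -risk_score)
print(f"c-index = {c:.2f}")    # 1.00 here: rankings are perfectly concordant
```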

18.
Lin JS, Wei LJ. Biometrics 1992, 48(3): 679–681
In this note we consider the problem of drawing inference about the regression parameters in a linear model with survival data. A simple procedure based on the Buckley-James (1979, Biometrika 66, 429-436) estimating equation is proposed and illustrated with an example.
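For context, the Buckley-James scheme iterates between imputing censored log-responses by their conditional means under the Kaplan-Meier estimate of the residual distribution and refitting least squares. A compact hedged sketch of that iteration (simplified convergence and tie handling; not the note's exact procedure):

```python
import numpy as np

def buckley_james(logY, X, delta, n_iter=20):
    """Iterate: (1) Kaplan-Meier of the current residuals, (2) replace each
    censored log-response by its conditional mean given exceeding the
    observed residual, (3) refit least squares."""
    beta = np.linalg.lstsq(X, logY, rcond=None)[0]
    for _ in range(n_iter):
        e = logY - X @ beta
        order = np.argsort(e)
        e_s, d_s = e[order], delta[order]
        n = len(e)
        at_risk = n - np.arange(n)
        surv = np.cumprod(1.0 - d_s / at_risk)            # KM of residuals
        jump = np.concatenate([[1.0], surv[:-1]]) - surv  # mass at each event
        y_star = logY.copy()
        for i in np.where(delta == 0)[0]:                 # impute censored cases
            mask = (e_s > e[i]) & (d_s == 1)
            w = jump[mask]
            if w.sum() > 0:
                y_star[i] = X[i] @ beta + np.sum(w * e_s[mask]) / w.sum()
        beta = np.linalg.lstsq(X, y_star, rcond=None)[0]
    return beta
```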

19.
Kang S, Cai J. Biometrics 2009, 65(2): 405–414
A retrospective dental study was conducted to evaluate the degree to which pulpal involvement affects tooth survival. Due to the clustering of teeth, the survival times within each subject could be correlated, and thus conventional methods for case–control studies cannot be directly applied. In this article, we propose a marginal model approach for this type of correlated case–control within-cohort data. Weighted estimating equations are proposed for estimating the regression parameters, and different types of weights are considered for improving efficiency. Asymptotic properties of the proposed estimators are investigated, and their finite-sample properties are assessed via simulation studies. The proposed method is applied to the aforementioned dental study.

20.
Zhao H, Tsiatis AA. Biometrics 1999, 55(4): 1101–1107
Quality of life is an important aspect of the evaluation of clinical trials in chronic diseases, such as cancer and AIDS. Quality-adjusted survival analysis is a method that combines both the quantity and the quality of a patient's life into a single measure. In this paper, we discuss the efficiency of weighted estimators for the distribution of quality-adjusted survival time. Using the general representation theorem for missing data processes, we derive an estimator that is more efficient than the one proposed in Zhao and Tsiatis (1997, Biometrika 84, 339-348). Simulation experiments are conducted to assess the small-sample properties of this estimator and to compare it with the semiparametric efficiency bound. The value of the estimator is demonstrated in an application of the method to a data set from a breast cancer clinical trial.
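The baseline weighted estimator in this line of work averages the observed quality-adjusted survival of uncensored subjects, inverse-weighted by the Kaplan-Meier estimate of the censoring distribution. A minimal sketch of that basic estimator on simulated data (the paper's contribution is a more efficient augmented version):

```python
import numpy as np
from lifelines import KaplanMeierFitter

def ipcw_mean_qas(qas, time, delta):
    """Average delta_i * Q_i / K(T_i), with K the Kaplan-Meier estimate of
    the censoring survival function.  Basic weighted estimator only; the
    paper derives a more efficient version via the representation theorem."""
    kmf = KaplanMeierFitter().fit(time, 1 - delta)      # censoring distribution
    K = np.maximum(kmf.survival_function_at_times(time).to_numpy(), 1e-8)
    return np.mean(delta * qas / K)

rng = np.random.default_rng(2)
t, c = rng.exponential(5, 100), rng.exponential(8, 100)
time, delta = np.minimum(t, c), (t <= c).astype(float)
qas = 0.8 * time                                        # toy quality weights
print(ipcw_mean_qas(qas, time, delta))
```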
