Similar Literature
A total of 20 similar records were retrieved.
1.
Chen YQ, Jewell NP, Lei X, Cheng SC. Biometrics 2005, 61(1): 170-178
A mean residual life function is the average remaining life of a surviving subject, as it varies with time. The proportional mean residual life model was proposed by Oakes and Dasu (1990, Biometrika 77, 409-410) as a regression model for studying the association of the mean residual life with covariates in the absence of censoring. In this article, we develop semiparametric estimation procedures that take censoring into account. The proposed methodology is evaluated via simulation studies and further applied to a clinical trial of chemotherapy in postoperative radiotherapy of lung cancer patients.
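For orientation, the Oakes-Dasu proportional mean residual life model is usually written as (standard form; the notation below is assumed rather than quoted from the abstract)

m(t \mid Z) = E[\,T - t \mid T > t, Z\,] = m_0(t)\,\exp(\beta^{\top} Z),

where m_0(t) is an unspecified baseline mean residual life function, Z is the covariate vector, and \beta is the regression parameter that the semiparametric procedures estimate under censoring.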

2.
Estimation in a Cox proportional hazards cure model   (cited 7 times: 0 self-citations, 7 by others)
Sy JP, Taylor JM. Biometrics 2000, 56(1): 227-236
Some failure time data come from a population that consists of some subjects who are susceptible to, and others who are nonsusceptible to, the event of interest. The data typically have heavy censoring at the end of the follow-up period, and a standard survival analysis would not always be appropriate. In such situations, where there is good scientific or empirical evidence of a nonsusceptible population, the mixture or cure model can be used (Farewell, 1982, Biometrics 38, 1041-1046). It assumes a binary distribution to model the incidence probability and a parametric failure time distribution to model the latency. Kuk and Chen (1992, Biometrika 79, 531-541) extended the model by using Cox's proportional hazards regression for the latency. We develop maximum likelihood techniques for the joint estimation of the incidence and latency regression parameters in this model, using the nonparametric form of the likelihood and an EM algorithm. A zero-tail constraint is used to reduce the near nonidentifiability of the problem. The inverse of the observed information matrix is used to compute the standard errors. A simulation study shows that the methods are competitive with the parametric methods under ideal conditions and are generally better when censoring from loss to follow-up is heavy. The methods are applied to a data set of tonsil cancer patients treated with radiation therapy.
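A sketch of the mixture cure model in common notation (assumed here for illustration, not quoted from the paper): the population survival function mixes a nonsusceptible (cured) fraction with a latency distribution,

S_{\mathrm{pop}}(t \mid x, z) = \{1 - \pi(z)\} + \pi(z)\, S_u(t \mid x), \qquad \pi(z) = \frac{\exp(b^{\top} z)}{1 + \exp(b^{\top} z)},

where \pi(z) is the incidence probability of being susceptible and, in the Cox proportional hazards cure model, the latency survival function takes the form S_u(t \mid x) = S_0(t)^{\exp(\beta^{\top} x)}; the article estimates (b, \beta) and the nonparametric baseline jointly via EM.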

3.
G Heller, J S Simonoff. Biometrics 1992, 48(1): 101-115
Although the analysis of censored survival data using the proportional hazards and linear regression models is common, there has been little work examining the ability of these estimators to predict time to failure. This is unfortunate, since a predictive plot illustrating the relationship between time to failure and a continuous covariate can be far more informative regarding the risk associated with the covariate than a Kaplan-Meier plot obtained by discretizing the variable. In this paper the predictive power of the Cox (1972, Journal of the Royal Statistical Society, Series B 34, 187-202) proportional hazards estimator and the Buckley-James (1979, Biometrika 66, 429-436) censored regression estimator is compared. Using computer simulations and heuristic arguments, it is shown that the choice of method depends on the censoring proportion, the strength of the regression, the form of the censoring distribution, and the form of the failure distribution. Several examples are provided to illustrate the usefulness of the methods.
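For context, the Buckley-James procedure is usually described as iterated least squares on imputed responses (a sketch in common notation, assumed here; details are in the cited paper): writing e_i(\beta) = Y_i - X_i^{\top}\beta for the (possibly log-transformed) observed response Y_i and letting \delta_i be the event indicator, each censored response is replaced by an estimate of its conditional expectation,

\hat{Y}_i(\beta) = \delta_i Y_i + (1 - \delta_i)\left\{ X_i^{\top}\beta + \hat{E}[\,e \mid e > e_i(\beta)\,] \right\},

with the residual distribution estimated by the Kaplan-Meier method, after which \beta is updated by least squares on the \hat{Y}_i(\beta) and the two steps are iterated.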

4.
M C Wu, K K Lan. Biometrics 1992, 48(3): 765-779
The spending function approach proposed by Lan and DeMets (1983, Biometrika 70, 659-663) for sequential monitoring of clinical trials is applied to situations where comparison of changes in a continuous response variable between two groups is the primary concern. Death, loss to follow-up, and missed visits could cause follow-up measurements to be right-censored or missing for some participants. Furthermore, the probability of being censored may be dependent on the parameter value of the response variable (informative censoring). We propose to compare treatment effects by comparing areas under the expected response change curves between the two groups. When the response curves are linear as a function of time in both groups, this comparison is equivalent to comparing the rates of change in the response variable. Covariances of the sequential test statistics are derived. Conditions for having independent increments are presented. For studies designed to evaluate long-term treatment effects, spending functions obtained by shifting the usual spending functions (Kim and DeMets, 1987, Biometrika 74, 149-154) to the right and then rescaling to the remaining interval are also proposed. Such a shifted spending function is applied to the monitoring plan for the Lung Health Study (Anthonisen, 1989, American Review of Respiratory Diseases 140, 871-872).
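The shift-and-rescale construction described above can be sketched as follows (notation assumed for illustration): if \alpha(t) is a usual spending function on the information fraction t \in [0, 1] and monitoring of the long-term effect effectively begins at information fraction t_0, then

\alpha^{*}(t) = 0 \ \text{for } t < t_0, \qquad \alpha^{*}(t) = \alpha\!\left(\frac{t - t_0}{1 - t_0}\right) \ \text{for } t_0 \le t \le 1,

so that no type I error is spent before t_0 and the full \alpha is spent over the remaining interval.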

5.
He W, Lawless JF. Biometrics 2003, 59(4): 837-848
This article presents methodology for multivariate proportional hazards (PH) regression models. The methods employ flexible piecewise constant or spline specifications for baseline hazard functions in either marginal or conditional PH models, along with assumptions about the association among lifetimes. Because the models are parametric, ordinary maximum likelihood can be applied; it is able to deal easily with such data features as interval censoring or sequentially observed lifetimes, unlike existing semiparametric methods. The bivariate Clayton (1978, Biometrika 65, 141-151) model is used to illustrate the approach taken. Because a parametric assumption about association is made, efficiency and robustness comparisons are made between estimation based on the bivariate Clayton model and "working independence" methods that specify only marginal distributions for each lifetime variable.
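For reference, the bivariate Clayton copula is commonly written as (standard form, stated for orientation)

C_{\theta}(u, v) = \left(u^{-\theta} + v^{-\theta} - 1\right)^{-1/\theta}, \qquad \theta > 0,

with larger \theta corresponding to stronger positive association between the two lifetimes and \theta \to 0 approaching independence; this is the parametric association assumption whose efficiency and robustness are compared with working-independence estimation.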

6.
Frailty models are useful for measuring unobserved heterogeneity in the risk of failure across clusters, providing cluster-specific risk prediction. In a frailty model, the latent frailties shared by members within a cluster are assumed to act multiplicatively on the hazard function. In order to obtain parameter and frailty variate estimates, we consider the hierarchical likelihood (H-likelihood) approach (Ha, Lee, and Song, 2001, Hierarchical-likelihood approach for frailty models, Biometrika 88, 233-243), in which the latent frailties are treated as "parameters" and estimated jointly with other parameters of interest. We find that the H-likelihood estimators perform well when the censoring rate is low; however, they are substantially biased when the censoring rate is moderate to high. In this paper, we propose a simple and easy-to-implement bias correction method for the H-likelihood estimators under a shared frailty model. We also extend the method to a multivariate frailty model, which incorporates a complex dependence structure within clusters. We conduct an extensive simulation study and show that the proposed approach performs very well for censoring rates as high as 80%. We also illustrate the method with a breast cancer data set. Since the H-likelihood is the same as the penalized likelihood function, the proposed bias correction method is also applicable to the penalized likelihood estimators.
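A shared frailty model of the kind described above is commonly written as (a sketch in standard notation, assumed here)

\lambda_{ij}(t \mid u_i) = \lambda_0(t)\, u_i \exp(x_{ij}^{\top} \beta),

where u_i is the unobserved frailty shared by the members j of cluster i. The H-likelihood then augments the usual log-likelihood with the log-density of the frailties, h = \sum_{i,j} \log f(y_{ij} \mid u_i; \beta, \lambda_0) + \sum_i \log f(u_i; \theta), and is maximized jointly in the fixed parameters and the frailties.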

7.
Tests for monotone mean residual life, using randomly censored data   (cited 1 time: 1 self-citation, 0 by others)
At any age, the mean residual life function gives the expected remaining life at that age. Reliabilists and biometricians have found it useful to categorize failure distributions by the monotonicity properties of the mean residual life function. Hollander and Proschan (1975, Biometrika 62, 585-593) have derived tests of the null hypothesis that the underlying failure distribution is exponential, versus the alternative that it has a monotone mean residual life function. These tests are based on a complete sample. Often, however, data are incomplete because of withdrawals from the study and because of survivors at the time the data are analyzed. In this paper we generalize the Hollander-Proschan tests to accommodate randomly censored data. The efficiency loss due to the presence of censoring is also investigated.
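For orientation, the mean residual life function can be expressed through the survival function S (a standard identity, not specific to the cited tests):

m(t) = E[\,T - t \mid T > t\,] = \frac{\int_t^{\infty} S(u)\, du}{S(t)},

and the exponential null hypothesis corresponds to a constant m(t) \equiv 1/\lambda, so the tests look for monotone departures of m(t) from a constant under random censoring.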

8.
León LF, Tsai CL. Biometrics 2004, 60(1): 75-84
We propose a new type of residual and an easily computed functional form test for the Cox proportional hazards model. The proposed test is a modification of the omnibus test for testing the overall fit of a parametric regression model, developed by Stute, González Manteiga, and Presedo Quindimil (1998, Journal of the American Statistical Association 93, 141-149), and is based on what we call censoring consistent residuals. In addition, we develop residual plots that can be used to identify the correct functional forms of covariates. We compare our test with the functional form test of Lin, Wei, and Ying (1993, Biometrika 80, 557-572) in a simulation study. The practical application of the proposed residuals and functional form test is illustrated using both a simulated data set and a real data set.

9.
Lakhal L, Rivest LP, Abdous B. Biometrics 2008, 64(1): 180-188
Summary. In many follow-up studies, patients are subject to concurrent events. In this article, we consider semicompeting risks data as defined by Fine, Jiang, and Chappell (2001, Biometrika 88, 907–919), where one event is censored by the other but not vice versa. The proposed model involves marginal survival functions for the two events and a parametric family of copulas for their dependency. This article suggests a general method for estimating the dependence parameter when the dependency is modeled with an Archimedean copula. It uses the copula-graphic estimator of Zheng and Klein (1995, Biometrika 82, 127–138) for estimating the survival function of the nonterminal event, subject to dependent censoring. Asymptotic properties of these estimators are derived. Simulations show that the new methods work well with finite samples. The copula-graphic estimator is shown to be more accurate than the estimator proposed by Fine et al. (2001); its performance is similar to that of the self-consistent estimator of Jiang, Fine, Kosorok, and Chappell (2005, Scandinavian Journal of Statistics 33, 1–20). The analysis of a data set, emphasizing the estimation of characteristics of the observable region, is presented as an illustration.
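For reference, an Archimedean copula with generator \varphi takes the form (standard definition; its exact role in the estimating equations follows the cited papers)

C(u, v) = \varphi^{-1}\{\varphi(u) + \varphi(v)\},

so that, under the assumed dependence structure, the probability that neither the nonterminal nor the terminal event has occurred by time t can be written as \varphi^{-1}\{\varphi(S_T(t)) + \varphi(S_C(t))\}, a quantity estimable from the observed data; the copula-graphic estimator exploits this relation to recover the survival function of the nonterminal event under dependent censoring.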

10.
Heinze G, Schemper M. Biometrics 2001, 57(1): 114-119
The phenomenon of monotone likelihood is observed in the fitting process of a Cox model if the likelihood converges to a finite value while at least one parameter estimate diverges to +/- infinity. Monotone likelihood primarily occurs in small samples with substantial censoring of survival times and several highly predictive covariates. Previous options to deal with monotone likelihood have been unsatisfactory. The solution we suggest is an adaptation of a procedure by Firth (1993, Biometrika 80, 27-38) originally developed to reduce the bias of maximum likelihood estimates. This procedure produces finite parameter estimates by means of penalized maximum likelihood estimation. Corresponding Wald-type tests and confidence intervals are available, but it is shown that penalized likelihood ratio tests and profile penalized likelihood confidence intervals are often preferable. An empirical study of the suggested procedures confirms satisfactory performance of both estimation and inference. The advantage of the procedure over previous options of analysis is finally exemplified in the analysis of a breast cancer study.
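The Firth-type correction referred to above is usually presented as maximization of a penalized log (partial) likelihood (standard form, given for orientation)

\ell^{*}(\beta) = \ell(\beta) + \tfrac{1}{2} \log \det I(\beta),

where I(\beta) is the information matrix; the Jeffreys-prior-type penalty keeps the parameter estimates finite even when the unpenalized likelihood is monotone, which is why Wald-type, penalized likelihood ratio, and profile penalized likelihood inference all remain available.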

11.
Cai T, Huang J, Tian L. Biometrics 2009, 65(2): 394-404
Summary. In the presence of high-dimensional predictors, it is challenging to develop reliable regression models that can be used to accurately predict future outcomes. Further complications arise when the outcome of interest is an event time, which is often not fully observed due to censoring. In this article, we develop robust prediction models for event time outcomes by regularizing Gehan's estimator for the accelerated failure time (AFT) model (Tsiatis, 1996, Annals of Statistics 18, 305–328) with the least absolute shrinkage and selection operator (LASSO) penalty. Unlike existing methods based on inverse probability weighting and the Buckley and James estimator (Buckley and James, 1979, Biometrika 66, 429–436), the proposed approach does not require additional assumptions about the censoring and always yields a convergent solution. Furthermore, the proposed estimator leads to a stable regression model for prediction even if the AFT model fails to hold. To facilitate the adaptive selection of the tuning parameter, we detail an efficient numerical algorithm for obtaining the entire regularization path. The proposed procedures are applied to a breast cancer dataset to derive a reliable regression model for predicting patient survival based on a set of clinical prognostic factors and gene signatures. Finite-sample performance of the procedures is evaluated through a simulation study.
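The regularized objective can be sketched as a convex Gehan-type loss plus an L1 penalty (notation assumed here; implementation details follow the cited article): with e_i(\beta) = \log T_i - X_i^{\top}\beta and event indicator \delta_i,

L_n(\beta) = \frac{1}{n^2} \sum_{i=1}^{n} \sum_{j=1}^{n} \delta_i \{ e_i(\beta) - e_j(\beta) \}^{-} + \lambda \sum_{k} |\beta_k|,

where a^{-} = |a|\,1(a < 0) and \lambda is the LASSO tuning parameter chosen along the regularization path; both terms are convex in \beta, which is what makes a convergent solution attainable.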

12.
Wei WH, Su JS. Biometrics 1999, 55(4): 1295-1299
Deletion diagnostics are developed for identifying observations that influence the estimates of regression parameters and the mixture parameter in the families of relative risk functions for failure time data. The diagnostic for the regression parameters is a generalization of Cain and Lange's (1984, Biometrics 40, 493-499) measure of individual influence. The generalizations of martingale residuals, Schoenfeld's partial residuals (1982, Biometrika 69, 239-241), and score residuals by Therneau, Grambsch, and Fleming (1990, Biometrika 77, 147-160) are also obtained. The influence of some observations on regression parameters can be drastically modified as the mixture parameter changes, even for a very small change. In addition, adding or deleting some observations might result in choosing different models. The diagnostics are applied to a family proposed by Guerrero and Johnson (1982, Biometrika 69, 309-314). One illustrative example is presented.
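For orientation, the residuals being generalized have standard Cox-model forms (usual definitions, not the generalized versions developed in the paper): the martingale residual for subject i is

\hat{M}_i = \delta_i - \hat{\Lambda}_0(T_i)\exp(\hat{\beta}^{\top} Z_i),

and the Schoenfeld partial residual at an observed failure time is the difference between the failing subject's covariate and the risk-set weighted average covariate, r_i = Z_i - \bar{Z}(\hat{\beta}, T_i).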

13.
J J Gart, J Nam. Biometrics 1988, 44(2): 323-338
Various methods for finding confidence intervals for the ratio of binomial parameters are reviewed and evaluated numerically. It is found that the method based on likelihood scores (Koopman, 1984, Biometrics 40, 513-517; Miettinen and Nurminen, 1985, Statistics in Medicine 4, 213-226) performs best in achieving the nominal confidence coefficient, but it may distribute the tail probabilities quite disparately. Using the general theory of Bartlett (1953, Biometrika 40, 306-317; 1955, Biometrika 42, 201-203), we correct this method for asymptotic skewness. Following Gart (1985, Biometrika 72, 673-677), we extend this correction to the case of estimating the common ratio in a series of two-by-two tables. Computing algorithms are given and applied to numerical examples. Parallel methods for the odds ratio and the ratio of Poisson parameters are noted.

14.
J O'Quigley, F Pessione. Biometrics 1991, 47(1): 101-115
We introduce a test for the equality of two survival distributions against the specific alternative of crossing hazards. Although this kind of alternative is somewhat rare, designing a test specifically aimed at detecting such departures from the null hypothesis leads to powerful procedures, upon which we can call in those few cases where such departures are suspected. Furthermore, the proposed test and an approximate version of the test are seen to suffer only moderate losses in power, when compared with their optimal counterparts, should the alternative be one of proportional hazards. Our interest in the problem is motivated by clinical studies on the role of acute graft versus host disease as a risk factor in leukemic children, and we discuss the analysis of this study in detail. The model we use in this work is a special case of the one introduced by Anderson and Senthilselvan (1982, Applied Statistics 31, 44-51). We propose overcoming an inferential problem stemming from their model by using the methods of Davies (1977, Biometrika 64, 247-254; 1987, Biometrika 74, 33-43) backed up by resampling techniques. We also look at an approach relying directly on resampling techniques. The distributional aspects of this approach under the null hypothesis are interesting but, practically, its behaviour is such that its use cannot be generally recommended. Outlines of the necessary asymptotic theory are presented, and for this we use the tools of martingale theory.
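One generic way to write a crossing-hazards alternative is a change-point proportional hazards model (a sketch offered only for orientation; the Anderson-Senthilselvan special case used in the paper may differ in its details):

h(t \mid z) = h_0(t)\exp\{\left(\beta_1 1(t \le \gamma) + \beta_2 1(t > \gamma)\right) z\},

where the covariate effect switches at an unknown time \gamma; under the null hypothesis of no effect the change point \gamma is not identified, which is the kind of nonstandard inference problem handled with the methods of Davies and with resampling.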

15.
O'Brien's logit-rank procedure (1978, Biometrics 34, 243-250) is shown to arise as a score test based on the partial likelihood for a proportional hazards model provided the covariate structure is suitably defined. Within this framework the asymptotic properties claimed by O'Brien can be readily deduced and can be seen to be valid under a more general model of censoring than that considered in his paper. More important, perhaps, it is now possible to make a more natural and interpretable generalization to the multiple regression problem than that suggested by O'Brien as a means of accounting for the effects of nuisance covariates. This can be achieved either by modelling or stratification. The proportional hazards framework is also helpful in that it enables us to recognize the logit-rank procedure as being one member of a class of contending procedures. One consequence of this is that the relative efficiencies of any two procedures can be readily evaluated using the results of Lagakos (1988, Biometrika 75, 156-160). Our own evaluations suggest that, for non-time-dependent covariates, a simplification of the logit-rank procedure, leading to considerable reduction in computational complexity, is to be preferred to the procedure originally outlined by O'Brien.

16.
D Zelterman, C T Le. Biometrics 1991, 47(2): 751-755
We examine several tests of homogeneity of the odds ratio in the analysis of 2 x 2 tables arising from epidemiologic 1:R matched case-control studies. The T4 and T5 statistics proposed by Liang and Self (1985, Biometrika 72, 353-358) are unable to detect obvious inhomogeneity in two numerical examples and in simulation studies. The null hypothesis is rejected by the chi-square statistic of Ejigou and McHugh (1984, Biometrika 71, 408-411) and by a new proposed method whose significance level must be simulated.

17.
Song R, Kosorok MR, Cai J. Biometrics 2008, 64(3): 741-750
Summary. Recurrent events data are frequently encountered in clinical trials. This article develops robust covariate-adjusted log-rank statistics for recurrent events data with arbitrary numbers of events under independent censoring, together with the corresponding sample size formula. The proposed log-rank tests are robust with respect to different data-generating processes and are adjusted for predictive covariates. The test reduces to the Kong and Slud (1997, Biometrika 84, 847–862) setting in the case of a single event. The sample size formula is derived based on the asymptotic normality of the covariate-adjusted log-rank statistics under certain local alternatives and a working model for baseline covariates in the recurrent event data context. When the effect size is small and the baseline covariates do not contain significant information about event times, it reduces to the same form as that of Schoenfeld (1983, Biometrics 39, 499–503) for cases of a single event or independent event times within a subject. We carry out simulations to study the control of type I error and to compare the power of several methods in finite samples. The proposed sample size formula is illustrated using data from an rhDNase study.
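The single-event benchmark mentioned above is Schoenfeld's required number of events, usually quoted as (standard form, stated for orientation): with allocation proportions p and 1 - p between the two arms and log hazard ratio \theta,

d = \frac{(z_{1-\alpha/2} + z_{1-\beta})^{2}}{p(1 - p)\,\theta^{2}},

so that power is driven by the number of observed events rather than the number of randomized subjects.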

18.
Investigators of genetic illnesses are currently employing life-table techniques to estimate the lifetime risk of disease and the age-at-onset distribution. This methodology assumes that onset ages are known for affected individuals and that censoring ages are known for unaffected individuals. We extend these methods to incorporate affected individuals with unknown onset ages and unaffected persons with unknown censoring ages and illustrate how conventional life-table methods can produce seriously biased estimates, particularly of lifetime risk. The methodology is not restricted to genetic illnesses and can be applied to more complex illnesses with unknown etiology. We present an example for Huntington disease, which is generally assumed to be a Mendelian autosomal dominant disease, yielding estimates of lifetime risk of .503 +/- .70 and mean onset age of 47.7 +/- 3.1 years for offspring with a single affected parent. When conventional life-table techniques are employed, these estimates are .238 +/- .032 and 43.2 +/- 2.2.

19.
Proportional hazards model with covariates subject to measurement error   (cited 1 time: 0 self-citations, 1 by others)
T Nakamura. Biometrics 1992, 48(3): 829-838
When covariates of a proportional hazards model are subject to measurement error, the maximum likelihood estimates of regression coefficients based on the partial likelihood are asymptotically biased. Prentice (1982, Biometrika 69, 331-342) presents an example of such bias and suggests a modified partial likelihood. This paper applies the corrected score function method (Nakamura, 1990, Biometrika 77, 127-137) to the proportional hazards model when measurement errors are additive and normally distributed. The result allows a simple correction to the ordinary partial likelihood that yields asymptotically unbiased estimates; the validity of the correction is confirmed via a limited simulation study.
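The source of the bias can be sketched as follows (a standard calculation, assumed here rather than quoted from the paper): if the observed covariate is W = X + U with U \sim N(0, \Sigma) independent of X, then

E[\exp(\beta^{\top} W) \mid X] = \exp\left(\beta^{\top} X + \tfrac{1}{2}\beta^{\top} \Sigma \beta\right),

so the naive risk scores entering the partial likelihood are systematically inflated; the corrected score idea is to construct an estimating function whose conditional expectation, given the true covariates, equals the usual partial-likelihood score, which yields asymptotically unbiased estimates.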

20.
Matsui S. Biometrics 2004, 60(4): 965-976
This article develops randomization-based methods for times to repeated events in two-arm randomized trials with noncompliance and dependent censoring. Structural accelerated failure time models are assumed to capture causal effects on repeated event times and the dependent censoring time, but the dependence structure among the repeated event times and the dependent censoring time is unspecified. Artificial censoring techniques to accommodate nonrandom noncompliance and dependent censoring are proposed. Estimation of the acceleration parameters is based on rank-based estimating functions. A simulation study is conducted to evaluate the performance of the developed methods. An illustration of the methods using data from an acute myeloid leukemia trial is provided.
