Similar Articles
20 similar articles found (search time: 15 ms)
1.
On semiparametric transformation cure models   (Total citations: 4; self-citations: 0; cited by others: 4)
Lu Wenbin, Ying Zhiliang. Biometrika 2004, 91(2):331-343

2.
Mahé C, Chevret S. Biometrics 1999, 55(4):1078-1084
Multivariate failure time data are frequently encountered in longitudinal studies when subjects may experience several events or when individuals are grouped into clusters. To take into account the dependence of the failure times within a unit (the individual or the cluster), as well as censoring, two multivariate generalizations of the Cox proportional hazards model are commonly used. The marginal hazard model is used when the purpose is to estimate mean regression parameters, while the frailty model is retained when the purpose is to assess the degree of dependence within the unit. We propose a new approach based on a combination of the two aforementioned models to estimate both quantities. This two-step estimation procedure is quicker and simpler to implement than the EM algorithm used for frailty model estimation. Simulation results illustrate the robustness, consistency, and large-sample properties of the estimators. Finally, the method is applied to a diabetic retinopathy study to assess the effect of photocoagulation in delaying the onset of blindness, as well as the dependence between the blindness times of a patient's two eyes.
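The first step of such a two-step procedure fits a marginal Cox model. As a rough, self-contained illustration of what that involves, here is a minimal numpy-only sketch that maximizes the Cox partial likelihood for a single covariate by grid search (toy data, no tied event times; the paper's actual procedure also requires robust variance estimation and a second frailty step, which this omits):

```python
import numpy as np

def neg_log_partial_likelihood(beta, time, event, x):
    """Negative Cox partial log-likelihood for a single covariate.
    Assumes no tied event times (Breslow ties handling would be
    needed otherwise)."""
    order = np.argsort(time)
    time, event, x = time[order], event[order], x[order]
    eta = beta * x
    # risk-set sums: sum of exp(eta_j) over subjects still at risk at each time
    risk_sums = np.cumsum(np.exp(eta)[::-1])[::-1]
    return -np.sum(event * (eta - np.log(risk_sums)))

# step one: crude grid-search maximization (a stand-in for Newton-Raphson)
time  = np.array([5.0, 8.0, 2.0, 9.0, 4.0])
event = np.array([1, 0, 1, 1, 1])
x     = np.array([1.0, 0.0, 1.0, 0.0, 0.0])
grid = np.linspace(-3, 3, 601)
vals = np.array([neg_log_partial_likelihood(b, time, event, x) for b in grid])
beta_hat = grid[np.argmin(vals)]
```

At `beta = 0` the partial likelihood reduces to the product of reciprocal risk-set sizes over the event times, which gives a quick sanity check on the implementation.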

3.
Dahlberg SE, Wang M. Biometrics 2007, 63(4):1237-1244
We propose a semiparametric method for the analysis of masked-cause failure data that are also subject to a cure. We present estimators for the failure time distribution, the cure rate, and the covariate effects on each, assuming a proportional hazards cure model for the time to the event of interest, and we use the expectation-maximization (EM) algorithm to maximize the likelihood. The method is applied to data from a breast cancer clinical trial.

4.
5.
Shared frailty models for recurrent events and a terminal event   (Total citations: 1; self-citations: 0; cited by others: 1)
Liu L, Wolfe RA, Huang X. Biometrics 2004, 60(3):747-756
There has been an increasing interest in the analysis of recurrent event data (Cook and Lawless, 2002, Statistical Methods in Medical Research 11, 141-166). In many situations, a terminating event such as death can happen during the follow-up period and preclude further occurrence of the recurrent events. Furthermore, the death time may depend on the recurrent event history. In this article we consider frailty proportional hazards models for the recurrent and terminal event processes. The dependence is modeled by conditioning on a shared frailty that is included in both hazard functions, and covariate effects can be accommodated as well. Maximum likelihood estimation and inference are carried out through a Monte Carlo EM algorithm with a Metropolis-Hastings sampler in the E-step. An analysis of hospitalization and death data for waitlisted dialysis patients illustrates the proposed methods, and methods for checking the validity of the model are also demonstrated. This model avoids the difficulties encountered in alternative approaches that attempt to specify a dependent joint distribution with marginal proportional hazards, and it yields an estimate of the degree of dependence.
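The data structure this model targets can be illustrated with a toy simulation (not the authors' estimation procedure): a shared gamma frailty multiplies both the recurrent-event rate and the death hazard, so high-frailty subjects both recur more often and die sooner. All rates and parameter values below are illustrative assumptions:

```python
import numpy as np

def simulate_shared_frailty(n, theta=0.5, rec_rate=1.0, death_rate=0.2,
                            followup=5.0, seed=0):
    """Simulate (follow-up time, death indicator, recurrence count)
    triples with a shared gamma frailty u (mean 1, variance theta)
    multiplying both the recurrent-event rate and the death hazard."""
    rng = np.random.default_rng(seed)
    records = []
    for _ in range(n):
        u = rng.gamma(shape=1.0 / theta, scale=theta)    # E[u]=1, Var[u]=theta
        death = rng.exponential(1.0 / (u * death_rate))  # terminal event time
        stop = min(death, followup)                      # censor at end of follow-up
        n_rec = rng.poisson(u * rec_rate * stop)         # recurrences before stop
        records.append((stop, int(death <= followup), n_rec))
    return records
```

Setting `theta = 0` degenerates to independent recurrent and terminal processes; larger `theta` induces stronger positive dependence.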

6.
Case-cohort designs and analysis for clustered failure time data   (Total citations: 1; self-citations: 0; cited by others: 1)
Lu SE, Shih JH. Biometrics 2006, 62(4):1138-1148
The case-cohort design is an efficient and economical way to study risk factors for an infrequent disease in a large cohort. It involves collecting covariate data from all failures ascertained throughout the entire cohort, and from the members of a random subcohort selected at the onset of follow-up. The case-cohort design has been studied extensively in the literature, but exclusively for univariate failure time data. In this article, we propose case-cohort designs adapted to multivariate failure time data. An estimation procedure based on the independence working model approach is used to estimate the regression parameters in the marginal proportional hazards model, where the correlation structure between individuals within a cluster is left unspecified. Statistical properties of the proposed estimators are developed, and their performance and relative statistical efficiency are investigated through simulation studies. A data example from the Translating Research into Action for Diabetes (TRIAD) study illustrates the proposed methodology.
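The sampling scheme itself is simple to sketch. The following hedged example (my own illustration, not the authors' code) builds the boolean selection mask for a case-cohort sample, keeping every failure plus a random subcohort drawn at baseline; in the clustered extension the paper considers, sampling would be applied at the cluster level rather than per individual:

```python
import numpy as np

def case_cohort_mask(event, subcohort_frac=0.15, seed=0):
    """Boolean mask for a case-cohort sample: covariates are collected
    for every failure (event == 1) plus a random subcohort chosen at
    baseline with probability subcohort_frac per subject."""
    event = np.asarray(event)
    rng = np.random.default_rng(seed)
    in_subcohort = rng.random(event.shape[0]) < subcohort_frac
    return in_subcohort | (event == 1)
```

Only the subjects selected by the mask need expensive covariate ascertainment, which is the source of the design's economy.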

7.
8.
Additive hazards regression with current status data   (Total citations: 5; self-citations: 0; cited by others: 5)

9.
Semiparametric analysis of the additive risk model   (Total citations: 11; self-citations: 0; cited by others: 11)
Lin D. Y., Ying Zhiliang. Biometrika 1994, 81(1):61-71

10.
Semiparametric regression analysis for clustered failure time data   (Total citations: 1; self-citations: 0; cited by others: 1)
Cai T., Wei L. J., Wilcox M. Biometrika 2000, 87(4):867-878

11.
12.
On the linear transformation model for censored data   (Total citations: 1; self-citations: 0; cited by others: 1)
Fine J. P., Ying Z., Wei L. J. Biometrika 1998, 85(4):980-986

13.
14.
15.
Testing the proportional odds model under random censoring   (Total citations: 1; self-citations: 0; cited by others: 1)

16.
Wang H, Zhao H. Biometrics 2006, 62(2):570-575
With medical costs escalating in recent years, cost analyses are increasingly conducted to assess the economic impact of new treatment options. The incremental cost-effectiveness ratio (ICER) measures the additional cost of a new treatment for each additional unit of effectiveness, such as one year of life saved. In this article, we consider cost-effectiveness analysis for new treatments evaluated in a randomized clinical trial setting with staggered entry, where, in particular, the censoring times differ for cost and survival data. We propose a method for estimating the ICER and obtaining its confidence interval when such differential censoring exists, and we evaluate the method through simulation experiments. We also apply the method to a clinical trial comparing the cost-effectiveness of implanted defibrillators with conventional therapy for individuals with reduced left ventricular function after myocardial infarction.
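The ICER itself is just a ratio of mean differences. The sketch below computes it and, as a simple stand-in for the paper's censoring-adjusted interval, a naive percentile-bootstrap confidence interval that resamples (cost, effectiveness) pairs jointly within each arm; this does not handle the differential censoring that is the paper's actual contribution:

```python
import numpy as np

def icer(cost_trt, eff_trt, cost_ctl, eff_ctl):
    """Incremental cost-effectiveness ratio: extra cost per extra
    unit of effectiveness (e.g., per life-year gained)."""
    return (np.mean(cost_trt) - np.mean(cost_ctl)) / \
           (np.mean(eff_trt) - np.mean(eff_ctl))

def icer_bootstrap_ci(cost_trt, eff_trt, cost_ctl, eff_ctl,
                      n_boot=2000, alpha=0.05, seed=0):
    """Percentile-bootstrap CI for the ICER, resampling (cost, eff)
    pairs jointly within each arm to preserve their correlation.
    Naive version: assumes complete (uncensored) data."""
    rng = np.random.default_rng(seed)
    reps = np.empty(n_boot)
    for b in range(n_boot):
        i = rng.integers(0, len(cost_trt), len(cost_trt))
        j = rng.integers(0, len(cost_ctl), len(cost_ctl))
        reps[b] = icer(cost_trt[i], eff_trt[i], cost_ctl[j], eff_ctl[j])
    return np.percentile(reps, [100 * alpha / 2, 100 * (1 - alpha / 2)])
```

Note that ratio statistics like the ICER behave badly when the effectiveness difference is near zero, which is one reason confidence intervals for it need care.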

17.
Brannath W, Mehta CR, Posch M. Biometrics 2009, 65(2):539-546
We provide a method for obtaining confidence intervals, point estimates, and p-values for the primary effect-size parameter at the end of a two-arm group sequential clinical trial in which adaptive changes have been implemented along the way. The method is based on applying the adaptive hypothesis testing procedure of Müller and Schäfer (2001, Biometrics 57, 886-891) to a sequence of dual tests derived from the stage-wise adjusted confidence interval of Tsiatis, Rosner, and Mehta (1984, Biometrics 40, 797-803). In the nonadaptive setting this confidence interval is known to provide exact coverage. In the adaptive setting, exact coverage is guaranteed provided the adaptation takes place at the penultimate stage; in general, all that can be claimed theoretically is that the coverage is conservative. Nevertheless, extensive simulation experiments, supported by an empirical characterization of the conditional error function, demonstrate convincingly that for all practical purposes the coverage is exact and the point estimate is median unbiased. No previous procedure produces confidence intervals and point estimates with these desirable properties in an adaptive group sequential setting. The methodology is illustrated by an application to a clinical trial of deep brain stimulation for Parkinson's disease.

18.
Peng Y, Dear KB. Biometrics 2000, 56(1):237-243
Nonparametric methods have attracted less attention than their parametric counterparts for cure rate analysis. In this paper, we study a general nonparametric mixture model. The proportional hazards assumption is employed in modeling the effect of covariates on the failure time of patients who are not cured. The EM algorithm, the marginal likelihood approach, and multiple imputations are employed to estimate parameters of interest in the model. This model extends models and improves estimation methods proposed by other researchers. It also extends Cox's proportional hazards regression model by allowing a proportion of event-free patients and investigating covariate effects on that proportion. The model and its estimation method are investigated by simulations. An application to breast cancer data, including comparisons with previous analyses using a parametric model and an existing nonparametric model by other researchers, confirms the conclusions from the parametric model but not those from the existing nonparametric model.
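The EM mechanics of mixture cure models can be seen in a much simpler parametric toy version (a stand-in for the paper's semiparametric model, not its method): cure fraction `pi`, exponential latency for the uncured, E-step computing each censored subject's posterior probability of being uncured, and closed-form M-step updates. All starting values and distributional choices below are illustrative assumptions:

```python
import numpy as np

def em_exponential_cure(time, event, n_iter=200):
    """EM for a parametric mixture cure model with population survival
    S(t) = pi + (1 - pi) * exp(-lam * t),
    where pi is the cured fraction and the uncured latency is
    exponential(lam). Returns (pi_hat, lam_hat)."""
    time = np.asarray(time, dtype=float)
    event = np.asarray(event, dtype=int)
    pi, lam = 0.3, 1.0 / time.mean()                 # crude starting values
    for _ in range(n_iter):
        # E-step: posterior probability each subject is uncured
        # (observed failures are uncured with probability 1)
        s_u = np.exp(-lam * time)                    # uncured survival at t
        w = np.where(event == 1, 1.0,
                     (1 - pi) * s_u / (pi + (1 - pi) * s_u))
        # M-step: closed-form updates given the weights
        pi = 1.0 - w.mean()
        lam = event.sum() / (w * time).sum()
    return pi, lam
```

The paper's model replaces the exponential latency with a proportional hazards specification and lets covariates act on both `pi` and the latency distribution.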

19.
Liu M, Lu W, Shao Y. Biometrics 2006, 62(4):1053-1061
Interval mapping using normal mixture models has been an important tool for analyzing quantitative traits in experimental organisms. When the primary phenotype is time-to-event, it is natural to use survival models such as Cox's proportional hazards model instead of normal mixtures to model the phenotype distribution. An extra challenge for modeling time-to-event data is that the underlying population may consist of susceptible and nonsusceptible subjects. In this article, we propose a semiparametric proportional hazards mixture cure model which allows missing covariates. We discuss applications to quantitative trait loci (QTL) mapping when the primary trait is time-to-event from a population of mixed susceptibility. This model can be used to characterize QTL effects on both susceptibility and time-to-event distribution, and to estimate QTL location. The model can naturally incorporate covariate effects of other risk factors. Maximum likelihood estimates for the parameters in the model as well as their corresponding variance estimates can be obtained numerically using an EM-type algorithm. The proposed methods are assessed by simulations under practical settings and illustrated using a real data set containing survival times of mice after infection with Listeria monocytogenes. An extension to multiple intervals is also discussed.

20.
MOTIVATION: Implementation and development of statistical methods for high-dimensional data often require high-dimensional Monte Carlo simulations. Simulations are used to assess performance, evaluate robustness, and in some cases for implementation of algorithms. But simulation in high dimensions is often very complex, cumbersome and slow. As a result, performance evaluations are often limited, robustness minimally investigated and dissemination impeded by implementation challenges. This article presents a method for converting complex, slow high-dimensional Monte Carlo simulations into simpler, faster lower dimensional simulations. RESULTS: We implement the method by converting a previous Monte Carlo algorithm into this novel Monte Carlo, which we call AROHIL Monte Carlo. AROHIL Monte Carlo is shown to exactly or closely match pure Monte Carlo results in a number of examples. It is shown that computing time can be reduced by several orders of magnitude. The confidence bound method implemented using AROHIL outperforms the pure Monte Carlo method. Finally, the utility of the method is shown by application to a number of real microarray datasets.
