Similar articles (20 results)
1.
Recurrent event data arise in longitudinal follow-up studies, where each subject may experience the same type of event repeatedly. The work in this article is motivated by data from a study of repeated peritonitis in patients on peritoneal dialysis. For medical and cost reasons, the peritonitis cases were classified into two types: Gram-positive and non-Gram-positive. Further, since death and hemodialysis therapy preclude the occurrence of further recurrent events, we face multivariate recurrent event data with a dependent terminal event. We propose a flexible marginal model with three characteristics: first, we assume marginal proportional hazards and proportional rates models for the terminal event time and the recurrent event processes, respectively; second, the inter-recurrence dependence and the correlation between the multivariate recurrent event processes and the terminal event time are modeled through three multiplicative frailties corresponding to the specified marginal models; third, the rate model with frailties for recurrent events is specified only on the time before the terminal event. We propose a two-stage procedure for estimating the unknown parameters and establish the consistency of the two-stage estimator. Simulation studies show that the proposed approach is appropriate for practical use. The methodology is applied to the peritonitis cohort data that motivated this study.
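The paper's three-frailty construction is not reproduced here, but the basic multiplicative-frailty mechanism it builds on is easy to illustrate: a subject-specific frailty multiplies the baseline event rate, which induces within-subject correlation and overdispersion. A minimal simulation sketch (all rates and the gamma-frailty choice are invented for illustration, not taken from the study):

```python
import numpy as np

def simulate_frailty_counts(n_subjects, base_rate, tau, frailty_var, seed=0):
    """Simulate recurrent-event counts over [0, tau]: each subject's event
    rate is Z * base_rate, where Z is a gamma frailty with mean 1 and
    variance frailty_var, so events within a subject are correlated."""
    rng = np.random.default_rng(seed)
    z = rng.gamma(1.0 / frailty_var, frailty_var, size=n_subjects)  # mean 1
    # Conditional on Z, the count over [0, tau] is Poisson(Z * base_rate * tau)
    return rng.poisson(z * base_rate * tau)

counts = simulate_frailty_counts(20000, base_rate=0.5, tau=4.0, frailty_var=0.8)
# marginal mean = base_rate * tau = 2; the frailty inflates the variance to
# mean + frailty_var * mean^2 = 2 + 0.8 * 4 = 5.2
```

The inflated variance relative to a plain Poisson process is exactly the dependence that the frailty terms in the marginal models are meant to absorb.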

2.
Here we consider a competing risks model where the two risks of interest are not independent. The dependence is due to the additive effect of an independent contaminating risk on two initially independent risks. The problem is identifiable when the three risks follow independent exponential distributions, and also when the two initial risks follow a proportional hazards model. Procedures are suggested for estimating and testing hypotheses about the parameters of the three exponentials in the first case, and about the constant of proportionality in the second case, when the available information consists of the times to death and the causes of death of the individuals.
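For the fully independent exponential special case mentioned above, the cause-specific rates are identifiable from (time to death, cause of death) data, and the maximum likelihood estimates take the classical closed form: the number of deaths from cause j divided by total follow-up time. A sketch with simulated data (the rates and sample size are invented; this is the textbook independent case, not the paper's contaminated-risk estimator):

```python
import numpy as np

def exp_competing_risks_mle(times, causes, n_causes=3):
    """MLE of cause-specific rates under independent exponential risks:
    the observed time is the minimum of the latent times, and the cause
    records which risk fired first.  lambda_j_hat = d_j / sum(times)."""
    times = np.asarray(times, dtype=float)
    causes = np.asarray(causes)
    total_time = times.sum()
    return np.array([(causes == j).sum() / total_time for j in range(n_causes)])

# Simulated check with true rates (0.5, 1.0, 1.5)
rng = np.random.default_rng(1)
lam = np.array([0.5, 1.0, 1.5])
latent = rng.exponential(1.0 / lam, size=(50000, 3))
t = latent.min(axis=1)
c = latent.argmin(axis=1)
est = exp_competing_risks_mle(t, c)
```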

4.
Zhiguo Li, Peter Gilbert, Bin Nan. Biometrics 2008, 64(4):1247–1255
Summary. Grouped failure time data arise often in HIV studies. In a recent preventive HIV vaccine efficacy trial, immune responses generated by the vaccine were measured from a case–cohort sample of vaccine recipients, who were subsequently evaluated for the study endpoint of HIV infection at prespecified follow-up visits. Gilbert et al. (2005, Journal of Infectious Diseases 191, 666–677) and Forthal et al. (2007, Journal of Immunology 178, 6596–6603) analyzed the association between the immune responses and HIV incidence with a Cox proportional hazards model, treating the HIV infection diagnosis time as a right-censored random variable. The data, however, are of the form of grouped failure time data with case–cohort covariate sampling, and we propose an inverse selection probability-weighted likelihood method for fitting the Cox model to these data. The method allows covariates to be time dependent, and uses multiple imputation to accommodate covariate data that are missing at random. We establish asymptotic properties of the proposed estimators, and present simulation results showing their good finite-sample performance. We apply the method to the HIV vaccine trial data, showing that higher antibody levels are associated with a lower hazard of HIV infection.
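The paper's weighted likelihood itself is not reproduced here, but the inverse selection probability weights it is built on have a simple generic form under case–cohort sampling: cases are sampled with probability 1, while non-cases are observed only if they fall in the random subcohort. A minimal sketch (the sampling fraction and data are illustrative assumptions, not the trial's):

```python
import numpy as np

def case_cohort_weights(is_case, in_subcohort, subcohort_frac):
    """Inverse selection probability weights for a case-cohort sample.
    Cases are always sampled (weight 1); non-cases are observed only if
    they fell in the random subcohort, sampled with probability
    subcohort_frac (weight 1/subcohort_frac); everyone else contributes
    weight 0 because they are not in the analysis sample."""
    is_case = np.asarray(is_case, dtype=bool)
    in_subcohort = np.asarray(in_subcohort, dtype=bool)
    w = np.zeros(is_case.shape)
    w[is_case] = 1.0
    w[~is_case & in_subcohort] = 1.0 / subcohort_frac
    return w

w = case_cohort_weights([1, 0, 0, 0], [0, 1, 1, 0], subcohort_frac=0.5)
# -> [1.0, 2.0, 2.0, 0.0]
```

These weights then multiply each subject's likelihood contribution so that the weighted sample mimics the full cohort.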

5.
Multivariate recurrent event data are usually encountered in many clinical and longitudinal studies in which each study subject may experience multiple recurrent events. For the analysis of such data, most existing approaches have been proposed under the assumption that the censoring times are noninformative, which may not be true especially when the observation of recurrent events is terminated by a failure event. In this article, we consider regression analysis of multivariate recurrent event data with both time-dependent and time-independent covariates where the censoring times and the recurrent event process are allowed to be correlated via a frailty. The proposed joint model is flexible where both the distributions of censoring and frailty variables are left unspecified. We propose a pairwise pseudolikelihood approach and an estimating equation-based approach for estimating coefficients of time-dependent and time-independent covariates, respectively. The large sample properties of the proposed estimates are established, while the finite-sample properties are demonstrated by simulation studies. The proposed methods are applied to the analysis of a set of bivariate recurrent event data from a study of platelet transfusion reactions.

6.
Mahé C, Chevret S. Biometrics 1999, 55(4):1078–1084
Multivariate failure time data are frequently encountered in longitudinal studies when subjects may experience several events or when individuals are grouped into clusters. To take into account the dependence of the failure times within the unit (the individual or the cluster) as well as censoring, two multivariate generalizations of the Cox proportional hazards model are commonly used. The marginal hazard model is used when the purpose is to estimate mean regression parameters, while the frailty model is retained when the purpose is to assess the degree of dependence within the unit. We propose a new approach based on the combination of the two aforementioned models to estimate both these quantities. This two-step estimation procedure is quicker and simpler to implement than the EM algorithm used in frailty model estimation. Simulation results are provided to illustrate robustness, consistency, and large-sample properties of the estimators. Finally, the method is illustrated on a diabetic retinopathy study, assessing the effect of photocoagulation in delaying the onset of blindness as well as the dependence between the blindness times of a patient's two eyes.

9.
Wei Pan. Biometrics 2001, 57(4):1245–1250
Sun, Liao, and Pagano (1999) proposed an interesting estimating equation approach to Cox regression with doubly censored data. Here we point out that a modification of their proposal leads to a multiple imputation approach, where the double censoring is reduced to single censoring by imputing for the censored initiating times. For each imputed data set one can take advantage of many existing techniques and software for singly censored data. Under the general framework of multiple imputation, the proposed method is simple to implement and can accommodate modeling issues such as model checking, which has not been adequately discussed previously in the literature for doubly censored data. Here we illustrate our method with an application to a formal goodness-of-fit test and a graphical check for the proportional hazards model for doubly censored data. We reanalyze a well-known AIDS data set.
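The imputation model for the censored initiating times is study-specific, but the final step of any multiple imputation analysis, combining the m singly-censored fits, follows Rubin's rules. A minimal sketch with made-up estimates and variances:

```python
import numpy as np

def rubin_combine(estimates, variances):
    """Combine point estimates and variances from m imputed data sets
    via Rubin's rules: the pooled estimate is the mean, and the total
    variance is the within-imputation variance plus (1 + 1/m) times the
    between-imputation variance."""
    q = np.asarray(estimates, dtype=float)
    u = np.asarray(variances, dtype=float)
    m = len(q)
    q_bar = q.mean()
    w_bar = u.mean()       # average within-imputation variance
    b = q.var(ddof=1)      # between-imputation variance
    return q_bar, w_bar + (1.0 + 1.0 / m) * b

est, var = rubin_combine([1.0, 1.2, 0.8], [0.04, 0.05, 0.03])
# pooled estimate 1.0; total variance 0.04 + (4/3) * 0.04
```

The between-imputation term is what propagates the uncertainty about the unobserved initiating times into the final standard errors.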

10.
A factor analysis technique, the principal components method, and the proportional hazards regression model (Cox, 1972) are applied in this work to study the significance, for survival, of various factors characterizing the patient, the disease, and the method of treatment. The application of these methods to survival data for cervical cancer patients has shown, in particular, that the tumor growth rate is the crucial factor in the distribution of the patients' survival times, more important even than the therapy characteristics.
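As a rough illustration of the first stage of such an analysis, principal component scores can be extracted from the covariate matrix and then carried into the hazards regression as a small set of derived covariates. A minimal numpy sketch on synthetic data (the single-latent-factor structure is invented; the actual cervical cancer covariates are not reproduced here):

```python
import numpy as np

def pca_scores(X, k):
    """Principal component scores of the rows of X: center the columns,
    take the top-k right singular vectors, and project.  Returns the
    scores and the fraction of variance each kept component explains;
    the scores can then enter a proportional hazards regression."""
    Xc = X - X.mean(axis=0)
    _, s, vt = np.linalg.svd(Xc, full_matrices=False)
    explained = s[:k] ** 2 / (s ** 2).sum()
    return Xc @ vt[:k].T, explained

# Synthetic covariates driven by one strong latent factor
rng = np.random.default_rng(2)
latent = rng.normal(size=(200, 1))
X = np.hstack([latent + 0.1 * rng.normal(size=(200, 1)) for _ in range(4)])
scores, explained = pca_scores(X, 1)
# the first component should capture nearly all of the variance here
```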

11.
Tian L, Lagakos S. Biometrics 2006, 62(3):821–828
We develop methods for assessing the association between a binary time-dependent covariate process and a failure time endpoint when the former is observed only at a single time point and the latter is right censored, and when the observations are subject to truncation and competing causes of failure. Using a proportional hazards model for the effect of the covariate process on the failure time of interest, we develop an approach utilizing the EM algorithm and profile likelihood for estimating the relative risk parameter and cause-specific hazards for failure. The methods are extended to account for other covariates that can influence the time-dependent covariate process and cause-specific risks of failure. We illustrate the methods with data from a recent study on the association between loss of hepatitis B e antigen and the development of hepatocellular carcinoma in a population of chronic carriers of hepatitis B.

13.
The functional response is a key element in predator–prey models as well as in food chains and food webs. Classical models consider it as a function of prey abundance only. However, many mechanisms can lead to predator dependence, and there is increasing evidence for the importance of this dependence. Identification of the mathematical form of the functional response from real data is therefore a challenging task. In this paper we apply model fitting to test whether typical ecological predator–prey time series data, which contain both observation error and process error, can give some information about the form of the functional response. Working with artificial data (for which the functional response is known), we show that with moderate noise levels, identification of the model that generated the data is possible. However, the noise levels prevailing in real ecological time series can give rise to wrong identifications. We also discuss the quality of parameter estimation obtained by fitting differential equations to such time series.
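As a toy version of such model fitting, a prey-dependent functional response such as Holling type II, f(N) = aN / (1 + ahN), can be fitted with the classical reciprocal linearization, since 1/f is linear in 1/N. This sketch uses noise-free synthetic data and invented parameter values; as the paper stresses, real analyses must fit the full dynamic model and contend with both observation and process error:

```python
import numpy as np

def fit_holling2(N, f):
    """Fit the Holling type II response f(N) = a*N / (1 + a*h*N) via the
    reciprocal transform 1/f = (1/a) * (1/N) + h, which is linear in 1/N.
    Returns (attack rate a, handling time h).  A quick sketch only;
    nonlinear least squares is preferred with noisy data."""
    x = 1.0 / np.asarray(N, dtype=float)
    y = 1.0 / np.asarray(f, dtype=float)
    slope, intercept = np.polyfit(x, y, 1)  # y = slope * x + intercept
    return 1.0 / slope, intercept

# Noise-free check with a = 0.8, h = 0.5
N = np.array([1.0, 2.0, 5.0, 10.0, 20.0])
f = 0.8 * N / (1 + 0.8 * 0.5 * N)
a_hat, h_hat = fit_holling2(N, f)
```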

15.
Summary. Case–cohort sampling is a commonly used and efficient method for studying large cohorts. Most existing methods of analysis for case–cohort data have concerned the analysis of univariate failure time data. However, clustered failure time data are commonly encountered in public health studies. For example, patients treated at the same center are unlikely to be independent. In this article, we consider methods based on estimating equations for case–cohort designs for clustered failure time data. We assume a marginal hazards model, with a common baseline hazard and common regression coefficient across clusters. The proposed estimators of the regression parameter and cumulative baseline hazard are shown to be consistent and asymptotically normal, and consistent estimators of the asymptotic covariance matrices are derived. The regression parameter estimator is easily computed using any standard Cox regression software that allows for offset terms. The proposed estimators are investigated in simulation studies, and demonstrated empirically to have increased efficiency relative to some existing methods. The proposed methods are applied to a study of mortality among Canadian dialysis patients.
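The estimator above reduces to standard Cox software with offset terms. As a rough self-contained illustration of that machinery, here is a one-covariate Newton-Raphson fit of the Breslow partial likelihood with an offset in the linear predictor (simulated data; this is a generic Cox fit, not the clustered case–cohort estimating equation itself):

```python
import numpy as np

def cox_fit_offset(time, event, x, offset, n_iter=25):
    """Newton-Raphson fit of a one-covariate Cox model with an offset:
    hazard = h0(t) * exp(beta * x + offset), using the Breslow partial
    likelihood (no tie handling; assumes continuous event times)."""
    order = np.argsort(time)
    time, event, x, offset = time[order], event[order], x[order], offset[order]
    beta = 0.0
    for _ in range(n_iter):
        w = np.exp(beta * x + offset)
        # the risk set at the i-th smallest time is subjects i..n-1,
        # so risk-set sums are reversed cumulative sums
        s0 = np.cumsum(w[::-1])[::-1]
        s1 = np.cumsum((w * x)[::-1])[::-1]
        s2 = np.cumsum((w * x * x)[::-1])[::-1]
        xbar = s1 / s0
        score = np.sum(event * (x - xbar))           # d logL / d beta
        info = np.sum(event * (s2 / s0 - xbar**2))   # -d2 logL / d beta2
        beta += score / info
    return beta

# Simulated check: hazard exp(0.7 * x), no censoring, zero offset
rng = np.random.default_rng(3)
n = 2000
x = rng.normal(size=n)
t = rng.exponential(1.0 / np.exp(0.7 * x))
beta_hat = cox_fit_offset(t, np.ones(n), x, np.zeros(n))
```

Fixing a known term in the linear predictor (the offset) with coefficient 1 is how weighted or externally specified pieces of an estimating equation are pushed through standard Cox routines.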

17.
End-stage renal disease (commonly referred to as renal failure) is of increasing concern in the United States and many countries worldwide. Incidence rates have increased, while the supply of donor organs has not kept pace with the demand. Although renal transplantation has generally been shown to be superior to dialysis with respect to mortality, very little research has been directed towards comparing transplant and wait-list patients with respect to morbidity. Using national data from the Scientific Registry of Transplant Recipients, we compare transplant and wait-list hospitalization rates. Hospitalizations are subject to two levels of dependence. In addition to the dependence among within-patient events, patients are also clustered by listing center. We propose two marginal methods to analyze such clustered recurrent event data; the first model postulates a common baseline event rate, while the second features cluster-specific baseline rates. Our results indicate that kidney transplantation offers a significant decrease in hospitalization, but that the effect is negated by a waiting time (until transplant) of more than 2 years. Moreover, graft failure (GF) results in a significant increase in the hospitalization rate which is greatest in the first month post-GF, but remains significantly elevated up to 4 years later. We also compare results from the proposed models to those based on a frailty model, with the various methods compared and contrasted.

18.
Summary. The nested case–control design is a relatively new type of observational study in which a case–control approach is employed within an established cohort. In this design, we observe cases and controls longitudinally by sampling all cases whenever they occur but controls only at certain time points. Controls can be obtained at time points randomly scheduled or prefixed for operational convenience. This design with longitudinal observations is efficient in terms of cost and duration, especially when the disease is rare and the assessment of exposure levels is difficult. We propose sequential sampling methods and study both (group) sequential testing and estimation methods so that the study can be stopped as soon as the stopping rule is satisfied. To make such longitudinal sampling more efficient in terms of both the number of subjects and the number of replications, we propose applying sequential sampling methods to subjects and replications simultaneously, until the information criterion is fulfilled. This simultaneous sequential sampling on subjects and replicates gives practitioners more flexibility in designing their sampling schemes, and differs from the classical approaches used in longitudinal studies. We define a new σ-field to accommodate the proposed sampling scheme, which contains mixtures of independent and correlated observations, and prove the asymptotic optimality of sequential estimation based on martingale theory. We also prove that the independent increment structure is retained, so that the group sequential method is applicable. Finally, we present results from employing sequential estimation and group sequential testing on both simulated data and real data on children's diarrhea.
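The "sample until the information criterion is fulfilled" idea can be caricatured in a few lines: keep drawing observations until a fixed-width confidence interval criterion for the quantity of interest is met, then stop. This is a toy fixed-width rule for an i.i.d. mean, not the paper's group sequential boundaries or its subject-and-replicate scheme; the distribution and thresholds are invented:

```python
import numpy as np

def sequential_mean(draw, half_width, z=1.96, min_n=30, max_n=100000):
    """Sample one observation at a time via draw() until the
    normal-approximation confidence interval for the mean is narrower
    than +/- half_width, then stop.  Returns (estimate, n used)."""
    xs = [draw() for _ in range(min_n)]
    while len(xs) < max_n:
        se = np.std(xs, ddof=1) / np.sqrt(len(xs))
        if z * se <= half_width:
            break  # information criterion met: stop sampling
        xs.append(draw())
    return np.mean(xs), len(xs)

rng = np.random.default_rng(4)
mean_hat, n_used = sequential_mean(lambda: rng.normal(5.0, 2.0), half_width=0.1)
# needed sample size is roughly (1.96 * 2 / 0.1)^2, about 1537
```

Because the stopping time depends on the data, the naive confidence level is only approximate; the martingale arguments in the paper are what justify inference under such data-dependent stopping.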

19.
Liu L, Huang X, O'Quigley J. Biometrics 2008, 64(3):950–958
Summary. In longitudinal observational studies, repeated measures are often taken at informative observation times. Also, there may exist a dependent terminal event, such as death, that stops the follow-up. For example, patients in poorer health are more likely to seek medical treatment, and their medical cost for each visit tends to be higher. They are also subject to a higher mortality rate. In this article, we propose a random effects model of repeated measures in the presence of both informative observation times and a dependent terminal event. Three submodels are used, respectively, for (1) the intensity of recurrent observation times, (2) the amount of the repeated measure at each observation time, and (3) the hazard of death. Correlated random effects are incorporated to join the three submodels. The estimation can be conveniently accomplished by Gaussian quadrature techniques, e.g., SAS Proc NLMIXED. An analysis of the cost-accrual process of chronic heart failure patients from the clinical data repository at the University of Virginia Health System is presented to illustrate the proposed method.
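The Gaussian quadrature step that tools like SAS Proc NLMIXED perform amounts to integrating the normal random effects out of the joint likelihood with Gauss-Hermite nodes and weights. A minimal sketch for a single random effect (the σ value and test function are illustrative, not from the heart failure analysis):

```python
import numpy as np

def gauss_hermite_expectation(g, sigma, n_nodes=20):
    """Approximate E[g(b)] for b ~ N(0, sigma^2) by Gauss-Hermite
    quadrature.  hermgauss gives nodes/weights for the weight e^{-x^2};
    the change of variables b = sqrt(2) * sigma * x maps this onto the
    normal density."""
    nodes, weights = np.polynomial.hermite.hermgauss(n_nodes)
    return np.sum(weights * g(np.sqrt(2.0) * sigma * nodes)) / np.sqrt(np.pi)

# Closed-form check: E[exp(b)] = exp(sigma^2 / 2) for b ~ N(0, sigma^2)
val = gauss_hermite_expectation(np.exp, sigma=1.0)
```

In a joint model, g(b) would be the product of the three submodel likelihood contributions for one subject, evaluated at each quadrature node.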
