20 similar records found.
1.
2.
Chyong‐Mei Chen Ya‐Wen Chuang Pao‐Sheng Shen 《Biometrical journal. Biometrische Zeitschrift》2015,57(2):215-233
Recurrent event data arise in longitudinal follow‐up studies, where each subject may experience the same type of event repeatedly. The work in this article is motivated by data from a study of repeated peritonitis in patients on peritoneal dialysis. On medical and cost grounds, the peritonitis cases were classified into two types: Gram‐positive and non‐Gram‐positive peritonitis. Further, since death and hemodialysis therapy preclude the occurrence of recurrent events, we face multivariate recurrent event data with a dependent terminal event. We propose a flexible marginal model with three characteristics: first, we assume marginal proportional hazards and proportional rates models for the terminal event time and the recurrent event processes, respectively; second, the dependence among recurrences and the correlation between the multivariate recurrent event processes and the terminal event time are modeled through three multiplicative frailties corresponding to the specified marginal models; third, the rate model with frailties for recurrent events is specified only on the time before the terminal event. We propose a two‐stage procedure for estimating the unknown parameters and establish the consistency of the two‐stage estimator. Simulation studies show that the proposed approach is appropriate for practical use. The methodology is applied to the peritonitis cohort data that motivated this study.
3.
End-stage renal disease (commonly referred to as renal failure) is of increasing concern in the United States and many countries worldwide. Incidence rates have increased, while the supply of donor organs has not kept pace with demand. Although renal transplantation has generally been shown to be superior to dialysis with respect to mortality, very little research has been directed towards comparing transplant and wait-list patients with respect to morbidity. Using national data from the Scientific Registry of Transplant Recipients, we compare transplant and wait-list hospitalization rates. Hospitalizations are subject to two levels of dependence: in addition to the dependence among within-patient events, patients are also clustered by listing center. We propose two marginal methods to analyze such clustered recurrent event data; the first model postulates a common baseline event rate, while the second features cluster-specific baseline rates. Our results indicate that kidney transplantation offers a significant decrease in hospitalization, but that the effect is negated by a waiting time (until transplant) of more than 2 years. Moreover, graft failure (GF) results in a significant increase in the hospitalization rate, which is greatest in the first month post-GF but remains significantly elevated up to 4 years later. We also compare and contrast results from the proposed models with those based on a frailty model.
4.
Xingqiu Zhao Li Liu Yanyan Liu Wei Xu 《Biometrical journal. Biometrische Zeitschrift》2012,54(5):585-599
Multivariate recurrent event data are usually encountered in clinical and longitudinal studies in which each study subject may experience multiple recurrent events. For the analysis of such data, most existing approaches have been proposed under the assumption that the censoring times are noninformative, which may not hold, especially when the observation of recurrent events is terminated by a failure event. In this article, we consider regression analysis of multivariate recurrent event data with both time‐dependent and time‐independent covariates, where the censoring times and the recurrent event process are allowed to be correlated via a frailty. The proposed joint model is flexible in that the distributions of both the censoring and frailty variables are left unspecified. We propose a pairwise pseudolikelihood approach and an estimating equation‐based approach for estimating coefficients of time‐dependent and time‐independent covariates, respectively. The large-sample properties of the proposed estimates are established, while the finite‐sample properties are demonstrated by simulation studies. The proposed methods are applied to the analysis of a set of bivariate recurrent event data from a study of platelet transfusion reactions.
5.
Interval‐censored recurrent event data arise when the event of interest is not readily observed but the cumulative event count can be recorded at periodic assessment times. In some settings, chronic disease processes may resolve, and individuals will cease to be at risk of events at the time of disease resolution. We develop an expectation‐maximization algorithm for fitting a dynamic mover‐stayer model to interval‐censored recurrent event data under a Markov model with a piecewise‐constant baseline rate function given a latent process. The model is motivated by settings in which the event times and the resolution time of the disease process are unobserved. The likelihood and algorithm are shown to yield estimators with small empirical bias in simulation studies. Data are analyzed on the cumulative number of damaged joints in patients with psoriatic arthritis where individuals experience disease remission.
6.
The cohort case-control design is an efficient and economical design for studying risk factors for disease incidence or mortality in a large cohort. In the last few decades, a variety of cohort case-control designs have been developed and theoretically justified. These designs have been applied exclusively to the analysis of univariate failure-time data. In this work, a cohort case-control design adapted to multivariate failure-time data is developed. A risk-set sampling method is proposed to sample controls from nonfailures in a large cohort for each case, matched by failure time. This method leads to a pseudolikelihood approach for the estimation of regression parameters in the marginal proportional hazards model (Cox, 1972, Journal of the Royal Statistical Society, Series B 34, 187-220), where the correlation structure between individuals within a cluster is left unspecified. The performance of the proposed estimator is demonstrated by simulation studies. A bootstrap method is proposed for inferential purposes. This methodology is illustrated by a data example from a child vitamin A supplementation trial in Nepal (Nepal Nutrition Intervention Project-Sarlahi, or NNIPS).
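The risk-set sampling step described in this abstract can be illustrated in a few lines. This is a generic sketch, not the authors' implementation: the function name, the number of controls per case, and the toy cohort are all hypothetical. For each case, controls are drawn at random, without replacement, from the subjects still under observation at that case's failure time.

```python
import random

def risk_set_sample(subjects, m=2, seed=1):
    """Nested case-control (risk-set) sampling: for each case, draw m
    controls at random, without replacement, from subjects still at risk
    at the case's failure time.
    `subjects` is a list of (sid, time, is_case) tuples."""
    rng = random.Random(seed)
    sampled = []
    for sid, t, is_case in subjects:
        if not is_case:
            continue
        # Risk set at time t: everyone still under observation, except the case.
        risk_set = [s for s, u, c in subjects if u >= t and s != sid]
        controls = rng.sample(risk_set, min(m, len(risk_set)))
        sampled.append((sid, t, controls))
    return sampled

# Hypothetical toy cohort: (id, follow-up time, is_case)
cohort = [(1, 2.0, True), (2, 5.0, False), (3, 3.5, True),
          (4, 6.0, False), (5, 1.0, False), (6, 4.0, False)]
matched = risk_set_sample(cohort)
```

In a matched analysis, each case and its sampled controls would then form one stratum of a conditional (pseudo)likelihood.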
7.
8.
Donald Hedeker Stephen H. C. du Toit Hakan Demirtas Robert D. Gibbons 《Biometrics》2018,74(1):354-361
9.
Many time‐to‐event studies are complicated by the presence of competing risks and by nesting of individuals within a cluster, such as patients in the same center in a multicenter study. Several methods have been proposed for modeling the cumulative incidence function with independent observations. However, when subjects are clustered, one needs to account for the presence of a cluster effect, either through frailty modeling of the hazard or subdistribution hazard, or by adjusting for the within‐cluster correlation in a marginal model. We propose a method for modeling the marginal cumulative incidence function directly. We compute leave‐one‐out pseudo‐observations from the cumulative incidence function at several time points. These are used in a generalized estimating equation to model the marginal cumulative incidence curve and obtain consistent estimates of the model parameters. A sandwich variance estimator is derived to adjust for the within‐cluster correlation. The method is easy to implement using standard software once the pseudovalues are obtained, and is a generalization of several existing models. Simulation studies show that the method works well to adjust the SE for the within‐cluster correlation. We illustrate the method on a dataset of outcomes after bone marrow transplantation.
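The leave-one-out pseudo-observation construction described here is generic: for an estimator θ̂ based on n subjects, the pseudo-observation for subject i is θᵢ = n·θ̂ − (n−1)·θ̂₍₋ᵢ₎. As a simplified, hypothetical sketch (Kaplan-Meier survival at one fixed time rather than the cumulative incidence function under competing risks, and ignoring clustering), the computation might look like this; all names and data are illustrative.

```python
import numpy as np

def km_surv(time, event, t0):
    """Kaplan-Meier estimate of S(t0) from times and 0/1 event indicators."""
    s = 1.0
    for t in np.unique(time[event == 1]):
        if t > t0:
            break
        at_risk = np.sum(time >= t)
        d = np.sum((time == t) & (event == 1))
        s *= 1.0 - d / at_risk
    return s

def pseudo_obs(time, event, t0):
    """Leave-one-out pseudo-observations for S(t0):
    theta_i = n * S_hat - (n - 1) * S_hat(-i)."""
    n = len(time)
    full = km_surv(time, event, t0)
    idx = np.arange(n)
    return np.array([n * full - (n - 1) * km_surv(time[idx != i], event[idx != i], t0)
                     for i in range(n)])

# Hypothetical data: follow-up times and event indicators (0 = censored)
time = np.array([1., 2., 3., 4., 5., 6.])
event = np.array([1, 0, 1, 1, 0, 1])
po = pseudo_obs(time, event, 3.5)
```

With no censoring, the pseudo-observations reduce to the indicators I(Tᵢ > t₀); with censoring they spread the censored subjects' information across the sample, which is what makes them usable as responses in a GEE.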
10.
11.
12.
13.
Marginal structural models (MSMs) have been proposed for estimating a treatment's effect in the presence of time‐dependent confounding. We aimed to evaluate the performance of the Cox MSM in the presence of missing data and to explore methods to adjust for missingness. We simulated data with a continuous time‐dependent confounder and a binary treatment. We explored two classes of missing data: (i) missed visits, which resemble clinical cohort studies; (ii) missing confounder values, which correspond to interval cohort studies. Missing data were generated under various mechanisms. In the first class, the source of the bias was the extreme treatment weights; truncation or normalization improved estimation. Therefore, particular attention must be paid to the distribution of weights, and truncation or normalization should be applied if extreme weights are noticed. In the second class, bias was due to the misspecification of the treatment model. Last observation carried forward (LOCF), multiple imputation (MI), and inverse probability of missingness weighting (IPMW) were used to correct for the missingness. We found that the alternatives, especially the IPMW method, perform better than the classic LOCF method. Nevertheless, in situations with high marker variance and rarely recorded measurements, none of the examined methods adequately corrected the bias.
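Stabilized treatment weights with percentile truncation, the remedy this abstract recommends for extreme weights, can be sketched as follows. The function name, percentile cutoffs, and simulated propensities are hypothetical; in a real analysis both probabilities would come from fitted treatment models (numerator: baseline covariates only; denominator: full covariate history).

```python
import numpy as np

def stabilize_and_truncate(p_marg, p_cond, treated, lower=1.0, upper=99.0):
    """Stabilized inverse-probability-of-treatment weights with percentile
    truncation.  p_marg: P(A=1) from a marginal model; p_cond: P(A=1 | history)
    from the full treatment model; treated: observed 0/1 treatment."""
    num = np.where(treated == 1, p_marg, 1 - p_marg)
    den = np.where(treated == 1, p_cond, 1 - p_cond)
    w = num / den                        # stabilized weights, expectation 1
    lo, hi = np.percentile(w, [lower, upper])
    return np.clip(w, lo, hi)            # truncate extreme weights

# Hypothetical simulated data with known propensities
rng = np.random.default_rng(0)
n = 1000
p_cond = rng.uniform(0.05, 0.95, n)      # true P(A=1 | history)
treated = rng.binomial(1, p_cond)
p_marg = np.full(n, treated.mean())      # marginal P(A=1)
w = stabilize_and_truncate(p_marg, p_cond, treated)
```

Normalization, the other remedy mentioned, divides the weights by their sample mean (`w / w.mean()`) instead of clipping; both reduce the influence of a few subjects with near-zero treatment probabilities.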
14.
Lianming Wang Christopher S. McMahan Michael G. Hudgens Zaina P. Qureshi 《Biometrics》2016,72(1):222-231
15.
16.
Marginal models and conditional mixed-effects models are commonly used for clustered binary data. However, regression parameters and predictions in nonlinear mixed-effects models usually do not have a direct marginal interpretation, because the conditional functional form does not carry over to the margin. Because both marginal and conditional inferences are of interest, a unified approach is attractive. To this end, we investigate a parameterization of generalized linear mixed models with a structured random-intercept distribution that matches the conditional and marginal shapes. We model the marginal mean of the response distribution and select the distribution of the random intercept to produce the match and also to model covariate-dependent random effects. We discuss the relation between this approach and some existing models and compare the approaches on two datasets.
17.
This article considers clinical trials in which the efficacy measure is taken from several sites within each patient, such as the alveolar bone height of tooth sites or the bone mineral densities of lumbar spine sites. Since usually only a small portion of these sites will exhibit changes, the conventional method using the per-patient average gives a diluted result due to the excess of no-change observations in the data. Different methods have been proposed for this type of data in the case where the observations are mutually independent. These include the popular "two-part model" (Lachenbruch, 2001, Statistics in Medicine 20, 1215-1234; 2002, Statistical Methods in Medical Research 11, 297-302), which is related to the "composite approach" for discrete and continuous data in Shih and Quan (1997, Statistics in Medicine 16, 1225-1239; 2001, Statistica Sinica 11, 53-62). In this article, we model the data with excess zeros (no changes) in clustered data using a mixture of distributions, taking into account possible measurement errors. This mixture model includes the two-part model as a special case when one component of the mixture degenerates.
18.
Jianguo Sun Hee‐Jeong Lim Xingqiu Zhao 《Biometrical journal. Biometrische Zeitschrift》2004,46(5):503-511
19.
Marginalized models (Heagerty, 1999, Biometrics 55, 688-698) permit likelihood-based inference when interest lies in marginal regression models for longitudinal binary response data. Two such models are the marginalized transition and marginalized latent variable models. The former captures within-subject serial dependence among repeated measurements with transition model terms, while the latter assumes exchangeable or nondiminishing response dependence using random intercepts. In this article, we extend the class of marginalized models by proposing a single unifying model that describes both serial and long-range dependence. This model will be particularly useful in longitudinal analyses with a moderate to large number of repeated measurements per subject, where both serial and exchangeable forms of response correlation can be identified. We describe maximum likelihood and Bayesian approaches toward parameter estimation and inference, and we study the large-sample operating characteristics under two types of dependence model misspecification. Data from the Madras Longitudinal Schizophrenia Study (Thara et al., 1994, Acta Psychiatrica Scandinavica 90, 329-336) are analyzed.
20.
Alexander Volovics Piet A. van den Brandt 《Biometrical journal. Biometrische Zeitschrift》1997,39(2):195-214
Case-cohort and nested case-control sampling methods have recently been introduced as a means of reducing cost in large cohort studies. The asymptotic distribution theory for relative rate estimation based on Cox-type partial or pseudolikelihoods for case-cohort and nested case-control studies has been established. However, many researchers use (stratified) frequency table methods for a first or primary summarization of the most important evidence on exposure-disease or dose-response relationships, i.e. the classical Mantel-Haenszel analyses, trend tests, and tests for heterogeneity of relative rates. These can be followed by exponential failure time regression methods on grouped or individual data to model relationships between several factors and response. In this paper we present the adaptations needed to use these methods with case-cohort designs, illustrating their use with data from a recent case-cohort study on the relationship between diet, life-style, and cancer. We assume a very general setup allowing piecewise-constant failure rates, possible recurrent events per individual, independent censoring, and left truncation.
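The classical Mantel-Haenszel summary rate ratio for grouped person-time data mentioned in this abstract has a simple closed form, RR_MH = Σᵢ d1ᵢT0ᵢ/Tᵢ / Σᵢ d0ᵢT1ᵢ/Tᵢ, where d and T are the case counts and person-time in exposed (1) and unexposed (0) groups within stratum i. A minimal sketch (full-cohort version, without the case-cohort adaptations the paper develops; the stratum layout and names are hypothetical):

```python
def mh_rate_ratio(strata):
    """Mantel-Haenszel summary rate ratio over strata of grouped
    person-time data.  Each stratum is a tuple
    (d1, t1, d0, t0): exposed cases, exposed person-time,
    unexposed cases, unexposed person-time."""
    num = sum(d1 * t0 / (t1 + t0) for d1, t1, d0, t0 in strata)
    den = sum(d0 * t1 / (t1 + t0) for d1, t1, d0, t0 in strata)
    return num / den

# Hypothetical two-stratum example (e.g. stratified by age band)
rr = mh_rate_ratio([(10, 100.0, 5, 100.0), (6, 80.0, 4, 120.0)])
```

The estimator pools evidence across strata while leaving the stratum-specific baseline rates unconstrained, which is why it serves as a natural first summary before rate regression modeling.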