Similar Articles
20 similar articles found (search time: 484 ms)
1.
In longitudinal studies of disease, patients may experience several events over a follow-up period. In these studies, the sequentially ordered events are often of interest and lead to problems that have received much attention recently. Issues of interest include the estimation of bivariate survival, marginal distributions, and the conditional distribution of gap times. In this work, we consider the estimation of the survival function conditional on a previous event. Different nonparametric approaches are considered for estimating these quantities, all based on the Kaplan–Meier estimator of the survival function. We explore the finite-sample behavior of the estimators through simulations. The methods proposed in this article are applied to a dataset from a German Breast Cancer Study and are used to obtain predictors of the conditional survival probabilities as well as to study the influence of recurrence on overall survival.
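The Kaplan–Meier-based conditional estimator described above has a simple form: the conditional survival P(T > t | T > s) is estimated by the ratio S(t)/S(s) of the unconditional Kaplan–Meier curve. A minimal sketch follows; the data and all function names are illustrative, not taken from the breast cancer study.

```python
def kaplan_meier(times, events):
    """Return a step curve [(event_time, S(t)), ...] of Kaplan-Meier estimates."""
    order = sorted(range(len(times)), key=lambda i: times[i])
    at_risk = len(times)
    surv = 1.0
    curve = []
    i = 0
    while i < len(order):
        t = times[order[i]]
        deaths = censored = 0
        # group all observations tied at time t
        while i < len(order) and times[order[i]] == t:
            if events[order[i]]:
                deaths += 1
            else:
                censored += 1
            i += 1
        if deaths:
            surv *= 1.0 - deaths / at_risk
            curve.append((t, surv))
        at_risk -= deaths + censored
    return curve

def km_surv(curve, t):
    """Step-function lookup of S(t) from a KM curve (S = 1 before the first event)."""
    s = 1.0
    for time, value in curve:
        if time <= t:
            s = value
        else:
            break
    return s

def conditional_surv(curve, t, s):
    """Estimate P(T > t | T > s) = S(t) / S(s) for t >= s."""
    return km_surv(curve, t) / km_surv(curve, s)

# toy data: 1 = event observed, 0 = censored
times = [2, 3, 3, 5, 7, 8, 9, 12]
events = [1, 1, 0, 1, 0, 1, 1, 0]
curve = kaplan_meier(times, events)
```

For these toy data, S(5) = 0.6, and the probability of surviving past time 8 given survival past time 3 is S(8)/S(3) = 0.4/0.75.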

2.
A time-dependent measure, termed the rate ratio, was proposed to assess the local dependence between two types of recurrent event processes in one-sample settings. However, that one-sample work does not consider modeling the dependence through covariates such as subject characteristics and treatments received. The focus of this paper is to understand how, and to what extent, covariates influence the dependence strength for bivariate recurrent events. We propose the covariate-adjusted rate ratio, a measure of covariate-adjusted dependence, and a semiparametric regression model for jointly modeling the frequency and dependence of bivariate recurrent events: the first level is a proportional rates model for the marginal rates, and the second level is a proportional rate ratio model for the dependence structure. We develop a pseudo-partial likelihood to estimate the parameters in the proportional rate ratio model, establish the asymptotic properties of the estimators, and evaluate the finite-sample performance via simulation studies. We illustrate the proposed models and methods using a soft tissue sarcoma study that examines the effects of initial treatments on the marginal frequencies of local/distant sarcoma recurrence and the dependence structure between the two types of cancer recurrence.

3.
Sequentially observed survival times are of interest in many studies, but there are difficulties in analyzing such data using nonparametric or semiparametric methods. First, when the duration of follow-up is limited and the times for a given individual are not independent, induced dependent censoring arises for the second and subsequent survival times. Non-identifiability of the marginal survival distributions for second and later times is another issue, since they are observable only if the preceding survival times for an individual are uncensored. In addition, in some studies a significant proportion of individuals may never have the first event. Fully parametric models can deal with these features, but robustness is a concern. We introduce a new approach to address these issues. We model the joint distribution of the successive survival times by using copula functions and provide semiparametric estimation procedures in which copula parameters are estimated without parametric assumptions on the marginal distributions. This provides more robust estimates and checks on the fit of parametric models. The methodology is applied to a motivating example involving relapse and survival following colon cancer treatment.
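As a concrete illustration of how a copula ties marginal survival functions into a joint distribution for successive gap times, the sketch below uses the Clayton family. The abstract does not commit to a particular copula, so this family choice and all function names are assumptions made for illustration only.

```python
def clayton_joint_surv(s1, s2, theta):
    """Clayton survival copula applied to marginal survival probabilities:
    P(T1 > t1, T2 > t2) = (s1^-theta + s2^-theta - 1)^(-1/theta), theta > 0."""
    return (s1 ** -theta + s2 ** -theta - 1.0) ** (-1.0 / theta)

def clayton_kendall_tau(theta):
    """Kendall's tau implied by the Clayton dependence parameter."""
    return theta / (theta + 2.0)

def conditional_second_surv(s1, s2, theta):
    """P(T2 > t2 | T1 > t1): the joint survival divided by the first marginal."""
    return clayton_joint_surv(s1, s2, theta) / s1
```

As theta tends to 0 the joint survival approaches the independence product s1 * s2; larger theta induces stronger positive dependence between the successive times, which is what allows the second gap time to be predicted from the first.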

4.
Huang X, Liu L. Biometrics 2007, 63(2): 389-397
Therapy for patients with a recurrent disease focuses on delaying disease recurrence and prolonging survival. A common analysis approach for such data is to estimate the distribution of disease-free survival, that is, the time to the first disease recurrence or death, whichever happens first. However, treating death the same way as disease recurrence may give misleading results, and considering only the first recurrence while ignoring subsequent ones can result in a loss of statistical power. We use a joint frailty model to analyze disease recurrences and survival simultaneously. Separate parameters for disease recurrence and survival are used in the joint model to distinguish treatment effects on these two types of events. The correlation between disease recurrences and survival is taken into account by a shared frailty, and the effect of disease recurrence on survival can also be estimated by this model. The EM algorithm is used to fit the model, with Markov chain Monte Carlo simulation in the E-steps. The method is evaluated by simulation studies and illustrated through a study of patients with heart failure. Sensitivity to the parametric assumption on the frailty distribution is assessed by simulation.

5.
Hsieh JJ, Ding AA, Wang W. Biometrics 2011, 67(3): 719-729
Recurrent event data are commonly seen in longitudinal follow-up studies. Dependent censoring often occurs due to death or to exclusion from the study related to the disease process. In this article, we assume flexible marginal regression models for the recurrence process and the dependent censoring time without specifying their dependence structure. The proposed model generalizes the approach of Ghosh and Lin (2003, Biometrics 59, 877-885). The technique of artificial censoring provides a way to maintain the homogeneity of the hypothetical error variables under dependent censoring. Here we propose to apply this technique to two Gehan-type statistics: one considers only order information for pairs, whereas the other utilizes the additional information of observed censoring times available for recurrence data. A model-checking procedure is also proposed to assess the adequacy of the fitted model. The proposed estimators have good asymptotic properties, and their finite-sample performance is examined via simulations. Finally, the proposed methods are applied to the AIDS Linked to the Intravenous Experiences cohort data.

6.
Clegg LX, Cai J, Sen PK. Biometrics 1999, 55(3): 805-812
In multivariate failure time data analysis, a marginal regression modeling approach is often preferred to avoid assumptions on the dependence structure among correlated failure times. In this paper, a marginal mixed baseline hazards model is introduced. Estimating equations are proposed for the estimation of the marginal hazard ratio parameters. The proposed estimators are shown to be consistent and asymptotically Gaussian with a robust covariance matrix that can be consistently estimated. Simulation studies indicate the adequacy of the proposed methodology for practical sample sizes. The methodology is illustrated with a data set from the Framingham Heart Study.

7.
Liang Li, Bo Hu, Tom Greene. Biometrics 2009, 65(3): 737-745
In many longitudinal clinical studies, the level and progression rate of repeatedly measured biomarkers on each subject quantify the severity of the disease and that subject's susceptibility to progression of the disease. It is of scientific and clinical interest to relate such quantities to a later time-to-event clinical endpoint such as patient survival. This is usually done with a shared parameter model, in which the longitudinal biomarker data and the survival outcome of each subject are assumed to be conditionally independent given subject-level severity or susceptibility (also called frailty in statistical terms). In this article, we study the case where the conditional distribution of the longitudinal data is modeled by a linear mixed-effects model and the conditional distribution of the survival data is given by a Cox proportional hazards model. We allow unknown regression coefficients and time-dependent covariates in both models. The proposed estimators are maximizers of an exact correction to the joint log-likelihood with the frailties eliminated as nuisance parameters, an idea that originated from the correction of covariate measurement error in measurement error models. The corrected joint log-likelihood is shown to be asymptotically concave and leads to consistent and asymptotically normal estimators. Unlike most published methods for joint modeling, the proposed estimation procedure does not rely on distributional assumptions about the frailties. The proposed method was studied in simulations and applied to a data set from the Hemodialysis Study.

8.
Guo Y, Manatunga AK. Biometrics 2009, 65(1): 125-134
Assessing agreement is often of interest in clinical studies to evaluate the similarity of measurements produced by different raters or methods on the same subjects. We present a modified weighted kappa coefficient to measure agreement between bivariate discrete survival times. The proposed kappa coefficient accommodates censoring by redistributing the mass of censored observations within the grid where the unobserved events may potentially happen. A generalized modified weighted kappa is proposed for multivariate discrete survival times. We estimate the modified kappa coefficients nonparametrically through a multivariate survival function estimator. The asymptotic properties of the kappa estimators are established, and the performance of the estimators is examined through simulation studies of bivariate and trivariate survival times. We illustrate the application of the modified kappa coefficient in the presence of censored observations with data from a prostate cancer study.
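For reference, the classical Cohen weighted kappa that the proposed coefficient modifies can be computed from a k x k cross-classification table of counts; the article's redistribution of censored mass is not reproduced here. A minimal sketch, assuming quadratic weights:

```python
def weighted_kappa(table, weights=None):
    """Cohen's weighted kappa from a k x k contingency table of counts."""
    k = len(table)
    n = sum(sum(row) for row in table)
    if weights is None:
        # quadratic agreement weights: 1 on the diagonal, decreasing off it
        weights = [[1 - ((i - j) / (k - 1)) ** 2 for j in range(k)]
                   for i in range(k)]
    row = [sum(table[i]) / n for i in range(k)]
    col = [sum(table[i][j] for i in range(k)) / n for j in range(k)]
    # weighted observed and chance-expected agreement
    po = sum(weights[i][j] * table[i][j] / n
             for i in range(k) for j in range(k))
    pe = sum(weights[i][j] * row[i] * col[j]
             for i in range(k) for j in range(k))
    return (po - pe) / (1 - pe)
```

Perfect agreement gives kappa = 1, chance-level agreement gives 0, and perfect disagreement gives -1 for a 2 x 2 table.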

9.
In observational cohort studies with complex sampling schemes, truncation arises when the time to the event of interest is observed only when it falls below or exceeds another random time, that is, the truncation time. In more complex settings, observation may require a particular ordering of event times; we refer to this as sequential truncation. Estimators of the event time distribution have been developed for simple left-truncated or right-truncated data; however, these estimators may be inconsistent under sequential truncation. We propose nonparametric and semiparametric maximum likelihood estimators for the distribution of the event time of interest in the presence of sequential truncation, under two truncation models. We show the equivalence of an inverse probability weighted estimator and a product limit estimator under one of these models. We study the large-sample properties of the proposed estimators and derive their asymptotic variance estimators. We evaluate the proposed methods through simulation studies and apply them to an Alzheimer's disease study. We have developed an R package, seqTrun, for implementation of our method.

10.
Ghosh D, Lin DY. Biometrics 2003, 59(4): 877-885
Dependent censoring occurs in longitudinal studies of recurrent events when the censoring time depends on the potentially unobserved recurrent event times. To perform regression analysis in this setting, we propose a semiparametric joint model that formulates the marginal distributions of the recurrent event process and dependent censoring time through scale-change models, while leaving the distributional form and dependence structure unspecified. We derive consistent and asymptotically normal estimators for the regression parameters. We also develop graphical and numerical methods for assessing the adequacy of the proposed model. The finite-sample behavior of the new inference procedures is evaluated through simulation studies. An application to recurrent hospitalization data taken from a study of intravenous drug users is provided.

11.
Quantiles of survival times, especially the median, are often used as summary statistics to compare the survival experiences of different groups; quantiles are robust against outliers and preferred over the mean. Multivariate failure time data often arise in biomedical research. For example, in clinical trials each patient may experience multiple events, which may be of the same type or of distinct types, while in family studies of genetic diseases or litter-matched mouse studies, failure times for subjects in the same cluster may be correlated. In this article, we propose nonparametric procedures for the estimation of quantiles with multivariate failure time data. We show that the proposed estimators asymptotically follow a multivariate normal distribution. The asymptotic variance-covariance matrix of the estimated quantiles is estimated based on kernel smoothing and bootstrap techniques. Simulation results show that the proposed estimators perform well in finite samples. The methods are illustrated with the burn-wound infection data and the Diabetic Retinopathy Study (DRS) data.
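In the univariate case, a survival quantile is read off the Kaplan-Meier step curve as the smallest time at which S(t) drops to or below 1 - p. The sketch below shows only that basic step, with an illustrative curve; the article's multivariate machinery and variance estimation are not reproduced.

```python
def km_quantile(curve, p):
    """Smallest t with S(t) <= 1 - p, from a step survival curve given as
    [(event_time, S(t)), ...]; p = 0.5 gives the median survival time."""
    for t, s in curve:
        if s <= 1.0 - p:
            return t
    return None  # the p-quantile is not reached within follow-up

# Illustrative Kaplan-Meier curve: S(t) drops at each event time
curve = [(2, 0.875), (3, 0.75), (5, 0.6), (8, 0.4), (9, 0.2)]
```

For this curve the median is 8, the first quartile of the survival time distribution is 3, and the 90th percentile is undefined because S never falls to 0.1.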

12.
In this article, we develop methods for quantifying center effects with respect to recurrent event data. In the models of interest, center effects are assumed to act multiplicatively on the recurrent event rate function. When the number of centers is large, traditional estimation methods that treat centers as categorical variables have many parameters and are sometimes not feasible to implement, especially with large numbers of distinct recurrent event times. We propose a new estimation method for center effects that avoids including indicator variables for centers, and show that center effects can be consistently estimated by the center-specific ratio of observed to expected cumulative numbers of events. We also consider the case where the recurrent event sequence can be stopped permanently by a terminating event. Large-sample results are developed for the proposed estimators, and their finite-sample properties are assessed through simulation studies. The methods are then applied to national hospital admissions data for end-stage renal disease patients.
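The observed-to-expected ratio estimator has a simple structure. The toy sketch below computes expected counts by applying a pooled event rate to each center's follow-up time; in the article the expected counts come from a fitted rate model with covariates, so this simplification, the data layout, and all names are assumptions for illustration.

```python
def center_effect_ratios(data):
    """Center effect estimates as observed / expected cumulative event counts.

    `data` is a list of (center, n_events, follow_up_time) tuples, one per
    subject. Expected counts use the pooled event rate across all centers."""
    total_events = sum(n for _, n, _ in data)
    total_time = sum(t for _, _, t in data)
    pooled_rate = total_events / total_time  # events per unit follow-up
    obs, exp = {}, {}
    for center, n_events, t in data:
        obs[center] = obs.get(center, 0) + n_events
        exp[center] = exp.get(center, 0.0) + pooled_rate * t
    return {c: obs[c] / exp[c] for c in obs}

ratios = center_effect_ratios([
    ("A", 4, 10.0), ("A", 2, 10.0),  # center A: 6 events over 20 time units
    ("B", 1, 10.0), ("B", 1, 10.0),  # center B: 2 events over 20 time units
])
```

Here the pooled rate is 8/40 = 0.2 events per time unit, so each center is expected to see 4 events; center A's ratio is 1.5 and center B's is 0.5, i.e., A has an elevated event rate relative to the pooled experience.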

13.
Lu Mao. Biometrics 2023, 79(1): 61-72
The restricted mean time in favor (RMT-IF) of treatment is a nonparametric effect size for complex life history data. It is defined as the net average time the treated spend in a more favorable state than the untreated over a prespecified time window. It generalizes the familiar restricted mean survival time (RMST) from the two-state life-death model to account for intermediate stages in disease progression. The overall estimand can be additively decomposed into stage-wise effects, with the standard RMST as a component. Alternate expressions of the overall and stage-wise estimands as integrals of the marginal survival functions for a sequence of landmark transitioning events allow them to be easily estimated by plug-in Kaplan–Meier estimators. The dynamic profile of the estimated treatment effects as a function of follow-up time can be visualized using a multilayer, cone-shaped "bouquet plot." Simulation studies under realistic settings show that the RMT-IF meaningfully and accurately quantifies the treatment effect and outperforms traditional tests of time to the first event in statistical efficiency, thanks to its fuller utilization of the patient data. The new methods are illustrated on a colon cancer trial with relapse and death as outcomes and a cardiovascular trial with recurrent hospitalizations and death as outcomes. The R package rmt implements the proposed methodology and is publicly available from the Comprehensive R Archive Network (CRAN).
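In the two-state life-death case, the plug-in estimation described above reduces to the area under the Kaplan-Meier curve up to the horizon tau. The sketch below shows only that RMST component; the curve values and variable names are illustrative, and the multistate decomposition is not reproduced.

```python
def rmst(curve, tau):
    """Restricted mean survival time: area under a step survival curve up to
    tau. `curve` is [(event_time, S(t)), ...] with S(t) = 1 before the first
    event time."""
    area, prev_t, prev_s = 0.0, 0.0, 1.0
    for t, s in curve:
        if t >= tau:
            break
        area += prev_s * (t - prev_t)  # rectangle under the current step
        prev_t, prev_s = t, s
    area += prev_s * (tau - prev_t)  # final partial rectangle up to tau
    return area

# Two-arm effect size: difference in restricted mean time over [0, tau]
treated = [(1.0, 0.9), (2.0, 0.8)]
control = [(1.0, 0.6), (2.0, 0.4)]
effect = rmst(treated, 3.0) - rmst(control, 3.0)
```

Because the RMST difference accumulates survival advantage over the whole window rather than at a single landmark, it is the natural two-state special case of the RMT-IF estimand.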

14.
Complex traits important for humans are often correlated phenotypically and genetically. Joint mapping of quantitative-trait loci (QTLs) for multiple correlated traits plays an important role in unraveling the genetic architecture of complex traits. Compared with single-trait analysis, joint mapping addresses more questions and has advantages in power of QTL detection and precision of parameter estimation. Some statistical methods have been developed to map QTLs underlying multiple traits, most of which are based on maximum-likelihood methods. We develop here a multivariate version of the Bayes methodology for joint mapping of QTLs, using the Markov chain Monte Carlo (MCMC) algorithm. We adopt a variance-components method to model complex traits in outbred populations (e.g., humans). The method is robust, can deal with an arbitrary number of alleles with arbitrary patterns of gene action (such as additive and dominant), and allows for multiple phenotype data of various types in the joint analysis (e.g., multiple continuous traits and mixtures of continuous and discrete traits). Under a Bayesian framework, parameters, including the number of QTLs, are estimated on the basis of their marginal posterior samples, which are generated through two samplers, the Gibbs sampler and reversible-jump MCMC. In addition, we calculate the Bayes factor related to each identified QTL to test coincident linkage versus pleiotropy. The performance of our method is evaluated in simulations with full-sib families. The results show that our proposed Bayesian joint-mapping method performs well for mapping multiple QTLs in situations of either bivariate continuous traits or mixed data types. Compared with the analysis of each trait separately, Bayesian joint mapping improves statistical power, provides stronger evidence of QTL detection, and increases precision in the estimation of parameters and QTL position. We also applied the proposed method to a set of real data and detected coincident linkage responsible for determining bone mineral density and areal bone size of the wrist in humans.

15.
Na Cai, Wenbin Lu, Hao Helen Zhang. Biometrics 2012, 68(4): 1093-1102
In the analysis of longitudinal data, it is not uncommon for the observation times of repeated measurements to be subject-specific and correlated with the underlying longitudinal outcomes. Taking account of the dependence between observation times and longitudinal outcomes is critical in these situations to ensure the validity of statistical inference. In this article, we propose a flexible joint model for longitudinal data analysis in the presence of informative observation times. In particular, the new procedure considers the shared random-effect model and assumes a time-varying coefficient for the latent variable, allowing a flexible way of modeling longitudinal outcomes while adjusting their association with observation times. Estimating equations are developed for parameter estimation. We show that the resulting estimators are consistent and asymptotically normal, with a variance-covariance matrix that has a closed form and can be consistently estimated by the usual plug-in method. An additional advantage of the procedure is that it provides a unified framework for testing whether the effect of the latent variable is zero, constant, or time-varying. Simulation studies show that the proposed approach is appropriate for practical use. An application to bladder cancer data is also given to illustrate the methodology.

16.
Chang SH 《Biometrics》2000,56(1):183-189
A longitudinal study is conducted to compare the course of a particular disease between two groups. The course of the disease is monitored according to which of several ordered events occur. In this paper, the sojourn time between two successive events is considered the outcome of interest. The group effects on the sojourn times of the multiple events are parameterized by scale changes in a semiparametric accelerated failure time model where the dependence structure among the multivariate sojourn times is unspecified. Suppose that the sojourn times are subject to dependent censoring and the censoring times are observed for all subjects. A log-rank-type estimating approach that rescales the sojourn times and the dependent censoring times to the same distribution is constructed to estimate the group effects, and the corresponding estimators are consistent and asymptotically normal. Without dependent censoring, the independent censoring times are in general not available for the uncensored data. To complete the censoring information, pseudo-censoring times are generated from the corresponding nonparametrically estimated survival function in each group, and we can still obtain unbiased estimating functions for the group effects. A real application and a simulation study are conducted to illustrate the proposed methods.

17.
Case-cohort designs and analysis for clustered failure time data
Lu SE, Shih JH. Biometrics 2006, 62(4): 1138-1148
The case-cohort design is an efficient and economical design for studying risk factors for infrequent disease in a large cohort. It involves the collection of covariate data from all failures ascertained throughout the entire cohort and from the members of a random subcohort selected at the onset of follow-up. In the literature, the case-cohort design has been extensively studied, but exclusively for univariate failure time data. In this article, we propose case-cohort designs adapted to multivariate failure time data. An estimation procedure with the independence working model approach is used to estimate the regression parameters in the marginal proportional hazards model, where the correlation structure between individuals within a cluster is left unspecified. Statistical properties of the proposed estimators are developed. The performance of the proposed estimators and comparisons of statistical efficiency are investigated with simulation studies. A data example from the Translating Research into Action for Diabetes (TRIAD) study is used to illustrate the proposed methodology.

18.
Case-cohort sampling is a commonly used and efficient method for studying large cohorts. Most existing methods of analysis for case-cohort data have concerned the analysis of univariate failure time data. However, clustered failure time data are commonly encountered in public health studies; for example, patients treated at the same center are unlikely to be independent. In this article, we consider methods based on estimating equations for case-cohort designs for clustered failure time data. We assume a marginal hazards model, with a common baseline hazard and a common regression coefficient across clusters. The proposed estimators of the regression parameter and cumulative baseline hazard are shown to be consistent and asymptotically normal, and consistent estimators of the asymptotic covariance matrices are derived. The regression parameter estimator is easily computed using any standard Cox regression software that allows for offset terms. The proposed estimators are investigated in simulation studies and demonstrated empirically to have increased efficiency relative to some existing methods. The proposed methods are applied to a study of mortality among Canadian dialysis patients.

19.
Clustered data frequently arise in biomedical studies, where observations, or subunits, measured within a cluster are associated. The cluster size is said to be informative if the outcome variable is associated with the number of subunits in a cluster. In most existing work, the informative cluster size issue is handled by marginal approaches based on within-cluster resampling or cluster-weighted generalized estimating equations. Although these approaches yield consistent estimation of the marginal models, they do not allow estimation of within-cluster associations and are generally inefficient. In this paper, we propose a semiparametric joint model for clustered interval-censored event time data with informative cluster size. We use a random effect to account for the association among event times of the same cluster as well as the association between event times and the cluster size. For estimation, we propose a sieve maximum likelihood approach and devise a computationally efficient expectation-maximization algorithm for implementation. The estimators are shown to be strongly consistent, with the Euclidean components being asymptotically normal and achieving semiparametric efficiency. Extensive simulation studies are conducted to evaluate the finite-sample performance, efficiency, and robustness of the proposed method. We also illustrate our method via application to a motivating periodontal disease dataset.

20.

Interval-censored failure times arise when the status with respect to an event of interest is only determined at intermittent examination times. In settings where there exists a subpopulation of individuals who are not susceptible to the event of interest, latent variable models accommodating a mixture of susceptible and nonsusceptible individuals are useful. We consider such models for the analysis of bivariate interval-censored failure time data, with a model for bivariate binary susceptibility indicators and a copula model for correlated failure times given joint susceptibility. We develop likelihood, composite likelihood, and estimating function methods for model fitting and inference, and assess asymptotic relative efficiency and finite-sample performance. Extensions dealing with higher-dimensional responses and current status data are also described.

