Similar Articles
20 similar articles retrieved
1.
Song R, Karon JM, White E, Goldbaum G. Biometrics 2006, 62(3): 838-846
The analysis of length-biased data has been mostly limited to the interarrival interval of a renewal process covering a specific time point. Motivated by a surveillance problem, we consider a more general situation where this time point is random and related to a specific event, for example, status change or onset of a disease. We also consider the problem when additional information is available on whether the event intervals (interarrival intervals covering the random event) end within or after a random time period (which we call a window period) following the random event. Under the assumptions that the occurrence rate of the random event is low and the renewal process is independent of the random event, we provide formulae for the estimation of the distribution of interarrival times based on the observed event intervals. Procedures for testing the required assumptions are also furnished. We apply our results to human immunodeficiency virus (HIV) test data from public test sites in Seattle, Washington, where the random event is HIV infection and the window period is from the onset of HIV infection to the time at which a less sensitive HIV test becomes positive. Results show that the estimator of the intertest interval length distribution from event intervals ending within the window period is less biased than the estimator from all event intervals; the latter estimator is affected by right truncation. Finally, we discuss possible applications to estimating HIV incidence and analyzing length-biased samples with right- or left-truncated data.
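For background, the classical correction for length-biased sampling (not the authors' event-interval estimator) weights each observation by the reciprocal of its length, because under length bias an interval of length x is sampled with probability proportional to x. A minimal sketch, assuming complete (untruncated) length-biased observations:

```python
import numpy as np

def length_bias_corrected_cdf(x, grid):
    """Inverse-length-weighted estimator of the underlying CDF from
    length-biased observations x, evaluated on `grid`.
    Each observation is weighted by 1/x because an interval of length x
    is oversampled in proportion to x."""
    x = np.asarray(x, dtype=float)
    w = 1.0 / x                      # inverse-length weights
    w /= w.sum()                     # normalize to a probability measure
    return np.array([w[x <= t].sum() for t in grid])

# Example: true interarrival times are Exponential(mean = 2);
# length-biased draws oversample the long intervals.
rng = np.random.default_rng(0)
true_times = rng.exponential(2.0, size=200_000)
biased = rng.choice(true_times, size=5_000, p=true_times / true_times.sum())
grid = np.linspace(0.1, 8, 5)
print("corrected CDF:", length_bias_corrected_cdf(biased, grid).round(3))
print("true CDF     :", (1 - np.exp(-grid / 2.0)).round(3))
```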

2.
We consider the estimation of a nonparametric smooth function of some event time in a semiparametric mixed effects model from repeatedly measured data when the event time is subject to right censoring. The within-subject correlation is captured by both cross-sectional and time-dependent random effects, where the latter is modeled by a nonhomogeneous Ornstein–Uhlenbeck stochastic process. When the censoring probability depends on other variables in the model, which often happens in practice, the event time data are not missing completely at random. Hence, the complete case analysis obtained by eliminating all the censored observations may yield biased estimates of the regression parameters, including the smooth function of the event time, and is less efficient. To remedy this, we derive the likelihood function for the observed data by modeling the event time distribution given other covariates. We propose a two-stage pseudo-likelihood approach for the estimation of model parameters by first plugging an estimator of the conditional event time distribution into the likelihood and then maximizing the resulting pseudo-likelihood function. Empirical evaluation shows that the proposed method yields negligible biases while significantly reducing the estimation variability. This research is motivated by the project of hormone profile estimation around age at the final menstrual period for the cohort of women in the Michigan Bone Health and Metabolism Study.
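As background for the time-dependent random effect, a stationary (homogeneous) Ornstein–Uhlenbeck process can be simulated exactly at irregular visit times from its Gaussian transition distribution. The sketch below is illustrative only and uses hypothetical parameter names; the paper itself employs a nonhomogeneous version.

```python
import numpy as np

def simulate_ou(times, mu=0.0, rho=1.0, sigma=1.0, rng=None):
    """Exact simulation of a stationary OU process
        dX_t = -rho * (X_t - mu) dt + sigma dW_t
    at the (possibly irregular) visit times in `times`."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.empty(len(times))
    stat_var = sigma**2 / (2.0 * rho)           # stationary variance
    x[0] = mu + np.sqrt(stat_var) * rng.standard_normal()
    for k in range(1, len(times)):
        dt = times[k] - times[k - 1]
        a = np.exp(-rho * dt)                   # autocorrelation over the gap dt
        cond_var = stat_var * (1.0 - a**2)      # conditional (transition) variance
        x[k] = mu + a * (x[k - 1] - mu) + np.sqrt(cond_var) * rng.standard_normal()
    return x

visits = np.array([0.0, 0.4, 1.1, 1.5, 2.7])    # irregular visit times
print(simulate_ou(visits, rho=0.8, sigma=0.5, rng=np.random.default_rng(1)))
```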

3.
In observational cohort studies with complex sampling schemes, truncation arises when the time to event of interest is observed only when it falls below or exceeds another random time, that is, the truncation time. In more complex settings, observation may require a particular ordering of event times; we refer to this as sequential truncation. Estimators of the event time distribution have been developed for simple left-truncated or right-truncated data. However, these estimators may be inconsistent under sequential truncation. We propose nonparametric and semiparametric maximum likelihood estimators for the distribution of the event time of interest in the presence of sequential truncation, under two truncation models. We show the equivalence of an inverse probability weighted estimator and a product limit estimator under one of these models. We study the large sample properties of the proposed estimators and derive their asymptotic variance estimators. We evaluate the proposed methods through simulation studies and apply the methods to an Alzheimer's disease study. We have developed an R package, seqTrun, for implementation of our method.
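For reference, the familiar risk-set-adjusted product-limit estimator for ordinary left-truncated (and right-censored) data, which the sequential-truncation estimators generalize, can be written in a few lines. This is a generic sketch, not the seqTrun implementation:

```python
import numpy as np

def product_limit_left_truncated(entry, time, event):
    """Risk-set-adjusted (Lynden-Bell-type) product-limit estimate of S(t)
    for left-truncated, right-censored data.
    entry: truncation/entry times L_i; time: observed times X_i;
    event: 1 if X_i is an event, 0 if censored."""
    entry, time, event = map(np.asarray, (entry, time, event))
    surv, out = 1.0, []
    for t in np.unique(time[event == 1]):
        at_risk = np.sum((entry <= t) & (time >= t))   # subjects in view at t
        deaths = np.sum((time == t) & (event == 1))
        if at_risk > 0:
            surv *= 1.0 - deaths / at_risk
        out.append((t, surv))
    return out

# toy data: (entry time, observed time, event indicator)
print(product_limit_left_truncated(
    entry=[0.0, 0.5, 1.0, 0.2, 0.8],
    time=[2.0, 1.5, 3.0, 0.9, 2.5],
    event=[1, 1, 0, 1, 1]))
```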

4.
We derive the nonparametric maximum likelihood estimate (NPMLE) of the cumulative incidence functions for competing risks survival data subject to interval censoring and truncation. Since the cumulative incidence function NPMLEs give rise to an estimate of the survival distribution which can be undefined over a potentially larger set of regions than the NPMLE of the survival function obtained ignoring failure type, we consider an alternative pseudolikelihood estimator. The methods are then applied to data from a cohort of injecting drug users in Thailand susceptible to infection from HIV-1 subtypes B and E.

5.
We consider the problem of jointly modeling survival time and longitudinal data subject to measurement error. The survival times are modeled through the proportional hazards model and a random effects model is assumed for the longitudinal covariate process. Under this framework, we propose an approximate nonparametric corrected-score estimator for the parameter, which describes the association between the time-to-event and the longitudinal covariate. The term nonparametric refers to the fact that assumptions regarding the distribution of the random effects and that of the measurement error are unnecessary. The finite sample performance of the approximate nonparametric corrected-score estimator is examined through simulation studies and its asymptotic properties are also developed. Furthermore, the proposed estimator and some existing estimators are applied to real data from an AIDS clinical trial.

6.
Joly P, Commenges D. Biometrics 1999, 55(3): 887-890
We consider the estimation of the intensity and survival functions for a continuous time progressive three-state semi-Markov model with intermittently observed data. The estimator of the intensity function is defined nonparametrically as the maximum of a penalized likelihood. We thus obtain smooth estimates of the intensity and survival functions. This approach can accommodate complex observation schemes such as truncation and interval censoring. The method is illustrated with a study of hemophiliacs infected by HIV. The intensity functions and the cumulative distribution functions for the time to infection and for the time to AIDS are estimated. Covariates can easily be incorporated into the model.

7.
The additive hazards model specifies the effect of covariates on the hazard in an additive way, in contrast to the popular Cox model, in which it is multiplicative. Being nonparametric, the additive hazards model offers a very flexible way of modeling time-varying covariate effects. It is most commonly estimated by ordinary least squares. In this paper, we consider the case where covariates are bounded, and derive the maximum likelihood estimator under the constraint that the hazard is non-negative for all covariate values in their domain. We show that the maximum likelihood estimator may be obtained by separately maximizing the log-likelihood contribution of each event time point, and that the maximization problem is equivalent to fitting a series of Poisson regression models with an identity link under non-negativity constraints. We derive an analytic solution to the maximum likelihood estimator. We contrast the maximum likelihood estimator with the ordinary least-squares estimator in a simulation study and show that the maximum likelihood estimator has smaller mean squared error than the ordinary least-squares estimator. An illustration with data on patients with carcinoma of the oropharynx is provided.
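To illustrate the connection mentioned above, a Poisson regression with an identity link can be fit under non-negativity constraints on the fitted means by direct constrained maximization of the Poisson log-likelihood. This is a generic sketch with hypothetical variable names, not the paper's exact per-event-time likelihood:

```python
import numpy as np
from scipy.optimize import minimize

def poisson_identity_fit(X, y, offset=None):
    """Poisson regression with identity link, mu_i = x_i' beta (+ offset),
    maximized subject to mu_i >= 0 for every observation."""
    X, y = np.asarray(X, float), np.asarray(y, float)
    offset = np.zeros(len(y)) if offset is None else np.asarray(offset, float)

    def negloglik(beta):
        mu = np.clip(X @ beta + offset, 1e-10, None)   # guard against log(0)
        return np.sum(mu - y * np.log(mu))             # -loglik up to a constant

    cons = {"type": "ineq", "fun": lambda b: X @ b + offset}   # mu_i >= 0
    res = minimize(negloglik, np.full(X.shape[1], 0.1),
                   constraints=[cons], method="SLSQP")
    return res.x

rng = np.random.default_rng(2)
X = np.column_stack([np.ones(300), rng.uniform(0, 1, 300)])
y = rng.poisson(X @ np.array([0.5, 1.5]))
print(poisson_identity_fit(X, y))    # estimates should be near [0.5, 1.5]
```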

8.
Frydman H, Szarek M. Biometrics 2009, 65(1): 143-151
In many clinical trials patients are intermittently assessed for the transition to an intermediate state, such as occurrence of a disease-related nonfatal event, and death. Estimation of the distribution of nonfatal event free survival time, that is, the time to the first occurrence of the nonfatal event or death, is the primary focus of the data analysis. The difficulty with this estimation is that the intermittent assessment of patients results in two forms of incompleteness: the times of occurrence of nonfatal events are interval censored and, when a nonfatal event does not occur by the time of the last assessment, a patient's nonfatal event status is not known from the time of the last assessment until the end of follow-up for death. We consider both forms of incompleteness within the framework of an "illness–death" model. We develop nonparametric maximum likelihood (ML) estimation in an "illness–death" model from interval-censored observations with missing status of intermediate transition. We show that the ML estimators are self-consistent and propose an algorithm for obtaining them. This work thus provides new methodology for the analysis of incomplete data that arise from clinical trials. We apply this methodology to the data from a recently reported cancer clinical trial (Bonner et al., 2006, New England Journal of Medicine 354, 567-578) and compare our estimation results with those obtained using a Food and Drug Administration recommended convention.

9.
In cohort studies the outcome is often time to a particular event, and subjects are followed at regular intervals. Periodic visits may also monitor a secondary irreversible event influencing the event of primary interest, and a significant proportion of subjects develop the secondary event over the period of follow-up. The status of the secondary event serves as a time-varying covariate, but is recorded only at the times of the scheduled visits, generating incomplete time-varying covariates. While information on a typical time-varying covariate is missing for the entire follow-up period except the visiting times, the status of the secondary event is unavailable only between visits where the status has changed, and is thus interval-censored. One may view the interval-censored covariate of the secondary event status as a missing time-varying covariate, yet the missingness is only partial, since partial information is available throughout the follow-up period. The current practice of using the latest observed status produces biased estimators, and the existing missing covariate techniques cannot accommodate the special feature of missingness due to interval censoring. To handle interval-censored covariates in the Cox proportional hazards model, we propose an available-data estimator and a doubly robust-type estimator, as well as a maximum likelihood estimator via the EM algorithm, and present their asymptotic properties. We also present practical approaches that remain valid. We demonstrate the proposed methods using our motivating example from the Northern Manhattan Study.

10.
Recurrent event data arise in longitudinal follow-up studies, where each subject may experience the same type of events repeatedly. The work in this article is motivated by the data from a study of repeated peritonitis for patients on peritoneal dialysis. For medical and cost reasons, the peritonitis cases were classified into two types: Gram-positive and non-Gram-positive peritonitis. Further, since death and hemodialysis therapy preclude the occurrence of recurrent events, we face multivariate recurrent event data with a dependent terminal event. We propose a flexible marginal model, which has three characteristics: first, we assume marginal proportional hazard and proportional rates models for the terminal event time and the recurrent event processes, respectively; second, the inter-recurrence dependence and the correlation between the multivariate recurrent event processes and the terminal event time are modeled through three multiplicative frailties corresponding to the specified marginal models; third, the rate model with frailties for recurrent events is specified only on the time before the terminal event. We propose a two-stage estimation procedure for estimating the unknown parameters. We also establish the consistency of the two-stage estimator. Simulation studies show that the proposed approach is appropriate for practical use. The methodology is applied to the peritonitis cohort data that motivated this study.

11.
Jiang H, Fine JP, Chappell R. Biometrics 2005, 61(2): 567-575
Studies of chronic life-threatening diseases often involve both mortality and morbidity. In observational studies, the data may also be subject to administrative left truncation and right censoring. Because mortality and morbidity may be correlated and mortality may censor morbidity, the Lynden-Bell estimator for left-truncated and right-censored data may be biased for estimating the marginal survival function of the non-terminal event. We propose a semiparametric estimator for this survival function based on a joint model for the two time-to-event variables, which utilizes the gamma frailty specification in the region of the observable data. First, we develop a novel estimator for the gamma frailty parameter under left truncation. Using this estimator, we then derive a closed-form estimator for the marginal distribution of the non-terminal event. The large sample properties of the estimators are established via asymptotic theory. The methodology performs well with moderate sample sizes, both in simulations and in an analysis of data from a diabetes registry.
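As background, a shared gamma frailty with unit mean and variance theta induces a Clayton-type joint survival function for the two event times. The sketch below shows the textbook form in a generic parameterization, which may differ from the one used in the paper:

```python
import numpy as np

def gamma_frailty_joint_survival(s1, s2, theta):
    """Joint survival P(T1 > t1, T2 > t2) implied by a shared gamma frailty
    with mean 1 and variance theta > 0, given marginal survivals s1, s2:
        S(t1, t2) = (s1**(-theta) + s2**(-theta) - 1) ** (-1/theta)   (Clayton form)
    As theta -> 0 this approaches independence, s1 * s2."""
    s1, s2 = np.asarray(s1, float), np.asarray(s2, float)
    return (s1 ** (-theta) + s2 ** (-theta) - 1.0) ** (-1.0 / theta)

print(gamma_frailty_joint_survival(0.7, 0.6, theta=1.0))   # ~0.477, positive dependence
print(0.7 * 0.6)                                           # 0.42 under independence
```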

12.
Clinical trials are often planned with high uncertainty about the variance of the primary outcome variable. A poor estimate of the variance, however, may lead to an over- or underpowered study. In the internal pilot study design, the sample variance is calculated at an interim step and the sample size can be adjusted if necessary. The available recalculation procedures base the sample size recalculation only on data from patients who have already completed the study. In this article, we consider a variance estimator that takes into account both the data at the endpoint and at an intermediate point of the treatment phase. We derive asymptotic properties of this estimator and the related sample size recalculation procedure. In a simulation study, the performance of the proposed approach is evaluated and compared with the procedure that uses only long-term data. Simulation results demonstrate that the sample size resulting from the proposed procedure generally shows smaller variability. At the same time, the Type I error rate is not inflated and the achieved power is close to the desired value.
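For context, sample size recalculation in an internal pilot design typically re-applies the usual two-sample formula with the interim variance estimate in place of the planning value. A minimal sketch, assuming a two-sample normal-approximation formula and hypothetical argument names:

```python
from math import ceil
from scipy.stats import norm

def recalculated_n_per_group(sigma2_hat, delta, alpha=0.05, power=0.80, n_min=0):
    """Per-group sample size for a two-sample comparison of means with
    clinically relevant difference `delta`, using the (interim) variance
    estimate `sigma2_hat` in the normal-approximation formula."""
    z_a = norm.ppf(1 - alpha / 2)
    z_b = norm.ppf(power)
    n = 2 * sigma2_hat * (z_a + z_b) ** 2 / delta ** 2
    return max(ceil(n), n_min)

# planned with sigma^2 = 4; interim data suggest sigma^2 = 6.2
print(recalculated_n_per_group(4.0, delta=1.0))    # initial plan: 63 per group
print(recalculated_n_per_group(6.2, delta=1.0))    # recalculated: 98 per group
```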

13.
We consider a conceptual correspondence between the missing data setting and joint modeling of longitudinal and time-to-event outcomes. Based on this correspondence, we formulate an extended shared random effects joint model and provide a characterization of missing at random that is in line with the one in the missing data setting. The ideas are illustrated using data from a study on liver cirrhosis, contrasting the new framework with conventional joint models.

14.
Mixed case interval-censored data arise when the event of interest is known only to occur within an interval induced by a sequence of random examination times. Such data are commonly encountered in disease research with longitudinal follow-up. Furthermore, medical treatment has progressed over the last decade, with an increasing proportion of patients being cured of many types of diseases. Thus, interest has grown in cure models for survival data, which hypothesize that a certain proportion of subjects in the population will never experience the event of interest. In this article, we consider a two-component mixture cure model for regression analysis of mixed case interval-censored data. The first component is a logistic regression model that describes the cure rate, and the second component is a semiparametric transformation model that describes the distribution of event time for the uncured subjects. We propose semiparametric maximum likelihood estimation for the considered model. We develop an EM-type algorithm for obtaining the semiparametric maximum likelihood estimators (SPMLE) of the regression parameters and establish their consistency, efficiency, and asymptotic normality. Extensive simulation studies indicate that the SPMLE performs satisfactorily in a wide variety of settings. The proposed method is illustrated by the analysis of the hypobaric decompression sickness data from the National Aeronautics and Space Administration.
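The two-component structure described above implies, for a subject with cure-model covariates x, a population survival function of the mixture form below. A small sketch with a logistic cure probability and a generic survival function for the uncured (the paper's semiparametric transformation model is not reproduced here):

```python
import numpy as np

def mixture_cure_survival(t, x_cure, beta_cure, surv_uncured):
    """Population survival under a two-component mixture cure model:
        S_pop(t | x) = pi(x) + (1 - pi(x)) * S_u(t | x),
    where pi(x) is the cure probability from a logistic model and
    S_u is the survival function of the uncured subjects."""
    eta = np.dot(x_cure, beta_cure)
    pi = 1.0 / (1.0 + np.exp(-eta))          # logistic cure probability
    return pi + (1.0 - pi) * surv_uncured(t)

# toy example: exponential survival for the uncured, intercept + one covariate
surv_u = lambda t: np.exp(-0.5 * t)
print(mixture_cure_survival(3.0, x_cure=[1.0, 1.0],
                            beta_cure=[-1.0, 0.8], surv_uncured=surv_u))
```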

15.
Nonparametric estimation in a cure model with random cure times
Acute respiratory distress syndrome (ARDS) is a life-threatening acute condition that sometimes follows pneumonia or surgery. Patients who recover and leave the hospital are considered to have been cured at the time they leave the hospital. These data differ from typical data in which cure is a possibility: death times are not observed for patients who are cured and cure times are observed and vary among patients. Here we apply a competing risks model to these data and show it to be equivalent to a mixture model, the more common approach for cure data. Further, we derive an estimator for the variance of the cumulative incidence function from the competing risks model, and thus for the cure rate, based on elementary calculations. We compare our variance estimator to Gray's (1988, Annals of Statistics 16, 1140-1154) estimator, which is based on counting process theory. We find our estimator to be slightly more accurate in small samples. We apply these results to data from an ARDS clinical trial.
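For reference, the nonparametric cumulative incidence function used in such competing-risks formulations is the standard Aalen–Johansen-type estimate, sketched below for right-censored data (point estimate only; the variance estimators discussed in the abstract are not reproduced):

```python
import numpy as np

def cumulative_incidence(time, status, cause, t):
    """Nonparametric (Aalen-Johansen) estimate of P(T <= t, failure from `cause`)
    for right-censored competing-risks data.
    status: 0 = censored, otherwise the failure cause."""
    time, status = np.asarray(time, float), np.asarray(status)
    cif, surv = 0.0, 1.0
    for u in np.unique(time[status != 0]):     # sorted distinct event times
        if u > t:
            break
        at_risk = np.sum(time >= u)
        d_cause = np.sum((time == u) & (status == cause))
        d_all = np.sum((time == u) & (status != 0))
        cif += surv * d_cause / at_risk        # add mass S(u-) * hazard of `cause`
        surv *= 1.0 - d_all / at_risk          # update overall survival
    return cif

time = [1, 2, 2, 3, 4, 5, 6]
status = [1, 2, 0, 1, 2, 0, 1]                 # two competing causes, 0 = censored
print(cumulative_incidence(time, status, cause=1, t=4))
```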

16.
Epidemiologic studies of the short-term effects of ambient particulate matter (PM) on the risk of acute cardiovascular or cerebrovascular events often use data from administrative databases in which only the date of hospitalization is known. A common study design for analyzing such data is the case-crossover design, in which exposure at a time when a patient experiences an event is compared to exposure at times when the patient did not experience an event within a case-control paradigm. However, the time of true event onset may precede hospitalization by hours or days, which can yield attenuated effect estimates. In this article, we consider a marginal likelihood estimator, a regression calibration estimator, and a conditional score estimator, as well as parametric bootstrap versions of each, to correct for this bias. All considered approaches require validation data on the distribution of the delay times. We compare the performance of the approaches in realistic scenarios via simulation, and apply the methods to analyze data from a Boston-area study of the association between ambient air pollution and acute stroke onset. Based on both simulation and the case study, we conclude that a two-stage regression calibration estimator with a parametric bootstrap bias correction is an effective method for correcting bias in health effect estimates arising from delayed onset in a case-crossover study.

17.
Multivariate recurrent event data are usually encountered in many clinical and longitudinal studies in which each study subject may experience multiple recurrent events. For the analysis of such data, most existing approaches have been proposed under the assumption that the censoring times are noninformative, which may not be true especially when the observation of recurrent events is terminated by a failure event. In this article, we consider regression analysis of multivariate recurrent event data with both time-dependent and time-independent covariates, where the censoring times and the recurrent event process are allowed to be correlated via a frailty. The proposed joint model is flexible in that both the distributions of the censoring and frailty variables are left unspecified. We propose a pairwise pseudolikelihood approach and an estimating equation-based approach for estimating coefficients of time-dependent and time-independent covariates, respectively. The large sample properties of the proposed estimates are established, while the finite-sample properties are demonstrated by simulation studies. The proposed methods are applied to the analysis of a set of bivariate recurrent event data from a study of platelet transfusion reactions.

18.
We present methods for causally interpretable meta-analyses that combine information from multiple randomized trials to draw causal inferences for a target population of substantive interest. We consider identifiability conditions, derive implications of the conditions for the law of the observed data, and obtain identification results for transporting causal inferences from a collection of independent randomized trials to a new target population in which experimental data may not be available. We propose an estimator for the potential outcome mean in the target population under each treatment studied in the trials. The estimator uses covariate, treatment, and outcome data from the collection of trials, but only covariate data from the target population sample. We show that it is doubly robust in the sense that it is consistent and asymptotically normal when at least one of the models it relies on is correctly specified. We study the finite sample properties of the estimator in simulation studies and demonstrate its implementation using data from a multicenter randomized trial.
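One widely used doubly robust estimator for transporting a trial's potential outcome mean to a target sample with covariate data only combines an outcome model fit in the trial with inverse-odds-of-participation weights; it may or may not match the authors' exact proposal and is shown here only as a hedged sketch with hypothetical argument names:

```python
import numpy as np

def dr_transport_mean(S, A, Y, mhat_a, phat, ehat_a, a):
    """Doubly robust-style estimate of E[Y^a] in the target population.
    S: 1 = trial participant, 0 = target-sample member
    A, Y: treatment and outcome (used only where S == 1)
    mhat_a: outcome-model predictions g_a(X) for everyone
    phat: estimated P(S = 1 | X); ehat_a: estimated P(A = a | X, S = 1)."""
    S, A, Y = map(np.asarray, (S, A, Y))
    mhat_a, phat, ehat_a = map(np.asarray, (mhat_a, phat, ehat_a))
    n_target = np.sum(S == 0)
    # inverse-odds weights carry trial residuals over to the target population
    w = ((1.0 - phat) / phat) / ehat_a
    augment = np.where((S == 1) & (A == a), w * (Y - mhat_a), 0.0)
    outcome_model_part = np.where(S == 0, mhat_a, 0.0)
    return (augment.sum() + outcome_model_part.sum()) / n_target
```

In this form, consistency requires that either the outcome-model predictions or the pair of participation and treatment models be correctly specified, mirroring the double robustness property described in the abstract.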

19.
A nonparametric estimator of a joint distribution function F0 of a d-dimensional random vector with interval-censored (IC) data is the generalized maximum likelihood estimator (GMLE), where d ≥ 2. The GMLE of F0 with univariate IC data is uniquely defined at each follow-up time. However, this is no longer true in general with multivariate IC data, as demonstrated by a data set from an eye study. How to estimate the survival function and the covariance matrix of the estimator in such a case is a new practical issue in analyzing IC data. We propose a procedure for such a situation and apply it to the data set from the eye study. Our method always results in a GMLE with a nonsingular sample information matrix. We also give a theoretical justification for such a procedure. Extension of our procedure to Cox's regression model is also mentioned.

20.
A cause-specific cumulative incidence function (CIF) is the probability of failure from a specific cause as a function of time. In randomized trials, a difference of cause-specific CIFs (treatment minus control) represents a treatment effect. The cause-specific CIF in each intervention arm can be estimated based on the usual non-parametric Aalen–Johansen estimator, which generalizes the Kaplan–Meier estimator of the CIF in the presence of competing risks. Under random censoring, asymptotically valid Wald-type confidence intervals (CIs) for a difference of cause-specific CIFs at a specific time point can be constructed using one of the published variance estimators. Unfortunately, these intervals can suffer from substantial under-coverage when the outcome of interest is a rare event, as may be the case for example in the analysis of uncommon adverse events. We propose two new approximate interval estimators for a difference of cause-specific CIFs estimated in the presence of competing risks and random censoring. Theoretical analysis and simulations indicate that the new interval estimators are superior to the Wald CIs in the sense of avoiding substantial under-coverage with rare events, while being equivalent to the Wald CIs asymptotically. In the absence of censoring, one of the two proposed interval estimators reduces to the well-known Agresti–Caffo CI for a difference of two binomial parameters. The new methods can be easily implemented with any software package producing point and variance estimates for the Aalen–Johansen estimator, as illustrated in a real data example.
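In the uncensored special case mentioned at the end of the abstract, the Agresti–Caffo interval for a difference of two binomial proportions simply adds one success and one failure to each arm before applying the Wald formula. A minimal sketch:

```python
import numpy as np
from scipy.stats import norm

def agresti_caffo_ci(x1, n1, x2, n2, level=0.95):
    """Agresti-Caffo confidence interval for p1 - p2:
    add one success and one failure to each group, then use the Wald formula."""
    z = norm.ppf(0.5 + level / 2)
    p1, p2 = (x1 + 1) / (n1 + 2), (x2 + 1) / (n2 + 2)
    se = np.sqrt(p1 * (1 - p1) / (n1 + 2) + p2 * (1 - p2) / (n2 + 2))
    diff = p1 - p2
    return diff - z * se, diff + z * se

# rare-event example: 3/200 events on treatment vs 9/200 on control
print(agresti_caffo_ci(3, 200, 9, 200))
```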
