Similar Articles
Found 20 similar articles (search time: 0 ms).
1.
Most statistical methods for censored survival data assume there is no dependence between the lifetime and censoring mechanisms, an assumption which is often doubtful in practice. In this paper we study a parametric model which allows for dependence in terms of a parameter delta and a bias function B(t, theta). We propose a sensitivity analysis on the estimate of the parameter of interest for small values of delta. This parameter measures the dependence between the lifetime and the censoring mechanisms. Its size can be interpreted in terms of a correlation coefficient between the two mechanisms. A medical example suggests that even a small degree of dependence between the failure and censoring processes can have a noticeable effect on the analysis.
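The spirit of such a sensitivity analysis can be sketched numerically. The snippet below is a toy version, not the paper's method: it fits an exponential rate under the ignorable-censoring assumption and then reports first-order adjusted estimates over a grid of small delta values, using a hypothetical constant in place of the generally time-dependent bias function B(t, theta).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
t = rng.exponential(1.0, n)      # latent failure times (true rate 1)
c = rng.exponential(1.5, n)      # censoring times, independent here
obs = np.minimum(t, c)
event = (t <= c).astype(int)

# MLE of the exponential rate assuming ignorable censoring
rate_hat = event.sum() / obs.sum()

# First-order sensitivity: theta(delta) ~ theta_hat * (1 + delta * b),
# with b standing in for the bias function B(t, theta) evaluated at the
# estimate (a hypothetical constant chosen purely for illustration).
b = 0.5
adjusted = {d: rate_hat * (1 + d * b) for d in (-0.2, -0.1, 0.0, 0.1, 0.2)}
for d, r in adjusted.items():
    print(f"delta={d:+.1f}  rate={r:.3f}")
```

The analyst then judges whether the conclusions survive the plausible range of delta.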

2.
Siannis F. Biometrics 2004, 60(3): 704-714.
In this article, we explore the use of a parametric model (for analyzing survival data) which is defined to allow sensitivity analysis for the presence of informative censoring. The dependence between the failure and the censoring processes is expressed through a parameter delta and a general bias function B(t, theta). We calculate the expectation of the potential bias due to informative censoring, which is an overall measure of how misleading our results might be if censoring is actually nonignorable. Bounds are also calculated for quantities of interest, e.g., parameters of the failure-time distribution, which do not depend on the choice of the bias function for fixed delta. An application to systemic lupus erythematosus data illustrates how additional information can reduce the uncertainty in estimates of the location parameter. Sensitivity analysis on a relative risk parameter is also explored.

3.
Cure models are used in time-to-event analysis when not all individuals are expected to experience the event of interest, or when the survival of the considered individuals reaches the same level as the general population. These scenarios correspond to a plateau in the survival and relative survival function, respectively. The main parameters of interest in cure models are the proportion of individuals who are cured, termed the cure proportion, and the survival function of the uncured individuals. Although numerous cure models have been proposed in the statistical literature, there is no consensus on how to formulate these. We introduce a general parametric formulation of mixture cure models and a new class of cure models, termed latent cure (LC) models, together with a general estimation framework and software, which enable fitting of a wide range of different models. Through simulations, we assess the statistical properties of the models with respect to the cure proportion and the survival of the uncured individuals. Finally, we illustrate the models using survival data on colon cancer, which typically display a plateau in the relative survival. As demonstrated in the simulations, mixture cure models whose survival functions are not guaranteed to become constant after a finite time point tend to produce accurate estimates of the cure proportion and the survival of the uncured. However, these models are very unstable in certain cases due to identifiability issues, whereas LC models generally provide stable results at the price of more biased estimates.
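The core mixture cure structure is S(t) = pi + (1 - pi) * S_u(t), where pi is the cure proportion and S_u the survival of the uncured. A minimal sketch under hypothetical assumptions (exponential survival for the uncured, administrative censoring at a fixed time, and a crude grid search standing in for a real maximizer):

```python
import numpy as np

rng = np.random.default_rng(1)
n, pi_true, lam_true, tau = 2000, 0.3, 1.0, 5.0

cured = rng.random(n) < pi_true
t = np.where(cured, np.inf, rng.exponential(1 / lam_true, n))
obs = np.minimum(t, tau)               # administrative censoring at tau
event = (t <= tau).astype(int)

def loglik(pi, lam):
    # Population survival: S(t) = pi + (1 - pi) * exp(-lam * t)
    s_u = np.exp(-lam * obs)
    ll = np.where(event == 1,
                  np.log(1 - pi) + np.log(lam) - lam * obs,
                  np.log(pi + (1 - pi) * s_u))
    return ll.sum()

# Crude grid-search MLE, for illustration only; real cure-model software
# maximizes the likelihood properly and supports flexible S_u.
pis = np.linspace(0.05, 0.60, 56)
lams = np.linspace(0.5, 1.6, 56)
_, pi_hat, lam_hat = max((loglik(p, l), p, l) for p in pis for l in lams)
print(pi_hat, lam_hat)
```

Because nearly all uncured subjects fail before tau here, the censored fraction essentially identifies the cure proportion.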

4.
Hsieh JJ, Ding AA, Wang W. Biometrics 2011, 67(3): 719-729.
Recurrent events data are commonly seen in longitudinal follow-up studies. Dependent censoring often occurs due to death or to exclusion from the study related to the disease process. In this article, we assume flexible marginal regression models on the recurrence process and the dependent censoring time without specifying their dependence structure. The proposed model generalizes the approach of Ghosh and Lin (2003, Biometrics 59, 877-885). The technique of artificial censoring provides a way to maintain the homogeneity of the hypothetical error variables under dependent censoring. Here we propose to apply this technique to two Gehan-type statistics. One considers only order information for pairs, whereas the other utilizes the additional information of observed censoring times available for recurrence data. A model-checking procedure is also proposed to assess the adequacy of the fitted model. The proposed estimators have good asymptotic properties. Their finite-sample performances are examined via simulations. Finally, the proposed methods are applied to analyze data from the AIDS Link to Intravenous Experiences (ALIVE) cohort.
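The pairwise scoring that underlies a Gehan-type statistic can be sketched as follows. This is the classic two-sample Gehan score, not the article's artificial-censoring construction; the function name is hypothetical:

```python
def gehan_statistic(t1, d1, t2, d2):
    """Classic Gehan score: for each pair (i from sample 1, j from sample 2)
    score +1 if subject i is known to outlive subject j, -1 if the reverse
    ordering is known, and 0 when censoring makes the ordering ambiguous."""
    u = 0
    for ti, di in zip(t1, d1):
        for tj, dj in zip(t2, d2):
            if ti > tj and dj == 1:      # j's failure observed before i's time
                u += 1
            elif ti < tj and di == 1:    # i's failure observed before j's time
                u -= 1
    return u

# Fully observed toy data: every pairwise ordering is unambiguous
print(gehan_statistic([2, 4], [1, 1], [1, 3], [1, 1]))  # prints 2
```

A censored comparison such as ([2], [0]) versus ([3], [1]) scores 0, since the censored subject's eventual failure time is unknown.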

5.
We derive the nonparametric maximum likelihood estimate (NPMLE) of the cumulative incidence functions for competing risks survival data subject to interval censoring and truncation. Since the cumulative incidence function NPMLEs give rise to an estimate of the survival distribution which can be undefined over a potentially larger set of regions than the NPMLE of the survival function obtained ignoring failure type, we consider an alternative pseudolikelihood estimator. The methods are then applied to data from a cohort of injecting drug users in Thailand susceptible to infection from HIV-1 subtypes B and E.

6.
In this study we introduce a likelihood-based method, via the Weibull and piecewise exponential distributions, capable of accommodating the dependence between failure and censoring times. The methodology is developed for the analysis of clustered survival data and assumes that failure and censoring times are mutually independent conditional on a latent frailty. The dependent censoring mechanism is accounted for through the frailty effect, by means of a key parameter accommodating the correlation between failure and censored observations. The full specification of the likelihood in our work simplifies the inference procedures with respect to Huang and Wolfe (2002), since it reduces the computational burden of working with the profile likelihood. In addition, the assumptions made for the baseline distributions lead to models with continuous survival functions. To carry out inference, we devise a Monte Carlo EM algorithm. The performance of the proposed models is investigated through a simulation study. Finally, we explore a real application involving patients from the Dialysis Outcomes and Practice Patterns Study observed between 1996 and 2015.
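The piecewise exponential building block itself has a simple occupancy-exposure form: within each interval the rate MLE is events divided by total exposure time. A minimal sketch ignoring the frailty and clustering (the helper name and data are hypothetical):

```python
import numpy as np

def piecewise_exp_rates(times, events, cuts):
    """Per-interval rate MLEs for a piecewise exponential model.
    Intervals are [0, c1), [c1, c2), ..., [c_k, inf)."""
    times = np.asarray(times, float)
    events = np.asarray(events, int)
    edges = np.concatenate(([0.0], np.asarray(cuts, float), [np.inf]))
    rates = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        # time each subject spends inside [lo, hi)
        exposure = np.clip(np.minimum(times, hi) - lo, 0, None).sum()
        d = ((times >= lo) & (times < hi) & (events == 1)).sum()
        rates.append(d / exposure if exposure > 0 else np.nan)
    return np.array(rates)

# Three subjects, one cut point at t = 2
print(piecewise_exp_rates([1, 2, 3], [1, 0, 1], [2]))
```

For these data the first interval accumulates 5 units of exposure and 1 event (rate 0.2), the second 1 unit and 1 event (rate 1.0).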

7.
8.
9.
Huang X, Zhang N. Biometrics 2008, 64(4): 1090-1099.
In clinical studies, when censoring is caused by competing risks or patient withdrawal, there is always a concern about the validity of treatment effect estimates that are obtained under the assumption of independent censoring. Because dependent censoring is nonidentifiable without additional information, the best we can do is a sensitivity analysis to assess the changes in parameter estimates under different assumptions about the association between failure and censoring. This analysis is especially useful when knowledge about such association is available through literature review or expert opinion. In a regression analysis setting, the consequences of falsely assuming independent censoring on parameter estimates are not clear. Neither the direction nor the magnitude of the potential bias can be easily predicted. We provide an approach for sensitivity analysis in the widely used Cox proportional hazards models. The joint distribution of the failure and censoring times is assumed to be a function of their marginal distributions; this function is called a copula. Under this assumption, we propose an iterative algorithm to estimate the regression parameters and marginal survival functions. Simulation studies show that this algorithm works well. We apply the proposed sensitivity analysis approach to data from an AIDS clinical trial in which 27% of the patients withdrew due to toxicity or at the request of the patient or investigator.
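One common concrete copula choice is Clayton's. The sketch below (parameter values are hypothetical, and this is only the data-generating side, not the estimation algorithm) simulates failure and censoring times whose joint distribution follows a Clayton copula, using the standard conditional-inversion sampler:

```python
import numpy as np

rng = np.random.default_rng(2)
n, theta = 20000, 2.0                 # theta > 0 gives positive dependence

# Conditional inversion for the Clayton copula
# C(u, v) = (u^-theta + v^-theta - 1)^(-1/theta)
u = rng.random(n)
w = rng.random(n)
v = (u ** -theta * (w ** (-theta / (1 + theta)) - 1) + 1) ** (-1 / theta)

# Map the dependent uniforms to exponential failure and censoring times
t_fail = -np.log(u)                   # rate-1 failure times
t_cens = -np.log(v) / 0.7             # dependent censoring times
obs = np.minimum(t_fail, t_cens)
event = (t_fail <= t_cens).astype(int)

# Sanity check against the copula itself: P(U < 1/2, V < 1/2) = C(1/2, 1/2)
print(((u < 0.5) & (v < 0.5)).mean(), (2 ** theta * 2 - 1) ** (-1 / theta))
```

Fitting the naive independence model to (obs, event) generated this way is exactly the scenario whose bias the sensitivity analysis probes.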

10.
Jiang H, Fine JP, Chappell R. Biometrics 2005, 61(2): 567-575.
Studies of chronic life-threatening diseases often involve both mortality and morbidity. In observational studies, the data may also be subject to administrative left truncation and right censoring. Because mortality and morbidity may be correlated and mortality may censor morbidity, the Lynden-Bell estimator for left-truncated and right-censored data may be biased for estimating the marginal survival function of the non-terminal event. We propose a semiparametric estimator for this survival function based on a joint model for the two time-to-event variables, which utilizes the gamma frailty specification in the region of the observable data. First, we develop a novel estimator for the gamma frailty parameter under left truncation. Using this estimator, we then derive a closed-form estimator for the marginal distribution of the non-terminal event. The large sample properties of the estimators are established via asymptotic theory. The methodology performs well with moderate sample sizes, both in simulations and in an analysis of data from a diabetes registry.

11.
Zhang J, Heitjan DF. Biometrics 2006, 62(4): 1260-1268.
Right- and interval-censored data are common special cases of coarsened data (Heitjan and Rubin, 1991, Annals of Statistics 19, 2244-2253). As with missing data, standard statistical methods that ignore the random nature of the coarsening mechanism may lead to incorrect inferences. We extend a simple sensitivity analysis tool, the index of local sensitivity to nonignorability (Troxel, Ma, and Heitjan, 2004, Statistica Sinica 14, 1221-1237), to the evaluation of nonignorability of the coarsening process in the general coarse-data model. By converting this index into a simple graphical display one can easily assess the sensitivity of key inferences to nonignorable coarsening. We illustrate the validity of the method with a simulated example, and apply it to right-censored data from an observational study of cardiac transplantation and to interval-censored data on time to detectable viral load from a clinical trial in HIV disease.
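The flavor of such a local sensitivity index can be conveyed with a toy nonignorability parameter gamma. Everything below is a hypothetical construction, far simpler than the coarse-data model: the estimate is written as a function of gamma and its derivative at gamma = 0 (the ignorable case) is taken by finite differences, then checked against the analytic value.

```python
import numpy as np

rng = np.random.default_rng(3)
t = rng.exponential(1.0, 400)
c = rng.exponential(1.0, 400)
obs = np.minimum(t, c)
event = (t <= c).astype(int)

d = event.sum()
t_event = obs[event == 1].sum()
t_cens = obs[event == 0].sum()

def rate(gamma):
    # Toy nonignorability model: censored follow-up time is inflated by
    # exp(gamma) before entering the exponential-rate MLE (hypothetical).
    return d / (t_event + np.exp(gamma) * t_cens)

eps = 1e-5
index = (rate(eps) - rate(-eps)) / (2 * eps)    # local sensitivity at gamma = 0
analytic = -d * t_cens / (t_event + t_cens) ** 2
print(index, analytic)
```

A small index means the inference barely moves for plausible departures from ignorability; a large one flags fragile conclusions.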

12.
13.
Many long-lived vertebrate species are under threat in the Anthropocene, but their conservation is hampered by a lack of the demographic information needed to assess long-term population viability. When longitudinal studies (e.g., capture-mark-recapture designs) are not feasible, the only available data may be cross-sectional, for example, strandings for marine mammals. Survival analysis deals with age-at-death (i.e., time-to-event) data and allows estimation of survivorship and hazard rates, assuming that the cross-sectional sample is representative. Accommodating a bathtub-shaped hazard, as expected in wild populations, was historically difficult and required specific models. We identified a simple linear regression model with individual frailty that can fit a bathtub-shaped hazard, take covariates into account, allow goodness-of-fit assessments, and give accurate estimates of survivorship in realistic settings. We first conducted a Monte Carlo study, simulating age-at-death data to assess the accuracy of the estimates with respect to sample size. Second, we applied this framework to a handful of case studies from published work on marine mammals, a group with many threatened and data-deficient species. We found that our framework is flexible and accurate for estimating survivorship with a sample size of 300. This approach is promising for obtaining important demographic information on data-poor species.
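To see what such cross-sectional age-at-death data look like, one can simulate from a bathtub-shaped (Siler-type) hazard by inverting the cumulative hazard; the parameter values below are hypothetical choices for illustration, not taken from the paper, and the fitting step is omitted.

```python
import numpy as np

rng = np.random.default_rng(4)
a1, b1, a2, a3, b3 = 0.5, 1.0, 0.02, 0.001, 0.15  # hypothetical Siler params

def cum_hazard(t):
    # H(t) for h(t) = a1*exp(-b1*t) + a2 + a3*exp(b3*t): high infant
    # mortality, a flat middle, and senescent increase (bathtub shape)
    return (a1 / b1 * (1 - np.exp(-b1 * t)) + a2 * t
            + a3 / b3 * (np.exp(b3 * t) - 1))

def sample_age(e):
    # Invert H(T) = E with E ~ Exp(1), by bisection (H is increasing)
    lo, hi = 0.0, 1.0
    while cum_hazard(hi) < e:
        hi *= 2
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if cum_hazard(mid) < e:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

ages = np.array([sample_age(e) for e in rng.exponential(1.0, 5000)])
emp_surv_5 = (ages > 5).mean()
print(emp_surv_5, np.exp(-cum_hazard(5.0)))  # empirical vs true S(5)
```

The empirical survivorship of the simulated "stranding" sample tracks the true S(t) = exp(-H(t)), which is what a well-specified model should recover.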

14.
Huang X, Wolfe RA. Biometrics 2002, 58(3): 510-520.
To account for the correlation between failure and censoring, we propose a new frailty model for clustered data. In this model, the risk to be censored is affected by the risk of failure. This model allows flexibility in the direction and degree of dependence between failure and censoring. It includes the traditional frailty model as a special case. It allows censoring by some causes to be analyzed as informative while treating censoring by other causes as noninformative. It can also analyze data for competing risks. To fit the model, the EM algorithm is used with Markov chain Monte Carlo simulations in the E-steps. Simulation studies and analysis of data for kidney disease patients are provided. Consequences of incorrectly assuming noninformative censoring are investigated.

15.
Sternberg MR, Satten GA. Biometrics 1999, 55(2): 514-522.
Chain-of-events data are longitudinal observations on a succession of events that can only occur in a prescribed order. One goal in an analysis of this type of data is to determine the distribution of times between the successive events. This is difficult when individuals are observed periodically rather than continuously because the event times are then interval censored. Chain-of-events data may also be subject to truncation when individuals can only be observed if a certain event in the chain (e.g., the final event) has occurred. We provide a nonparametric approach to estimate the distributions of times between successive events in discrete time for data such as these under the semi-Markov assumption that the times between events are independent. This method uses a self-consistency algorithm that extends Turnbull's algorithm (1976, Journal of the Royal Statistical Society, Series B 38, 290-295). The quantities required to carry out the algorithm can be calculated recursively for improved computational efficiency. Two examples using data from studies involving HIV disease are used to illustrate our methods.
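Turnbull's self-consistency iteration in its simplest single-transition form (not the chain-of-events extension) can be sketched directly. As a simplification, the candidate support below is restricted to the observed finite right endpoints, which suffices for intervals of the form (L, R]; the function name is hypothetical.

```python
import numpy as np

def turnbull(L, R, tol=1e-8, max_iter=1000):
    """Self-consistency EM for interval-censored data (L_i, R_i].
    Returns candidate support points and their estimated masses."""
    L, R = np.asarray(L, float), np.asarray(R, float)
    s = np.unique(R[np.isfinite(R)])         # simplified candidate support
    # A[i, j] = 1 if support point s_j is compatible with interval i
    A = (L[:, None] < s[None, :]) & (s[None, :] <= R[:, None])
    p = np.full(len(s), 1.0 / len(s))
    for _ in range(max_iter):
        num = A * p                          # unnormalized posterior weights
        w = num / num.sum(axis=1, keepdims=True)   # E-step (rows must be > 0)
        p_new = w.mean(axis=0)               # M-step: average responsibility
        if np.abs(p_new - p).max() < tol:
            return s, p_new
        p = p_new
    return s, p

# Two observations in (0, 1], one in (1, 2]
print(turnbull([0, 0, 1], [1, 1, 2]))
```

For this toy data the mass splits 2/3 at time 1 and 1/3 at time 2, matching the obvious empirical answer.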

16.
Current status data arise when each subject can be examined only once, so the failure time of interest is known only to occur before or after the examination time. If the examination time is intrinsically related to the failure time of interest, the examination time is referred to as an informative censoring time. Such data occur in many fields, for example, epidemiological surveys and animal carcinogenicity experiments. To avoid the severely misleading inferences that result from ignoring informative censoring, we propose a class of semiparametric transformation models with log-normal frailty for current status data with informative censoring. A shared frailty is used to account for the correlation between the failure time and the censoring time. The expectation-maximization (EM) algorithm, combined with a sieve method for approximating an infinite-dimensional parameter, is employed to estimate all parameters. To investigate the finite-sample properties of the proposed method, simulation studies are conducted, and a data set from a rodent tumorigenicity experiment is analyzed for illustrative purposes.
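As a baseline for contrast, under noninformative examination times the NPMLE of the distribution function from current status data reduces to isotonic regression of the failure indicators on the examination times, computable by pool-adjacent-violators. A minimal sketch (the data and helper name are hypothetical):

```python
import numpy as np

def pava(y):
    """Pool-adjacent-violators: nondecreasing least-squares fit to y
    with equal weights, via the standard block-merging recursion."""
    blocks = []                              # each block: [mean, count]
    for v in y:
        blocks.append([float(v), 1])
        while len(blocks) > 1 and blocks[-2][0] > blocks[-1][0]:
            m2, w2 = blocks.pop()
            m1, w1 = blocks.pop()
            blocks.append([(m1 * w1 + m2 * w2) / (w1 + w2), w1 + w2])
    out = []
    for m, w in blocks:
        out.extend([m] * w)
    return np.array(out)

# Current status data: exam time x_i, indicator of failure by time x_i
x = np.array([0.5, 1.0, 1.5, 2.0, 3.0])
d = np.array([0, 1, 0, 1, 1])
order = np.argsort(x)
F_hat = pava(d[order])    # NPMLE of F at the sorted examination times
print(F_hat)
```

When the examination time carries information about the failure time, this simple estimator is exactly what becomes invalid, motivating the shared-frailty model.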

17.
Zhang M, Schaubel DE. Biometrics 2011, 67(3): 740-749.
In epidemiologic studies of time to an event, mean lifetime is often of direct interest. We propose methods to estimate group- (e.g., treatment-) specific differences in restricted mean lifetime for studies where treatment is not randomized and lifetimes are subject to both dependent and independent censoring. The proposed methods may be viewed as a hybrid of two general approaches to accounting for confounders. Specifically, treatment-specific proportional hazards models are employed to account for baseline covariates, while inverse probability of censoring weighting is used to accommodate time-dependent predictors of censoring. The average causal effect is then obtained by averaging over differences in fitted values based on the proportional hazards models. Large-sample properties of the proposed estimators are derived and simulation studies are conducted to assess their finite-sample applicability. We apply the proposed methods to liver wait list mortality data from the Scientific Registry of Transplant Recipients.
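Restricted mean lifetime is simply the area under the survival curve up to a horizon tau. A minimal unweighted sketch via the Kaplan-Meier estimator (the paper's covariate adjustment and inverse probability of censoring weighting are omitted; function names are hypothetical):

```python
import numpy as np

def km(times, events):
    """Kaplan-Meier survival estimate at the distinct event times."""
    t = np.asarray(times, float)
    d = np.asarray(events, int)
    uniq = np.unique(t[d == 1])
    s, surv = 1.0, []
    for u in uniq:
        at_risk = (t >= u).sum()
        deaths = ((t == u) & (d == 1)).sum()
        s *= 1 - deaths / at_risk
        surv.append(s)
    return uniq, np.array(surv)

def rmst(times, events, tau):
    """Restricted mean survival time: area under S(t) on [0, tau]."""
    uniq, surv = km(times, events)
    grid = np.concatenate(([0.0], uniq[uniq < tau], [tau]))
    s_vals = np.concatenate(([1.0], surv[uniq < tau]))
    return (np.diff(grid) * s_vals).sum()   # step-function integral

print(rmst([1, 2, 3], [1, 1, 1], 3.0))      # prints 2.0
```

A group difference in restricted mean lifetime is then rmst(group A) - rmst(group B); the paper's contribution is making that contrast causal under confounding and dependent censoring.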

18.
19.
20.

Copyright © Beijing Qinyun Technology Development Co., Ltd. 京ICP备09084417号