Similar Literature
20 similar documents found
1.
A competing risk model is developed to accommodate both planned Type I censoring and random withdrawals. MLEs, their properties, and confidence regions for parameters and mean lifetimes are obtained for a model that regards random censoring as a competing risk, and are compared with those obtained for the model in which withdrawals are regarded as random censoring. Estimated net and crude probabilities are calculated and compared for the two models. The model is developed for two competing risks, one following a Weibull distribution and the other a Rayleigh distribution, with random withdrawals following a Weibull distribution.
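As a concrete illustration of the data structure described above, the following minimal Python sketch simulates two competing risks (a Weibull and a Rayleigh latent time), a Weibull random-withdrawal time, and planned Type I censoring at a fixed study end; all parameter values and the simulation itself are illustrative assumptions, not taken from the paper.

import numpy as np

rng = np.random.default_rng(0)
n, tau = 500, 3.0                               # sample size and Type I censoring time (illustrative)

# Latent times (all parameter values are invented for illustration)
t1 = rng.weibull(1.5, n) * 2.0                  # risk 1: Weibull(shape=1.5, scale=2.0)
t2 = 1.8 * np.sqrt(rng.exponential(1.0, n) * 2.0)   # risk 2: Rayleigh(scale=1.8)
w  = rng.weibull(1.2, n) * 4.0                  # random withdrawal: Weibull(1.2, 4.0)

obs = np.minimum.reduce([t1, t2, w, np.full(n, tau)])   # observed follow-up time
cause = np.select(
    [(t1 <= t2) & (t1 <= w) & (t1 <= tau),
     (t2 <  t1) & (t2 <= w) & (t2 <= tau),
     (w  <  t1) & (w  <  t2) & (w  <= tau)],
    [1, 2, 3], default=0)                       # 0 = Type I censored at tau

for k, label in [(1, "Weibull risk"), (2, "Rayleigh risk"),
                 (3, "withdrawal"), (0, "Type I censored")]:
    print(label, (cause == k).mean())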

2.
Regression modeling of semicompeting risks data
Peng L, Fine JP. Biometrics 2007, 63(1): 96-108.
Semicompeting risks data are often encountered in clinical trials with intermediate endpoints subject to dependent censoring from informative dropout. Unlike with competing risks data, dropout may not be dependently censored by the intermediate event. There has recently been increased attention to these data, in particular inferences about the marginal distribution of the intermediate event without covariates. In this article, we incorporate covariates and formulate their effects on the survival function of the intermediate event via a functional regression model. To accommodate informative censoring, a time-dependent copula model is proposed in the observable region of the data which is more flexible than standard parametric copula models for the dependence between the events. The model permits estimation of the marginal distribution under weaker assumptions than in previous work on competing risks data. New nonparametric estimators for the marginal and dependence models are derived from nonlinear estimating equations and are shown to be uniformly consistent and to converge weakly to Gaussian processes. Graphical model checking techniques are presented for the assumed models. Nonparametric tests are developed accordingly, as are inferences for parametric submodels for the time-varying covariate effects and copula parameters. A novel time-varying sensitivity analysis is developed using the estimation procedures. Simulations and an AIDS data analysis demonstrate the practical utility of the methodology.
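For intuition about copula-based dependence between the intermediate and terminal events, the sketch below simulates dependent event times from a standard (time-constant) Clayton copula with exponential margins using the gamma-frailty construction. It is only a baseline illustration, since the paper's model is a more flexible time-dependent copula; the rates and the dependence parameter theta are assumptions for illustration.

import numpy as np

rng = np.random.default_rng(1)
n, theta = 2000, 2.0                    # theta > 0 controls Clayton dependence (illustrative)

# Gamma-frailty construction of a Clayton copula: U_j = (1 + E_j / W) ** (-1/theta)
w = rng.gamma(shape=1.0 / theta, scale=1.0, size=n)
e1, e2 = rng.exponential(1.0, n), rng.exponential(1.0, n)
u1 = (1.0 + e1 / w) ** (-1.0 / theta)
u2 = (1.0 + e2 / w) ** (-1.0 / theta)

# Exponential margins: intermediate event (rate 0.5) and terminal event (rate 0.2)
t1 = -np.log(u1) / 0.5
t2 = -np.log(u2) / 0.2

# Semicompeting-risks structure: the intermediate event is observable only before the terminal event
x1 = np.minimum(t1, t2)                 # observed intermediate-event time
d1 = (t1 <= t2).astype(int)             # 1 if the intermediate event was seen
print("Kendall's tau implied by theta:", theta / (theta + 2))
print("fraction with intermediate event observed:", d1.mean())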

3.
In many clinical trials and evaluations using medical care administrative databases it is of interest to estimate not only the survival time of a given treatment modality but also the total associated cost. The most widely used estimator for data subject to censoring is the Kaplan-Meier (KM) or product-limit (PL) estimator. The optimality properties of this estimator applied to time-to-event data (consistency, etc.) under the assumptions of random censorship have been established. However, whenever the relationship between cost and survival time includes an error term to account for random differences among patients' costs, the dependency between cumulative treatment cost at the time of censoring and at the survival time results in KM giving biased estimates. A similar phenomenon has previously been noted in the context of estimating quality-adjusted survival time. We propose an estimator for mean cost which exploits the underlying relationship between total treatment cost and survival time. The proposed method utilizes either parametric or nonparametric regression to estimate this relationship and is consistent when this relationship is consistently estimated. We then present simulation results which illustrate the gain in finite-sample efficiency when compared with another recently proposed estimator. The methods are then applied to the estimation of mean cost for two studies where right-censoring was present. The first is the heart failure clinical trial Studies of Left Ventricular Dysfunction (SOLVD). The second is a Health Maintenance Organization (HMO) database study of the cost of ulcer treatment.
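For reference, here is a from-scratch Python sketch of the Kaplan-Meier (product-limit) estimator that the abstract builds on; the toy data are invented for illustration and are not from either study.

import numpy as np

def kaplan_meier(time, event):
    # Product-limit estimate of S(t) at each distinct event time.
    # time: observed times (event or censoring); event: 1 = event, 0 = censored.
    time, event = np.asarray(time, float), np.asarray(event, int)
    event_times = np.unique(time[event == 1])
    surv, s = [], 1.0
    for t in event_times:
        at_risk = np.sum(time >= t)                    # subjects still under observation
        d = np.sum((time == t) & (event == 1))         # events at this time
        s *= 1.0 - d / at_risk
        surv.append(s)
    return event_times, np.array(surv)

# toy data: 10 subjects; event = 0 marks a censored observation
t = [2, 3, 3, 5, 6, 7, 8, 9, 11, 12]
e = [1, 1, 0, 1, 0, 1, 1, 0, 1, 0]
times, surv = kaplan_meier(t, e)
for ti, si in zip(times, surv):
    print(f"t={ti:.0f}  S(t)={si:.3f}")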

4.
A competing risk model, accommodating both Type I censoring and random withdrawals, is expanded to incorporate concomitant information by allowing the parameters of the underlying distributions to be a linear function of two covariates. The model is developed for two competing risks, one following a Weibull distribution and the other a Rayleigh distribution, with random withdrawals following a Weibull distribution. A method is developed for testing the equality of the coefficients of a given covariate across the competing risks using MLEs.

5.
This paper reviews a general framework for the modelling of longitudinal data with random measurement times based on marked point processes and presents a worked example. We construct quite general regression models for longitudinal data, which may in particular include censoring that depends only on the past and on outside random variation, as well as dependence between measurement times and measurements. The modelling also generalises statistical counting process models. We review a non-parametric Nadaraya-Watson kernel estimator of the regression function, and a parametric analysis based on a conditional least squares (CLS) criterion. The parametric analysis presented is a conditional version of the generalised estimating equations of Liang and Zeger (1986). We conclude that the usual nonparametric and parametric regression modelling can be applied to this general set-up, with some modifications. The presented framework provides an easily implemented and powerful tool for model building for repeated measurements.
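A generic Python sketch of the Nadaraya-Watson kernel regression estimator reviewed above, using a Gaussian kernel; the simulated "measurement times", the sine-shaped regression function, and the bandwidth are illustrative assumptions, not the paper's worked example.

import numpy as np

def nadaraya_watson(x_grid, x, y, h):
    # m_hat(x) = sum K((x - x_i)/h) * y_i / sum K((x - x_i)/h), Gaussian kernel K, bandwidth h
    x_grid = np.asarray(x_grid, float)[:, None]
    k = np.exp(-0.5 * ((x_grid - x) / h) ** 2)       # kernel weights, one row per grid point
    return (k * y).sum(axis=1) / k.sum(axis=1)

rng = np.random.default_rng(2)
x = rng.uniform(0, 2 * np.pi, 200)                   # irregular measurement times
y = np.sin(x) + rng.normal(0, 0.3, x.size)           # noisy repeated measurements
grid = np.linspace(0, 2 * np.pi, 5)
print(np.round(nadaraya_watson(grid, x, y, h=0.4), 2))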

6.
Yi Li, Lu Tian, Lee-Jen Wei. Biometrics 2011, 67(2): 427-435.
In a longitudinal study, suppose that the primary endpoint is the time to a specific event. This response variable, however, may be censored by an independent censoring variable or by the occurrence of one of several dependent competing events. For each study subject, a set of baseline covariates is collected. The question is how to construct a reliable prediction rule for a future subject's profile of all competing risks of interest at a specific time point for risk-benefit decision making. In this article, we propose a two-stage procedure to make inferences about such subject-specific profiles. In the first step, we use a parametric model to obtain a univariate risk index score system. We then consistently estimate the average competing risks for subjects who share the same parametric index score via a nonparametric function estimation procedure. We illustrate this new proposal with data from a randomized clinical trial evaluating the efficacy of a treatment for prostate cancer. The primary endpoint for this study was the time to prostate cancer death, which was subject to two types of dependent competing events: cardiovascular death and death from other causes.

7.

Objective:

We demonstrate the utility of parametric survival analysis. The analysis of longevity as a function of risk factors such as body mass index (BMI; kg/m2), activity levels, and dietary factors is a mainstay of obesity research. Modeling survival through hazard functions, relative risks, or odds of dying with methods such as Cox proportional hazards or logistic regression is the most common approach and has many advantages. However, these methods also have disadvantages in terms of ease of interpretability, especially for non-statisticians; the need for additional data to convert parameter estimates to estimates of years of life lost (YLL); debates about the appropriate time scale in the model; and an inability to estimate median survival time when the censoring rate is too high.

Design and Methods:

We will conduct parametric survival analyses with multiple distributions, including distributions that are known to be poor fits (Gaussian), as well as a newly discovered "Compressed Gaussian" distribution.

Results:

Parametric survival analysis models were able to accurately estimate median survival times in a population-based data set of 15,703 individuals, owing to the central limit theorem, even for distributions that were not good fits and when the censoring rate was high.

Conclusions:

Parametric survival models are able to provide more direct answers and, in our analysis of an obesity-related data set, gave consistent YLL estimates regardless of the distribution used. We recommend increased consideration of parametric survival models in chronic disease and risk factor epidemiology.
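To illustrate the kind of analysis advocated above, the sketch below fits a two-parameter Weibull model to simulated right-censored data by maximum likelihood and reads the median survival time off the fitted distribution. The simulated data, the Weibull choice, and all parameter values are assumptions for illustration; they are unrelated to the population-based data set analyzed in the paper.

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
n = 1000
t_true = rng.weibull(1.3, n) * 10.0              # true lifetimes: Weibull(shape=1.3, scale=10)
c = rng.uniform(0.5, 8.0, n)                     # heavy administrative censoring
time = np.minimum(t_true, c)
event = (t_true <= c).astype(int)

def neg_loglik(par):
    k, lam = np.exp(par)                          # shape and scale, log-parameterized to stay positive
    log_h = np.log(k / lam) + (k - 1) * np.log(time / lam)   # log hazard
    cum_h = (time / lam) ** k                                # cumulative hazard
    return -(event * log_h - cum_h).sum()

fit = minimize(neg_loglik, x0=np.log([1.0, np.median(time)]), method="Nelder-Mead")
k_hat, lam_hat = np.exp(fit.x)
median_hat = lam_hat * np.log(2.0) ** (1.0 / k_hat)          # Weibull median survival time
print(f"censoring rate = {1 - event.mean():.2f}, estimated median = {median_hat:.2f}")
print(f"true median = {10.0 * np.log(2.0) ** (1.0 / 1.3):.2f}")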

8.
Many biological or medical experiments have as their goal to estimate the survival function of a specified population of subjects when the time to the specified event may be censored due to loss to follow-up, the occurrence of another event that precludes the occurrence of the event of interest, or the study being terminated before the event of interest occurs. This paper suggests an improvement of the Kaplan-Meier product-limit estimator when the censoring mechanism is random. The proposed estimator treats the uncensored observations nonparametrically and uses a parametric model only for the censored observations. One version of this proposed estimator always has a smaller bias and mean squared error than the product-limit estimator. An example estimating the survival function of patients enrolled in the Ohio State University Bone Marrow Transplant Program is presented.

9.
A cause-specific cumulative incidence function (CIF) is the probability of failure from a specific cause as a function of time. In randomized trials, a difference of cause-specific CIFs (treatment minus control) represents a treatment effect. Cause-specific CIF in each intervention arm can be estimated based on the usual non-parametric Aalen–Johansen estimator which generalizes the Kaplan–Meier estimator of CIF in the presence of competing risks. Under random censoring, asymptotically valid Wald-type confidence intervals (CIs) for a difference of cause-specific CIFs at a specific time point can be constructed using one of the published variance estimators. Unfortunately, these intervals can suffer from substantial under-coverage when the outcome of interest is a rare event, as may be the case for example in the analysis of uncommon adverse events. We propose two new approximate interval estimators for a difference of cause-specific CIFs estimated in the presence of competing risks and random censoring. Theoretical analysis and simulations indicate that the new interval estimators are superior to the Wald CIs in the sense of avoiding substantial under-coverage with rare events, while being equivalent to the Wald CIs asymptotically. In the absence of censoring, one of the two proposed interval estimators reduces to the well-known Agresti–Caffo CI for a difference of two binomial parameters. The new methods can be easily implemented with any software package producing point and variance estimates for the Aalen–Johansen estimator, as illustrated in a real data example.
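Since one of the proposed intervals reduces to the Agresti–Caffo interval in the absence of censoring, here is a minimal Python sketch of that classical interval for a difference of two binomial proportions (add one success and one failure to each arm before applying the Wald formula); the rare-event counts in the example are invented.

import numpy as np
from scipy.stats import norm

def agresti_caffo_ci(x1, n1, x2, n2, level=0.95):
    # Agresti-Caffo interval for p1 - p2: add 1 success and 1 failure to each arm.
    p1 = (x1 + 1) / (n1 + 2)
    p2 = (x2 + 1) / (n2 + 2)
    se = np.sqrt(p1 * (1 - p1) / (n1 + 2) + p2 * (1 - p2) / (n2 + 2))
    z = norm.ppf(0.5 + level / 2)
    d = p1 - p2
    return d - z * se, d + z * se

# rare-event example: 3/500 events under treatment vs 9/500 under control
print(agresti_caffo_ci(3, 500, 9, 500))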

10.
Mapping quantitative trait loci with censored observations
Diao G, Lin DY, Zou F. Genetics 2004, 168(3): 1689-1698.
The existing statistical methods for mapping quantitative trait loci (QTL) assume that the phenotype follows a normal distribution and is fully observed. These assumptions may not be satisfied when the phenotype pertains to the survival time or failure time, which has a skewed distribution and is usually subject to censoring due to random loss of follow-up or limited duration of the experiment. In this article, we propose an interval-mapping approach for censored failure time phenotypes. We formulate the effects of QTL on the failure time through parametric proportional hazards models and develop efficient likelihood-based inference procedures. In addition, we show how to assess genome-wide statistical significance. The performance of the proposed methods is evaluated through extensive simulation studies. An application to a mouse cross is provided.

11.
Mandel M, Betensky RA. Biometrics 2007, 63(2): 405-412.
Several goodness-of-fit tests of a lifetime distribution have been suggested in the literature; many take into account censoring and/or truncation of event times. In some contexts, a goodness-of-fit test for the truncation distribution is of interest. In particular, better estimates of the lifetime distribution can be obtained when knowledge of the truncation law is exploited. In cross-sectional sampling, for example, there are theoretical justifications for the assumption of a uniform truncation distribution, and several studies have used it to improve the efficiency of their survival estimates. The duality of lifetime and truncation in the absence of censoring enables methods for testing goodness of fit of the lifetime distribution to be used for testing goodness of fit of the truncation distribution. However, under random censoring, this duality does not hold and different tests are required. In this article, we introduce several goodness-of-fit tests for the truncation distribution and investigate their performance in the presence of censored event times using simulation. We demonstrate the use of our tests on two data sets.

12.

Longitudinal studies with binary outcomes characterized by informative right censoring are commonly encountered in clinical, basic, behavioral, and health sciences. Approaches developed to analyze data with binary outcomes have mainly been tailored to clustered or longitudinal data with observations missing completely at random or at random. Studies that have focused on informative right censoring with binary outcomes are characterized by embedded computational complexity and difficulty of implementation. Here we present a new maximum likelihood-based approach with repeated binary measures modeled in a generalized linear mixed model as a function of time and other covariates. The longitudinal binary outcome and the censoring process, determined by the number of times a subject is observed, share latent random variables (a random intercept and slope); these subject-specific random effects are common to both models. A simulation study and sensitivity analysis were conducted to test the model under different assumptions and censoring settings. Our results showed accuracy of the estimates generated under this model when censoring was fully informative or partially informative with dependence on the slopes. A successful implementation was undertaken on a cohort of renal transplant patients with blood urea nitrogen as a binary outcome measured over time to indicate normal and abnormal kidney function until the onset of graft rejection, which resulted in informative right censoring. In addition to its novelty and accuracy, a key advantage of the proposed model is that it can be implemented with available analytical tools and applied to any other longitudinal dataset with informative censoring.

13.
Stare J, Perme MP, Henderson R. Biometrics 2011, 67(3): 750-759.
There is no shortage of proposed measures of prognostic value of survival models in the statistical literature. They come under different names, including explained variation, correlation, explained randomness, and information gain, but their goal is common: to define something analogous to the coefficient of determination R2 in linear regression. None, however, have been uniformly accepted, none have been extended to general event history data, including recurrent events, and many cannot incorporate time-varying effects or covariates. We present here a measure specifically tailored for use with general dynamic event history regression models. The measure is applicable and interpretable in discrete or continuous time; with tied data or otherwise; with time-varying, time-fixed, or dynamic covariates; with time-varying or time-constant effects; with single or multiple event times; with parametric or semiparametric models; and under general independent censoring/observation. For single-event survival data with neither censoring nor time dependency it reduces to the concordance index. We give expressions for its population value and the variance of the estimator and explore its use in simulations and applications. A web link to R software is provided.
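For the uncensored single-event special case mentioned above, the measure reduces to the concordance index; a plain Python sketch of that index follows, with invented data.

import numpy as np
from itertools import combinations

def concordance_index(time, score):
    # C-index for uncensored data: fraction of usable pairs in which the subject
    # with the higher risk score has the shorter survival time (score ties count 1/2).
    conc = usable = 0.0
    for i, j in combinations(range(len(time)), 2):
        if time[i] == time[j]:
            continue                               # tied times carry no ordering information
        usable += 1
        shorter, longer = (i, j) if time[i] < time[j] else (j, i)
        if score[shorter] > score[longer]:
            conc += 1
        elif score[shorter] == score[longer]:
            conc += 0.5
    return conc / usable

t = np.array([5.0, 3.0, 9.0, 1.0, 7.0])
risk = np.array([0.8, 1.1, 0.2, 2.0, 0.5])         # higher score = higher assumed risk
print(concordance_index(t, risk))                  # 1.0 here: perfectly concordant toy data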

14.
Most statistical methods for censored survival data assume there is no dependence between the lifetime and censoring mechanisms, an assumption which is often doubtful in practice. In this paper we study a parametric model which allows for dependence in terms of a parameter delta and a bias function B(t, theta). We propose a sensitivity analysis on the estimate of the parameter of interest for small values of delta. This parameter measures the dependence between the lifetime and the censoring mechanisms. Its size can be interpreted in terms of a correlation coefficient between the two mechanisms. A medical example suggests that even a small degree of dependence between the failure and censoring processes can have a noticeable effect on the analysis.

15.
Dimension reduction methods have been proposed for regression analysis with high-dimensional predictors, but have received little attention for problems with censored data. In this article, we present an iterative imputed spline approach based on principal Hessian directions (PHD) for censored survival data in order to reduce the dimension of predictors without requiring a prespecified parametric model. Our proposal is to replace the right-censored survival time with its conditional expectation, adjusting for the censoring effect by using the Kaplan-Meier estimator and an adaptive polynomial spline regression in the residual imputation. A sparse estimation strategy is incorporated in our approach to enhance the interpretation of variable selection. This approach can be implemented not only in PHD, but also in other methods developed for estimating the central mean subspace. Simulation studies with right-censored data are conducted for the imputed spline approach to PHD (IS-PHD) in comparison with sliced inverse regression, minimum average variance estimation, and a naive PHD that ignores censoring. The results demonstrate that the proposed IS-PHD method is particularly useful for survival time responses approximating symmetric or bending structures. Illustrative applications to two real data sets are also presented.
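The core imputation idea, replacing a right-censored response with a Kaplan-Meier-based estimate of its conditional expectation E[T | T > c], can be sketched as follows. The restriction of the integral to the largest observed time is a common convention assumed here, and the paper's adaptive spline residual adjustment and PHD step are omitted; the toy data are the same invented example used in the product-limit sketch above.

import numpy as np

def km_curve(time, event):
    # Kaplan-Meier product-limit curve: distinct event times and S(t) at those times.
    time, event = np.asarray(time, float), np.asarray(event, int)
    ts = np.unique(time[event == 1])
    s, surv = 1.0, []
    for t in ts:
        s *= 1.0 - ((time == t) & (event == 1)).sum() / (time >= t).sum()
        surv.append(s)
    return ts, np.array(surv)

def impute_censored(time, event):
    # Replace each censored time c with c + integral_c^{t_max} S(u) du / S(c),
    # a KM-based estimate of E[T | T > c] restricted to the largest observed time.
    time, event = np.asarray(time, float), np.asarray(event, int)
    ts, surv = km_curve(time, event)
    t_max = time.max()
    out = time.copy()
    for i in np.where(event == 0)[0]:
        c = time[i]
        s_at_c = surv[ts <= c][-1] if np.any(ts <= c) else 1.0
        later = ts[ts > c]                            # event times after c
        knots = np.concatenate(([c], later, [t_max]))
        s_vals = np.concatenate(([s_at_c], surv[ts > c]))
        area = np.sum(s_vals * np.diff(knots))        # integral of the step function S from c to t_max
        out[i] = c + area / s_at_c if s_at_c > 0 else c
    return out

t = np.array([2.0, 3.0, 3.0, 5.0, 6.0, 7.0, 8.0, 9.0, 11.0, 12.0])
e = np.array([1, 1, 0, 1, 0, 1, 1, 0, 1, 0])
print(np.round(impute_censored(t, e), 2))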

16.
Neurobehavioral tests are used to assess early neonatal behavioral functioning and detect effects of prenatal and perinatal events. However, common measurement and data collection methods create specific data features requiring thoughtful statistical analysis. Assessment response measurements are often ordinal scaled, not interval scaled; the magnitude of the physical response may not directly correlate with the underlying state of developmental maturity; and a subject's assessment record may be censored. Censoring occurs when the milestone is exhibited at the first test (left censoring), when the milestone is not exhibited before the end of the study (right censoring), or when the exact age of attaining the milestone is uncertain due to irregularly spaced test sessions or missing data (interval censoring). Such milestone data are best analyzed using survival analysis methods. Two methods are contrasted: the non-parametric Kaplan-Meier estimator and fully parametric interval-censored regression. The methods represent the spectrum of survival analyses in terms of parametric assumptions, ability to handle simultaneous testing of multiple predictors, and accommodation of different types of censoring. Both methods were used to assess birth weight status and sex effects on 14 separate test items from assessments of 255 healthy pigtailed macaques. The methods gave almost identical results. Compared to the normal birth weight group, the low birth weight group had significantly delayed development on all but one test item. Within the low birth weight group, males had significantly delayed development for some responses relative to females.

17.
In survival analysis, the event time T is often subject to dependent censorship. Without assuming a parametric model between the failure and censoring times, the parameter Theta of interest, for example, the survival function of T, is generally not identifiable. On the other hand, the collection Omega of all attainable values for Theta may be well defined. In this article, we present nonparametric inference procedures for Omega in the presence of a mixture of dependent and independent censoring variables. By varying the criteria of classifying censoring to the dependent or independent category, our proposals can be quite useful for the so-called sensitivity analysis of censored failure times. The case that the failure time is subject to possibly dependent interval censorship is also discussed in this article. The new proposals are illustrated with data from two clinical studies on HIV-related diseases.

18.
Estimation in a Cox proportional hazards cure model
Sy JP, Taylor JM. Biometrics 2000, 56(1): 227-236.
Some failure time data come from a population that consists of some subjects who are susceptible to the event of interest and others who are nonsusceptible to it. The data typically have heavy censoring at the end of the follow-up period, and a standard survival analysis would not always be appropriate. In such situations where there is good scientific or empirical evidence of a nonsusceptible population, the mixture or cure model can be used (Farewell, 1982, Biometrics 38, 1041-1046). It assumes a binary distribution to model the incidence probability and a parametric failure time distribution to model the latency. Kuk and Chen (1992, Biometrika 79, 531-541) extended the model by using Cox's proportional hazards regression for the latency. We develop maximum likelihood techniques for the joint estimation of the incidence and latency regression parameters in this model using the nonparametric form of the likelihood and an EM algorithm. A zero-tail constraint is used to reduce the near nonidentifiability of the problem. The inverse of the observed information matrix is used to compute the standard errors. A simulation study shows that the methods are competitive with the parametric methods under ideal conditions and are generally better when censoring from loss to follow-up is heavy. The methods are applied to a data set of tonsil cancer patients treated with radiation therapy.
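As a simplified, fully parametric analogue of the mixture cure model described above, the sketch below fits a logistic incidence model with an exponential latency by direct maximum likelihood on simulated data; this stands in for, and is not, the paper's semiparametric EM with a Cox latency, and all generating values are illustrative.

import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

rng = np.random.default_rng(4)
n = 2000
z = rng.binomial(1, 0.5, n)                       # a single binary covariate (illustrative)
p_susc = expit(-0.5 + 1.0 * z)                    # incidence: P(susceptible | z)
susceptible = rng.binomial(1, p_susc)
t_lat = rng.exponential(1.0 / 0.4, n)             # latency among the susceptible (rate 0.4)
c = rng.uniform(0, 10, n)                         # censoring; cured subjects are always censored
time = np.where(susceptible == 1, np.minimum(t_lat, c), c)
event = ((susceptible == 1) & (t_lat <= c)).astype(int)

def neg_loglik(par):
    b0, b1, log_lam = par
    pi = expit(b0 + b1 * z)                       # incidence probability
    lam = np.exp(log_lam)
    f = lam * np.exp(-lam * time)                 # exponential latency density
    s = np.exp(-lam * time)                       # latency survival
    lik = np.where(event == 1, pi * f, 1.0 - pi + pi * s)
    return -np.sum(np.log(lik))

fit = minimize(neg_loglik, x0=np.zeros(3), method="Nelder-Mead")
b0, b1, log_lam = fit.x
print(f"incidence intercept {b0:.2f}, covariate effect {b1:.2f}, latency rate {np.exp(log_lam):.2f}")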

19.
Using the method of generalized threshold models, the problem of evaluating the parametric stability of a model of the gene subnetwork controlling early ontogenesis of the fruit fly Drosophila melanogaster is formulated and solved. Computer experiments have been performed to test the parametric stability of the model. Quantitative evaluations have been obtained for the parametric stability of the Drosophila gene subnetwork in nuclei along the embryo's anterior-posterior axis. The results of the computer experiments have been compared with previous research data on the "sensitivity" of functioning regimes to random changes of parameters in models of prokaryotic and eukaryotic systems, namely the system controlling lambda-phage development and the subsystem controlling flower morphogenesis in Arabidopsis thaliana. The obtained results confirm the high parametric stability of the gene networks that control the development of organisms.

20.
Methods in the literature for missing covariate data in survival models have relied on the missing at random (MAR) assumption to render regression parameters identifiable. MAR means that missingness can depend on the observed exit time, and whether or not that exit is a failure or a censoring event. By considering ways in which missingness of covariate X could depend on the true but possibly censored failure time T and the true censoring time C, we attempt to identify missingness mechanisms which would yield MAR data. We find that, under various reasonable assumptions about how missingness might depend on T and/or C, additional strong assumptions are needed to obtain MAR. We conclude that MAR is difficult to justify in practical applications. One exception arises when missingness is independent of T, and C is independent of the value of the missing X. As alternatives to MAR, we propose two new missingness assumptions. In one, the missingness depends on T but not on C; in the other, the situation is reversed. For each, we show that the failure time model is identifiable. When missingness is independent of T, we show that the naive complete record analysis will yield a consistent estimator of the failure time distribution. When missingness is independent of C, we develop a complete record likelihood function and a corresponding estimator for parametric failure time models. We propose analyses to evaluate the plausibility of either assumption in a particular data set, and illustrate the ideas using data from the literature on this problem.
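A small simulation sketch (not from the paper) of the stated result that, when covariate missingness is independent of T, a naive complete-record analysis consistently estimates the failure time distribution: here missingness depends only on the independent censoring time C, and all distributions and parameter values are illustrative.

import numpy as np

rng = np.random.default_rng(5)

def km_at(t0, time, event):
    # Kaplan-Meier estimate of S(t0) from right-censored data.
    s = 1.0
    for t in np.unique(time[(event == 1) & (time <= t0)]):
        s *= 1.0 - ((time == t) & (event == 1)).sum() / (time >= t).sum()
    return s

n = 20000
T = rng.exponential(5.0, n)                     # true failure times (mean 5)
C = rng.exponential(8.0, n)                     # independent censoring times (mean 8)
time = np.minimum(T, C)
event = (T <= C).astype(int)

# covariate X is missing with probability depending on C only, hence independent of T
p_miss = 1.0 / (1.0 + np.exp(-(0.5 - 0.2 * C)))
observed_x = rng.binomial(1, 1.0 - p_miss).astype(bool)

t0 = 4.0
print("true S(4)            :", np.exp(-t0 / 5.0))
print("KM, all records      :", km_at(t0, time, event))
print("KM, complete records :", km_at(t0, time[observed_x], event[observed_x]))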
