Similar Articles
1.
In risk assessment and environmental monitoring studies, concentration measurements frequently fall below detection limits (DL) of measuring instruments, resulting in left-censored data. The principal approaches for handling censored data include the substitution-based method, maximum likelihood estimation, robust regression on order statistics, and the Kaplan-Meier method. In practice, censored data are substituted with an arbitrary value prior to the use of traditional statistical methods. Although some studies have evaluated the performance of substitution in estimating population characteristics, they have focused mainly on normally and lognormally distributed data that contain a single DL. We employ Monte Carlo simulations to assess the impact of substitution when estimating population parameters from censored data containing multiple DLs. We also consider different distributional assumptions, including lognormal, Weibull, and gamma. We show that the reliability of the estimates after substitution is highly sensitive to distributional characteristics such as mean, standard deviation, and skewness, as well as data characteristics such as the censoring percentage. The results highlight that although the performance of the substitution-based method improves as the censoring percentage decreases, its performance still depends on the population's distributional characteristics. The practical implication of our findings is that caution must be taken in using the substitution method when analyzing censored environmental data.
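A toy illustration of the substitution approach evaluated above (not the authors' actual simulation design; detection limits and sample size are hypothetical): lognormal draws are censored at two DLs and non-detects are replaced with DL/2, a common ad hoc choice, before computing the sample mean.

```python
import numpy as np

rng = np.random.default_rng(42)
mu, sigma = 0.0, 1.0
x = rng.lognormal(mu, sigma, size=10_000)

# Two detection limits, as if samples came from two instruments
dl = np.where(rng.random(x.size) < 0.5, 0.5, 1.0)
censored = x < dl

# Substitution: replace non-detects with half the detection limit
x_sub = np.where(censored, dl / 2.0, x)

true_mean = np.exp(mu + sigma**2 / 2)  # mean of a lognormal(mu, sigma)
print(f"censored: {censored.mean():.1%}, "
      f"substituted mean: {x_sub.mean():.3f}, true mean: {true_mean:.3f}")
```

With low censoring the substituted mean tracks the true mean closely here, but — as the abstract stresses — the size of the bias depends on the distribution and the censoring percentage, not just on the substitution rule.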

2.
The classical model for the analysis of progression of markers in HIV-infected patients is the mixed effects linear model. However, longitudinal studies of viral load are complicated by left censoring of the measures due to a lower quantification limit. We propose a full likelihood approach to estimate parameters of the linear mixed effects model for left-censored Gaussian data. For each subject, the contribution to the likelihood is the product of the density of the vector of completely observed outcomes and the conditional distribution function of the vector of censored outcomes, given the observed outcomes. Values of the distribution function were computed by numerical integration. The maximization is performed by a combination of the Simplex algorithm and the Marquardt algorithm. Subject-specific deviations and random effects are estimated by a modified empirical Bayes approach, replacing censored measures with their conditional expectations given the data. A simulation study showed that the proposed estimators are less biased than those obtained by imputing the quantification limit for censored data. Moreover, for models with complex covariance structures, they are less biased than the Monte Carlo expectation maximization (MCEM) estimators developed by Hughes (1999, Biometrics 55, 625-629). The method was then applied to the data of the ALBI-ANRS 070 clinical trial, for which HIV-1 RNA levels were measured with an ultrasensitive assay (quantification limit 50 copies/ml). Using the proposed method, estimates obtained with data artificially censored at 500 copies/ml were close to those obtained with the real data set.
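The likelihood construction described above — density terms for observed values, distribution-function terms for left-censored ones — can be sketched in a deliberately simplified univariate setting (a stand-in for the mixed-model case; the parameter values and variable names are ours):

```python
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(0)
mu, sigma, loq = 2.0, 1.0, 1.5           # true values and quantification limit
y = rng.normal(mu, sigma, size=500)
observed = y >= loq                       # values below loq are left-censored

def neg_loglik(theta):
    m, log_s = theta
    s = np.exp(log_s)                     # log-parameterization keeps sigma positive
    ll = stats.norm.logpdf(y[observed], m, s).sum()         # observed: density
    ll += (~observed).sum() * stats.norm.logcdf(loq, m, s)  # censored: CDF
    return -ll

# Nelder-Mead is the "Simplex" algorithm mentioned in the abstract
res = optimize.minimize(neg_loglik, x0=[0.0, 0.0], method="Nelder-Mead")
m_hat, s_hat = res.x[0], np.exp(res.x[1])
```

Maximizing this likelihood recovers both the mean and the standard deviation without the downward bias that imputing the quantification limit would introduce.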

3.
Kim YJ. Biometrics 2006, 62(2): 458-464
In doubly censored failure time data, the survival time of interest is defined as the elapsed time between an initial event and a subsequent event, and the occurrences of both events cannot be observed exactly. Instead, only right- or interval-censored observations on the occurrence times are available. For the analysis of such data, a number of methods have been proposed under the assumption that the survival time of interest is independent of the occurrence time of the initial event. This article investigates a different situation where the independence may not be true, with the focus on regression analysis of doubly censored data. Cox frailty models are applied to describe the effects of covariates and an EM algorithm is developed for estimation. Simulation studies are performed to investigate finite sample properties of the proposed method and an illustrative example from an acquired immune deficiency syndrome (AIDS) cohort study is provided.

4.
Life-table construction for left-truncated and right-censored survival data
Left-truncated data are a special type of survival data: lifetime records collected from individuals that enter the sampling frame not at the initial time (birth or hatching) but only after some delay in time (age). The traditional product-limit estimator can handle only complete (death) observations and right-censored data, and cannot accommodate left-truncated data. This paper proposes a modification of the product-limit estimator that handles complete, right-censored, and left-truncated data simultaneously, thereby making effective use of the survival information contained in left-truncated observations.
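A minimal version of such a modified product-limit estimator (an illustrative sketch, not the paper's exact algorithm; function and argument names are ours): at each event time, the risk set contains only individuals whose entry (truncation) time precedes it.

```python
import numpy as np

def product_limit(entry, exit_t, event):
    """Product-limit survival estimate allowing left truncation (delayed entry)
    and right censoring; event=1 marks a death, 0 a censored exit."""
    entry, exit_t, event = map(np.asarray, (entry, exit_t, event))
    surv, steps = 1.0, []
    for t in np.sort(np.unique(exit_t[event == 1])):
        # delayed entry honored here: only those already under observation count
        at_risk = np.sum((entry < t) & (exit_t >= t))
        deaths = np.sum((exit_t == t) & (event == 1))
        surv *= 1.0 - deaths / at_risk
        steps.append((float(t), surv))
    return steps

# Third individual enters observation only at age 2 (left-truncated)
print(product_limit(entry=[0, 0, 2], exit_t=[1, 3, 4], event=[1, 1, 1]))
```

With `entry` all zero this reduces to the ordinary Kaplan-Meier estimator, which is exactly the special case the paper generalizes.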

5.
Sun L, Kim YJ, Sun J. Biometrics 2004, 60(3): 637-643
Doubly censored failure time data arise when the survival time of interest is the elapsed time between two related events and observations on occurrences of both events could be censored. Regression analysis of doubly censored data has recently attracted considerable attention, and a few methods have been proposed for it (Kim et al., 1993, Biometrics 49, 13-22; Sun et al., 1999, Biometrics 55, 909-914; Pan, 2001, Biometrics 57, 1245-1250). However, all of these methods are based on the proportional hazards model, which is well known to sometimes fit failure time data poorly. This article investigates regression analysis of such data using the additive hazards model, and an estimating equation approach is proposed for inference about the regression parameters of interest. The proposed method can be easily implemented and the properties of the proposed estimates of regression parameters are established. The method is applied to a set of doubly censored data from an AIDS cohort study.

6.
A Bayesian survival analysis is presented to examine the effect of fluoride intake on the time to caries development of the permanent first molars in children between 7 and 12 years of age, using a longitudinal study conducted in Flanders. Three problems needed to be addressed. Firstly, since the emergence time of a tooth and the time it experiences caries were recorded yearly, the time to caries is doubly interval censored. Secondly, due to the setup of the study, many emergence times were left-censored. Thirdly, events on teeth of the same child are dependent. Our Bayesian analysis is a modified version of the intensity model of Harkanen et al. (2000, Scandinavian Journal of Statistics 27, 577-588). To tackle the problem of the large number of left-censored observations, a similar Finnish data set was introduced. Our analysis shows no convincing effect of fluoride intake on caries development.

7.
A mixed-model procedure for analysis of censored data assuming a multivariate normal distribution is described. A Bayesian framework is adopted which allows for estimation of fixed effects and variance components and prediction of random effects when records are left-censored. The procedure can be extended to right- and two-tailed censoring. The model employed is a generalized linear model, and the estimation equations resemble those arising in analysis of multivariate normal or categorical data with threshold models. Estimates of variance components are obtained using expressions similar to those employed in the EM algorithm for restricted maximum likelihood (REML) estimation under normality.

8.
Lachos VH, Bandyopadhyay D, Dey DK. Biometrics 2011, 67(4): 1594-1604
HIV RNA viral load measures are often subject to upper and lower detection limits depending on the quantification assays; hence, the responses are either left or right censored. Linear (and nonlinear) mixed-effects models, with modifications to accommodate censoring, are routinely used to analyze this type of data and are based on normality assumptions for the random terms. However, those analyses might not provide robust inference when the normality assumptions are questionable. In this article, we develop a Bayesian framework for censored linear (and nonlinear) models, replacing the Gaussian assumptions for the random terms with normal/independent (NI) distributions. The NI family is an attractive class of symmetric heavy-tailed densities that includes the normal, Student's-t, slash, and contaminated normal distributions as special cases. The marginal likelihood is tractable (using approximations for nonlinear models) and can be used to develop Bayesian case-deletion influence diagnostics based on the Kullback-Leibler divergence. The newly developed procedures are illustrated with simulations and with two HIV/AIDS studies on viral loads that were initially analyzed using normal (censored) mixed-effects models.

9.
Wei Pan. Biometrics 2001, 57(4): 1245-1250
Sun, Liao, and Pagano (1999) proposed an interesting estimating equation approach to Cox regression with doubly censored data. Here we point out that a modification of their proposal leads to a multiple imputation approach, where the double censoring is reduced to single censoring by imputing the censored initiating times. For each imputed data set one can take advantage of the many existing techniques and software for singly censored data. Under the general framework of multiple imputation, the proposed method is simple to implement and can accommodate modeling issues such as model checking, which has not been adequately discussed in the literature for doubly censored data. We illustrate our method with an application to a formal goodness-of-fit test and a graphical check for the proportional hazards model for doubly censored data, and reanalyze a well-known AIDS data set.

10.
This paper deals with a Cox proportional hazards regression model where some covariates of interest are randomly right-censored. While methods for censored outcomes have become ubiquitous in the literature, methods for censored covariates have thus far received little attention and, for the most part, have dealt with the issue of limit-of-detection. For randomly censored covariates, an often-used method is the inefficient complete-case analysis (CCA), which consists of deleting censored observations from the analysis. When censoring is not completely independent, the CCA leads to biased and spurious results. Methods for missing covariate data, including type I and type II covariate censoring as well as limit-of-detection, do not readily apply due to the fundamentally different nature of randomly censored covariates. We develop a novel method for censored covariates using a conditional mean imputation based on either Kaplan–Meier estimates or a Cox proportional hazards model to estimate the effects of these covariates on a time-to-event outcome. We evaluate the performance of the proposed method through simulation studies and show that it provides good bias reduction and statistical efficiency. Finally, we illustrate the method using data from the Framingham Heart Study to assess the relationship between offspring and parental age of onset of cardiovascular events.
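A sketch of Kaplan–Meier-based conditional mean imputation in the spirit described above (a simplified univariate version; the helper names are ours): a covariate censored at value c is replaced by E[X | X > c] = c + (integral of S over (c, inf)) / S(c), where S is the Kaplan–Meier estimate of the covariate's survival function.

```python
import numpy as np

def km_steps(times, events):
    """Kaplan–Meier curve as a list of (event time, S just after that time)."""
    times, events = np.asarray(times, float), np.asarray(events)
    s, steps = 1.0, []
    for u in np.sort(np.unique(times[events == 1])):
        at_risk = np.sum(times >= u)
        deaths = np.sum((times == u) & (events == 1))
        s *= 1.0 - deaths / at_risk
        steps.append((float(u), s))
    return steps

def cond_mean_above(times, events, c):
    """Conditional mean imputation E[X | X > c] from the KM curve."""
    steps = km_steps(times, events)
    s_c = 1.0                      # S(c): survival just after the last event <= c
    for u, s in steps:
        if u <= c:
            s_c = s
    integral, prev_t, prev_s = 0.0, c, s_c
    for u, s in steps:
        if u > c:                  # S is constant between event times
            integral += prev_s * (u - prev_t)
            prev_t, prev_s = u, s
    return c + integral / s_c
```

With fully observed data, the imputed value reduces to the plain average of the observations above c; the KM weighting is what keeps it valid when the covariate sample is itself censored.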

11.
The standard estimator for the cause-specific cumulative incidence function in a competing risks setting with left-truncated and/or right-censored data can be written in two alternative forms. One is a weighted empirical cumulative distribution function and the other a product-limit estimator. This equivalence suggests an alternative view of the analysis of time-to-event data with left truncation and right censoring: individuals who are still at risk or experienced an earlier competing event receive weights from the censoring and truncation mechanisms. As a consequence, inference on the cumulative scale can be performed using weighted versions of standard procedures. This holds for estimation of the cause-specific cumulative incidence function as well as for estimation of the regression parameters in the Fine and Gray proportional subdistribution hazards model. We show that, with the appropriate filtration, a martingale property holds that allows deriving asymptotic results for the proportional subdistribution hazards model in the same way as for the standard Cox proportional hazards model. Estimation of the cause-specific cumulative incidence function and regression on the subdistribution hazard can be performed using standard software for survival analysis if the software allows for inclusion of time-dependent weights. We show the implementation in the R statistical package. The proportional subdistribution hazards model is used to investigate the effect of calendar period, as a deterministic external time-varying covariate that can be seen as a special case of left truncation, on AIDS-related and non-AIDS-related cumulative mortality.
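The weighted-ECDF form of the cause-specific cumulative incidence estimator can be sketched as follows (right censoring only, for brevity; with left truncation the risk set would additionally be restricted to individuals already under observation — function names are ours):

```python
import numpy as np

def cum_incidence(times, causes, t_eval, cause=1):
    """Cause-specific cumulative incidence at t_eval.
    causes: 0 = censored, positive integers = competing event types."""
    times, causes = np.asarray(times, float), np.asarray(causes)
    s_minus, cif = 1.0, 0.0        # overall survival just before the current time
    for u in np.sort(np.unique(times[causes > 0])):
        if u > t_eval:
            break
        at_risk = np.sum(times >= u)
        d_all = np.sum((times == u) & (causes > 0))
        d_k = np.sum((times == u) & (causes == cause))
        cif += s_minus * d_k / at_risk   # each jump weighted by S(t-)
        s_minus *= 1.0 - d_all / at_risk # overall KM updated with all causes
    return cif
```

The increment `s_minus * d_k / at_risk` is exactly the weighted-ECDF jump, while the running `s_minus` update is the product-limit factor — the two forms the abstract shows to be equivalent.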

12.
Analysis of failure time data with dependent interval censoring
This article develops a method for the analysis of screening data for which the chance of being screened is dependent on the event of interest (informative censoring). Because not all subjects make all screening visits, the data on the failure time of interest are interval-censored. We propose a model that properly adjusts for the dependence to obtain an unbiased estimate of the nonparametric failure time function, and we provide an extension for applying the method to estimation of the regression parameters of a (discrete time) proportional hazards regression model. The method is applied to a data set from an observational study of cytomegalovirus shedding in a population of HIV-infected subjects who participated in a trial conducted by the AIDS Clinical Trials Group.

13.
Statistical analysis of longitudinal data often involves modeling treatment effects on clinically relevant longitudinal biomarkers since an initial event (the time origin). In some studies, including preventive HIV vaccine efficacy trials, some participants have biomarkers measured starting at the time origin, whereas others have biomarkers measured starting later, with the time origin unknown. The semiparametric additive time-varying coefficient model is investigated, where the effects of some covariates vary nonparametrically with time while the effects of others remain constant. Weighted profile least squares estimators coupled with kernel smoothing are developed. The method uses the expectation maximization approach to deal with the censored time origin. The Kaplan–Meier estimator and failure time regression models such as the Cox model can be utilized to estimate the distribution and the conditional distribution of the left-censored event time related to the censored time origin. Asymptotic properties of the parametric and nonparametric estimators and consistent asymptotic variance estimators are derived. A two-stage estimation procedure for choosing weights is proposed to improve estimation efficiency. Numerical simulations are conducted to examine finite sample properties of the proposed estimators. The simulation results show that the theory and methods work well. The efficiency gain of the two-stage estimation procedure depends on the distribution of the longitudinal error processes. The method is applied to analyze data from the Merck 023/HVTN 502 Step HIV vaccine study.

14.
Area under the receiver operating characteristic curve (AROC) is commonly used to choose a biomechanical metric from which to construct an injury risk curve (IRC). However, AROC may not handle censored datasets adequately. Survival analysis creates robust estimates of IRCs which accommodate censored data. We present an observation-adjusted ROC (oaROC) which uses the survival-based IRC to estimate the AROC. We verified and evaluated this method using simulated datasets of different censoring statuses and sample sizes. For a dataset with 1000 left and right censored observations, the median AROC closely approached the oaROCTrue, or the oaROC calculated using an assumed "true" IRC, differing by a fraction of a percent, 0.1%. Using simulated datasets with various censoring, we found that oaROC converged onto oaROCTrue in all cases. For datasets with right and non-censored observations, AROC did not converge onto oaROCTrue. oaROC for datasets with only non-censored observations converged the fastest, and for a dataset with 10 observations, the median oaROC differed from oaROCTrue by 2.74% while the corresponding median AROC with left and right censored data differed from oaROCTrue by 9.74%. We also calculated the AROC and oaROC for a published side impact dataset, and differences between the two methods ranged between −24.08% and 24.55% depending on metric. Overall, when compared with AROC, we found oaROC performs equivalently for doubly censored data, better for non-censored data, and can accommodate more types of data than AROC. While more validation is needed, the results indicate that oaROC is a viable alternative which can be incorporated into the metric selection process for IRCs.
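For reference, the conventional AROC that oaROC is benchmarked against reduces, for fully observed (non-censored) data, to the Mann–Whitney probability that an injured case's metric exceeds a non-injured case's, counting ties as one half (a textbook identity, not the paper's oaROC computation; names are ours):

```python
import numpy as np

def auroc(scores_pos, scores_neg):
    """AROC as the Mann–Whitney statistic:
    P(pos score > neg score) + 0.5 * P(tie), over all pos/neg pairs."""
    sp = np.asarray(scores_pos, float)[:, None]   # column: positive (injured)
    sn = np.asarray(scores_neg, float)[None, :]   # row: negative (non-injured)
    return float((sp > sn).mean() + 0.5 * (sp == sn).mean())
```

It is precisely this pairwise comparison that breaks down when an observation is only known to lie above or below a censoring threshold, which motivates the survival-based oaROC construction.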

15.
We develop an approach, based on multiple imputation, to using auxiliary variables to recover information from censored observations in survival analysis. We apply the approach to data from an AIDS clinical trial comparing ZDV and placebo, in which CD4 count is the time-dependent auxiliary variable. To facilitate imputation, a joint model is developed for the data, which includes a hierarchical change-point model for CD4 counts and a time-dependent proportional hazards model for the time to AIDS. Markov chain Monte Carlo methods are used to multiply impute event times for censored cases. The augmented data are then analyzed and the results combined using standard multiple-imputation techniques. A comparison of our multiple-imputation approach to simply analyzing the observed data indicates that multiple imputation leads to a small change in the estimated effect of ZDV and smaller estimated standard errors. A sensitivity analysis suggests that the qualitative findings are reproducible under a variety of imputation models. A simulation study indicates that improved efficiency over standard analyses and partial corrections for dependent censoring can result. An issue that arises with our approach, however, is whether the analysis of primary interest and the imputation model are compatible.

16.
This paper discusses two-sample comparison in the case of interval-censored failure time data. For this problem, one common approach is to employ nonparametric test procedures, which usually give p-values but not a direct or exact quantitative measure of the survival or treatment difference of interest. In particular, these procedures cannot provide a hazard ratio estimate, which is commonly used to measure the difference between two treatments or samples. For interval-censored data, a few nonparametric test procedures have been developed, but no procedure for hazard ratio estimation seems to exist. Accordingly, we present two procedures for nonparametric estimation of the hazard ratio of two samples for interval-censored data situations. They are generalizations of the corresponding procedures for right-censored failure time data. An extensive simulation study is conducted to evaluate the performance of the two procedures and indicates that they work reasonably well in practice. For illustration, they are applied to a set of interval-censored data arising from a breast cancer study.

17.
Distribution-free regression analysis of grouped survival data
Methods based on regression models for logarithmic hazard functions (Cox models) are given for the analysis of grouped and censored survival data. By making an approximation, it is possible to obtain explicitly a maximum likelihood function involving only the regression parameters. This likelihood function is a convenient analog to Cox's partial likelihood for ungrouped data. The method is applied to data from a toxicological experiment.

18.
Variance-component (VC) methods are flexible and powerful procedures for the mapping of genes that influence quantitative traits. However, traditional VC methods make the critical assumption that the quantitative-trait data within a family either follow or can be transformed to follow a multivariate normal distribution. Violation of the multivariate normality assumption can occur if trait data are censored at some threshold value. Trait censoring can arise in a variety of ways, including assay limitation or confounding due to medication. Valid linkage analyses of censored data require the development of a modified VC method that directly models the censoring event. Here, we present such a model, which we call the "tobit VC method." Using simulation studies, we compare and contrast the performance of the traditional and tobit VC methods for linkage analysis of censored trait data. For the simulation settings that we considered, our results suggest that (1) analyses of censored data by using the traditional VC method lead to severe bias in parameter estimates and a modest increase in false-positive linkage findings, (2) analyses with the tobit VC method lead to unbiased parameter estimates and type I error rates that reflect nominal levels, and (3) the tobit VC method has a modest increase in linkage power as compared with the traditional VC method. We also apply the tobit VC method to censored data from the Finland-United States Investigation of Non-Insulin-Dependent Diabetes Mellitus Genetics study and provide two examples in which the tobit VC method yields noticeably different results as compared with the traditional method.

19.
Multiple lower limits of quantification (MLOQs) result if various laboratories are involved in the analysis of concentration data and some observations are too low to be quantified. For normally distributed data with MLOQs, the only existing approach for estimating the mean and variance is the multiple regression method of Helsel. We propose a simple imputation method and two new maximum likelihood estimation methods: the multiple truncated sample method and the multiple censored sample method. A simulation study is conducted to compare the performance of the newly introduced methods with Helsel's via the criteria of root mean squared error (RMSE) and bias of the parameter estimates. Two and four lower limits of quantification (LLOQs), various amounts of unquantifiable observations, and two sample sizes are studied. Furthermore, robustness is investigated under model misspecification. The methods perform with decreasing accuracy for increasing rates of unquantified observations. Increasing sample sizes lead to smaller bias. There is almost no change in performance between two and four LLOQs. The magnitude of the variance impairs the performance of all methods. For a smaller variance, the multiple censored sample method leads to superior estimates regarding the RMSE and bias, whereas Helsel's method is superior regarding the bias for a larger variance. Under model misspecification, Helsel's method was inferior to the other methods. In estimating the mean, the multiple censored sample method performed better, whereas the multiple truncated sample method performs best in estimating the variance. In summary, for a large sample size and normally distributed data we recommend using Helsel's method. Otherwise, the multiple censored sample method should be used to obtain estimates of the mean and variance of data including MLOQs.

20.
Quantitative trait loci (QTL) are usually searched for using classical interval mapping methods, which assume that the trait of interest follows a normal distribution. However, these methods cannot take into account features of most survival data, such as a non-normal distribution and the presence of censored data. We propose two new QTL detection approaches which allow the consideration of censored data. One interval mapping method uses a Weibull model (W), which is popular in parametric modelling of survival traits, and the other uses a Cox model (C), which avoids making any assumption on the trait distribution. Data were simulated following the structure of a published experiment. Using simulated data, we compare W, C, and a classical interval mapping method using a Gaussian model on uncensored data (G) or on all data (G' = censored data analysed as though records were uncensored). An adequate mathematical transformation was used for all parametric methods (G, G' and W). When data were not censored, the four methods gave similar results. However, when some data were censored, the power of QTL detection and the accuracy of QTL location and of estimation of QTL effects for G decreased considerably with censoring, particularly when censoring was at a fixed date. This decrease with censoring was also observed with G', but it was less severe. Censoring had a negligible effect on results obtained with the W and C methods.
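A sketch of the Weibull (W) likelihood ingredient under right censoring (simulated data with hypothetical parameter values, not the paper's QTL design): events contribute the density, censored records the survival function.

```python
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(7)
shape, scale = 2.0, 3.0                      # true Weibull parameters
t = scale * rng.weibull(shape, size=1000)    # latent failure times
c = rng.uniform(0, 8, size=1000)             # random censoring times
obs = np.minimum(t, c)                       # what is actually recorded
event = (t <= c).astype(int)                 # 1 = failure observed, 0 = censored

def neg_loglik(theta):
    k, lam = np.exp(theta)                   # log-parameterization keeps both positive
    ll = stats.weibull_min.logpdf(obs[event == 1], k, scale=lam).sum()  # events
    ll += stats.weibull_min.logsf(obs[event == 0], k, scale=lam).sum()  # censored
    return -ll

res = optimize.minimize(neg_loglik, x0=[0.0, 0.0], method="Nelder-Mead")
k_hat, lam_hat = np.exp(res.x)
```

Ignoring the censoring indicator (the G' strategy above) would treat censoring times as failures and bias both parameters downward, which mirrors the loss of power and accuracy the simulation study reports.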


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号