Similar Articles
 20 similar articles retrieved (search time: 31 ms)
1.

Background

Supplementary observational data in the community setting are required to better assess the predictors of colorectal polyp recurrence and the effectiveness of colonoscopy surveillance under real circumstances.

Aim

The goal of this study was to identify patient characteristics and polyp features at baseline colonoscopy that are associated with the recurrence of colorectal polyps (including hyperplastic polyps) among patients consulting private practice physicians.

Patients and Methods

This cohort study was conducted from March 2004 to December 2010 in 26 private gastroenterology practices in France. It included 1023 patients with a first-time diagnosis of a histologically confirmed polyp removed during a diagnostic or screening colonoscopy. At enrollment, interviews were conducted to obtain data on socio-demographic variables and risk factors. Pathology reports were reviewed to abstract data on polyp features at baseline colonoscopy. Colorectal polyps diagnosed at the surveillance colonoscopy were considered the end points. The time to event was analyzed with an accelerated failure time model assuming a Weibull distribution.
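The Weibull accelerated failure time analysis used above can be sketched as follows. This is a minimal illustration on simulated data, fitting the model by maximum likelihood under right censoring; the covariate, sample size, and parameter values are assumptions for illustration, not the study's data:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Simulated recurrence data (illustrative only): one binary covariate,
# e.g. "more than one polyp at baseline".
n = 500
x = rng.integers(0, 2, n)
t_event = 3.0 * np.exp(0.3 * x) * rng.weibull(1.5, n)  # true AFT effect beta = 0.3
t_cens = rng.uniform(0, 6, n)                          # administrative censoring
time = np.minimum(t_event, t_cens)
event = (t_event <= t_cens).astype(float)

def neg_loglik(params):
    """Weibull AFT model: log T = mu + beta * x + sigma * W, with W standard
    extreme value, so S(t) = exp(-e^z) for z = (log t - mu - beta * x) / sigma."""
    mu, beta, log_sigma = params
    sigma = np.exp(log_sigma)
    z = (np.log(time) - mu - beta * x) / sigma
    # events contribute log f(t) = z - e^z - log(sigma) - log(t);
    # censored observations contribute log S(t) = -e^z
    ll = event * (z - np.log(sigma) - np.log(time)) - np.exp(z)
    return -ll.sum()

fit = minimize(neg_loglik, x0=[1.0, 0.0, 0.0], method="Nelder-Mead")
mu_hat, beta_hat, sigma_hat = fit.x[0], fit.x[1], np.exp(fit.x[2])
```

Under the AFT parameterization, exp(beta_hat) estimates the factor by which the covariate stretches the time to recurrence.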

Results

Among the 1023 patients with colorectal polyp at baseline, 553 underwent a surveillance colonoscopy. The mean time interval from baseline colonoscopy to first surveillance examination was 3.42 (standard deviation, 1.45) years. The recurrence rates were 50.5% and 32.9% for all polyps and adenomas, respectively. In multivariate models, the number of polyps at baseline was the only significant predictor for both polyp recurrence (hazard ratio [HR] 1.19, 95% CI 1.06 to 1.33), and adenoma recurrence (HR 1.17, 95% CI 1.03 to 1.34).

Conclusion

The efficacy of surveillance colonoscopy in community gastroenterology practice compared favorably with that in academic settings. This study provides further evidence that the number of initial colorectal polyps is useful for predicting the risk of polyp recurrence, even in the community setting.

2.
Hsieh JJ, Ding AA, Wang W. Biometrics 2011, 67(3): 719-729
Summary: Recurrent event data are commonly seen in longitudinal follow-up studies. Dependent censoring often occurs due to death or exclusion from the study related to the disease process. In this article, we assume flexible marginal regression models on the recurrence process and the dependent censoring time without specifying their dependence structure. The proposed model generalizes the approach of Ghosh and Lin (2003, Biometrics 59, 877-885). The technique of artificial censoring provides a way to maintain the homogeneity of the hypothetical error variables under dependent censoring. Here we propose to apply this technique to two Gehan-type statistics: one considers only order information for pairs, whereas the other utilizes the additional information of observed censoring times available for recurrence data. A model-checking procedure is also proposed to assess the adequacy of the fitted model. The proposed estimators have good asymptotic properties; their finite-sample performances are examined via simulations. Finally, the proposed methods are applied to analyze data from the AIDS Link to Intravenous Experiences (ALIVE) cohort.

3.
In the study of multiple failure time data with recurrent clinical endpoints, the classical independent censoring assumption in survival analysis can be violated when the evolution of the recurrent events is correlated with a censoring mechanism such as death. Moreover, in some situations, a cure fraction appears in the data because a tangible proportion of the study population benefits from treatment and becomes recurrence free and insusceptible to death related to the disease. A bivariate joint frailty mixture cure model is proposed to allow for dependent censoring and a cure fraction in recurrent event data. The latency part of the model consists of two intensity functions for the hazard rates of recurrent events and death, wherein a bivariate frailty is introduced by means of the generalized linear mixed model methodology to adjust for dependent censoring. The model allows covariates and frailties in both the incidence and the latency parts, and it further accounts for the possibility of cure after each recurrence. It includes the joint frailty model and other related models as special cases. An expectation-maximization (EM)-type algorithm is developed to provide residual maximum likelihood estimation of the model parameters. Through simulation studies, the performance of the model is investigated under different magnitudes of dependent censoring and cure rate. The model is applied to data sets from two colorectal cancer studies to illustrate its practical value.

4.
In survival analysis, the event time T is often subject to dependent censorship. Without assuming a parametric model between the failure and censoring times, the parameter Theta of interest, for example, the survival function of T, is generally not identifiable. On the other hand, the collection Omega of all attainable values for Theta may be well defined. In this article, we present nonparametric inference procedures for Omega in the presence of a mixture of dependent and independent censoring variables. By varying the criteria of classifying censoring to the dependent or independent category, our proposals can be quite useful for the so-called sensitivity analysis of censored failure times. The case that the failure time is subject to possibly dependent interval censorship is also discussed in this article. The new proposals are illustrated with data from two clinical studies on HIV-related diseases.

5.
Summary: In medical studies of time-to-event data, nonproportional hazards and dependent censoring are very common issues when estimating the treatment effect. A traditional method for dealing with time-dependent treatment effects is to model the time-dependence parametrically. Limitations of this approach include the difficulty of verifying the correctness of the specified functional form and the fact that, in the presence of a treatment effect that varies over time, investigators are usually interested in the cumulative as opposed to the instantaneous treatment effect. In many applications, censoring time is not independent of event time. Therefore, we propose methods for estimating the cumulative treatment effect in the presence of nonproportional hazards and dependent censoring. Three measures are proposed: the ratio of cumulative hazards, the relative risk, and the difference in restricted mean lifetime. For each measure, we propose a double inverse-weighted estimator, constructed by first using inverse probability of treatment weighting (IPTW) to balance the treatment-specific covariate distributions, then using inverse probability of censoring weighting (IPCW) to overcome the dependent censoring. The proposed estimators are shown to be consistent and asymptotically normal. We study their finite-sample properties through simulation. The proposed methods are used to compare kidney wait-list mortality by race.
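A minimal numerical sketch of the double inverse-weighting idea described above. All distributions and parameters below are illustrative assumptions; in practice both the propensity score and the censoring survival function G would be estimated from the data:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000

# --- IPTW: balance covariates across treatment arms ---
x = rng.normal(size=n)
p_treat = 1.0 / (1.0 + np.exp(-0.5 * x))     # propensity score (known here)
a = rng.binomial(1, p_treat)
w_iptw = np.where(a == 1, 1.0 / p_treat, 1.0 / (1.0 - p_treat))

# --- IPCW: undo dependent censoring ---
# G(t) = P(C > t) is the censoring survival function; known here
# (exponential with rate 0.3), estimated from data in practice.
rate_c = 0.3
t = rng.exponential(1.0, n)
c = rng.exponential(1.0 / rate_c, n)
uncensored = t <= c
w_ipcw = np.where(uncensored, np.exp(rate_c * t), 0.0)   # 1 / G(T) for events

w = w_iptw * w_ipcw    # double inverse weight; E[w_iptw] = 2, E[w_ipcw] = 1
```

Censored subjects receive weight zero; the uncensored are up-weighted to represent them, so weighted averages over the events estimate full-data quantities when G is correct.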

6.
Multivariate recurrent event data are usually encountered in many clinical and longitudinal studies in which each study subject may experience multiple recurrent events. For the analysis of such data, most existing approaches have been proposed under the assumption that the censoring times are noninformative, which may not be true, especially when the observation of recurrent events is terminated by a failure event. In this article, we consider regression analysis of multivariate recurrent event data with both time-dependent and time-independent covariates, where the censoring times and the recurrent event process are allowed to be correlated via a frailty. The proposed joint model is flexible in that both the distributions of the censoring and frailty variables are left unspecified. We propose a pairwise pseudolikelihood approach and an estimating equation-based approach for estimating the coefficients of time-dependent and time-independent covariates, respectively. The large-sample properties of the proposed estimates are established, while the finite-sample properties are demonstrated by simulation studies. The proposed methods are applied to the analysis of a set of bivariate recurrent event data from a study of platelet transfusion reactions.

7.
Matsui S. Biometrics 2004, 60(4): 965-976
This article develops randomization-based methods for times to repeated events in two-arm randomized trials with noncompliance and dependent censoring. Structural accelerated failure time models are assumed to capture causal effects on the repeated event times and the dependent censoring time, but the dependence structure among the repeated event times and the dependent censoring time is unspecified. Artificial censoring techniques to accommodate nonrandom noncompliance and dependent censoring are proposed. Estimation of the acceleration parameters is based on rank-based estimating functions. A simulation study is conducted to evaluate the performance of the developed methods. An illustration of the methods using data from an acute myeloid leukemia trial is provided.

8.
We develop an approach, based on multiple imputation, to using auxiliary variables to recover information from censored observations in survival analysis. We apply the approach to data from an AIDS clinical trial comparing ZDV and placebo, in which CD4 count is the time-dependent auxiliary variable. To facilitate imputation, a joint model is developed for the data, which includes a hierarchical change-point model for CD4 counts and a time-dependent proportional hazards model for the time to AIDS. Markov chain Monte Carlo methods are used to multiply impute event times for censored cases. The augmented data are then analyzed and the results combined using standard multiple-imputation techniques. A comparison of our multiple-imputation approach to simply analyzing the observed data indicates that multiple imputation leads to a small change in the estimated effect of ZDV and smaller estimated standard errors. A sensitivity analysis suggests that the qualitative findings are reproducible under a variety of imputation models. A simulation study indicates that improved efficiency over standard analyses and partial corrections for dependent censoring can result. An issue that arises with our approach, however, is whether the analysis of primary interest and the imputation model are compatible.
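The "results combined using standard multiple-imputation techniques" step refers to Rubin's rules. A minimal sketch, with made-up numbers standing in for per-imputation estimates:

```python
import numpy as np

def rubin_combine(estimates, variances):
    """Pool M multiply-imputed estimates and their within-imputation variances."""
    estimates = np.asarray(estimates, dtype=float)
    variances = np.asarray(variances, dtype=float)
    m = len(estimates)
    q_bar = estimates.mean()               # pooled point estimate
    w_bar = variances.mean()               # average within-imputation variance
    b = estimates.var(ddof=1)              # between-imputation variance
    return q_bar, w_bar + (1 + 1 / m) * b  # total variance (Rubin's rules)

# Hypothetical log-hazard-ratio estimates from M = 5 imputed data sets
q, v = rubin_combine([1.0, 1.2, 0.9, 1.1, 1.05],
                     [0.04, 0.05, 0.04, 0.06, 0.05])
```

The between-imputation term (1 + 1/M)·B is what keeps the pooled standard errors honest about the uncertainty introduced by imputing censored event times.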

9.
Yi Li, Lu Tian, Lee-Jen Wei. Biometrics 2011, 67(2): 427-435
Summary: In a longitudinal study, suppose that the primary endpoint is the time to a specific event. This response variable, however, may be censored by an independent censoring variable or by the occurrence of one of several dependent competing events. For each study subject, a set of baseline covariates is collected. The question is how to construct a reliable prediction rule for a future subject's profile of all competing risks of interest at a specific time point for risk-benefit decision making. In this article, we propose a two-stage procedure to make inferences about such subject-specific profiles. In the first step, we use a parametric model to obtain a univariate risk index score system. We then consistently estimate the average competing risks for subjects who have the same parametric index score via a nonparametric function estimation procedure. We illustrate this new proposal with data from a randomized clinical trial evaluating the efficacy of a treatment for prostate cancer. The primary endpoint for this study was the time to prostate cancer death, but there were two types of dependent competing events: cardiovascular death and death from other causes.

10.
Pan W, Zeng D. Biometrics 2011, 67(3): 996-1006
We study the estimation of mean medical cost when censoring is dependent and a large amount of auxiliary information is present. Under a missing-at-random assumption, we propose semiparametric working models to obtain low-dimensional summarized scores. An estimator for the mean total cost can be derived nonparametrically conditional on the summarized scores. We show that when either the two working models for the cost-survival process or the model for the censoring distribution is correct, the estimator is consistent and asymptotically normal. The small-sample performance of the proposed method is evaluated via simulation studies. Finally, our approach is applied to analyze a real data set in health economics.

11.
Summary: Accurately assessing a patient's risk of a given event is essential in making informed treatment decisions. One approach is to stratify patients into two or more distinct risk groups with respect to a specific outcome using both clinical and demographic variables. Outcomes may be categorical or continuous in nature; important examples in cancer studies include level of toxicity and time to recurrence. Recursive partitioning methods are ideal for building such risk groups. Two such methods are Classification and Regression Trees (CART) and a more recent competitor known as the partitioning Deletion/Substitution/Addition (partDSA) algorithm, both of which utilize loss functions (e.g., squared error for a continuous outcome) as the basis for building, selecting, and assessing predictors but differ in the manner by which regression trees are constructed. Recently, we have shown that partDSA often outperforms CART in so-called "full data" settings (e.g., uncensored outcomes). However, when confronted with censored outcome data, the loss functions used by both procedures must be modified. There have been several attempts to adapt CART for right-censored data. This article describes two such extensions for partDSA that make use of observed-data loss functions constructed using inverse probability of censoring weights. Such loss functions are consistent estimates of their uncensored counterparts provided that the corresponding censoring model is correctly specified. The relative performance of these new methods is evaluated via simulation studies and illustrated through an analysis of clinical trial data on brain cancer patients. The implementation of partDSA for uncensored and right-censored outcomes is publicly available in the R package partDSA.
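One way such an observed-data loss can be constructed is to weight the squared error of uncensored observations by the inverse of the estimated censoring survival probability. A sketch, with the function name and toy inputs as illustrative assumptions:

```python
import numpy as np

def ipcw_squared_error(t_obs, event, pred, g_hat):
    """IPCW-weighted squared-error loss on the log-time scale.

    Uncensored subjects (event == 1) get weight 1 / G_hat(T_i), where G_hat is
    the estimated censoring survival function evaluated at each observed time;
    censored subjects get weight 0. If G_hat is correctly specified, this is a
    consistent estimate of the full-data loss E[(log T - pred)^2].
    """
    t_obs, pred, g_hat = map(np.asarray, (t_obs, pred, g_hat))
    w = np.where(np.asarray(event, dtype=bool), 1.0 / g_hat, 0.0)
    return float(np.sum(w * (np.log(t_obs) - pred) ** 2) / np.sum(w))

loss = ipcw_squared_error(t_obs=[1.0, 2.0, 3.0],
                          event=[1, 0, 1],
                          pred=[0.0, 0.5, 1.0],
                          g_hat=[0.9, 0.8, 0.5])
```

The censored subject contributes nothing directly; its probability mass is redistributed to comparable uncensored subjects through their weights.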

12.
In this article, we present a method for estimating and comparing the treatment-specific distributions of a discrete time-to-event variable from right-censored data. Our method allows for (1) adjustment for informative censoring due to measured prognostic factors for time to event and censoring and (2) quantification of the sensitivity of the inference to residual dependence between time to event and censoring due to unmeasured factors. We develop our approach in the context of a randomized trial for the treatment of chronic schizophrenia. We perform a simulation study to assess the practical performance of our methodology.

13.
Randomized clinical trials with time-to-event endpoints are frequently stopped after a prespecified number of events has been observed. This practice leads to dependent data and nonrandom censoring, which in general cannot be resolved by conditioning on the underlying baseline information. In the case of staggered study entry, matters are complicated substantially. The present paper demonstrates that the study design at hand entails general independent censoring in the counting process sense, provided that the analysis is based on study time information only. To illustrate that the filtrations must not use abundant information, we simulated data from event-driven trials and evaluated them by means of Cox regression models with covariates for the calendar times. The Breslow curves of the cumulative baseline hazard showed considerable deviations, which implies that the analysis is disturbed by conditioning on the calendar time variables. A second simulation study further revealed that Efron's classical bootstrap, unlike the (martingale-based) wild bootstrap, may lead to biased results in the given setting, as the assumption of random censoring is violated. This is exemplified by an analysis of data on immunotherapy in patients with advanced, previously treated non-small-cell lung cancer.

14.
Correlating Two Viral Load Assays with Known Detection Limits
A timely objective common to many HIV studies involves assessing the correlation between two different measures of viral load obtained from each of a sample of patients. This correlation has scientific utility in a number of contexts, including those aimed at a comparison of competing assays for quantifying virus and those aimed at determining the level of association between viral loads in two different reservoirs using the same assay. A complication for the analyst seeking valid point and interval estimates of such a correlation is the fact that both variables may be subject to left censoring due to values below assay detection limits. We address this problem using a bivariate normal likelihood that accounts for left censoring of two variables that may have different detection limits. We provide simulation results to evaluate sampling properties of the resulting correlation estimator and compare it with ad hoc estimators in the presence of nondetects. In an effort to obtain improved confidence interval properties relative to the Wald approach, we evaluate and compare profile likelihood-based intervals. We apply the methods to HIV viral load data on women and infants from a trial in Bangkok, Thailand, and we discuss an extension of the original model to accommodate interval censoring arising due to the study design.
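A simplified, univariate version of the likelihood idea can be sketched as follows: detected values contribute the normal density, while nondetects contribute the left-tail probability P(Y ≤ LOD). The bivariate model in the article adds a correlation parameter and bivariate normal probabilities; all values below are simulated for illustration:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(2)

# Simulated log10 viral loads with a lower limit of detection (illustrative)
mu_true, sd_true, lod = 3.0, 1.0, 2.5
y = rng.normal(mu_true, sd_true, 800)
detected = y > lod                 # nondetects are only known to lie below lod

def neg_loglik(params):
    mu, log_sd = params
    sd = np.exp(log_sd)
    # density contribution for detected values,
    # P(Y <= lod) for left-censored nondetects
    ll = np.where(detected, norm.logpdf(y, mu, sd), norm.logcdf(lod, mu, sd))
    return -ll.sum()

fit = minimize(neg_loglik, x0=[y[detected].mean(), 0.0], method="Nelder-Mead")
mu_hat, sd_hat = fit.x[0], np.exp(fit.x[1])
```

Unlike ad hoc substitution rules (e.g., LOD/2), the likelihood treats each nondetect as an interval of possible values, which avoids the downward bias in the estimated variance.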

15.
Frizzled homolog 3 receptor is up-regulated in several gastrointestinal cancers, such as esophageal and gastric cancers. Moreover, frizzled homolog 3 has recently been reported to be expressed in colorectal adenoma specimens. In the present study, we investigated the clinical significance of frizzled homolog 3 protein in colorectal cancer patients. Using immunocytochemical staining, frizzled homolog 3 expression was examined in 186 colorectal cancer specimens, 79 colorectal adenoma specimens, 133 colorectal polyp specimens, 127 colorectal cancer specimens with lymph node and/or distant metastasis, 310 specimens of various non-colorectal metastatic carcinomas, and 40 specimens with simultaneous occurrence of colorectal cancer, colorectal adenoma and colorectal polyp. Statistical analysis was used to correlate frizzled homolog 3 protein expression with clinicohistopathological factors, recurrence/metastasis, and survival after follow-up for 42 months in colorectal cancer patients. Frizzled homolog 3 protein was expressed in 100% of colorectal cancer specimens, 89% of colorectal adenoma specimens, 75% of colorectal polyp specimens, and 69% of normal colorectal epithelial tissues. Moreover, frizzled homolog 3 immunocytochemical scores were highly correlated with colorectal cancer progression. Furthermore, frizzled homolog 3 was expressed in a comparatively lower percentage of metastatic hepatocellular carcinomas and metastatic renal clear cell carcinomas, with focal and very weak staining, than in other metastatic tumor types. On the other hand, the frizzled homolog 3 immunocytochemical scores of colorectal adenomas with synchronous colorectal carcinomas were significantly higher than those of pure colorectal adenomas. Statistical analysis showed that frizzled homolog 3 immunocytochemical scores were associated with Dukes stage and lymph node status. Finally, stratified groups of colorectal cancer patients had significant differences in their recurrence/metastasis and survival.
In conclusion, the present large-scale study has clearly shown that frizzled homolog 3 protein can generate clinically important information for colorectal cancer patients.

16.
Analysis of failure time data with dependent interval censoring
This article develops a method for the analysis of screening data in which the chance of being screened is dependent on the event of interest (informative censoring). Because not all subjects make all screening visits, the data on the failure of interest are interval censored. We propose a model that properly adjusts for the dependence to obtain an unbiased estimate of the nonparametric failure time function, and we provide an extension for applying the method to estimation of the regression parameters from a (discrete-time) proportional hazards regression model. The method is applied to a data set from an observational study of cytomegalovirus shedding in a population of HIV-infected subjects who participated in a trial conducted by the AIDS Clinical Trials Group.

17.
Independent censoring is a crucial assumption in survival analysis. However, it is impractical in many medical studies, where the presence of dependent censoring leads to difficulty in analyzing covariate effects on disease outcomes. The semicompeting risks framework offers one approach to handling dependent censoring. There are two representative estimators based on an artificial censoring technique in this data structure; however, neither estimator is uniformly more efficient (in terms of standard error) than the other. In this paper, we propose a new weighted estimator for the accelerated failure time (AFT) model under dependent censoring. One advantage of our approach is that the weights are optimal among all linear combinations of the two aforementioned estimators. To calculate these weights, a novel resampling-based scheme is employed. Attendant asymptotic results for the estimator are established. In addition, simulation studies, as well as an application to real data, show the gains in efficiency for our estimator.

18.
AIDS Clinical Trial Group (ACTG) randomized trial 021 compared the effect of bactrim versus aerosolized pentamidine (AP) as prophylaxis therapy for pneumocystis pneumonia (PCP) in AIDS patients. Although patients randomized to the bactrim arm experienced a significant delay in time to PCP, the survival experience in the two arms was not significantly different (p = .32). In this paper, we present evidence that bactrim therapy improves survival but that the standard intent-to-treat comparison failed to detect this survival advantage because a large fraction of the subjects either crossed over to the other therapy or stopped therapy altogether. We obtain our evidence of a beneficial bactrim effect on survival by artificially regarding the subjects as dependently censored at the first time the subject either stops or switches therapy; we then analyze the data with the inverse probability of censoring weighted Kaplan-Meier and Cox partial likelihood estimators of Robins (1993, Proceedings of the Biopharmaceutical Section, American Statistical Association, pp. 24-33) that adjust for dependent censoring by utilizing data collected on time-dependent prognostic factors.

19.
Chang SH. Biometrics 2000, 56(1): 183-189
A longitudinal study is conducted to compare the course of a particular disease between two groups. The disease process is monitored according to which of several ordered events occur. In this paper, the sojourn time between two successive events is considered the outcome of interest. The group effects on the sojourn times of the multiple events are parameterized by scale changes in a semiparametric accelerated failure time model in which the dependence structure among the multivariate sojourn times is unspecified. Suppose that the sojourn times are subject to dependent censoring and the censoring times are observed for all subjects. A log-rank-type estimating approach that rescales the sojourn times and the dependent censoring times onto the same distribution is constructed to estimate the group effects, and the corresponding estimators are consistent and asymptotically normal. Without dependent censoring, the independent censoring times are in general not available for the uncensored data. To complete the censoring information, pseudo-censoring times are generated from the corresponding nonparametrically estimated survival function in each group, and we can still obtain unbiased estimating functions for the group effects. A real application and a simulation study are conducted to illustrate the proposed methods.

20.

Background

To preserve patient anonymity, health register data may be provided as binned data only. Here we consider, as an example, how to estimate the mean survival time after a diagnosis of metastatic colorectal cancer from Norwegian register data on time to death or censoring binned into 30-day intervals. All events occurring in the first three months (90 days) after diagnosis were removed to achieve comparability with a clinical trial. The aim of the paper is to develop and implement a simple, yet flexible, method for analyzing such interval-censored and truncated data.

Methods

Considering interval censoring a missing data problem, we implement a simple multiple imputation strategy that allows flexible sensitivity analyses with respect to the shape of the censoring distribution. To allow identification of appropriate parametric models, a χ2 goodness-of-fit test, also imputation based, is derived and supplemented with diagnostic plots. Uncertainty estimates for mean survival times are obtained via a simulation strategy. The validity and statistical efficiency of the proposed method for varying interval lengths is investigated in a simulation study and compared with simpler alternatives.

Results

Mean survival times estimated from the register data ranged from 1.2 (SE = 0.09) to 3.2 (SE = 0.31) years, depending on the period of diagnosis and the choice of parametric model. The shape of the censoring distribution within intervals generally did not influence results, whereas the choice of parametric model did, even when different models fit the data equally well. In simulation studies, both simple midpoint imputation and multiple imputation yielded nearly unbiased analyses (relative biases of -0.6% to 9.4%) and confidence intervals with near-nominal coverage probabilities (93.4% to 95.7%) for censoring intervals shorter than six months. For 12-month censoring intervals, multiple imputation provided better protection against bias and coverage probabilities closer to nominal values than simple midpoint imputation.

Conclusion

Binning of event and censoring times should be considered a viable strategy for anonymizing register data on survival times, as they may be readily analyzed with methods based on multiple imputation.
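The binning-and-imputation strategy can be sketched numerically: recorded times are reduced to 30-day bin indices, and event times are then either placed at the bin midpoint or redrawn uniformly within each bin. The exponential survival distribution and all parameter values below are illustrative assumptions, not the register data:

```python
import numpy as np

rng = np.random.default_rng(3)

bin_width = 30 / 365.25              # 30-day bins, expressed in years
n = 20000
t_true = rng.exponential(2.0, n)     # hypothetical survival times, mean 2 years
bins = np.floor(t_true / bin_width)  # the register reports only the bin index

# Simple midpoint imputation: place every event at the middle of its bin
t_mid = (bins + 0.5) * bin_width

# Multiple imputation: redraw each event uniformly within its bin, M times,
# and average the resulting estimates (here, the mean survival time)
M = 20
draws = [np.mean((bins + rng.uniform(size=n)) * bin_width) for _ in range(M)]
mi_mean = float(np.mean(draws))
```

Drawing uniformly within bins corresponds to one specific within-interval shape assumption; the paper's sensitivity analyses amount to replacing the uniform draw with other within-bin distributions and checking whether the pooled estimate moves.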
