Similar Articles
20 similar articles found (search time: 31 ms)
1.
Zhao H, Zuo C, Chen S, Bang H. Biometrics 2012, 68(3):717-725
Summary: Increasingly, estimates of health care costs are used to evaluate competing treatments or to assess the expected expenditures associated with certain diseases. In health policy and economics, the primary focus of these estimations has been on the mean cost, because the total cost can be derived directly from the mean cost, and because information about total resources utilized is highly relevant for policymakers. Yet the median cost can also be important, both as an intuitive measure of central tendency in the cost distribution and as a subject of interest to payers and consumers. In many prospective studies, cost data are incomplete for some subjects due to right censoring, typically caused by loss to follow-up or by limited study duration. Censoring poses a unique challenge for cost data analysis because of so-called induced informative censoring: traditional methods suited for survival data are generally invalid in censored cost estimation. In this article, we propose methods for estimating the median cost and its confidence interval (CI) when data are subject to right censoring. We also consider the estimation of the ratio and difference of two median costs and their CIs. These methods can be extended to the estimation of other quantiles and to other informatively censored data. We conduct simulation and real data analyses to examine the performance of the proposed methods.
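Median-cost estimators under censoring typically reduce to a weighted quantile of the complete-case costs, with weights such as inverse probabilities of censoring. As an illustration only (not the authors' estimator), a minimal weighted-quantile helper applied to hypothetical costs and weights:

```python
import numpy as np

def weighted_quantile(x, w, q):
    """Quantile of x under weights w: smallest x whose cumulative
    normalized weight reaches q."""
    order = np.argsort(x)
    x, w = np.asarray(x)[order], np.asarray(w)[order]
    cw = np.cumsum(w) / np.sum(w)
    return x[np.searchsorted(cw, q)]

# Toy complete-case costs with IPCW-style weights (hypothetical numbers;
# a weight would be 1 / G(follow-up time) in an IPCW analysis).
costs = [1200, 3400, 560, 8900, 2100]
weights = [1.0, 1.3, 1.0, 2.5, 1.1]
print(weighted_quantile(costs, weights, 0.5))  # weighted median
```

The same helper gives other quantiles by changing `q`, mirroring the abstract's remark that the method extends beyond the median.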

2.
Researchers in observational survival analysis are interested not only in estimating the survival curve nonparametrically but also in drawing statistical inference for the parameter. We consider right-censored failure time data where we observe n independent and identically distributed observations of a vector random variable consisting of baseline covariates, a binary treatment at baseline, a survival time subject to right censoring, and the censoring indicator. We allow the baseline covariates to affect both treatment and censoring, so that an estimator that ignores covariate information would be inconsistent. The goal is to use these data to estimate the counterfactual average survival curve of the population if all subjects were assigned the same treatment at baseline. Existing observational survival analysis methods do not yield monotone survival curve estimators, which is undesirable and may lose efficiency by not constraining the shape of the estimator using prior knowledge of the estimand. In this paper, we present a one-step Targeted Maximum Likelihood Estimator (TMLE) for estimating the counterfactual average survival curve. We show that this new TMLE can be executed via recursion in small local updates. We demonstrate the finite-sample performance of this one-step TMLE in simulations and in an application to a monoclonal gammopathy dataset.

3.
Censored survival data are common in clinical trials. We propose a unified framework for sensitivity analysis to censoring at random in survival data using multiple imputation and martingales, called SMIM. The proposed framework adopts δ-adjusted and control-based models, indexed by a sensitivity parameter, entailing censoring at random and a wide collection of censoring-not-at-random assumptions. It also targets a broad class of treatment effect estimands defined as functionals of treatment-specific survival functions, taking into account missing data due to censoring. Multiple imputation facilitates the use of simple full-sample estimation; however, the standard Rubin's combining rule may overestimate the variance for inference in the sensitivity analysis framework. We decompose the multiple imputation estimator into a martingale series based on the sequential construction of the estimator and propose wild bootstrap inference by resampling the martingale series. The new bootstrap inference has a theoretical guarantee of consistency and is computationally efficient compared with its nonparametric bootstrap counterpart. We evaluate the finite-sample performance of the proposed SMIM through simulation and an application to an HIV clinical trial.

4.
Zhao and Tsiatis (1997) consider the problem of estimation of the distribution of the quality-adjusted lifetime when the chronological survival time is subject to right censoring. The quality-adjusted lifetime is typically defined as a weighted sum of the times spent in certain states up until death or some other failure time. They propose an estimator and establish the relevant asymptotics under the assumption of independent censoring. In this paper we extend the data structure with a covariate process observed until the end of follow-up and identify the optimal estimation problem. Because of the curse of dimensionality, no globally efficient nonparametric estimators, which have a good practical performance at moderate sample sizes, exist. Given a correctly specified model for the hazard of censoring conditional on the observed quality-of-life and covariate processes, we propose a closed-form one-step estimator of the distribution of the quality-adjusted lifetime whose asymptotic variance attains the efficiency bound if we can correctly specify a lower-dimensional working model for the conditional distribution of quality-adjusted lifetime given the observed quality-of-life and covariate processes. The estimator remains consistent and asymptotically normal even if this latter submodel is misspecified. The practical performance of the estimators is illustrated with a simulation study. We also extend our proposed one-step estimator to the case where treatment assignment is confounded by observed risk factors so that this estimator can be used to test a treatment effect in an observational study.

5.
In risk assessment and environmental monitoring studies, concentration measurements frequently fall below detection limits (DL) of measuring instruments, resulting in left-censored data. The principal approaches for handling censored data include the substitution-based method, maximum likelihood estimation, robust regression on order statistics, and Kaplan-Meier. In practice, censored data are substituted with an arbitrary value prior to use of traditional statistical methods. Although some studies have evaluated the substitution performance in estimating population characteristics, they have focused mainly on normally and lognormally distributed data that contain a single DL. We employ Monte Carlo simulations to assess the impact of substitution when estimating population parameters based on censored data containing multiple DLs. We also consider different distributional assumptions including lognormal, Weibull, and gamma. We show that the reliability of the estimates after substitution is highly sensitive to distributional characteristics such as mean, standard deviation, skewness, and also data characteristics such as censoring percentage. The results highlight that although the performance of the substitution-based method improves as the censoring percentage decreases, its performance still depends on the population's distributional characteristics. Practical implications that follow from our findings indicate that caution must be taken in using the substitution method when analyzing censored environmental data.
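The substitution-based method evaluated in this abstract can be sketched in a few lines. The lognormal parameters, the two detection limits, and the DL/2 substitution rule below are all hypothetical choices for illustration, not values from the study:

```python
import numpy as np

rng = np.random.default_rng(42)

# True lognormal population (hypothetical parameters).
mu, sigma = 1.0, 0.75
x = rng.lognormal(mu, sigma, size=5000)
true_mean = np.exp(mu + sigma**2 / 2)

# Multiple detection limits, e.g. from two instruments (hypothetical).
dl = np.where(rng.random(x.size) < 0.5, 1.0, 2.0)
censored = x < dl  # non-detects: value fell below that sample's DL

# Substitution-based method: replace each non-detect with DL/2.
x_sub = np.where(censored, dl / 2, x)

print(f"censoring fraction: {censored.mean():.2f}")
print(f"true mean {true_mean:.3f}, substituted-data mean {x_sub.mean():.3f}")
```

Comparing `x_sub.mean()` against `true_mean` over many replications is exactly the kind of Monte Carlo check the abstract describes; repeating it under Weibull or gamma populations only changes the sampling line.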

6.
Multivariate recurrent event data are commonly encountered in clinical and longitudinal studies in which each study subject may experience multiple recurrent events. For the analysis of such data, most existing approaches have been proposed under the assumption that the censoring times are noninformative, which may not hold, especially when the observation of recurrent events is terminated by a failure event. In this article, we consider regression analysis of multivariate recurrent event data with both time-dependent and time-independent covariates, where the censoring times and the recurrent event process are allowed to be correlated via a frailty. The proposed joint model is flexible in that the distributions of both the censoring and frailty variables are left unspecified. We propose a pairwise pseudolikelihood approach and an estimating equation-based approach for estimating coefficients of time-dependent and time-independent covariates, respectively. The large-sample properties of the proposed estimates are established, while the finite-sample properties are demonstrated by simulation studies. The proposed methods are applied to the analysis of a set of bivariate recurrent event data from a study of platelet transfusion reactions.

7.
We investigate the use of follow-up samples of individuals to estimate survival curves from studies that are subject to right censoring from two sources: (i) early termination of the study, namely, administrative censoring, or (ii) censoring due to lost data prior to administrative censoring, so-called dropout. We assume that, for the full cohort of individuals, administrative censoring times are independent of the subjects' inherent characteristics, including survival time. To address censoring due to dropout, which we allow to be possibly selective, we consider an intensive second phase of the study in which a representative sample of the originally lost subjects is subsequently followed and their data recorded. As with double-sampling designs in survey methodology, the objective is to provide data on a representative subset of the dropouts. Despite assumed full response from the follow-up sample, we show that, in general in our setting, administrative censoring times are not independent of survival times within the two subgroups, nondropouts and sampled dropouts. As a result, the stratified Kaplan-Meier estimator is not appropriate for the cohort survival curve. Moreover, using the concept of potential outcomes, as opposed to observed outcomes, and thereby explicitly formulating the problem as a missing data problem, reveals and addresses these complications. We present an estimation method based on the likelihood of an easily observed subset of the data and study its properties analytically for large samples. We evaluate our method in a realistic situation by simulating data that match published margins on survival and dropout from an actual hip-replacement study. Limitations and extensions of our design and analytic method are discussed.
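For reference, the stratified Kaplan-Meier estimator discussed above is built from the standard product-limit estimator. A minimal sketch on toy data (hypothetical values; ties are handled only by exact equality):

```python
import numpy as np

def kaplan_meier(time, event):
    """Product-limit estimate of S(t) at each distinct event time.

    time  : observed times (event or censoring)
    event : 1 = event observed, 0 = right-censored
    """
    time = np.asarray(time, dtype=float)
    event = np.asarray(event, dtype=int)
    t_unique = np.unique(time[event == 1])
    surv, s = [], 1.0
    for t in t_unique:
        at_risk = np.sum(time >= t)                    # risk set just before t
        deaths = np.sum((time == t) & (event == 1))    # events exactly at t
        s *= 1.0 - deaths / at_risk
        surv.append(s)
    return t_unique, np.array(surv)

# Toy data: subjects observed at times 1,2,2,3,4; the second "2" and the "4"
# are censored.
t, s = kaplan_meier([1, 2, 2, 3, 4], [1, 1, 0, 1, 0])
print(dict(zip(t, np.round(s, 3))))  # → {1.0: 0.8, 2.0: 0.6, 3.0: 0.3}
```

The paper's point is that applying this estimator within the nondropout and sampled-dropout strata is biased in their design, not that the estimator itself is wrong.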

8.
Chang SH. Biometrics 2000, 56(1):183-189
A longitudinal study is conducted to compare the process of a particular disease between two groups. The disease process is monitored according to which of several ordered events occur. In this paper, the sojourn time between two successive events is considered the outcome of interest. The group effects on the sojourn times of the multiple events are parameterized by scale changes in a semiparametric accelerated failure time model in which the dependence structure among the multivariate sojourn times is unspecified. Suppose that the sojourn times are subject to dependent censoring and the censoring times are observed for all subjects. A log-rank-type estimating approach that rescales the sojourn times and the dependent censoring times onto the same distribution is constructed to estimate the group effects, and the corresponding estimators are consistent and asymptotically normal. Without dependent censoring, the independent censoring times are in general not available for the uncensored data. To complete the censoring information, pseudo-censoring times are generated from the corresponding nonparametrically estimated survival function in each group, and we can still obtain unbiased estimating functions for the group effects. A real application and a simulation study illustrate the proposed methods.

9.
We consider lifetime data involving pairs of study individuals with more than one possible cause of failure for each individual. Non-parametric estimation of cause-specific distribution functions is considered under independent censoring. Properties of the estimators are discussed and an illustration of their application is given.

10.
Huang X, Zhang N. Biometrics 2008, 64(4):1090-1099
Summary: In clinical studies, when censoring is caused by competing risks or patient withdrawal, there is always a concern about the validity of treatment effect estimates that are obtained under the assumption of independent censoring. Because dependent censoring is nonidentifiable without additional information, the best we can do is a sensitivity analysis to assess the changes of parameter estimates under different assumptions about the association between failure and censoring. This analysis is especially useful when knowledge about such association is available through literature review or expert opinions. In a regression analysis setting, the consequences of falsely assuming independent censoring on parameter estimates are not clear. Neither the direction nor the magnitude of the potential bias can be easily predicted. We provide an approach to do sensitivity analysis for the widely used Cox proportional hazards models. The joint distribution of the failure and censoring times is assumed to be a function of their marginal distributions. This function is called a copula. Under this assumption, we propose an iteration algorithm to estimate the regression parameters and marginal survival functions. Simulation studies show that this algorithm works well. We apply the proposed sensitivity analysis approach to the data from an AIDS clinical trial in which 27% of the patients withdrew due to toxicity or at the request of the patient or investigator.
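The copula assumption can be made concrete by sampling dependent failure and censoring times. The sketch below uses a Clayton copula (one common choice; not necessarily the family used in the paper) via conditional-distribution sampling, with hypothetical exponential margins and dependence parameter:

```python
import numpy as np
from scipy.stats import kendalltau

rng = np.random.default_rng(0)
theta = 2.0   # Clayton dependence; Kendall's tau = theta / (theta + 2) = 0.5
n = 4000

# Conditional-distribution sampling of (U, V) from a Clayton copula.
u = rng.random(n)
w = rng.random(n)
v = (u**(-theta) * (w**(-theta / (1 + theta)) - 1) + 1) ** (-1 / theta)

# Dependent failure and censoring times with exponential margins
# (hypothetical rates 0.5 and 0.3).
failure = -np.log(u) / 0.5
censor = -np.log(v) / 0.3

tau, _ = kendalltau(failure, censor)
print(f"empirical Kendall's tau: {tau:.2f}")
```

In a sensitivity analysis, one would vary `theta` over a plausible range and re-estimate the treatment effect at each value, reporting how the estimate moves.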

11.
Accurate prediction of species distributions based on sampling and environmental data is essential for further scientific analysis, such as stock assessment, detection of abundance fluctuation due to climate change or overexploitation, and to underpin management and legislation processes. The evolution of computer science and statistics has allowed the development of sophisticated and well-established modelling techniques as well as a variety of promising innovative approaches for modelling species distribution. The appropriate selection of modelling approach is crucial to the quality of predictions about species distribution. In this study, modelling techniques based on different approaches are compared and evaluated in relation to their predictive performance, utilizing fish density acoustic data. Generalized additive models and mixed models amongst the regression models, associative neural networks (ANNs) and artificial neural networks ensemble amongst the artificial neural networks and ordinary kriging amongst the geostatistical techniques are applied and evaluated. A verification dataset is used for estimating the predictive performance of these models. A combination of outputs from the different models is applied for prediction optimization to exploit the ability of each model to explain certain aspects of variation in species acoustic density. Neural networks and especially ANNs appear to provide more accurate results in fitting the training dataset while generalized additive models appear more flexible in predicting the verification dataset. The efficiency of each technique in relation to certain sampling and output strategies is also discussed.

12.
Synopsis: A literature review showed that numerous studies have dealt with the estimation of fish daily ration in the field. Comparisons of results from different studies are often difficult due to the use of different approaches and methods for parameter estimations. The objective of the present study was to compare the most commonly used approaches to estimate fish daily ration and to propose a standardized procedure for their estimation in the field. Comparisons were based on a field experiment specifically designed to investigate these questions and on data and theoretical considerations found in the literature. The results showed that (1) the gut fullness computed with entire digestive tract content is preferable to the stomach content only, supporting recent research done on other fish species; (2) it is important to consider the data distribution before estimating parameters; (3) estimates of experimental evacuation rates should be used rather than maximum evacuation rate for species showing no feeding periodicity; (4) it is necessary to exclude parasites from gut content in the computation of daily ration as they may significantly decrease daily ration estimates (by an average of 29.3% in this study); and (5) the Eggers (1977) model is as appropriate as, and less complex than, the Elliott & Persson (1978) model for estimating fish daily ration in the field, again supporting recent experiments done on other fish species.
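The Eggers (1977) model mentioned in point (5) estimates the daily ration as 24 h × mean gut fullness over the diel cycle × instantaneous evacuation rate. A sketch with hypothetical field numbers (all values below are illustrative, not from this study):

```python
# Eggers (1977) model: daily ration = 24 * mean gut fullness * evacuation rate.
# Hypothetical field data: gut fullness (% body weight) sampled over a diel cycle.
gut_fullness = [1.2, 0.9, 1.5, 2.1, 1.8, 1.1, 0.8, 1.4]  # % body weight
evac_rate = 0.12  # instantaneous evacuation rate, per hour (hypothetical)

mean_fullness = sum(gut_fullness) / len(gut_fullness)
daily_ration = 24 * mean_fullness * evac_rate
print(f"daily ration: {daily_ration:.2f} % body weight per day")
```

Point (4) would enter here as a preprocessing step: parasite mass is subtracted from each gut-content measurement before computing `mean_fullness`.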

13.
We consider two-stage sampling designs, including so-called nested case-control studies, where one takes a random sample from a target population and completes measurements on each subject in the first stage. The second stage involves drawing a subsample from the original sample and collecting additional data on it. This data structure can be viewed as a missing data structure on the full-data structure collected in the second stage of the study. Methods for analyzing two-stage designs include parametric maximum likelihood estimation and estimating equation methodology. We propose an inverse probability of censoring weighted targeted maximum likelihood estimator (IPCW-TMLE) for two-stage sampling designs and present simulation studies featuring this estimator.

14.
Matsui S. Biometrics 2004, 60(4):965-976
This article develops randomization-based methods for times to repeated events in two-arm randomized trials with noncompliance and dependent censoring. Structural accelerated failure time models are assumed to capture causal effects on repeated event times and dependent censoring time, but the dependence structure among repeated event times and dependent censoring time is unspecified. Artificial censoring techniques to accommodate nonrandom noncompliance and dependent censoring are proposed. Estimation of the acceleration parameters is based on rank-based estimating functions. A simulation study is conducted to evaluate the performance of the developed methods. An illustration of the methods using data from an acute myeloid leukemia trial is provided.

15.
The method of generalized pairwise comparisons (GPC) is an extension of the well-known nonparametric Wilcoxon–Mann–Whitney test for comparing two groups of observations. Multiple generalizations of the Wilcoxon–Mann–Whitney test and other GPC methods have been proposed over the years to handle censored data. These methods apply different approaches to handling the loss of information due to censoring: ignoring noninformative pairwise comparisons due to censoring (Gehan, Harrell, and Buyse); imputation using estimates of the survival distribution (Efron, Péron, and Latta); or inverse probability of censoring weighting (IPCW; Datta and Dong). Based on the GPC statistic, a measure of treatment effect, the "net benefit," can be defined. It quantifies the difference between the probabilities that a randomly selected individual from one group is doing better than an individual from the other group. This paper evaluates GPC methods for censored data, both in the context of hypothesis testing and estimation, and provides recommendations on their choice in various situations. The methods that ignore uninformative pairs have comparable power to more complex and computationally demanding methods in situations of low censoring, and are slightly superior for high proportions (>40%) of censoring. If one is interested in estimation of the net benefit, Harrell's c index is an unbiased estimator if the proportional hazards assumption holds. Otherwise, the imputation (Efron or Péron) or IPCW (Datta, Dong) methods provide unbiased estimators for proportions of drop-out censoring up to 60%.
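The Gehan-type approach described above, which scores a pair zero whenever censoring leaves the ordering unknown, can be sketched directly. The data below are toy values, not from the paper:

```python
def net_benefit_gehan(t1, e1, t0, e0):
    """Gehan-type net benefit: P(win) - P(loss) over all treated-control
    pairs, scoring a pair 0 when censoring makes the ordering unknown.

    t1, e1 : observed times and event indicators (1 = event) in group 1
    t0, e0 : same for group 0
    """
    wins = losses = 0
    for ti, di in zip(t1, e1):
        for tj, dj in zip(t0, e0):
            if tj < ti and dj == 1:      # control subject failed first: win
                wins += 1
            elif ti < tj and di == 1:    # treated subject failed first: loss
                losses += 1
            # otherwise the ordering is not identified -> score 0
    return (wins - losses) / (len(t1) * len(t0))

# Toy data: event indicator 1 = event observed, 0 = censored.
nb = net_benefit_gehan([5, 8, 9], [1, 0, 1], [3, 6, 7], [1, 1, 0])
print(f"net benefit: {nb:.3f}")
```

The imputation and IPCW variants discussed in the abstract replace the zero-scored pairs with estimated contributions instead of discarding them.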

16.
Summary: In medical studies of time-to-event data, nonproportional hazards and dependent censoring are very common issues when estimating the treatment effect. A traditional method for dealing with time-dependent treatment effects is to model the time-dependence parametrically. Limitations of this approach include the difficulty of verifying the correctness of the specified functional form and the fact that, in the presence of a treatment effect that varies over time, investigators are usually interested in the cumulative as opposed to instantaneous treatment effect. In many applications, censoring time is not independent of event time. Therefore, we propose methods for estimating the cumulative treatment effect in the presence of nonproportional hazards and dependent censoring. Three measures are proposed, including the ratio of cumulative hazards, relative risk, and difference in restricted mean lifetime. For each measure, we propose a double inverse-weighted estimator, constructed by first using inverse probability of treatment weighting (IPTW) to balance the treatment-specific covariate distributions, then using inverse probability of censoring weighting (IPCW) to overcome the dependent censoring. The proposed estimators are shown to be consistent and asymptotically normal. We study their finite-sample properties through simulation. The proposed methods are used to compare kidney wait-list mortality by race.
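The IPCW step can be illustrated in a simplified setting where the censoring survival function G(t) = P(C > t) is known (in practice it would be estimated, e.g., by a Kaplan-Meier fit to the censoring times); uncensored subjects are up-weighted by 1/G(T_i). All rates below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20000

# Simulated event and censoring times (hypothetical exponential rates).
t_event = rng.exponential(1 / 0.5, n)   # rate 0.5 -> true mean lifetime 2.0
t_cens = rng.exponential(1 / 0.25, n)   # rate 0.25
t_obs = np.minimum(t_event, t_cens)
delta = (t_event <= t_cens).astype(float)

# IPCW: weight each uncensored subject by 1 / G(T_i).
g = np.exp(-0.25 * t_obs)               # known censoring survival here
naive = t_obs[delta == 1].mean()        # complete-case mean: biased low
ipcw = np.sum(delta * t_obs / g) / np.sum(delta / g)

print(f"naive mean {naive:.2f}, IPCW mean {ipcw:.2f}, truth 2.00")
```

The paper's double weighting multiplies this censoring weight by an IPTW factor 1/P(treatment | covariates) so that both confounding and dependent censoring are corrected at once.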

17.
Hsu CH, Li Y, Long Q, Zhao Q, Lance P. PLoS ONE 2011, 6(10):e25141
In colorectal polyp prevention trials, estimation of the rate of recurrence of adenomas at the end of the trial may be complicated by dependent censoring, that is, time to follow-up colonoscopy and dropout may be dependent on time to recurrence. Assuming that the auxiliary variables capture the dependence between recurrence and censoring times, we propose to fit two working models with the auxiliary variables as covariates to define risk groups and then extend an existing weighted logistic regression method for independent censoring to each risk group to accommodate potential dependent censoring. In a simulation study, we show that the proposed method results in both a gain in efficiency and reduction in bias for estimating the recurrence rate. We illustrate the methodology by analyzing a recurrent adenoma dataset from a colorectal polyp prevention trial.

18.
Hsieh JJ, Ding AA, Wang W. Biometrics 2011, 67(3):719-729
Summary: Recurrent event data are commonly seen in longitudinal follow-up studies. Dependent censoring often occurs due to death or exclusion from the study related to the disease process. In this article, we assume flexible marginal regression models on the recurrence process and the dependent censoring time without specifying their dependence structure. The proposed model generalizes the approach by Ghosh and Lin (2003, Biometrics 59, 877–885). The technique of artificial censoring provides a way to maintain the homogeneity of the hypothetical error variables under dependent censoring. Here we propose to apply this technique to two Gehan-type statistics. One considers only order information for pairs, whereas the other utilizes additional information on observed censoring times available for recurrence data. A model-checking procedure is also proposed to assess the adequacy of the fitted model. The proposed estimators have good asymptotic properties. Their finite-sample performances are examined via simulations. Finally, the proposed methods are applied to analyze data from the AIDS Link to Intravenous Experiences cohort.

19.
Zexi Cai, Tony Sit. Biometrics 2020, 76(4):1201-1215
Quantile regression is a flexible and effective tool for modeling survival data and its relationship with important covariates, which often vary over time. Informative right censoring of data from the prevalent cohort within the population often results in length-biased observations. We propose an estimating equation-based approach to obtain consistent estimators of the regression coefficients of interest based on length-biased observations with time-dependent covariates. In addition, inspired by Zeng and Lin (2008), we develop a more numerically stable procedure for variance estimation. Large-sample properties, including consistency and asymptotic normality of the proposed estimator, are established. Numerical studies demonstrate convincing performance of the proposed estimator under various settings. The application of the proposed method is demonstrated using the Oscar dataset.

20.
Pan W, Zeng D. Biometrics 2011, 67(3):996-1006
We study the estimation of mean medical cost when censoring is dependent and a large amount of auxiliary information is present. Under a missing-at-random assumption, we propose semiparametric working models to obtain low-dimensional summarized scores. An estimator for the mean total cost can be derived nonparametrically conditional on the summarized scores. We show that when either the two working models for the cost-survival process or the model for the censoring distribution is correct, the estimator is consistent and asymptotically normal. Small-sample performance of the proposed method is evaluated via simulation studies. Finally, our approach is applied to analyze a real data set in health economics.

