Similar Articles
20 similar articles found.
1.
The analysis of failure times in the presence of competing risks.
Distinct problems in the analysis of failure times with competing causes of failure include the estimation of treatment or exposure effects on specific failure types, the study of interrelations among failure types, and the estimation of failure rates for some causes given the removal of certain other failure types. The usual formulation of these problems is in terms of conceptual or latent failure times for each failure type. This approach is criticized on the basis of unwarranted assumptions, lack of physical interpretation, and identifiability problems. An alternative approach utilizing cause-specific hazard functions for observable quantities, including time-dependent covariates, is proposed. Cause-specific hazard functions are shown to be the basic estimable quantities in the competing risks framework. A method, involving the estimation of parameters that relate time-dependent risk indicators for some causes to cause-specific hazard functions for other causes, is proposed for the study of interrelations among failure types. Further, it is argued that the problem of estimating failure rates under the removal of certain causes is not well posed until a mechanism for cause removal is specified. Following such a specification, one will sometimes be in a position to make sensible extrapolations from available data to situations involving cause removal. A clinical program in bone marrow transplantation for leukemia provides a setting for discussion and illustration of each of these ideas. Failure due to censoring in a survivorship study is also discussed.
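A minimal numpy sketch (not from the paper) of the quantity the abstract identifies as basic: the cause-specific hazard increment dN_j(t)/Y(t), the number of cause-j failures at t divided by the number still at risk. Data and names are illustrative.

```python
import numpy as np

def cause_specific_hazards(times, causes):
    """Hazard increments dN_j(t) / Y(t) for each failure cause.

    times  : observed times (failure or censoring)
    causes : 0 = censored, 1..K = failure cause
    Returns the distinct failure times and a (T, K) array of
    cause-specific hazard increments.
    """
    times = np.asarray(times, float)
    causes = np.asarray(causes, int)
    K = causes.max()
    t_fail = np.unique(times[causes > 0])
    dH = np.zeros((t_fail.size, K))
    for i, t in enumerate(t_fail):
        at_risk = np.sum(times >= t)                    # Y(t)
        for j in range(1, K + 1):
            d_j = np.sum((times == t) & (causes == j))  # cause-j failures at t
            dH[i, j - 1] = d_j / at_risk
    return t_fail, dH

# toy data with two competing causes (0 = censored)
t = [2, 3, 3, 5, 7, 8, 8, 10]
c = [1, 2, 0, 1, 2, 1, 0, 2]
times, dH = cause_specific_hazards(t, c)
print(times)
print(dH.cumsum(axis=0))  # cumulative cause-specific hazards
```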

2.
Menggang Yu, Bin Nan. Biometrics 2010, 66(2):405–414
In large cohort studies, it often happens that some covariates are expensive to measure and hence only measured on a validation set. On the other hand, relatively cheap but error-prone measurements of the covariates are available for all subjects. The regression calibration (RC) estimation method (Prentice, 1982, Biometrika 69, 331–342) is a popular method for analyzing such data and has been applied to the Cox model by Wang et al. (1997, Biometrics 53, 131–145) under normal measurement error and rare disease assumptions. In this article, we consider the RC estimation method for the semiparametric accelerated failure time model with covariates subject to measurement error. Asymptotic properties of the proposed method are investigated under a two-phase sampling scheme for validation data that are selected via stratified random sampling, resulting in observations that are neither independent nor identically distributed. We show that the estimates converge to some well-defined parameters. In particular, unbiased estimation is feasible under additive normal measurement error models for normal covariates and under Berkson error models. The proposed method performs well in finite-sample simulation studies. We also apply the proposed method to a depression mortality study.
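The calibration step itself is simple: regress the true covariate on its error-prone surrogate in the validation set, then substitute fitted values for all subjects before fitting the failure time model. A hedged numpy sketch of that step on hypothetical data; it does not reproduce the paper's stratified two-phase sampling theory.

```python
import numpy as np

rng = np.random.default_rng(0)

# hypothetical setup: X is expensive and observed only on a validation set;
# W = X + additive normal measurement error is available for everyone
n, n_val = 2000, 300
X = rng.normal(size=n)
W = X + rng.normal(scale=0.5, size=n)
val = np.zeros(n, dtype=bool)
val[:n_val] = True                        # validation subsample

# calibration: estimate E[X | W] by least squares on the validation set
A = np.column_stack([np.ones(n_val), W[val]])
coef, *_ = np.linalg.lstsq(A, X[val], rcond=None)
X_hat = coef[0] + coef[1] * W             # calibrated covariate, all subjects

# X_hat would now replace X when fitting the accelerated failure time
# (or Cox) model; the paper's contribution is the asymptotic theory for
# this plug-in under two-phase stratified sampling.
print(np.round(coef, 3))
```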

3.
We propose a general family of mixture hazard models to analyze lifetime data associated with bathtub-shaped and multimodal hazard functions. The model offers great flexibility for fitting lifetime data, and its version with covariates has the proportional hazards and accelerated failure time models as special cases. A Bayesian analysis with informative priors is presented for the model, using sampling-based approaches to perform the Bayesian computations. A real example with medical data illustrates the methodology.
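As a hedged illustration of why combining hazard components produces bathtub shapes, the sketch below adds a decreasing Weibull hazard (early failures) to an increasing one (wear-out); the paper's mixture family is more general, and all parameter values here are arbitrary.

```python
import numpy as np

def weibull_hazard(t, shape, scale):
    # h(t) = (shape / scale) * (t / scale) ** (shape - 1)
    return (shape / scale) * (t / scale) ** (shape - 1)

t = np.linspace(0.01, 10, 500)
# early failures (shape < 1, decreasing) + wear-out (shape > 1, increasing)
h = weibull_hazard(t, 0.5, 2.0) + weibull_hazard(t, 3.0, 8.0)
print(t[h.argmin()])  # the flat bottom of the bathtub
```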

4.
Kim YJ. Biometrics 2006, 62(2):458–464
In doubly censored failure time data, the survival time of interest is defined as the elapsed time between an initial event and a subsequent event, and the occurrences of both events cannot be observed exactly. Instead, only right- or interval-censored observations on the occurrence times are available. For the analysis of such data, a number of methods have been proposed under the assumption that the survival time of interest is independent of the occurrence time of the initial event. This article investigates a different situation, where the independence assumption may not hold, with a focus on regression analysis of doubly censored data. Cox frailty models are applied to describe the effects of covariates, and an EM algorithm is developed for estimation. Simulation studies are performed to investigate finite sample properties of the proposed method, and an illustrative example from an acquired immune deficiency syndrome (AIDS) cohort study is provided.

5.
An estimator of the hazard rate function from discrete failure time data is obtained by semiparametric smoothing of the (nonsmooth) maximum likelihood estimator, which is achieved by repeated multiplication of a Markov chain transition-type matrix. This matrix is constructed so as to have a given standard discrete parametric hazard rate model, termed the vehicle model, as its stationary hazard rate. As with the discrete density estimation case, the proposed estimator gives improved performance when the vehicle model is a good one and otherwise provides a nonparametric method comparable to the only purely nonparametric smoother discussed in the literature. The proposed semiparametric smoothing approach is then extended to hazard models with covariates and is illustrated by applications to simulated and real data sets.
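The paper's matrix construction is specific, but the shrinkage mechanism can be shown with a toy rank-one stochastic matrix whose stationary vector is the vehicle model: for P = (1-a)I + a 1 pi^T, repeated multiplication moves a raw estimate geometrically toward pi. A sketch under that simplifying assumption, using a normalized probability vector in place of a hazard sequence.

```python
import numpy as np

m, a = 8, 0.3
pi = np.full(m, 1.0 / m)        # "vehicle" model: here a discrete uniform
P = (1 - a) * np.eye(m) + a * np.outer(np.ones(m), pi)  # stationary vector pi

v = np.zeros(m)
v[2] = 1.0                      # spiky, nonsmooth raw estimate
for k in (1, 3, 10):
    # v P^k = (1-a)^k v + (1 - (1-a)^k) pi : geometric shrinkage toward pi
    print(k, np.round(v @ np.linalg.matrix_power(P, k), 3))
```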

6.
Grigoletto M, Akritas MG. Biometrics 1999, 55(4):1177–1187
We propose a method for fitting semiparametric models such as the proportional hazards (PH), additive risks (AR), and proportional odds (PO) models. Each of these semiparametric models implies that some transformation of the conditional cumulative hazard function (at each t) depends linearly on the covariates. The proposed method is based on nonparametric estimation of the conditional cumulative hazard function, forming a weighted average over a range of t-values, and subsequent use of least squares to estimate the parameters suggested by each model. An approximation to the optimal weight function is given. This allows semiparametric models to be fitted even in incomplete data cases where the partial likelihood fails (e.g., left censoring, right truncation). However, the main advantage of this method rests in the fact that neither the interpretation of the parameters nor the validity of the analysis depends on the appropriateness of the PH or any of the other semiparametric models. In fact, we propose an integrated method for data analysis where the role of the various semiparametric models is to suggest the best fitting transformation. A single continuous covariate and several categorical covariates (factors) are allowed. Simulation studies indicate that the test statistics and confidence intervals have good small-sample performance. A real data set is analyzed.

7.
Ma S, Kosorok MR, Fine JP. Biometrics 2006, 62(1):202–210
As a useful alternative to Cox's proportional hazards model, the additive risk model assumes that the hazard function is the sum of the baseline hazard function and a regression function of the covariates. This article is concerned with estimation and prediction for additive risk models with right-censored survival data, especially when the dimension of the covariates is comparable to or larger than the sample size. Principal component regression is proposed to give unique and numerically stable estimators. Asymptotic properties of the proposed estimators, component selection based on the weighted bootstrap, and model evaluation techniques are discussed. The approach is illustrated with analyses of primary biliary cirrhosis clinical data and diffuse large B-cell lymphoma genomic data. It is shown that the methodology is numerically stable and effective in dimension reduction, while still providing satisfactory prediction and classification results.
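A hedged sketch of the underlying dimension-reduction device, plain principal component regression with p >> n, shown for an uncensored linear outcome rather than the censored additive risk model of the paper; all data are simulated.

```python
import numpy as np

rng = np.random.default_rng(5)
n, p, k = 100, 200, 5                 # p >> n; keep k components
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:3] = [2.0, -1.0, 1.0]
y = X @ beta + rng.normal(size=n)

Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:k].T                     # scores on the top-k components
gamma, *_ = np.linalg.lstsq(Z, y - y.mean(), rcond=None)
beta_pcr = Vt[:k].T @ gamma           # coefficients back on the covariates
print(beta_pcr.shape)                 # unique, numerically stable estimate
```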

8.
9.
In the study of multiple failure time data with recurrent clinical endpoints, the classical independent censoring assumption in survival analysis can be violated when the evolution of the recurrent events is correlated with a censoring mechanism such as death. Moreover, in some situations, a cure fraction appears in the data because a tangible proportion of the study population benefits from treatment and becomes recurrence free and insusceptible to death related to the disease. A bivariate joint frailty mixture cure model is proposed to allow for dependent censoring and cure fraction in recurrent event data. The latency part of the model consists of two intensity functions for the hazard rates of recurrent events and death, wherein a bivariate frailty is introduced by means of the generalized linear mixed model methodology to adjust for dependent censoring. The model allows covariates and frailties in both the incidence and the latency parts, and it further accounts for the possibility of cure after each recurrence. It includes the joint frailty model and other related models as special cases. An expectation-maximization (EM)-type algorithm is developed to provide residual maximum likelihood estimation of model parameters. Through simulation studies, the performance of the model is investigated under different magnitudes of dependent censoring and cure rate. The model is applied to data sets from two colorectal cancer studies to illustrate its practical value.

10.
A method for fitting parametric models to apparently complex hazard rates in survival data is suggested. Hazard complexity may indicate competing causes of failure. A competing risks model is constructed on the assumption that a failure time can be considered as the first passage time of possibly several latent stochastic processes competing to reach a barrier. An additional assumption of independence between the hidden processes leads directly to a composite hazard function as the sum of the cause-specific hazards. We show how this composite hazard model based on Wiener processes can serve as a flexible tool for modelling complex hazards by varying the number of processes and their starting conditions. An example with real data is presented. Parameter estimation and model assessment are based on Markov chain Monte Carlo methods.
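Since the first passage time of a Wiener process with positive drift to a fixed barrier is inverse Gaussian, and independence makes the observable hazard the sum of the latent cause-specific hazards, the composite hazard can be sketched as below. The shape parameters are illustrative; scipy's invgauss parameterization absorbs the drift and barrier constants.

```python
import numpy as np
from scipy.stats import invgauss

def ig_hazard(t, mu):
    # hazard of an inverse Gaussian first passage time: pdf / survival
    return invgauss.pdf(t, mu) / invgauss.sf(t, mu)

t = np.linspace(0.05, 5, 100)
# two independent latent processes -> composite hazard is the sum
h_total = ig_hazard(t, 0.5) + ig_hazard(t, 2.0)
print(np.round(h_total[:5], 3))
```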

11.
Person-time incidence rates are frequently used in medical research. However, standard estimation theory for this measure of event occurrence is based on the assumption of independent and identically distributed (iid) exponential event times, which implies that the hazard function remains constant over time. Under this assumption and assuming independent censoring, the observed person-time incidence rate is the maximum-likelihood estimator of the constant hazard, and the asymptotic variance of the log rate can be estimated consistently by the inverse of the number of events. However, in many practical applications, the assumption of constant hazard is not very plausible. In the present paper, an average rate parameter is defined as the ratio of expected event count to the expected total time at risk. This rate parameter is equal to the hazard function under constant hazard. For inference about the average rate parameter, an asymptotically robust variance estimator of the log rate is proposed. Given some very general conditions, the robust variance estimator is consistent under arbitrary iid event times, and is also consistent or asymptotically conservative when event times are independent but nonidentically distributed. In contrast, the standard maximum-likelihood estimator may become anticonservative under nonconstant hazard, producing confidence intervals with less-than-nominal asymptotic coverage. These results are derived analytically and illustrated with simulations. The two estimators are also compared in five datasets from oncology studies.
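A sketch of the contrast described above, assuming the robust variance takes the usual sandwich form for the rate estimating equation sum_i(d_i - lambda * t_i) = 0; the paper's exact estimator may differ in detail.

```python
import numpy as np

def person_time_rate(d, t):
    """d: events per subject, t: time at risk per subject."""
    d, t = np.asarray(d, float), np.asarray(t, float)
    D, T = d.sum(), t.sum()
    rate = D / T                                  # average rate parameter
    se_naive = np.sqrt(1.0 / D)                   # SE of log rate, constant hazard
    resid = d - rate * t                          # per-subject score contributions
    se_robust = np.sqrt(np.sum(resid ** 2)) / D   # sandwich SE of log rate
    return rate, se_naive, se_robust

rng = np.random.default_rng(1)
t = rng.uniform(0.5, 3.0, size=500)
d = rng.poisson(0.4 * t)          # constant hazard: the two SEs roughly agree
print(person_time_rate(d, t))
```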

12.
Large-scale surveys, such as national forest inventories and vegetation monitoring programs, usually have complex sampling designs that include geographical stratification and units organized in clusters. When models are developed using data from such programs, a key question is whether or not to utilize design information when analyzing the relationship between a response variable and a set of covariates. Standard statistical regression methods often fail to account for complex sampling designs, which may lead to severely biased estimators of model coefficients. Furthermore, ignoring that data are spatially correlated within clusters may underestimate the standard errors of regression coefficient estimates, with a risk for drawing wrong conclusions. We first review general approaches that account for complex sampling designs, e.g. methods using probability weighting, and stress the need to explore the effects of the sampling design when applying logistic regression models. We then use Monte Carlo simulation to compare the performance of the standard logistic regression model with two approaches to model correlated binary responses, i.e. cluster-specific and population-averaged logistic regression models. As an example, we analyze the occurrence of epiphytic hair lichens in the genus Bryoria; an indicator of forest ecosystem integrity. Based on data from the National Forest Inventory (NFI) for the period 1993–2014 we generated a data set on hair lichen occurrence on >100,000 Picea abies trees distributed throughout Sweden. The NFI data included ten covariates representing forest structure and climate variables potentially affecting lichen occurrence. Our analyses show the importance of taking complex sampling designs and correlated binary responses into account in logistic regression modeling to avoid the risk of obtaining notably biased parameter estimators and standard errors, and erroneous interpretations about factors affecting e.g. hair lichen occurrence. We recommend comparisons of unweighted and weighted logistic regression analyses as an essential step in development of models based on data from large-scale surveys.
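A hedged statsmodels sketch of the cluster-correlation point: a plain logistic fit versus a population-averaged (GEE) fit on simulated clustered binary data. It illustrates the standard-error issue only and omits the probability weighting the paper also recommends.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n_clusters, m = 200, 8                      # e.g. inventory tracts of 8 trees
cluster = np.repeat(np.arange(n_clusters), m)
x = rng.normal(size=n_clusters * m)         # stand-level covariate
u = rng.normal(scale=1.0, size=n_clusters)  # shared cluster effect
p = 1.0 / (1.0 + np.exp(-(-0.5 + 0.8 * x + u[cluster])))
y = rng.binomial(1, p)
X = sm.add_constant(x)

naive = sm.Logit(y, X).fit(disp=0)          # ignores within-cluster correlation
gee = sm.GEE(y, X, groups=cluster, family=sm.families.Binomial(),
             cov_struct=sm.cov_struct.Exchangeable()).fit()

print(naive.bse)  # typically too small under clustering
print(gee.bse)    # robust, population-averaged standard errors
```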

13.
Parametric and semiparametric cure models have been proposed for estimating cure proportions in cancer clinical research. In this paper, several parametric and semiparametric models are compared, and their estimation methods are discussed within the framework of the EM algorithm. We show that the semiparametric PH cure model can achieve efficiency levels similar to those of parametric cure models, provided that the failure time distribution is well specified and uncured patients have an increasing hazard rate. Therefore, the semiparametric model is a viable alternative to parametric cure models. When the hazard rate of uncured patients is rapidly decreasing, the estimates from the semiparametric cure model tend to have large variations and biases; however, all other models also tend to have large variations and biases in this case.
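The EM structure shared by these cure models can be shown in its simplest parametric member: a mixture cure model with exponential latency and no covariates, S(t) = pi + (1 - pi) exp(-lambda t). A sketch on simulated data; initial values and sample sizes are arbitrary.

```python
import numpy as np

def em_exponential_cure(t, delta, n_iter=200):
    """EM for a mixture cure model with exponential latency.

    t: follow-up time; delta: 1 = event, 0 = censored.
    pi is the cure fraction, lam the hazard of the uncured.
    """
    t, delta = np.asarray(t, float), np.asarray(delta, float)
    pi, lam = 0.3, 1.0                             # starting values
    for _ in range(n_iter):
        s_u = np.exp(-lam * t)                     # latency survival
        # E-step: posterior probability of being uncured
        w = np.where(delta == 1, 1.0,
                     (1 - pi) * s_u / (pi + (1 - pi) * s_u))
        # M-step
        pi = 1.0 - w.mean()
        lam = delta.sum() / np.sum(w * t)
    return pi, lam

rng = np.random.default_rng(3)
n = 2000
cured = rng.random(n) < 0.4                        # true cure fraction 0.4
event_t = rng.exponential(1.0, size=n)             # latency of the uncured
cens_t = rng.uniform(0.0, 4.0, size=n)
t_obs = np.where(cured, cens_t, np.minimum(event_t, cens_t))
delta = ((~cured) & (event_t <= cens_t)).astype(float)
print(em_exponential_cure(t_obs, delta))           # roughly (0.4, 1.0)
```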

14.
Ross EA, Moore D. Biometrics 1999, 55(3):813–819
We have developed methods for modeling discrete or grouped time, right-censored survival data collected from correlated groups or clusters. We assume that the marginal hazard of failure for individual items within a cluster is specified by a linear log odds survival model and the dependence structure is based on a gamma frailty model. The dependence can be modeled as a function of cluster-level covariates. Likelihood equations for estimating the model parameters are provided. Generalized estimating equations for the marginal hazard regression parameters and pseudolikelihood methods for estimating the dependence parameters are also described. Data from two clinical trials are used for illustration purposes.

15.
Censored quantile regression models, which offer great flexibility in assessing covariate effects on event times, have attracted considerable research interest. In this study, we consider flexible estimation and inference procedures for competing risks quantile regression, which not only provides meaningful interpretations by using cumulative incidence quantiles but also extends the conventional accelerated failure time model by relaxing some of its stringent assumptions, such as global linearity and unconditional independence. Current methods for censored quantile regression often involve minimizing an L1-type convex function or solving nonsmooth estimating equations. This approach can lead to multiple roots in practical settings, particularly with multiple covariates. Moreover, variance estimation involves an unknown error distribution, and most methods rely on computationally intensive resampling techniques such as bootstrapping. We extend the induced smoothing procedure for censored quantile regression to the competing risks setting. The proposed procedure permits fast and accurate computation of quantile regression parameter estimates and their variances by using conventional numerical methods such as the Newton-Raphson algorithm. Numerical studies show that the proposed estimators perform well and the resulting inference is reliable in practical settings. The method is finally applied to data from a soft tissue sarcoma study.

16.
The rates of functional recovery after stroke tend to decrease with time. Time-varying Markov processes (TVMPs) may be more biologically plausible than time-invariant Markov processes for modeling such data. However, analysis of such stochastic processes, particularly tackling reversible transitions and the incorporation of random effects into models, can be analytically intractable. We make use of ordinary differential equations to solve continuous-time TVMPs with reversible transitions. The proportional hazards form is used to assess the effects of an individual's covariates on multi-state transitions, with random effects incorporated to capture the residual variation left unexplained by measured covariates, following the generalized linear model concept. We further build a Bayesian directed acyclic graphical model to obtain the full joint posterior distribution. Markov chain Monte Carlo (MCMC) with Gibbs sampling is applied to estimate parameters based on posterior marginal distributions involving multiple integrals. The proposed method is illustrated with empirical data from a study on functional recovery after stroke.
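The ODE step can be sketched with scipy by integrating the Kolmogorov forward equations dP/dt = P(t) Q(t) for a reversible two-state process with time-decaying intensities. The covariate (proportional hazards) layer and the Bayesian MCMC machinery of the paper are omitted, and all rates here are invented.

```python
import numpy as np
from scipy.integrate import solve_ivp

def Q(t):
    """Time-varying intensity matrix of a reversible 2-state process;
    both transition rates decay with time, as in post-stroke recovery."""
    a = 1.2 * np.exp(-0.5 * t)   # state 1 -> 2 (improvement)
    b = 0.4 * np.exp(-0.3 * t)   # state 2 -> 1 (relapse)
    return np.array([[-a, a], [b, -b]])

def forward(t, p_flat):
    P = p_flat.reshape(2, 2)
    return (P @ Q(t)).ravel()    # Kolmogorov forward: dP/dt = P(t) Q(t)

sol = solve_ivp(forward, (0.0, 12.0), np.eye(2).ravel(), rtol=1e-8)
print(sol.y[:, -1].reshape(2, 2))  # transition probability matrix P(0, 12)
```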

17.
D P Byar, N Mantel. Biometrics 1975, 31(4):943–947
Interrelationships among three response-time models which incorporate covariate information are explored. The most general of these models is the logistic-exponential, in which the log odds of the probability of responding in a fixed interval is assumed to be a linear function of the covariates; this model includes a parameter W for the width of the discrete time intervals in which responses occur. As W tends to 0, this model is equivalent to a continuous-time exponential model in which the log hazard is linear in the covariates. As W tends to infinity, it is equivalent to a continuous-time exponential model in which the hazard itself is a linear function of the covariates. The second model was fitted to the data used in an earlier publication describing the logistic-exponential model, and very close agreement of the estimates of the regression coefficients is demonstrated.
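A hedged reconstruction of the two limits, assuming a constant hazard lambda(z) within an interval of width W, so that the response probability is p = 1 - exp(-lambda W):

```latex
\log\frac{p}{1-p} = \log\!\left(e^{\lambda W}-1\right),\qquad
\begin{cases}
W \to 0: & \log\dfrac{p}{1-p} \approx \log\lambda + \log W
  \;\Rightarrow\; \text{log hazard linear in covariates},\\[6pt]
W \to \infty: & \log\dfrac{p}{1-p} \approx \lambda W
  \;\Rightarrow\; \text{hazard itself linear in covariates}.
\end{cases}
```

The constants log W and 1/W are absorbed into the intercept and regression coefficients, which is why linearity of the log odds passes to the log hazard in one limit and to the hazard itself in the other.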

18.
Ecologists are increasingly using statistical models to predict animal abundance and occurrence in unsampled locations. The reliability of such predictions depends on a number of factors, including sample size, how far prediction locations are from the observed data, and similarity of predictive covariates in locations where data are gathered to locations where predictions are desired. In this paper, we propose extending Cook's notion of an independent variable hull (IVH), developed originally for application with linear regression models, to generalized regression models as a way to help assess the potential reliability of predictions in unsampled areas. Predictions occurring inside the generalized independent variable hull (gIVH) can be regarded as interpolations, while predictions occurring outside the gIVH can be regarded as extrapolations worthy of additional investigation or skepticism. We conduct a simulation study to demonstrate the usefulness of this metric for limiting the scope of spatial inference when conducting model-based abundance estimation from survey counts. In this case, limiting inference to the gIVH substantially reduces bias, especially when survey designs are spatially imbalanced. We also demonstrate the utility of the gIVH in diagnosing problematic extrapolations when estimating the relative abundance of ribbon seals in the Bering Sea as a function of predictive covariates. We suggest that ecologists routinely use diagnostics such as the gIVH to help gauge the reliability of predictions from statistical models (such as generalized linear, generalized additive, and spatio-temporal regression models).
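For a linear model, Cook's IVH has a closed form: a prediction point is an extrapolation when its prediction variance x0'(X'X)^{-1}x0 exceeds the largest leverage in the sample. A numpy sketch of that version; the paper's gIVH replaces this with the prediction variance of a generalized (e.g. spatial) model on the link scale.

```python
import numpy as np

def in_ivh(X, X_new):
    """Cook's independent variable hull test, linear-model version."""
    XtX_inv = np.linalg.inv(X.T @ X)
    lev = np.einsum('ij,jk,ik->i', X, XtX_inv, X)            # sample leverages
    lev_new = np.einsum('ij,jk,ik->i', X_new, XtX_inv, X_new)
    return lev_new <= lev.max()  # False -> extrapolation, treat with skepticism

rng = np.random.default_rng(4)
X = np.column_stack([np.ones(50), rng.uniform(0.0, 1.0, 50)])     # surveyed sites
X_new = np.column_stack([np.ones(3), np.array([0.5, 0.9, 3.0])])  # prediction sites
print(in_ivh(X, X_new))  # the covariate value 3.0 lies far outside the hull
```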

19.
Electronic particle counters have gained widespread acceptance as a means to measure osmotic properties of cell membranes. Because most current instruments do not allow for the collection of true volume-versus-time data, investigators use older models such as the MultiSizer II sampling stand. A significant drawback to this and other older models is that they rely on mercury to maintain a constant pressure and to connect electrodes. The presence of mercury is a human health hazard that is exacerbated by the sometimes irregular vacuum pressures that cause mercury spills inside the machine. To eliminate this hazard, we have determined that the MultiSizer II model can be simply and easily modified to function and collect temporal volume data without the use of mercury.

20.
Hsu L, Chen L, Gorfine M, Malone K. Biometrics 2004, 60(4):936–944
Estimating the marginal hazard function from correlated failure time data arising in case-control family studies is complicated by the noncohort study design and by risk heterogeneity due to unmeasured risk factors shared among family members. Accounting for both factors, we propose a two-stage estimation procedure in this article. At the first stage, we estimate the dependence parameter in the distribution of the risk heterogeneity without first (or simultaneously) obtaining the marginal distribution. At the second stage, taking the dependence parameter as known, we estimate the marginal hazard function by iterating between estimation of the risk heterogeneity (frailty) for each family and maximization of the partial likelihood function with an offset to account for the risk heterogeneity. We also propose an iterative procedure to improve the efficiency of the dependence parameter estimate. A simulation study shows that both methods perform well in finite samples. We illustrate the method with a case-control family study of early-onset breast cancer.
