Similar Documents
20 similar documents found (search time: 15 ms)
1.
There is considerable debate regarding the choice of test for treatment difference in a randomized clinical trial in the presence of competing risks. This question arose in the Standard and New Antiepileptic Drugs (SANAD) trial comparing new and standard antiepileptic drugs. This paper provides simulation results for the log-rank test comparing cause-specific hazard rates and Gray's test comparing cause-specific cumulative incidence curves. To inform the analysis of the SANAD trial, competing-risks settings were considered where both events are of interest, events may be negatively correlated, and the degree of correlation may differ in the two treatment groups. In settings where there are effects in opposite directions for the two event types, a likely situation for the SANAD trial, Gray's test has greater power to detect treatment differences than the log-rank test. For the epilepsy application, conclusions were qualitatively similar for both the log-rank and Gray's tests.

2.
Semiparametric models for cumulative incidence functions
Bryant J  Dignam JJ 《Biometrics》2004,60(1):182-190
In analyses of time-to-failure data with competing risks, cumulative incidence functions may be used to estimate the time-dependent cumulative probability of failure due to specific causes. These functions are commonly estimated using nonparametric methods, but in cases where events due to the cause of primary interest are infrequent relative to other modes of failure, nonparametric methods may result in rather imprecise estimates for the corresponding subdistribution. In such cases, it may be possible to model the cause-specific hazard of primary interest parametrically, while accounting for the other modes of failure using nonparametric estimators. The cumulative incidence estimators so obtained are simple to compute and are considerably more efficient than the usual nonparametric estimator, particularly with regard to interpolation of cumulative incidence at early or intermediate time points within the range of data used to fit the function. More surprisingly, they are often nearly as efficient as fully parametric estimators. We illustrate the utility of this approach in the analysis of patients treated for early stage breast cancer.
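As a rough illustration of the nonparametric estimator this abstract compares against, here is a minimal Aalen-Johansen-type cumulative incidence sketch in Python. The function name, data layout, and cause coding (0 for censored) are assumptions for this example, not the authors' code:

```python
def cumulative_incidence(times, causes, cause=1):
    """Nonparametric (Aalen-Johansen) cumulative incidence for one cause.

    times  : event or censoring times
    causes : 0 = censored, otherwise the failure cause code
    Returns a list of (time, CIF) step points for the requested cause.
    """
    data = sorted(zip(times, causes))
    n = len(data)
    surv = 1.0          # all-causes Kaplan-Meier survival just before t
    cif = 0.0
    points = []
    i = 0
    while i < n:
        t = data[i][0]
        at_risk = n - i               # subjects with time >= t
        d_cause = d_all = 0
        while i < n and data[i][0] == t:   # handle ties at time t
            if data[i][1] != 0:
                d_all += 1
                if data[i][1] == cause:
                    d_cause += 1
            i += 1
        if d_cause > 0:
            cif += surv * d_cause / at_risk   # increment uses S(t-)
            points.append((t, cif))
        surv *= 1.0 - d_all / at_risk
    return points
```

With no competing events and no censoring, the step function reduces to one minus the Kaplan-Meier curve, which is the sanity check usually recommended for such code.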

3.
Nonparametric quantile inference for competing risks has recently been studied by Peng & Fine (2007). Their key result establishes uniform consistency and weak convergence of the inverse of the Aalen–Johansen estimator of the cumulative incidence function, using the representation of the cumulative incidence estimator as a sum of independent and identically distributed random variables. The limit process is of a form similar to that of the standard survival result, but with the cause-specific hazard of interest replacing the all-causes hazard. We show that this fact is not a coincidence, but can be derived from a general Hadamard differentiation result. We discuss a simplified proof and extensions of the approach to more complex multistate models. As a further consequence, we find that the bootstrap works.

4.
Analysis of cumulative incidence (sometimes called absolute risk or crude risk) can be difficult if the cause of failure is missing for some subjects. Assuming missingness is random conditional on the observed data, we develop asymptotic theory for multiple imputation methods to estimate cumulative incidence. Covariates affect cause-specific hazards in our model, and we assume that separate proportional hazards models hold for each cause-specific hazard. Simulation studies show that procedures based on asymptotic theory have near nominal operating characteristics in cohorts of 200 and 400 subjects, both for cumulative incidence and for prediction error. The methods are illustrated with data on survival after breast cancer, obtained from the National Surgical Adjuvant Breast and Bowel Project (NSABP).

5.
Shen Y  Cheng SC 《Biometrics》1999,55(4):1093-1100
In the context of competing risks, the cumulative incidence function is often used to summarize the cause-specific failure-time data. As an alternative to the proportional hazards model, the additive risk model is used to investigate covariate effects by specifying that the subject-specific hazard function is the sum of a baseline hazard function and a regression function of covariates. Based on such a formulation, we present an approach to constructing simultaneous confidence intervals for the cause-specific cumulative incidence function of patients with given risk factors. A melanoma data set is used for the purpose of illustration.

6.
The cross-odds ratio is defined as the ratio of the conditional odds of the occurrence of one cause-specific event for one subject, given the occurrence of the same or a different cause-specific event for another subject in the same cluster, over the unconditional odds of occurrence of the cause-specific event. It is a measure of the association between the correlated cause-specific failure times within a cluster. The joint cumulative incidence function can be expressed as a function of the marginal cumulative incidence functions and the cross-odds ratio. Assuming that the marginal cumulative incidence functions follow a generalized semiparametric model, this paper studies the parametric regression modeling of the cross-odds ratio. A set of estimating equations are proposed for the unknown parameters and the asymptotic properties of the estimators are explored. Non-parametric estimation of the cross-odds ratio is also discussed. The proposed procedures are applied to the Danish twin data to model the associations between twins in their times to natural menopause, and to investigate whether the association differs between monozygotic and dizygotic twins and how these associations have changed over time.
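The defining ratio can be made concrete with a toy calculation at a fixed time point (hypothetical probabilities, not the Danish twin data); under independence the joint probability factorizes and the cross-odds ratio equals 1:

```python
def cross_odds_ratio(joint, marg1, marg2):
    """Cross-odds ratio at a fixed time point t.

    joint : P(event 1 for subject A and event 2 for clustered subject B by t)
    marg1 : marginal P(event 1 by t) for subject A
    marg2 : marginal P(event 2 by t) for subject B

    Returns odds(event 1 | co-subject had event 2) / odds(event 1).
    """
    conditional = joint / marg2               # P(event 1 | event 2 for co-subject)
    odds = lambda p: p / (1.0 - p)
    return odds(conditional) / odds(marg1)
```

For example, with marginals 0.2 and 0.3, a joint probability of 0.06 (= 0.2 x 0.3) gives a cross-odds ratio of 1 (no association), while a joint probability of 0.10 gives a ratio of 2 (positive dependence within the cluster).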

7.
Klein JP  Andersen PK 《Biometrics》2005,61(1):223-229
Typically, regression models for competing risks outcomes are based on proportional hazards models for the crude hazard rates. These estimates often do not agree with impressions drawn from plots of cumulative incidence functions for each level of a risk factor. We present a technique which models the cumulative incidence functions directly. The method is based on the pseudovalues from a jackknife statistic constructed from the cumulative incidence curve. These pseudovalues are used in a generalized estimating equation to obtain estimates of model parameters. We study the properties of this estimator and apply the technique to a study of the effect of alternative donors on relapse for patients given a bone marrow transplant for leukemia.
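The jackknife pseudo-value construction can be sketched as follows. The helper names are invented for illustration, and the cumulative incidence evaluation is a plain re-implementation, not the authors' code; the pseudo-value for subject i at time t is n*CIF(t) minus (n-1) times the leave-one-out estimate:

```python
def cif_at(times, causes, t, cause=1):
    """Aalen-Johansen cumulative incidence for `cause`, evaluated at time t.
    causes: 0 = censored, otherwise the failure cause code."""
    data = sorted(zip(times, causes))
    n = len(data)
    surv, cif, i = 1.0, 0.0, 0
    while i < n and data[i][0] <= t:
        u = data[i][0]
        at_risk = n - i
        d_all = d_c = 0
        while i < n and data[i][0] == u:      # ties at time u
            if data[i][1] != 0:
                d_all += 1
                if data[i][1] == cause:
                    d_c += 1
            i += 1
        cif += surv * d_c / at_risk
        surv *= 1.0 - d_all / at_risk
    return cif

def pseudo_values(times, causes, t, cause=1):
    """Leave-one-out (jackknife) pseudo-observations of the CIF at time t."""
    n = len(times)
    full = cif_at(times, causes, t, cause)
    pv = []
    for i in range(n):
        loo = cif_at(times[:i] + times[i + 1:], causes[:i] + causes[i + 1:], t, cause)
        pv.append(n * full - (n - 1) * loo)
    return pv
```

A useful check: with no censoring, the pseudo-value at t reduces to the indicator that the subject failed from the cause of interest by t, and the pseudo-values average back to the full-sample estimate.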

8.
Many research questions involve time-to-event outcomes that can be prevented from occurring due to competing events. In these settings, we must be careful about the causal interpretation of classical statistical estimands. In particular, estimands on the hazard scale, such as ratios of cause-specific or subdistribution hazards, are fundamentally hard to interpret causally. Estimands on the risk scale, such as contrasts of cumulative incidence functions, do have a clear causal interpretation, but they only capture the total effect of the treatment on the event of interest; that is, effects both through and outside of the competing event. To disentangle causal treatment effects on the event of interest and competing events, the separable direct and indirect effects were recently introduced. Here we provide new results on the estimation of direct and indirect separable effects in continuous time. In particular, we derive the nonparametric influence function in continuous time and use it to construct an estimator that has certain robustness properties. We also propose a simple estimator based on semiparametric models for the two cause-specific hazard functions. We describe the asymptotic properties of these estimators and present results from simulation studies, suggesting that the estimators behave satisfactorily in finite samples. Finally, we reanalyze the prostate cancer trial from Stensrud et al. (2020).  相似文献   

9.
The analysis of failure times in the presence of competing risks.
Distinct problems in the analysis of failure times with competing causes of failure include the estimation of treatment or exposure effects on specific failure types, the study of interrelations among failure types, and the estimation of failure rates for some causes given the removal of certain other failure types. The usual formulation of these problems is in terms of conceptual or latent failure times for each failure type. This approach is criticized on the basis of unwarranted assumptions, lack of physical interpretation and identifiability problems. An alternative approach utilizing cause-specific hazard functions for observable quantities, including time-dependent covariates, is proposed. Cause-specific hazard functions are shown to be the basic estimable quantities in the competing risks framework. A method, involving the estimation of parameters that relate time-dependent risk indicators for some causes to cause-specific hazard functions for other causes, is proposed for the study of interrelations among failure types. Further, it is argued that the problem of estimation of failure rates under the removal of certain causes is not well posed until a mechanism for cause removal is specified. Following such a specification, one will sometimes be in a position to make sensible extrapolations from available data to situations involving cause removal. A clinical program in bone marrow transplantation for leukemia provides a setting for discussion and illustration of each of these ideas. Failure due to censoring in a survivorship study leads to further discussion.
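Since cause-specific hazards are argued here to be the basic estimable quantities, a minimal Nelson-Aalen-type sketch of a cause-specific cumulative hazard may help fix ideas (function name and data layout are assumptions for the example): at each event time the increment is the number of failures from the cause of interest divided by the number at risk.

```python
def cause_specific_cum_hazard(times, causes, cause, t):
    """Nelson-Aalen estimate of the cause-specific cumulative hazard at t:
    sum over event times u <= t of
    (failures from `cause` at u) / (number at risk just before u).
    causes: 0 = censored, otherwise the failure cause code."""
    data = sorted(zip(times, causes))
    n = len(data)
    total, i = 0.0, 0
    while i < n and data[i][0] <= t:
        u = data[i][0]
        at_risk = n - i               # subjects with time >= u
        d = 0
        while i < n and data[i][0] == u:   # ties at time u
            if data[i][1] == cause:
                d += 1
            i += 1
        total += d / at_risk
    return total
```

Note that the cause-specific cumulative hazards for all causes sum to the all-causes Nelson-Aalen estimate, mirroring the decomposition of the overall hazard.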

10.
A cause-specific cumulative incidence function (CIF) is the probability of failure from a specific cause as a function of time. In randomized trials, a difference of cause-specific CIFs (treatment minus control) represents a treatment effect. The cause-specific CIF in each intervention arm can be estimated with the usual non-parametric Aalen–Johansen estimator, which generalizes the Kaplan–Meier estimator of the CIF in the presence of competing risks. Under random censoring, asymptotically valid Wald-type confidence intervals (CIs) for a difference of cause-specific CIFs at a specific time point can be constructed using one of the published variance estimators. Unfortunately, these intervals can suffer from substantial under-coverage when the outcome of interest is a rare event, as may be the case, for example, in the analysis of uncommon adverse events. We propose two new approximate interval estimators for a difference of cause-specific CIFs estimated in the presence of competing risks and random censoring. Theoretical analysis and simulations indicate that the new interval estimators are superior to the Wald CIs in the sense of avoiding substantial under-coverage with rare events, while being equivalent to the Wald CIs asymptotically. In the absence of censoring, one of the two proposed interval estimators reduces to the well-known Agresti–Caffo CI for a difference of two binomial parameters. The new methods can be easily implemented with any software package producing point and variance estimates for the Aalen–Johansen estimator, as illustrated in a real data example.
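The Agresti-Caffo interval that one of the proposed estimators reduces to (in the uncensored case) is itself easy to sketch: add one success and one failure to each arm, then apply the Wald formula to the adjusted proportions. This is the textbook add-4 interval, not the paper's censored-data estimator; z = 1.96 gives a nominal 95% interval.

```python
import math

def agresti_caffo_ci(x1, n1, x2, n2, z=1.96):
    """Agresti-Caffo approximate CI for p1 - p2.

    x1, x2 : successes in each arm;  n1, n2 : sample sizes.
    Adds one success and one failure to each arm before the Wald formula,
    which avoids degenerate zero-width intervals when x is 0 or n.
    """
    p1 = (x1 + 1) / (n1 + 2)
    p2 = (x2 + 1) / (n2 + 2)
    se = math.sqrt(p1 * (1 - p1) / (n1 + 2) + p2 * (1 - p2) / (n2 + 2))
    d = p1 - p2
    return d - z * se, d + z * se
```

With zero events in both arms (the rare-event situation the abstract worries about), the plain Wald interval collapses to a single point, while the Agresti-Caffo interval remains sensibly wide.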

11.
The Fine–Gray proportional subdistribution hazards model has been puzzling many people since its introduction. The main reason for the uneasy feeling is that the approach considers individuals still at risk for an event of cause 1 after they fell victim to the competing risk of cause 2. The subdistribution hazard and the extended risk sets, where subjects who failed of the competing risk remain in the risk set, are generally perceived as unnatural. One could say it is somewhat of a riddle why the Fine–Gray approach yields valid inference. To take away these uneasy feelings, we explore the link between the Fine–Gray and cause-specific approaches in more detail. We introduce the reduction factor as representing the proportion of subjects in the Fine–Gray risk set that has not yet experienced a competing event. In the presence of covariates, the dependence of the reduction factor on a covariate gives information on how the effect of the covariate on the cause-specific hazard and the subdistribution hazard relate. We discuss estimation and modeling of the reduction factor, and show how they can be used in various ways to estimate cumulative incidences, given the covariates. Methods are illustrated on data of the European Society for Blood and Marrow Transplantation.
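A toy version of the reduction factor at a single time point, assuming complete (uncensored) data with two causes, may clarify the extended risk set; the function name and data layout are made up for illustration:

```python
def reduction_factor(times, causes, t):
    """Proportion of the Fine-Gray risk set at time t that has not yet
    experienced the competing event (cause 2). Assumes no censoring;
    causes: 1 = event of interest, 2 = competing event.
    """
    # Fine-Gray risk set at t: everyone except prior cause-1 failures;
    # subjects who already failed from cause 2 are deliberately kept in.
    fg = [(u, c) for u, c in zip(times, causes) if not (c == 1 and u < t)]
    # The "natural" (cause-specific) risk set additionally drops
    # subjects who already failed from the competing cause.
    natural = [(u, c) for u, c in fg if not (c == 2 and u < t)]
    return len(natural) / len(fg)
```

At t = 0 the factor is 1 (no one has failed yet); it decreases over time as cause-2 failures accumulate in the Fine-Gray risk set, which is exactly what separates the subdistribution hazard from the cause-specific hazard.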

12.
We propose parametric regression analysis of the cumulative incidence function with competing risks data. A simple form of Gompertz distribution is used for the improper baseline subdistribution of the event of interest. Maximum likelihood inferences on regression parameters and the associated cumulative incidence function are developed for parametric models, including a flexible generalized odds rate model. Estimation of the long-term proportion of patients with cause-specific events is straightforward in the parametric setting. Simple goodness-of-fit tests are discussed for evaluating a fixed odds rate assumption. The parametric regression methods are compared with an existing semiparametric regression analysis on a breast cancer data set where the cumulative incidence of recurrence is of interest. The results demonstrate that the likelihood-based parametric analyses for the cumulative incidence function are a practically useful alternative to the semiparametric analyses.
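An improper Gompertz subdistribution of the kind described can be sketched with a two-parameter form (the parameterization below is an assumption for illustration): with a negative rate parameter the curve plateaus below 1, and the plateau is the long-term proportion experiencing the event of interest.

```python
import math

def gompertz_cif(t, kappa, rho):
    """Gompertz-form subdistribution:
    F(t) = 1 - exp(kappa * (1 - e^{rho t}) / rho).
    With rho < 0 this is improper: F(inf) = 1 - exp(kappa / rho) < 1."""
    return 1.0 - math.exp(kappa * (1.0 - math.exp(rho * t)) / rho)

def long_term_proportion(kappa, rho):
    """Plateau of the improper subdistribution (requires rho < 0)."""
    assert rho < 0, "a plateau below 1 requires rho < 0"
    return 1.0 - math.exp(kappa / rho)
```

For example, kappa = 0.2 and rho = -0.5 give a long-term event proportion of 1 - exp(-0.4), about 33%, and the fitted curve approaches that ceiling monotonically; this is what makes estimation of the long-term proportion "straightforward" in the parametric setting.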

13.
In many clinical studies that involve follow-up, it is common to observe one or more sequences of longitudinal measurements, as well as one or more time-to-event outcomes. A competing risks situation arises when the probability of occurrence of one event is altered or hindered by another time-to-event outcome. Recently, much attention has been paid to the joint analysis of a single longitudinal response and a single time-to-event outcome when the missing-data mechanism in the longitudinal process is non-ignorable. In this paper, we propose an extension in which multiple longitudinal responses are jointly modeled with competing risks (multiple times to events). Our shared-parameter joint model consists of a system of multiphase non-linear mixed-effects sub-models for the multiple longitudinal responses, and a system of cause-specific non-proportional hazards frailty sub-models for the competing risks, with associations among the multiple longitudinal responses and competing risks modeled using latent parameters. The joint model is applied, using readily available software, to a data set of patients who are on mechanical circulatory support and awaiting heart transplant. While on mechanical circulatory support, a patient's liver and renal functions may worsen, and these in turn may influence one of the two possible competing outcomes: (i) death before transplant; (ii) transplant. In one application, we propose a system of multiphase cause-specific non-proportional hazards sub-models in which the frailty can be time-varying. Performance under different scenarios was assessed using simulation studies. By using the proposed joint modeling of the multiphase sub-models, one can: (i) identify non-linear trends in the multiple longitudinal outcomes; (ii) estimate time-varying hazards and cumulative incidence functions of the competing risks; (iii) identify risk factors for both types of outcomes, where the effect may or may not change with time; and (iv) assess the association between the multiple longitudinal and competing risks outcomes, where the association may or may not change with time.

14.
Separate Cox analyses of all cause-specific hazards are the standard technique of choice to study the effect of a covariate in competing risks, but a synopsis of these results in terms of cumulative event probabilities is challenging. This difficulty has led to the development of the proportional subdistribution hazards model. If the covariate is known at baseline, the model allows for a summarizing assessment in terms of the cumulative incidence function. Mathematically, the model also allows for including random time-dependent covariates, but practical implementation has remained unclear due to a certain risk set peculiarity. We use the intimate relationship of discrete covariates and multistate models to naturally treat time-dependent covariates within the subdistribution hazards framework. The methodology then straightforwardly translates to real-valued time-dependent covariates. As with classical survival analysis, including time-dependent covariates does not result in a model for probability functions anymore. Nevertheless, the proposed methodology provides a useful synthesis of separate cause-specific hazards analyses. We illustrate this with hospital infection data, where time-dependent covariates and competing risks are essential to the subject research question.

15.
Numerous statistical methods have been developed for analyzing high-dimensional data. These methods often focus on variable selection, are of limited use for testing with high-dimensional data, and typically require explicit likelihood functions. In this article, we propose a "hybrid omnibus test" for testing with high-dimensional data under much weaker requirements. Our hybrid omnibus test is developed under a semiparametric framework in which a likelihood function is no longer necessary. The test is a frequentist-Bayesian hybrid score-type test for a generalized partially linear single-index model, whose link function is a function of a set of variables through a generalized partially linear single index. We propose an efficient score based on estimating equations, define local tests, and then construct our hybrid omnibus test from the local tests. We compare our approach with an empirical-likelihood ratio test and with Bayesian inference based on Bayes factors, using simulation studies. Our simulation results suggest that our approach outperforms the others in terms of type I error, power, and computational cost in both the low- and high-dimensional cases. The advantage of our approach is demonstrated by applying it to genetic pathway data for type II diabetes mellitus.

16.
Many time-to-event studies are complicated by the presence of competing risks and by nesting of individuals within a cluster, such as patients in the same center in a multicenter study. Several methods have been proposed for modeling the cumulative incidence function with independent observations. However, when subjects are clustered, one needs to account for the presence of a cluster effect either through frailty modeling of the hazard or subdistribution hazard, or by adjusting for the within-cluster correlation in a marginal model. We propose a method for modeling the marginal cumulative incidence function directly. We compute leave-one-out pseudo-observations from the cumulative incidence function at several time points. These are used in a generalized estimating equation to model the marginal cumulative incidence curve, and obtain consistent estimates of the model parameters. A sandwich variance estimator is derived to adjust for the within-cluster correlation. The method is easy to implement using standard software once the pseudovalues are obtained, and is a generalization of several existing models. Simulation studies show that the method works well to adjust the SE for the within-cluster correlation. We illustrate the method on a dataset looking at outcomes after bone marrow transplantation.

17.
Outcome misclassification occurs frequently in binary-outcome studies and can result in biased estimation of quantities such as the incidence, prevalence, cause-specific hazards, cumulative incidence functions, and so forth. A number of remedies have been proposed to address the potential misclassification of the outcomes in such data. The majority of these remedies rely on estimating the misclassification probabilities, which are in turn used to adjust analyses for outcome misclassification. A number of authors advocate using a gold-standard procedure on a sample internal to the study to learn about the extent of the misclassification. With this type of internal validation, quantifying the misclassification also becomes a missing-data problem because, by design, the true outcomes are ascertained only on a subset of the entire study sample. Although estimating misclassification probabilities appears conceptually simple, the estimation methods proposed so far have several methodological and practical shortcomings. Most require the missing outcome data to be missing completely at random (MCAR), a rather stringent assumption that is unlikely to hold in practice. Some of the existing methods also tend to be computationally intensive. To address these issues, we propose a computationally efficient, easy-to-implement, pseudo-likelihood estimator of the misclassification probabilities under a missing-at-random (MAR) assumption, for studies with an internal validation sample. We present the estimator through the lens of studies with competing-risks outcomes, though it extends beyond this setting. We describe the consistency and asymptotic distributional properties of the resulting estimator, and derive a closed-form estimator of its variance. The finite-sample performance of this estimator is evaluated via simulations. Using data from a real-world study with competing-risks outcomes, we illustrate how the proposed method can be used to estimate misclassification probabilities, and show how the estimated misclassification probabilities can be used in an external study to adjust for possible misclassification bias when modeling cumulative incidence functions.
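The basic mechanics of using estimated misclassification probabilities to correct an analysis can be shown in the simplest binary-outcome analogue. This is the textbook sensitivity/specificity correction, not the paper's pseudo-likelihood estimator: with sensitivity se and specificity sp, the observed proportion satisfies p_obs = se*p + (1 - sp)*(1 - p), which inverts to a corrected estimate.

```python
def corrected_prevalence(p_obs, sensitivity, specificity):
    """Classical misclassification correction for a binary outcome.

    Solves p_obs = se * p + (1 - sp) * (1 - p) for the true proportion p:
        p = (p_obs + sp - 1) / (se + sp - 1)
    Valid when se + sp > 1 (the classifier is better than chance)."""
    return (p_obs + specificity - 1.0) / (sensitivity + specificity - 1.0)
```

For instance, with sensitivity 0.90 and specificity 0.95, an observed proportion of 0.22 corrects back to a true proportion of 0.20; in the competing-risks setting the same inversion idea is applied cause by cause, with the misclassification probabilities estimated from the internal validation sample.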

18.
In studies involving diseases associated with high rates of mortality, trials are frequently conducted to evaluate the effects of therapeutic interventions on recurrent event processes terminated by death. In this setting, cumulative mean functions form a natural basis for inference for questions of a health economic nature, and Ghosh and Lin (2000) recently proposed a relevant class of test statistics. Trials of patients with cancer metastatic to bone, however, involve multiple types of skeletal complications, each of which may be repeatedly experienced by patients over their lifetime. Traditionally the distinction between the various types of events is ignored and univariate analyses are conducted based on a composite recurrent event. However, when the events have different impacts on patients' quality of life, or when they incur different costs, it can be important to gain insight into the relative frequency of the specific types of events and treatment effects thereon. This may be achieved by conducting separate marginal analyses, with each analysis focusing on one type of recurrent event. Global inferences regarding treatment benefit can then be achieved by carrying out multiplicity-adjusted marginal tests, more formal multiple testing procedures, or by constructing global test statistics. We describe methods for testing for differences in mean functions between treatment groups which accommodate the fact that each particular event process is ultimately terminated by death. The methods are illustrated by application to a motivating study designed to examine the effect of bisphosphonate therapy on the incidence of skeletal complications among patients with breast cancer metastatic to bone. We find that there is a consistent trend towards a reduction in the cumulative mean for all four types of skeletal complications with bisphosphonate therapy; there is a significant reduction in the need for radiation therapy for the treatment of bone. The global test suggests that bisphosphonate therapy significantly reduces the overall number of skeletal complications.

19.
A continuous time discrete state cumulative damage process {X(t), t ≥ 0} is considered, based on a non-homogeneous Poisson hit-count process and a discrete distribution of damage per hit, which can be negative binomial, Neyman type A, Polya-Aeppli or Lagrangian Poisson. Intensity functions considered for the Poisson process comprise a flexible three-parameter family. The survival function is S(t) = P(X(t) ≤ L), where L is fixed. Individual variation is accounted for within the construction for the initial damage distribution {P(X(0) = x) | x = 0, 1, …}. This distribution has an essential cut-off before x = L, and the distribution of L − X(0) may be considered a tolerance distribution. A multivariate extension appropriate for the randomized complete block design is developed by constructing dependence in the initial damage distributions. Our multivariate model is applied (via maximum likelihood) to litter-matched tumorigenesis data for rats. The litter effect accounts for 5.9 percent of the variance of the individual effect. Cumulative damage hazard functions are compared to nonparametric hazard functions and to hazard functions obtained from the PVF-Weibull frailty model. The cumulative damage model has greater dimensionality for interpretation compared to other models, owing principally to the intensity function part of the model.
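For intuition, the survival function S(t) = P(X(t) ≤ L) in the simplest special case (one unit of damage per hit, no initial damage, so X(t) is Poisson with mean equal to the integrated hit intensity) is just a Poisson lower tail. This is a deliberate simplification of the model described above, and all names are illustrative:

```python
import math

def survival_poisson_damage(cum_intensity, L):
    """S(t) = P(X(t) <= L) when each hit causes exactly one unit of damage,
    so X(t) ~ Poisson(cum_intensity), where cum_intensity is the
    integrated hit intensity on [0, t]. L is the fixed damage threshold."""
    return sum(
        math.exp(-cum_intensity) * cum_intensity ** x / math.factorial(x)
        for x in range(L + 1)
    )
```

The per-hit damage distributions actually used in the paper (negative binomial, Neyman type A, Polya-Aeppli, Lagrangian Poisson) replace the unit jump here with a random jump size, turning X(t) into a compound process, but the threshold-crossing structure of S(t) is the same.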

20.
Background: With linked register and cause of death data becoming more accessible than ever, competing risks methodology is being increasingly used as a way of obtaining "real world" probabilities of death broken down by specific causes. It is important, in terms of the validity of these studies, to have accurate cause of death information. However, it is well documented that cause of death information taken from death certificates is often lacking in accuracy and completeness. Methods: We assess, through a simulation study, the effect of under- and over-recording of cancer on death certificates in a competing risks analysis with three competing causes of death: cancer, heart disease and other causes. Using realistic levels of misclassification, we consider 24 scenarios and examine the bias in the cause-specific hazard ratios and the cumulative incidence function. Results: The bias in the cumulative incidence function was highest in the oldest age group, reaching values as high as 2.6 percentage units for the "good" cancer prognosis scenario and 9.7 percentage units for the "poor" prognosis scenario. Conclusion: The bias resulting from the chosen levels of misclassification in this study accentuates concerns that unreliable cause of death information may be providing misleading results. The results of this simulation study convey an important message to applied epidemiological researchers.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号