Similar literature
20 similar documents retrieved.
1.
The standard estimator for the cause-specific cumulative incidence function in a competing risks setting with left truncated and/or right censored data can be written in two alternative forms. One is a weighted empirical cumulative distribution function and the other a product-limit estimator. This equivalence suggests an alternative view of the analysis of time-to-event data with left truncation and right censoring: individuals who are still at risk or experienced an earlier competing event receive weights from the censoring and truncation mechanisms. As a consequence, inference on the cumulative scale can be performed using weighted versions of standard procedures. This holds for estimation of the cause-specific cumulative incidence function as well as for estimation of the regression parameters in the Fine and Gray proportional subdistribution hazards model. We show that, with the appropriate filtration, a martingale property holds that allows deriving asymptotic results for the proportional subdistribution hazards model in the same way as for the standard Cox proportional hazards model. Estimation of the cause-specific cumulative incidence function and regression on the subdistribution hazard can be performed using standard software for survival analysis if the software allows for inclusion of time-dependent weights. We show the implementation in the R statistical package. The proportional subdistribution hazards model is used to investigate the effect of calendar period as a deterministic external time-varying covariate, which can be seen as a special case of left truncation, on AIDS-related and non-AIDS-related cumulative mortality.
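The weighted-estimation view described above corresponds, loosely, to the finegray()/coxph() workflow in the R survival package: finegray() expands the data and attaches the time-dependent censoring/truncation weights, and a weighted Cox fit on the expanded data gives the proportional subdistribution hazards estimates. A minimal sketch on simulated toy data follows; it is illustrative only, not the authors' implementation, and all variable names and the simulation itself are made up.

    library(survival)

    # toy competing-risks data: status 0 = censored, 1 = AIDS-related death, 2 = other death
    set.seed(1)
    n    <- 300
    age  <- rnorm(n, 40, 8)
    t1   <- rexp(n, 0.05 * exp(0.03 * (age - 40)))   # latent time to cause 1
    t2   <- rexp(n, 0.03)                            # latent time to cause 2
    cens <- runif(n, 0, 30)
    time   <- pmin(t1, t2, cens)
    status <- ifelse(cens <= pmin(t1, t2), 0, ifelse(t1 <= t2, 1, 2))
    d <- data.frame(time, status = factor(status, 0:2, c("censored", "aids", "other")), age)

    # expand the data and attach time-dependent weights for the cause of interest
    fg <- finegray(Surv(time, status) ~ ., data = d, etype = "aids")

    # weighted Cox fit on the expanded data = proportional subdistribution hazards model
    fit <- coxph(Surv(fgstart, fgstop, fgstatus) ~ age, weights = fgwt, data = fg)
    summary(fit)

    # nonparametric cause-specific cumulative incidence (Aalen-Johansen estimator)
    cif <- survfit(Surv(time, status) ~ 1, data = d)
    plot(cif, col = 2:3, xlab = "Time", ylab = "Cumulative incidence")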

2.
Cong XJ, Yin G, Shen Y. Biometrics 2007, 63(3): 663-672
We consider modeling correlated survival data when cluster sizes may be informative about the outcome of interest, based on a within-cluster resampling (WCR) approach and a weighted score function (WSF) method. We derive the large-sample properties of the WCR estimators under the Cox proportional hazards model. We establish consistency and asymptotic normality of the regression coefficient estimators, and the weak convergence property of the estimated baseline cumulative hazard function. The WSF method incorporates the inverse of the cluster sizes as weights in the score function. We conduct simulation studies to assess and compare the finite-sample behaviors of the estimators and apply the proposed methods to a dental study as an illustration.
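A hedged sketch of the within-cluster resampling recipe on simulated toy data: draw one member at random from each cluster, fit an ordinary Cox model to each resampled data set, then combine the estimates with the usual WCR variance formula (mean within-resample variance minus between-resample variance). The inverse-cluster-size weighted Cox fit at the end is only a rough analogue of the weighted score idea; neither block reproduces the authors' exact estimators.

    library(survival)

    # toy clustered data with cluster sizes 1-5 and one binary covariate
    set.seed(2)
    m   <- 100
    ni  <- sample(1:5, m, replace = TRUE)
    dat <- data.frame(cluster = rep(seq_len(m), ni))
    dat$trt    <- rbinom(nrow(dat), 1, 0.5)
    dat$time   <- rexp(nrow(dat), 0.1 * exp(0.5 * dat$trt))
    dat$status <- rbinom(nrow(dat), 1, 0.8)               # some censoring

    # within-cluster resampling: one randomly chosen subject per cluster, many times
    B <- 500
    res <- replicate(B, {
      pick <- unlist(lapply(split(seq_len(nrow(dat)), dat$cluster),
                            function(ix) ix[sample(length(ix), 1)]))
      fit <- coxph(Surv(time, status) ~ trt, data = dat[pick, ])
      c(est = coef(fit), var = vcov(fit)[1, 1])
    })
    beta_wcr <- mean(res["est.trt", ])
    var_wcr  <- mean(res["var", ]) - var(res["est.trt", ])  # WCR variance formula
    c(estimate = beta_wcr, se = sqrt(var_wcr))

    # rough weighted-score analogue: inverse cluster size as case weights, robust variance
    dat$w   <- 1 / ave(dat$time, dat$cluster, FUN = length)
    fit_wsf <- coxph(Surv(time, status) ~ trt + cluster(cluster), data = dat, weights = w)
    summary(fit_wsf)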

3.
This article develops omnibus tests for comparing cause-specific hazard rates and cumulative incidence functions at specified covariate levels. Confidence bands for the difference and the ratio of two conditional cumulative incidence functions are also constructed. The omnibus test is formulated in terms of a test process given by a weighted difference of estimates of cumulative cause-specific hazard rates under Cox proportional hazards models. A simulation procedure is devised for sampling from the null distribution of the test process, leading to graphical and numerical techniques for detecting significant differences in the risks. The approach is applied to a cohort study of type-specific HIV infection rates.

4.
Grigoletto M, Akritas MG. Biometrics 1999, 55(4): 1177-1187
We propose a method for fitting semiparametric models such as the proportional hazards (PH), additive risks (AR), and proportional odds (PO) models. Each of these semiparametric models implies that some transformation of the conditional cumulative hazard function (at each t) depends linearly on the covariates. The proposed method is based on nonparametric estimation of the conditional cumulative hazard function, forming a weighted average over a range of t-values, and subsequent use of least squares to estimate the parameters suggested by each model. An approximation to the optimal weight function is given. This allows semiparametric models to be fitted even in incomplete data cases where the partial likelihood fails (e.g., left censoring, right truncation). However, the main advantage of this method rests in the fact that neither the interpretation of the parameters nor the validity of the analysis depends on the appropriateness of the PH or any of the other semiparametric models. In fact, we propose an integrated method for data analysis where the role of the various semiparametric models is to suggest the best fitting transformation. A single continuous covariate and several categorical covariates (factors) are allowed. Simulation studies indicate that the test statistics and confidence intervals have good small-sample performance. A real data set is analyzed.

5.
Godwin Yung, Yi Liu. Biometrics 2020, 76(3): 939-950
Asymptotic distributions under alternative hypotheses and their corresponding sample size and power equations are derived for nonparametric test statistics commonly used to compare two survival curves. Test statistics include the weighted log-rank test and the Wald test for difference in (or ratio of) Kaplan-Meier survival probability, percentile survival, and restricted mean survival time. Accrual, survival, and loss to follow-up are allowed to follow any arbitrary continuous distribution. We show that Schoenfeld's equation, often used by practitioners to calculate the required number of events for the unweighted log-rank test, can be inaccurate even when the proportional hazards (PH) assumption holds. In fact, it can mislead one to believe that 1:1 is the optimal randomization ratio (RR), when actually power can be gained by assigning more patients to the active arm. Meaningful improvements to Schoenfeld's equation are made. The present theory should be useful in designing clinical trials, particularly in immuno-oncology where nonproportional hazards are frequently encountered. We illustrate the application of our theory with an example exploring optimal RR under PH and a second example examining the impact of delayed treatment effect. A companion R package npsurvSS is available for download on CRAN.
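For context, the classical Schoenfeld approximation criticized above takes only a few lines. The sketch below is the standard textbook formula for the required number of events under proportional hazards, not the corrected equations derived in the paper or the npsurvSS implementation.

    # Required number of events for the unweighted log-rank test under proportional
    # hazards (Schoenfeld's approximation). hr = hazard ratio, r = randomization
    # ratio (active : control), alpha = two-sided significance level.
    schoenfeld_events <- function(hr, r = 1, alpha = 0.05, power = 0.8) {
      p <- r / (1 + r)                      # proportion allocated to the active arm
      (qnorm(1 - alpha / 2) + qnorm(power))^2 / (p * (1 - p) * log(hr)^2)
    }

    schoenfeld_events(hr = 0.7, r = 1)      # about 247 events for 80% power at two-sided 5%
    schoenfeld_events(hr = 0.7, r = 2)      # 2:1 randomization inflates the required events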

6.
For sample size calculation in clinical trials with survival endpoints, the log-rank test, which is the optimal method under the proportional hazards (PH) assumption, is predominantly used. In reality, the PH assumption may not hold. For example, in immuno-oncology trials, delayed treatment effects are often expected. A sample size calculated without considering the potential violation of the PH assumption may lead to an underpowered study. In recent years, combination tests such as the maximum weighted log-rank test have received great attention because of their robust performance in various hazards scenarios. In this paper, we propose a flexible simulation-free procedure to calculate the sample size using combination tests. The procedure extends Lakatos' Markov model and allows for complex situations encountered in a clinical trial, such as staggered entry and dropouts. We evaluate the procedure using two maximum weighted log-rank tests, one projection-type test, and three other commonly used tests under various hazards scenarios. The simulation studies show that the proposed method can achieve the target power for all compared tests in most scenarios. The combination tests exhibit robust performance under correct specification and misspecification scenarios and are highly recommended when the hazard-changing patterns are unknown beforehand. Finally, we demonstrate our method using two clinical trial examples and provide suggestions about sample size calculations under nonproportional hazards.
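As a hedged illustration of the weighting involved, the function below computes the two-sample Fleming-Harrington G(rho, gamma) weighted log-rank statistic from scratch; it shows only the kind of test statistic that such combination procedures are built from, not the paper's Markov-model sample size calculation.

    # Two-sample Fleming-Harrington G(rho, gamma) weighted log-rank test.
    # group is 0/1; status is 1 = event, 0 = censored.
    fh_logrank <- function(time, status, group, rho = 0, gamma = 1) {
      ut <- sort(unique(time[status == 1]))                  # distinct event times
      n  <- sapply(ut, function(t) sum(time >= t))           # pooled number at risk
      n1 <- sapply(ut, function(t) sum(time >= t & group == 1))
      d  <- sapply(ut, function(t) sum(time == t & status == 1))
      d1 <- sapply(ut, function(t) sum(time == t & status == 1 & group == 1))
      surv       <- cumprod(1 - d / n)                       # pooled Kaplan-Meier
      surv_minus <- c(1, head(surv, -1))                     # S(t-) just before each event time
      w <- surv_minus^rho * (1 - surv_minus)^gamma           # G(rho, gamma) weights
      u <- sum(w * (d1 - d * n1 / n))                        # weighted observed minus expected
      v <- sum(w^2 * d * (n1 / n) * (1 - n1 / n) * (n - d) / pmax(n - 1, 1))
      z <- u / sqrt(v)
      c(z = z, p = 2 * pnorm(-abs(z)))
    }

    # rho = gamma = 0 reproduces the ordinary log-rank test; rho = 0, gamma = 1
    # down-weights early event times, which helps under a delayed treatment effect.
    set.seed(4)
    g  <- rep(0:1, each = 100)
    tm <- rexp(200, rate = ifelse(g == 1, 0.07, 0.10))
    st <- rbinom(200, 1, 0.85)
    fh_logrank(tm, st, g, rho = 0, gamma = 1)

A maximum-combination ("max-combo") test takes the largest standardized statistic over several (rho, gamma) pairs and accounts for the correlation between them when computing the p-value.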

7.
We discuss causal mediation analyses for survival data and propose a new approach based on the additive hazards model. The emphasis is on a dynamic point of view, that is, understanding how the direct and indirect effects develop over time. Hence, importantly, we allow for a time-varying mediator. To define direct and indirect effects in such a longitudinal survival setting we take an interventional approach (Didelez, 2018) where treatment is separated into one aspect affecting the mediator and a different aspect affecting survival. In general, this leads to a version of the nonparametric g-formula (Robins, 1986). In the present paper, we demonstrate that combining the g-formula with the additive hazards model and a sequential linear model for the mediator process results in simple and interpretable expressions for direct and indirect effects in terms of relative survival as well as cumulative hazards. Our results generalize and formalize the method of dynamic path analysis (Fosen, Ferkingstad, Borgan, & Aalen, 2006; Strohmaier et al., 2015). An application to data from a clinical trial on blood pressure medication is given.

8.
In this article, we provide a method of estimation for the treatment effect in the adaptive design for censored survival data with or without adjusting for risk factors other than the treatment indicator. Within the semiparametric Cox proportional hazards model, we propose a bias-adjusted parameter estimator for the treatment coefficient and its asymptotic confidence interval at the end of the trial. The method for obtaining an asymptotic confidence interval and point estimator is based on a general distribution property of the final test statistic from the weighted linear rank statistics at the interims with or without considering the nuisance covariates. The computation of the estimates is straightforward. Extensive simulation studies show that the asymptotic confidence intervals have reasonable nominal probability of coverage, and the proposed point estimators are nearly unbiased with practical sample sizes.

9.
The conventional nonparametric tests in survival analysis, such as the log-rank test, assess the null hypothesis that the hazards are equal at all times. However, hazards are hard to interpret causally, and other null hypotheses are more relevant in many scenarios with survival outcomes. To allow for a wider range of null hypotheses, we present a generic approach to define test statistics. This approach utilizes the fact that a wide range of common parameters in survival analysis can be expressed as solutions of differential equations. Thereby, we can test hypotheses based on survival parameters that solve differential equations driven by cumulative hazards, and it is easy to implement the tests on a computer. We present simulations, suggesting that our tests perform well for several hypotheses in a range of scenarios. As an illustration, we apply our tests to evaluate the effect of adjuvant chemotherapies in patients with colon cancer, using data from a randomized controlled trial.

10.
Sangbum Choi, Xuelin Huang. Biometrics 2012, 68(4): 1126-1135
We propose a semiparametrically efficient estimation of a broad class of transformation regression models for nonproportional hazards data. Classical transformation models are viewed from a frailty model paradigm, and the proposed method provides a unified approach that is valid for both continuous and discrete frailty models. The proposed models are shown to be flexible enough to model long-term follow-up survival data when the treatment effect diminishes over time, a case in which the PH or proportional odds assumption is violated, or a situation in which a substantial proportion of patients remains cured after treatment. Estimation of the link parameter in the frailty distribution, considered to be unknown and possibly dependent on time-independent covariates, is automatically included in the proposed methods. The observed information matrix is computed to evaluate the variances of all the parameter estimates. Our likelihood-based approach provides a natural way to construct simple statistics for testing the PH and proportional odds assumptions for usual survival data or testing the short- and long-term effects for survival data with a cure fraction. Simulation studies demonstrate that the proposed inference procedures perform well in realistic settings. Applications to two medical studies are provided.

11.
In this article, we consider the setting where the event of interest can occur repeatedly for the same subject (i.e., a recurrent event; e.g., hospitalization) and may be stopped permanently by a terminating event (e.g., death). Among the different ways to model recurrent/terminal event data, the marginal mean (i.e., averaging over the survival distribution) is of primary interest from a public health or health economics perspective. Often, the difference between treatment-specific recurrent event means will not be constant over time, particularly when treatment-specific differences in survival exist. In such cases, it makes more sense to quantify treatment effect based on the cumulative difference in the recurrent event means, as opposed to the instantaneous difference in the rates. We propose a method that compares treatments by separately estimating the survival probabilities and recurrent event rates given survival, then integrating to get the mean number of events. The proposed method combines an additive model for the conditional recurrent event rate and a proportional hazards model for the terminating event hazard. The treatment effects on survival and on recurrent event rate among survivors are estimated in constructing our measure and explain the mechanism generating the difference under study. The example that motivates this research is the repeated occurrence of hospitalization among kidney transplant recipients, where the effect of expanded criteria donor (ECD) compared to non-ECD kidney transplantation on the mean number of hospitalizations is of interest.

12.
Analysis of cumulative incidence (sometimes called absolute risk or crude risk) can be difficult if the cause of failure is missing for some subjects. Assuming missingness is random conditional on the observed data, we develop asymptotic theory for multiple imputation methods to estimate cumulative incidence. Covariates affect cause-specific hazards in our model, and we assume that separate proportional hazards models hold for each cause-specific hazard. Simulation studies show that procedures based on asymptotic theory have near nominal operating characteristics in cohorts of 200 and 400 subjects, both for cumulative incidence and for prediction error. The methods are illustrated with data on survival after breast cancer, obtained from the National Surgical Adjuvant Breast and Bowel Project (NSABP).

13.
A comparison is made between two approaches to testing goodness of fit of Cox's regression model for survival data. The first approach is based on the inclusion of time-dependent covariates, whereas the second one is based on the autocovariance of successive contributions to the derivative of the loglikelihood. It appears that the second test is most appropriate for testing in situations where the structure of the departure from proportional hazards is not known a priori. An approximate expression for the relative efficiency of the two test procedures is presented.
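Both ideas have familiar counterparts in the R survival package. The sketch below, on the bundled lung data purely for illustration, shows the time-dependent-covariate route via a tt() interaction with log time, together with the scaled Schoenfeld residual check from cox.zph(), which is related to, but not the same as, the autocovariance-based test discussed above.

    library(survival)

    # Cox fit on the bundled lung cancer data (status: 1 = censored, 2 = dead)
    fit <- coxph(Surv(time, status) ~ age + sex, data = lung)

    # Route 1: include a time-dependent covariate, here an interaction of sex with
    # log(t); a significant tt(sex) term signals departure from proportional hazards.
    fit_tt <- coxph(Surv(time, status) ~ age + sex + tt(sex), data = lung,
                    tt = function(x, t, ...) x * log(t))
    summary(fit_tt)

    # Residual-based diagnostic: correlation of scaled Schoenfeld residuals with time
    cox.zph(fit)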

14.
We propose a constrained maximum partial likelihood estimator for dimension reduction in integrative (e.g., pan-cancer) survival analysis with high-dimensional predictors. We assume that for each population in the study, the hazard function follows a distinct Cox proportional hazards model. To borrow information across populations, we assume that each of the hazard functions depends only on a small number of linear combinations of the predictors (i.e., "factors"). We estimate these linear combinations using an algorithm based on "distance-to-set" penalties. This allows us to impose both low-rankness and sparsity on the regression coefficient matrix estimator. We derive asymptotic results that reveal that our estimator is more efficient than fitting a separate proportional hazards model for each population. Numerical experiments suggest that our method outperforms competitors under various data generating models. We use our method to perform a pan-cancer survival analysis relating protein expression to survival across 18 distinct cancer types. Our approach identifies six linear combinations, depending on only 20 proteins, which explain survival across the cancer types. Finally, to validate our fitted model, we show that our estimated factors can lead to better prediction than competitors on four external datasets.

15.
Feng Wentao, Wahed Abdus S. Biometrika 2008, 95(3): 695-707
In two-stage adaptive treatment strategies, patients receive an induction treatment followed by a maintenance therapy, given that the patient responded to the induction treatment they received. To test for a difference in the effects of different induction and maintenance treatment combinations, a modified supremum weighted log-rank test is proposed. The test is applied to a dataset from a two-stage randomized trial and the results are compared to those obtained using a standard weighted log-rank test. A sample-size formula is proposed based on the limiting distribution of the supremum weighted log-rank statistic. The sample-size formula reduces to Eng and Kosorok's sample-size formula for a two-sample supremum log-rank test when there is no second randomization. Monte Carlo studies show that the proposed test provides sample sizes that are close to those obtained by the standard weighted log-rank test under a proportional hazards alternative. However, the proposed test is more powerful than the standard weighted log-rank test under non-proportional hazards alternatives.

16.
Klein JP, Andersen PK. Biometrics 2005, 61(1): 223-229
Typically, regression models for competing risks outcomes are based on proportional hazards models for the crude hazard rates. These estimates often do not agree with impressions drawn from plots of cumulative incidence functions for each level of a risk factor. We present a technique which models the cumulative incidence functions directly. The method is based on the pseudovalues from a jackknife statistic constructed from the cumulative incidence curve. These pseudovalues are used in a generalized estimating equation to obtain estimates of model parameters. We study the properties of this estimator and apply the technique to a study of the effect of alternative donors on relapse for patients given a bone marrow transplant for leukemia.
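A compact hedged sketch of the pseudovalue idea at a single time point: compute the cumulative incidence of the cause of interest, form leave-one-out jackknife pseudovalues, and regress them on a covariate with a generalized estimating equation. Klein and Andersen work with several time points and a complementary log-log link; the single time point, identity link, simulated data, and the assumed geepack package below are simplifications for illustration only.

    # Cumulative incidence of cause 1 at t0 (status: 0 = censored, 1 = cause of
    # interest, 2 = competing cause), from the Aalen-Johansen formula.
    cuminc1 <- function(time, status, t0) {
      ut <- sort(unique(time[status > 0 & time <= t0]))
      if (length(ut) == 0) return(0)
      n  <- sapply(ut, function(t) sum(time >= t))
      d  <- sapply(ut, function(t) sum(time == t & status > 0))
      d1 <- sapply(ut, function(t) sum(time == t & status == 1))
      surv_minus <- c(1, head(cumprod(1 - d / n), -1))   # overall KM just before each event time
      sum(surv_minus * d1 / n)
    }

    # leave-one-out jackknife pseudovalues at t0
    pseudo_ci1 <- function(time, status, t0) {
      n <- length(time)
      full <- cuminc1(time, status, t0)
      sapply(seq_len(n), function(i)
        n * full - (n - 1) * cuminc1(time[-i], status[-i], t0))
    }

    # toy data and a GEE regression of the pseudovalues on a binary donor type
    set.seed(3)
    n <- 200
    donor  <- rbinom(n, 1, 0.5)
    time   <- rexp(n, 0.08 * exp(0.4 * donor))
    status <- sample(0:2, n, replace = TRUE, prob = c(0.2, 0.5, 0.3))
    dat <- data.frame(id = seq_len(n), donor, pv = pseudo_ci1(time, status, t0 = 12))
    fit <- geepack::geeglm(pv ~ donor, id = id, data = dat, family = gaussian)
    summary(fit)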

17.
M S Pepe, T R Fleming. Biometrics 1989, 45(2): 497-507
A class of statistics based on the integrated weighted difference in Kaplan-Meier estimators is introduced for the two-sample censored data problem. With positive weight functions these statistics are intuitive for and sensitive against the alternative of stochastic ordering. The standard weighted log-rank statistics are not always sensitive against this alternative, particularly if the hazard functions cross. Qualitative comparisons are made between the weighted log-rank statistics and these weighted Kaplan-Meier (WKM) statistics. A statement of null asymptotic distribution theory is given and the choice of weight function is discussed in some detail. Results from small-sample simulation studies indicate that these statistics compare favorably with the log-rank procedure even under the proportional hazards alternative, and may perform better than it under the crossing hazards alternative.
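A hedged sketch of the simplest member of this class: with weight w(t) = 1, the integrated difference of the two Kaplan-Meier curves over [0, tau] equals the difference in restricted mean survival time and can be computed directly from the two step functions. The paper's statistics add a data-dependent weight function, a sqrt(n1*n2/n) scaling, and a variance estimator, none of which are shown here.

    library(survival)

    # Integrated difference of the two Kaplan-Meier curves over [0, tau], i.e. the
    # unweighted (w(t) = 1) special case of the weighted Kaplan-Meier statistic.
    wkm_unweighted <- function(time, status, group, tau) {
      km_step <- function(tt, ss) {
        fit <- survfit(Surv(tt, ss) ~ 1)
        stepfun(fit$time, c(1, fit$surv))        # right-continuous KM step function
      }
      s1 <- km_step(time[group == 1], status[group == 1])
      s0 <- km_step(time[group == 0], status[group == 0])
      grid  <- sort(unique(c(0, time[time <= tau], tau)))
      lefts <- head(grid, -1)                    # KM curves are constant on [grid[j], grid[j+1])
      sum(diff(grid) * (s1(lefts) - s0(lefts)))  # area between the two curves
    }

    set.seed(5)
    grp <- rep(0:1, each = 80)
    tm  <- rexp(160, ifelse(grp == 1, 0.06, 0.09))
    st  <- rbinom(160, 1, 0.9)
    wkm_unweighted(tm, st, grp, tau = 12)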

18.
Cook AJ, Gold DR, Li Y. Biometrics 2007, 63(2): 540-549
While numerous methods have been proposed for spatial cluster detection, in particular for discrete outcome data (e.g., disease incidence), few have been available for continuous data that are subject to censoring. This article provides an extension of the spatial scan statistic (Kulldorff, 1997, Communications in Statistics 26, 1481-1496) for censored outcome data and further proposes a simple spatial cluster detection method by utilizing cumulative martingale residuals within the framework of the Cox proportional hazards model. Simulations have indicated good performance of the proposed methods, with the practical applicability illustrated by an ongoing epidemiology study which investigates the relationship of environmental exposures to asthma, allergic rhinitis/hay fever, and eczema.

19.
Sun L, Kim YJ, Sun J. Biometrics 2004, 60(3): 637-643
Doubly censored failure time data arise when the survival time of interest is the elapsed time between two related events and observations on occurrences of both events could be censored. Regression analysis of doubly censored data has recently attracted considerable attention, and a few methods have been proposed for it (Kim et al., 1993, Biometrics 49, 13-22; Sun et al., 1999, Biometrics 55, 909-914; Pan, 2001, Biometrics 57, 1245-1250). However, all of these methods are based on the proportional hazards model, and it is well known that the proportional hazards model may sometimes not fit failure time data well. This article investigates regression analysis of such data using the additive hazards model, and an estimating equation approach is proposed for inference about the regression parameters of interest. The proposed method can be easily implemented, and the properties of the proposed estimates of the regression parameters are established. The method is applied to a set of doubly censored data from an AIDS cohort study.

20.
Separate Cox analyses of all cause-specific hazards are the standard technique of choice to study the effect of a covariate in competing risks, but a synopsis of these results in terms of cumulative event probabilities is challenging. This difficulty has led to the development of the proportional subdistribution hazards model. If the covariate is known at baseline, the model allows for a summarizing assessment in terms of the cumulative incidence function. Mathematically, the model also allows for including random time-dependent covariates, but practical implementation has remained unclear due to a certain risk set peculiarity. We use the intimate relationship of discrete covariates and multistate models to naturally treat time-dependent covariates within the subdistribution hazards framework. The methodology then straightforwardly translates to real-valued time-dependent covariates. As with classical survival analysis, including time-dependent covariates no longer results in a model for probability functions. Nevertheless, the proposed methodology provides a useful synthesis of separate cause-specific hazards analyses. We illustrate this with hospital infection data, where time-dependent covariates and competing risks are essential to the subject research question.

