Related Articles
1.
Guo Y, Manatunga AK. Biometrics 2009, 65(1): 125-134.
Assessing agreement is often of interest in clinical studies to evaluate the similarity of measurements produced by different raters or methods on the same subjects. We present a modified weighted kappa coefficient to measure agreement between bivariate discrete survival times. The proposed kappa coefficient accommodates censoring by redistributing the mass of censored observations within the grid where the unobserved events may potentially happen. A generalized modified weighted kappa is proposed for multivariate discrete survival times. We estimate the modified kappa coefficients nonparametrically through a multivariate survival function estimator. The asymptotic properties of the kappa estimators are established, and the performance of the estimators is examined through simulation studies of bivariate and trivariate survival times. We illustrate the application of the modified kappa coefficient in the presence of censored observations with data from a prostate cancer study.
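The uncensored building block of this proposal is the ordinary weighted kappa computed from a contingency table of paired discrete times. The sketch below (plain NumPy, quadratic or linear agreement weights) shows only that standard coefficient; the paper's modification, which redistributes the mass of censored pairs over the admissible grid cells, is not reproduced here, and the toy table is invented.

```python
import numpy as np

def weighted_kappa(table, weights="quadratic"):
    """Weighted kappa for a K x K contingency table of paired discrete outcomes.

    Standard uncensored version only; the paper's modified kappa additionally
    redistributes the mass of censored observations over the grid cells where
    the unobserved events could fall.
    """
    table = np.asarray(table, dtype=float)
    k = table.shape[0]
    p = table / table.sum()                      # joint cell probabilities
    row, col = p.sum(axis=1), p.sum(axis=0)      # marginal probabilities
    i, j = np.meshgrid(np.arange(k), np.arange(k), indexing="ij")
    if weights == "quadratic":
        w = 1.0 - ((i - j) ** 2) / (k - 1) ** 2  # agreement weights in [0, 1]
    else:                                        # linear weights
        w = 1.0 - np.abs(i - j) / (k - 1)
    po = (w * p).sum()                           # weighted observed agreement
    pe = (w * np.outer(row, col)).sum()          # weighted chance agreement
    return (po - pe) / (1.0 - pe)

# toy example: paired discrete event times grouped into 3 intervals
table = [[20, 5, 1], [4, 18, 6], [2, 5, 19]]
print(round(weighted_kappa(table), 3))
```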

2.
Clegg LX, Cai J, Sen PK. Biometrics 1999, 55(3): 805-812.
In multivariate failure time data analysis, a marginal regression modeling approach is often preferred to avoid assumptions on the dependence structure among correlated failure times. In this paper, a marginal mixed baseline hazards model is introduced. Estimating equations are proposed for the estimation of the marginal hazard ratio parameters. The proposed estimators are shown to be consistent and asymptotically Gaussian with a robust covariance matrix that can be consistently estimated. Simulation studies indicate the adequacy of the proposed methodology for practical sample sizes. The methodology is illustrated with a data set from the Framingham Heart Study.

3.
Ning J, Qin J, Shen Y. Biometrics 2011, 67(4): 1369-1378.
We present a natural generalization of the Buckley-James-type estimator for traditional survival data to right-censored length-biased data under the accelerated failure time (AFT) model. Length-biased data are often encountered in prevalent cohort studies and cancer screening trials. Informative right censoring induced by length-biased sampling creates additional challenges in modeling the effects of risk factors on the unbiased failure times for the target population. In this article, we evaluate covariate effects on the failure times of the target population under the AFT model given the observed length-biased data. We construct a Buckley-James-type estimating equation, develop an iterative computing algorithm, and establish the asymptotic properties of the estimators. We assess the finite-sample properties of the proposed estimators against the estimators obtained from the existing methods. Data from a prevalent cohort study of patients with dementia are used to illustrate the proposed methodology.
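For orientation, the classical Buckley-James iteration that this work generalizes alternates between imputing censored log-times with their conditional expectations (computed from a Kaplan-Meier estimate of the residual distribution) and refitting least squares. The sketch below implements only that classical right-censored version, assuming untied residuals; the paper's correction for the informative censoring induced by length-biased sampling is not included.

```python
import numpy as np

def km_jumps(e, delta):
    """Kaplan-Meier jump masses of the residual distribution (no tied residuals assumed)."""
    order = np.argsort(e)
    e_s, d_s = e[order], np.asarray(delta)[order]
    n = len(e)
    surv, pts, jumps = 1.0, [], []
    for i in range(n):
        if d_s[i] == 1:
            jump = surv / (n - i)            # S(e-) times 1 / (number at risk)
            pts.append(e_s[i]); jumps.append(jump)
            surv -= jump
    return np.array(pts), np.array(jumps)

def buckley_james(X, y, delta, n_iter=20, tol=1e-6):
    """Classical Buckley-James estimator for the AFT model log T = X * beta + error.

    y: observed log-times, delta: 1 = event, 0 = right-censored.  This is the
    standard algorithm only; it does NOT adjust for length-biased sampling.
    """
    X1 = np.column_stack([np.ones(len(y)), np.asarray(X, float)])   # add intercept
    y, delta = np.asarray(y, float), np.asarray(delta)
    beta = np.linalg.lstsq(X1, y, rcond=None)[0]                     # naive starting value
    for _ in range(n_iter):
        resid = y - X1 @ beta
        pts, jumps = km_jumps(resid, delta)
        y_star = y.copy()
        for i in np.where(delta == 0)[0]:                            # impute censored responses
            tail = pts > resid[i]
            mass = jumps[tail].sum()
            if mass > 0:                                             # conditional mean residual
                y_star[i] = X1[i] @ beta + (pts[tail] * jumps[tail]).sum() / mass
        beta_new = np.linalg.lstsq(X1, y_star, rcond=None)[0]
        if np.max(np.abs(beta_new - beta)) < tol:
            return beta_new
        beta = beta_new
    return beta
```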

4.
Semiparametric models for cumulative incidence functions
Bryant J, Dignam JJ. Biometrics 2004, 60(1): 182-190.
In analyses of time-to-failure data with competing risks, cumulative incidence functions may be used to estimate the time-dependent cumulative probability of failure due to specific causes. These functions are commonly estimated using nonparametric methods, but in cases where events due to the cause of primary interest are infrequent relative to other modes of failure, nonparametric methods may result in rather imprecise estimates for the corresponding subdistribution. In such cases, it may be possible to model the cause-specific hazard of primary interest parametrically, while accounting for the other modes of failure using nonparametric estimators. The cumulative incidence estimators so obtained are simple to compute and are considerably more efficient than the usual nonparametric estimator, particularly with regard to interpolation of cumulative incidence at early or intermediate time points within the range of data used to fit the function. More surprisingly, they are often nearly as efficient as fully parametric estimators. We illustrate the utility of this approach in the analysis of patients treated for early stage breast cancer.
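The nonparametric baseline referred to here accumulates, at each observed failure from the cause of interest, the overall (all-cause) Kaplan-Meier survival just before that time divided by the number at risk. Below is a minimal NumPy sketch of that baseline, assuming no tied event times; the paper's semiparametric improvement, which models the cause-specific hazard of interest parametrically, is not shown.

```python
import numpy as np

def cumulative_incidence(time, status, cause, grid):
    """Nonparametric cumulative incidence function for one competing cause.

    time:   event/censoring times
    status: 0 = censored, otherwise the cause code of the observed failure
    cause:  cause of primary interest
    grid:   time points at which to evaluate the CIF
    """
    order = np.argsort(time)
    t, s = np.asarray(time)[order], np.asarray(status)[order]
    n = len(t)
    at_risk = n - np.arange(n)
    surv_prev = 1.0                                # all-cause Kaplan-Meier S(t-)
    cif_t, cif_val, cif = [], [], 0.0
    for i in range(n):
        if s[i] == cause:
            cif += surv_prev / at_risk[i]          # S(t-) * dN_cause / Y
        if s[i] != 0:
            surv_prev *= 1.0 - 1.0 / at_risk[i]    # any-cause failure updates S(t-)
        cif_t.append(t[i]); cif_val.append(cif)
    cif_t, cif_val = np.array(cif_t), np.array(cif_val)
    return np.array([cif_val[cif_t <= g][-1] if np.any(cif_t <= g) else 0.0
                     for g in grid])

# toy usage: status 1 = cause of interest, 2 = competing cause, 0 = censored
print(cumulative_incidence([2, 3, 4, 6, 8, 9], [1, 2, 0, 1, 2, 1], cause=1, grid=[5, 10]))
```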

5.
Existing cure-rate survival models are generally not convenient for modeling and estimating the survival quantiles of a patient with specified covariate values. This paper proposes a novel class of cure-rate models, the transform-both-sides cure-rate model (TBSCRM), that can be used to make inferences about both the cure rate and the survival quantiles. We develop Bayesian inference about the covariate effects on the cure rate as well as on the survival quantiles via Markov chain Monte Carlo (MCMC) tools. We also show that the TBSCRM-based Bayesian method outperforms methods based on existing cure-rate models in our simulation studies and in an application to breast cancer survival data from the National Cancer Institute's Surveillance, Epidemiology, and End Results (SEER) database.

6.
Censored quantile regression models, which offer great flexibility in assessing covariate effects on event times, have attracted considerable research interest. In this study, we consider flexible estimation and inference procedures for competing risks quantile regression, which not only provides meaningful interpretations through cumulative incidence quantiles but also extends the conventional accelerated failure time model by relaxing some of its stringent assumptions, such as global linearity and unconditional independence. Current methods for censored quantile regression often involve minimizing an L1-type convex function or solving nonsmooth estimating equations. This approach can lead to multiple roots in practical settings, particularly with multiple covariates. Moreover, variance estimation involves an unknown error distribution, and most methods rely on computationally intensive resampling techniques such as bootstrapping. We extend the induced smoothing procedure for censored quantile regression to the competing risks setting. The proposed procedure permits fast and accurate computation of the quantile regression parameter estimates and their variances using conventional numerical methods such as the Newton-Raphson algorithm. Numerical studies show that the proposed estimators perform well and that the resulting inference is reliable in practical settings. The method is finally applied to data from a soft tissue sarcoma study.

7.
In observational studies of survival time featuring a binary time-dependent treatment, the hazard ratio (an instantaneous measure) is often used to represent the treatment effect. However, investigators are often more interested in the difference in survival functions. We propose semiparametric methods to estimate the causal effect of treatment among the treated with respect to survival probability. The objective is to compare post-treatment survival with the survival function that would have been observed in the absence of treatment. For each patient, we compute a prognostic score (based on the pre-treatment death hazard) and a propensity score (based on the treatment hazard). Each treated patient is then matched with an alive, uncensored and not-yet-treated patient with similar prognostic and/or propensity scores. The experience of each treated and matched patient is weighted using a variant of inverse probability of censoring weighting to account for the impact of censoring. We propose estimators of the treatment-specific survival functions (and their difference), computed through weighted Nelson-Aalen estimators. Closed-form variance estimators are proposed which take into consideration the potential replication of subjects across matched sets. The proposed methods are evaluated through simulation, then applied to estimate the effect of kidney transplantation on survival among end-stage renal disease patients using data from a national organ failure registry.
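The treatment-specific survival curves here are built from weighted Nelson-Aalen estimators. The sketch below shows a generic weighted Nelson-Aalen cumulative hazard (and the implied survival curve) with arbitrary subject-level weights and untied times; the matching step and the inverse-probability-of-censoring weights that the paper would plug into it are not part of this illustration.

```python
import numpy as np

def weighted_nelson_aalen(time, event, weight):
    """Weighted Nelson-Aalen cumulative hazard and the implied survival curve.

    Each subject contributes its weight to the risk set and to the event count.
    In the paper the weights come from matching plus an inverse-probability-of-
    censoring-weighting scheme, which is not reproduced here.
    """
    time, event, weight = map(np.asarray, (time, event, weight))
    order = np.argsort(time)
    t, d, w = time[order], event[order], weight[order]
    risk = np.cumsum(w[::-1])[::-1]          # weighted number at risk just before t_i
    increments = np.where(d == 1, w / risk, 0.0)
    cum_haz = np.cumsum(increments)          # weighted Nelson-Aalen estimate
    surv = np.exp(-cum_haz)                  # survival via exp(-cumulative hazard)
    return t, cum_haz, surv

# usage with unit weights (reduces to the ordinary Nelson-Aalen estimator)
t, H, S = weighted_nelson_aalen([2, 3, 5, 7, 11], [1, 0, 1, 1, 0], np.ones(5))
```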

8.
Xue L, Wang L, Qu A. Biometrics 2010, 66(2): 393-404.
We propose a new estimation method for multivariate failure time data using the quadratic inference function (QIF) approach. The proposed method efficiently incorporates within-cluster correlations and is therefore more efficient than methods that ignore them. Furthermore, the proposed method is easy to implement. Unlike the weighted estimating equations in Cai and Prentice (1995, Biometrika 82, 151-164), it is not necessary to explicitly estimate the correlation parameters. This simplification is particularly useful in analyzing data with large cluster sizes, where it is difficult to estimate the intracluster correlation. Under certain regularity conditions, we show the consistency and asymptotic normality of the proposed QIF estimators. A chi-squared test is also developed for hypothesis testing. We conduct extensive Monte Carlo simulation studies to assess the finite-sample performance of the proposed methods. We also illustrate the proposed methods by analyzing primary biliary cirrhosis (PBC) data.

9.
Yin G. Biometrics 2005, 61(2): 552-558.
Due to natural or artificial clustering, multivariate survival data often arise in biomedical studies, for example, a dental study involving multiple teeth from each subject. A certain proportion of subjects in the population who are not expected to experience the event of interest are considered to be "cured" or insusceptible. To model correlated or clustered failure time data incorporating a surviving fraction, we propose two forms of cure rate frailty models. One model naturally introduces frailty based on biological considerations while the other is motivated from the Cox proportional hazards frailty model. We formulate the likelihood functions based on piecewise constant hazards and derive the full conditional distributions for Gibbs sampling in the Bayesian paradigm. As opposed to the Cox frailty model, the proposed methods demonstrate great potential in modeling multivariate survival data with a cure fraction. We illustrate the cure rate frailty models with a root canal therapy data set.

10.
It is well known that ignoring measurement error may result in substantially biased estimates in many contexts, including linear and nonlinear regression. For survival data with measurement error in covariates, there has been extensive discussion in the literature, with the focus on proportional hazards (PH) models. Recently, research interest has extended to accelerated failure time (AFT) and additive hazards (AH) models. However, the impact of measurement error on other models, such as the proportional odds model, has received relatively little attention, although these models are important alternatives when PH, AFT, or AH models are not appropriate for the data. In this paper, we investigate this important problem and study the bias induced by the naive approach of ignoring covariate measurement error. To adjust for the induced bias, we describe the simulation-extrapolation (SIMEX) method. The proposed method enjoys a number of appealing features. Its implementation is straightforward and can be accomplished with minor modifications of existing software. More importantly, the proposed method does not require modeling the covariate process, which is quite attractive in practice. As the precise values of error-prone covariates are often not observable, any modeling assumption on such covariates carries the risk of model misspecification and hence of invalid inference. The proposed method is carefully assessed both theoretically and empirically. Theoretically, we establish the asymptotic normality of the resulting estimators. Numerically, simulation studies are carried out to evaluate the performance of the estimators as well as the impact of ignoring measurement error, along with an application to a data set arising from the Busselton Health Study. Sensitivity of the proposed method to misspecification of the error model is studied as well.
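Simulation-extrapolation works by deliberately adding extra measurement error at increasing levels, tracking how the naive estimate degrades, and extrapolating the trend back to the no-error case. The sketch below illustrates these mechanics with a least-squares slope as the naive estimator and a quadratic extrapolant; it is only an illustration of the SIMEX idea on invented toy data, not the paper's application to the proportional odds survival model.

```python
import numpy as np

def simex_linear(w, y, sigma_u, lambdas=(0.5, 1.0, 1.5, 2.0), B=200, seed=0):
    """SIMEX illustrated with a simple linear-regression slope.

    w: error-prone covariate (w = x + u, u ~ N(0, sigma_u^2)), y: response.
    Extra noise with variance lambda * sigma_u^2 is added, the naive estimate
    is averaged over B replicates, and a quadratic in lambda is extrapolated
    back to lambda = -1 (the error-free case).
    """
    rng = np.random.default_rng(seed)
    naive = lambda wv: np.polyfit(wv, y, 1)[0]    # slope of y on the noisy covariate
    lam = np.array(lambdas)
    est = []
    for l in lam:
        reps = [naive(w + np.sqrt(l) * sigma_u * rng.standard_normal(len(w)))
                for _ in range(B)]                # add extra measurement error
        est.append(np.mean(reps))
    coef = np.polyfit(np.concatenate(([0.0], lam)),
                      np.concatenate(([naive(w)], est)), 2)
    return np.polyval(coef, -1.0)                 # extrapolate to lambda = -1

# toy data: true slope 2, attenuated by measurement error in the naive fit
rng = np.random.default_rng(1)
x = rng.normal(size=500); u = rng.normal(scale=0.5, size=500)
y = 2 * x + rng.normal(scale=0.3, size=500)
print(simex_linear(x + u, y, sigma_u=0.5))        # closer to 2 than the naive slope
```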

11.
Case-cohort sampling is a commonly used and efficient method for studying large cohorts. Most existing methods of analysis for case-cohort data have concerned the analysis of univariate failure time data. However, clustered failure time data are commonly encountered in public health studies. For example, patients treated at the same center are unlikely to be independent. In this article, we consider methods based on estimating equations for case-cohort designs for clustered failure time data. We assume a marginal hazards model, with a common baseline hazard and common regression coefficient across clusters. The proposed estimators of the regression parameter and cumulative baseline hazard are shown to be consistent and asymptotically normal, and consistent estimators of the asymptotic covariance matrices are derived. The regression parameter estimator is easily computed using any standard Cox regression software that allows for offset terms. The proposed estimators are investigated in simulation studies and demonstrated empirically to have increased efficiency relative to some existing methods. The proposed methods are applied to a study of mortality among Canadian dialysis patients.

12.
This paper discusses two-sample nonparametric comparison of survival functions when only interval-censored failure time data are available. This problem often arises in biological and medical studies, such as medical follow-up studies and clinical trials. For this problem, we present and study several nonparametric test procedures, including methods based on absolute and squared survival differences as well as simple survival differences. The presented tests provide alternatives to existing methods, most of which are rank-based and not sensitive to nonproportional or nonmonotone alternatives. Simulation studies are performed to evaluate and compare the proposed methods with existing ones and suggest that the proposed tests work well for both monotone and nonmonotone alternatives. An illustrative example is presented.

13.
Multivariate recurrent event data are usually encountered in many clinical and longitudinal studies in which each study subject may experience multiple recurrent events. For the analysis of such data, most existing approaches have been proposed under the assumption that the censoring times are noninformative, which may not be true, especially when the observation of recurrent events is terminated by a failure event. In this article, we consider regression analysis of multivariate recurrent event data with both time-dependent and time-independent covariates, where the censoring times and the recurrent event process are allowed to be correlated via a frailty. The proposed joint model is flexible in that both the distributions of the censoring and frailty variables are left unspecified. We propose a pairwise pseudolikelihood approach and an estimating equation-based approach for estimating coefficients of time-dependent and time-independent covariates, respectively. The large sample properties of the proposed estimates are established, while the finite-sample properties are demonstrated by simulation studies. The proposed methods are applied to the analysis of a set of bivariate recurrent event data from a study of platelet transfusion reactions.

14.
Qin J, Shen Y. Biometrics 2010, 66(2): 382-392.
Length-biased time-to-event data are commonly encountered in applications ranging from epidemiological cohort studies and cancer prevention trials to studies in labor economics. A longstanding statistical problem is how to assess the association of risk factors with survival in the target population given the observed length-biased data. In this article, we demonstrate how to estimate these effects under the semiparametric Cox proportional hazards model. The structure of the Cox model is, in general, changed under length-biased sampling. Although the existing partial likelihood approach for left-truncated data can be used to estimate covariate effects, it may not be efficient for analyzing length-biased data. We propose two estimating equation approaches for estimating the covariate coefficients under the Cox model. We use modern stochastic process and martingale theory to develop the asymptotic properties of the estimators. We evaluate the empirical performance and efficiency of the two methods through extensive simulation studies. We use data from a dementia study to illustrate the proposed methodology and demonstrate the computational algorithms for point estimates, which can be directly linked to existing functions in S-PLUS or R.
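As a point of reference, the standard Cox partial-likelihood score that left-truncation-style analyses build on can be maximized with a few Newton-Raphson steps. The sketch below does exactly that for untied event times; it is the classical estimator, not the paper's length-biased estimating equations, and is included only to make that baseline concrete.

```python
import numpy as np

def cox_newton(X, time, event, n_iter=25, tol=1e-8):
    """Newton-Raphson for the classical Cox partial likelihood (no tied times).

    Under length-biased sampling these risk sets are no longer valid, which is
    exactly what the paper's estimating equations correct; only the standard
    version is sketched here.
    """
    X = np.asarray(X, float)
    n, p = X.shape
    order = np.argsort(time)
    X, event = X[order], np.asarray(event)[order]
    beta = np.zeros(p)
    for _ in range(n_iter):
        w = np.exp(X @ beta)
        # cumulative sums from the largest time down give risk-set quantities
        S0 = np.cumsum(w[::-1])[::-1]                        # sum of w over the risk set
        S1 = np.cumsum((w[:, None] * X)[::-1], axis=0)[::-1] # sum of w * x over the risk set
        score, info = np.zeros(p), np.zeros((p, p))
        for i in np.where(event == 1)[0]:
            xbar = S1[i] / S0[i]
            score += X[i] - xbar
            S2 = (w[i:, None, None] * X[i:, :, None] * X[i:, None, :]).sum(axis=0)
            info += S2 / S0[i] - np.outer(xbar, xbar)        # observed information
        step = np.linalg.solve(info, score)
        beta = beta + step
        if np.max(np.abs(step)) < tol:
            break
    return beta
```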

15.
Chang SH. Biometrics 2000, 56(1): 183-189.
A longitudinal study is conducted to compare the progression of a particular disease between two groups. The progression of the disease is monitored according to which of several ordered events occur. In this paper, the sojourn time between two successive events is considered as the outcome of interest. The group effects on the sojourn times of the multiple events are parameterized by scale changes in a semiparametric accelerated failure time model in which the dependence structure among the multivariate sojourn times is unspecified. Suppose that the sojourn times are subject to dependent censoring and that the censoring times are observed for all subjects. A log-rank-type estimating approach that rescales the sojourn times and the dependent censoring times onto the same distribution is constructed to estimate the group effects, and the corresponding estimators are consistent and asymptotically normal. Without dependent censoring, the independent censoring times are in general not available for the uncensored observations. To complete the censoring information, pseudo-censoring times are generated from the corresponding nonparametrically estimated survival function in each group, and unbiased estimating functions for the group effects are still obtained. A real application and a simulation study are conducted to illustrate the proposed methods.

16.
Statistical analysis of longitudinal data often involves modeling treatment effects on clinically relevant longitudinal biomarkers measured since an initial event (the time origin). In some studies, including preventive HIV vaccine efficacy trials, some participants have biomarkers measured starting at the time origin, whereas others have biomarkers measured starting later, with the time origin unknown. We investigate the semiparametric additive time-varying coefficient model, in which the effects of some covariates vary nonparametrically with time while the effects of others remain constant. Weighted profile least squares estimators coupled with kernel smoothing are developed. The method uses an expectation-maximization approach to deal with the censored time origin. The Kaplan-Meier estimator and failure time regression models such as the Cox model can be used to estimate the distribution, and the conditional distribution, of the left-censored event time related to the censored time origin. Asymptotic properties of the parametric and nonparametric estimators and consistent asymptotic variance estimators are derived. A two-stage estimation procedure for choosing the weight is proposed to improve estimation efficiency. Numerical simulations are conducted to examine the finite-sample properties of the proposed estimators. The simulation results show that the theory and methods work well. The efficiency gain of the two-stage estimation procedure depends on the distribution of the longitudinal error processes. The method is applied to analyze data from the Merck 023/HVTN 502 Step HIV vaccine study.

17.
Guo Y, Manatunga AK. Biometrics 2007, 63(1): 164-172.
Assessing agreement is often of interest in clinical studies to evaluate the similarity of measurements produced by different raters or methods on the same subjects. Lin's (1989, Biometrics 45, 255-268) concordance correlation coefficient (CCC) has become a popular measure of agreement for correlated continuous outcomes. However, commonly used estimation methods for the CCC do not accommodate censored observations and are, therefore, not applicable for survival outcomes. In this article, we estimate the CCC nonparametrically through the bivariate survival function. The proposed estimator of the CCC is proven to be strongly consistent and asymptotically normal, with a consistent bootstrap variance estimator. Furthermore, we propose a time-dependent agreement coefficient as an extension of Lin's (1989) CCC for measuring the agreement between survival times among subjects who survive beyond a specified time point. A nonparametric estimator is developed for the time-dependent agreement coefficient as well. It has the same asymptotic properties as the estimator of the CCC. Simulation studies are conducted to evaluate the performance of the proposed estimators. A real data example from a prostate cancer study is used to illustrate the method.
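For fully observed (uncensored) pairs, Lin's CCC has the closed form 2*s_xy / (s_x^2 + s_y^2 + (mean(x) - mean(y))^2). A minimal sketch of that uncensored version follows; the paper's nonparametric estimator, which works through the bivariate survival function to accommodate censoring, is not reproduced here.

```python
import numpy as np

def concordance_correlation(x, y):
    """Lin's concordance correlation coefficient for paired continuous measurements.

    Standard uncensored CCC only; the paper's contribution is a nonparametric
    estimator based on the bivariate survival function that handles censored
    survival times, which this sketch omits.
    """
    x, y = np.asarray(x, float), np.asarray(y, float)
    mx, my = x.mean(), y.mean()
    sx2, sy2 = x.var(), y.var()                  # population (1/n) variances
    sxy = ((x - mx) * (y - my)).mean()           # population covariance
    return 2 * sxy / (sx2 + sy2 + (mx - my) ** 2)

# toy usage: two raters measuring the same subjects
print(round(concordance_correlation([1, 2, 3, 4], [1.1, 2.0, 3.2, 3.9]), 3))
```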

18.
Liang Y, Lu W, Ying Z. Biometrics 2009, 65(2): 377-384.
In analysis of longitudinal data, it is often assumed that observation times are predetermined and are the same across study subjects. Such an assumption, however, is often violated in practice. As a result, the observation times may be highly irregular. It is well known that if the sampling scheme is correlated with the outcome values, the usual statistical analysis may yield bias. In this article, we propose joint modeling and analysis of longitudinal data with possibly informative observation times via latent variables. A two-step estimation procedure is developed for parameter estimation. We show that the resulting estimators are consistent and asymptotically normal, and that the asymptotic variance can be consistently estimated using the bootstrap method. Simulation studies and a real data analysis demonstrate that our method performs well with realistic sample sizes and is appropriate for practical use.

19.
Sufficient dimension reduction (SDR), which effectively reduces the predictor dimension in regression, has become popular in high-dimensional data analysis. In the presence of censoring, however, most existing SDR methods perform poorly. In this article, we propose a new algorithm to perform SDR with censored responses based on the quantile-slicing scheme recently proposed by Kim et al. First, we estimate the conditional quantile function of the true survival time via censored kernel quantile regression (Shin et al.) and then slice the data based on the estimated censored regression quantiles instead of the responses. Both simulated and real data analyses demonstrate promising performance of the proposed method.
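The slicing and eigen-decomposition step that quantile slicing feeds into is the same one used in classical sliced inverse regression (SIR). The sketch below implements plain SIR on a fully observed response, so it shows only that downstream step; in the paper the slicing quantities would instead be the estimated censored regression quantiles, a substitution not shown here.

```python
import numpy as np

def sliced_inverse_regression(X, y, n_slices=5, n_dir=1):
    """Basic sliced inverse regression for sufficient dimension reduction.

    Slices the observed response into quantile bins, averages the whitened
    predictors within each slice, and takes leading eigenvectors of the
    weighted outer products of the slice means.
    """
    X, y = np.asarray(X, float), np.asarray(y, float)
    n, p = X.shape
    mu, cov = X.mean(axis=0), np.cov(X, rowvar=False)
    root_inv = np.linalg.inv(np.linalg.cholesky(cov))     # whitening transform
    Z = (X - mu) @ root_inv.T
    edges = np.quantile(y, np.linspace(0, 1, n_slices + 1))
    M = np.zeros((p, p))
    for k in range(n_slices):
        if k == n_slices - 1:
            mask = (y >= edges[k]) & (y <= edges[k + 1])
        else:
            mask = (y >= edges[k]) & (y < edges[k + 1])
        if mask.sum() == 0:
            continue
        m = Z[mask].mean(axis=0)
        M += mask.mean() * np.outer(m, m)                  # weighted slice-mean outer products
    _, eigvec = np.linalg.eigh(M)
    directions = eigvec[:, ::-1][:, :n_dir]                # leading eigenvectors
    return root_inv.T @ directions                         # map back to the original X scale
```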

20.
Zhong W, Diao G. Biometrics 2023, 79(3): 1959-1971.
Two-phase studies, such as case-cohort and nested case-control studies, are widely used cost-effective sampling strategies. In the first phase, the observed failure/censoring time and inexpensive exposures are collected. In the second phase, a subgroup of subjects is selected for measurement of expensive exposures based on the information from the first phase. One challenging issue is how to utilize all the available information to conduct efficient regression analyses of the two-phase study data. This paper proposes joint semiparametric modeling of the survival outcome and the expensive exposures. Specifically, we assume a class of semiparametric transformation models for the survival outcome and a semiparametric density ratio model for the expensive exposures. The class of semiparametric transformation models includes the proportional hazards model and the proportional odds model as special cases. The density ratio model is flexible in modeling multivariate mixed-type data. We develop efficient likelihood-based estimation and inference procedures and establish the large-sample properties of the nonparametric maximum likelihood estimators. Extensive numerical studies reveal that the proposed methods perform well under practical settings. The proposed methods also appear to be reasonably robust under various model misspecifications. An application to the National Wilms Tumor Study is provided.

