Similar articles
20 similar articles retrieved (search time: 328 ms)
1.
Guo Y, Manatunga AK. Biometrics 2007, 63(1): 164-172
Assessing agreement is often of interest in clinical studies to evaluate the similarity of measurements produced by different raters or methods on the same subjects. Lin's (1989, Biometrics 45, 255-268) concordance correlation coefficient (CCC) has become a popular measure of agreement for correlated continuous outcomes. However, commonly used estimation methods for the CCC do not accommodate censored observations and are, therefore, not applicable for survival outcomes. In this article, we estimate the CCC nonparametrically through the bivariate survival function. The proposed estimator of the CCC is proven to be strongly consistent and asymptotically normal, with a consistent bootstrap variance estimator. Furthermore, we propose a time-dependent agreement coefficient as an extension of Lin's (1989) CCC for measuring the agreement between survival times among subjects who survive beyond a specified time point. A nonparametric estimator is developed for the time-dependent agreement coefficient as well. It has the same asymptotic properties as the estimator of the CCC. Simulation studies are conducted to evaluate the performance of the proposed estimators. A real data example from a prostate cancer study is used to illustrate the method.
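
As a point of reference for readers unfamiliar with Lin's coefficient, the hedged Python/NumPy sketch below computes the ordinary sample CCC for fully observed pairs (function name and example values are illustrative); the censoring-adjusted, survival-function-based estimator that is the article's contribution is not reproduced here.

    import numpy as np

    def sample_ccc(x, y):
        """Sample concordance correlation coefficient (Lin, 1989) for
        fully observed paired measurements x and y (no censoring)."""
        x, y = np.asarray(x, float), np.asarray(y, float)
        mx, my = x.mean(), y.mean()
        vx, vy = x.var(), y.var()            # 1/n ("biased") moments used throughout
        sxy = ((x - mx) * (y - my)).mean()   # sample covariance with 1/n scaling
        return 2.0 * sxy / (vx + vy + (mx - my) ** 2)

    # Example: two raters measuring the same five subjects
    print(sample_ccc([10.2, 11.5, 9.8, 12.0, 10.9],
                     [10.0, 11.9, 9.5, 12.4, 11.1]))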

2.
Nonparametric estimation of the bivariate recurrence time distribution
Huang CY, Wang MC. Biometrics 2005, 61(2): 392-402
This article considers statistical models in which two different types of events, such as the diagnosis of a disease and the remission of the disease, occur alternately over time and are observed subject to right censoring. We propose nonparametric estimators for the joint distribution of bivariate recurrence times and the marginal distribution of the first recurrence time. In general, the marginal distribution of the second recurrence time cannot be estimated due to an identifiability problem, but a conditional distribution of the second recurrence time can be estimated nonparametrically. In the literature, statistical methods have been developed to estimate the joint distribution of bivariate recurrence times based on data on the first pair of censored bivariate recurrence times. These methods are inefficient in the model considered here because recurrence times of higher orders are not used. Asymptotic properties of the proposed estimators are established. Numerical studies demonstrate that the estimators perform well with practical sample sizes. We apply the proposed method to the South Verona, Italy, psychiatric case register (PCR) data set for illustration of the methods and theory.

3.
In this article, we construct and study estimators of the causal effect of a time-dependent treatment on survival in longitudinal studies. We employ a particular marginal structural model (MSM), proposed by Robins (2000), and follow a general methodology for constructing estimating functions in censored data models. The inverse probability of treatment weighted (IPTW) estimator of Robins et al. (2000) is used as an initial estimator and forms the basis for an improved, one-step estimator that is consistent and asymptotically linear when the treatment mechanism is consistently estimated. We extend these methods to handle informative censoring. The proposed methodology is employed to estimate the causal effect of exercise on mortality in a longitudinal study of seniors in Sonoma County. A simulation study demonstrates the bias of naive estimators in the presence of time-dependent confounders and also shows the efficiency gain of the IPTW estimator, even in the absence of such confounding. The efficiency gain of the improved, one-step estimator is demonstrated through simulation.
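
As background for the IPTW idea mentioned above, the hedged sketch below computes stabilized inverse-probability-of-treatment weights for a single binary treatment decision with a logistic propensity model (Python, assuming statsmodels is available; the function name and variables are illustrative). The article's estimator instead multiplies such weights over time for a time-dependent treatment and incorporates inverse-probability-of-censoring weights.

    import numpy as np
    import statsmodels.api as sm

    def stabilized_iptw_weights(treated, confounders):
        """Stabilized IPTW weights for one binary treatment decision:
        numerator P(A = a), denominator P(A = a | confounders) from a
        logistic regression. A simplified, point-treatment illustration."""
        X = sm.add_constant(np.asarray(confounders, float))
        a = np.asarray(treated, int)
        ps = sm.Logit(a, X).fit(disp=0).predict(X)     # P(A = 1 | confounders)
        marginal = a.mean()                            # P(A = 1), stabilizing numerator
        return np.where(a == 1, marginal / ps, (1 - marginal) / (1 - ps))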

4.
Time-dependent ROC curves for censored survival data and a diagnostic marker
Heagerty PJ, Lumley T, Pepe MS. Biometrics 2000, 56(2): 337-344
ROC curves are a popular method for displaying sensitivity and specificity of a continuous diagnostic marker, X, for a binary disease variable, D. However, many disease outcomes are time dependent, D(t), and ROC curves that vary as a function of time may be more appropriate. A common example of a time-dependent variable is vital status, where D(t) = 1 if a patient has died prior to time t and zero otherwise. We propose summarizing the discrimination potential of a marker X, measured at baseline (t = 0), by calculating ROC curves for cumulative disease or death incidence by time t, which we denote as ROC(t). A typical complexity with survival data is that observations may be censored. Two ROC curve estimators are proposed that can accommodate censored data. A simple estimator is based on using the Kaplan-Meier estimator for each possible subset X > c. However, this estimator does not guarantee the necessary condition that sensitivity and specificity are monotone in X. An alternative estimator that does guarantee monotonicity is based on a nearest neighbor estimator for the bivariate distribution function of (X, T), where T represents survival time (Akritas, M. J., 1994, Annals of Statistics 22, 1299-1327). We present an example where ROC(t) is used to compare a standard and a modified flow cytometry measurement for predicting survival after detection of breast cancer and an example where the ROC(t) curve displays the impact of modifying eligibility criteria for sample size and power in HIV prevention trials.
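
A minimal sketch of the "simple" Kaplan-Meier-based estimator described above follows (Python/NumPy; function names are illustrative). It uses the Bayes-theorem decomposition of sensitivity and specificity at cut-off c and horizon t and, as the abstract notes, does not enforce monotonicity; the nearest neighbor estimator is not sketched.

    import numpy as np

    def km_surv(time, event, t):
        """Kaplan-Meier estimate of S(t) from right-censored data."""
        time, event = np.asarray(time, float), np.asarray(event, int)
        s = 1.0
        for u in np.sort(np.unique(time[(event == 1) & (time <= t)])):
            s *= 1.0 - np.sum((time == u) & (event == 1)) / np.sum(time >= u)
        return s

    def roc_t_simple(marker, time, event, c, t):
        """Sensitivity and specificity for cumulative incidence by time t at
        marker cut-off c, built from Kaplan-Meier estimates in the subsets
        X > c and X <= c (no monotonicity guarantee)."""
        marker, time, event = map(np.asarray, (marker, time, event))
        high = marker > c
        s_all = km_surv(time, event, t)
        s_high = km_surv(time[high], event[high], t)
        s_low = km_surv(time[~high], event[~high], t)
        p_high = high.mean()
        sens = (1 - s_high) * p_high / (1 - s_all)
        spec = s_low * (1 - p_high) / s_all
        return sens, spec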

5.
Nam JM. Biometrics 2003, 59(4): 1027-1035
When the intraclass correlation coefficient or the equivalent version of the kappa agreement coefficient has been estimated from several independent studies or from a stratified study, the problem arises of comparing the kappa statistics and, when the assumption of homogeneity of the kappa coefficients holds, combining them into a common kappa. In this article, using likelihood score theory extended to nuisance parameters (Tarone, 1988, Communications in Statistics-Theory and Methods 17(5), 1549-1556), we present an efficient homogeneity test for comparing several independent kappa statistics and also give a modified homogeneity score method using a noniterative and consistent estimator as an alternative. We provide sample size requirements for the modified homogeneity score method and compare them with those for the goodness-of-fit (GOF) method (Donner, Eliasziw, and Klar, 1996, Biometrics 52, 176-183). A simulation study for small and moderate sample sizes showed that the actual level of the homogeneity score test using the maximum likelihood estimators (MLEs) of the parameters is satisfactorily close to the nominal level and smaller than those of the modified homogeneity score and goodness-of-fit tests. We also investigated the statistical properties of several noniterative estimators of a common kappa. The estimator of Donner et al. (1996) is essentially efficient and can be used as an alternative to the iterative MLE. An efficient interval estimation of a common kappa using the likelihood score method is presented.

6.
Gross ST. Biometrics 1986, 42(4): 883-893
Published results on the use of the kappa coefficient of agreement have traditionally been concerned with situations where a large number of subjects is classified by a small group of raters. The coefficient is then used to assess the degree of agreement among the raters through hypothesis testing or confidence intervals. A modified kappa coefficient of agreement for multiple categories is proposed and a parameter-free distribution for testing null agreement is provided, for use when the number of raters is large relative to the number of categories and subjects. The large-sample distribution of kappa is shown to be normal in the nonnull case, and confidence intervals for kappa are provided. The results are extended to allow for an unequal number of raters per subject.
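
For orientation, the sketch below computes the classical Fleiss-type kappa for the many-raters setting (Python/NumPy, assuming an equal number of raters per subject; the example counts are illustrative). It is shown only as the standard benchmark and is not the modified coefficient or the null distribution proposed in the article.

    import numpy as np

    def fleiss_kappa(counts):
        """Fleiss' kappa for N subjects rated into k categories;
        counts[i, j] = number of raters placing subject i in category j,
        with the same number of raters n for every subject."""
        counts = np.asarray(counts, float)
        N = counts.shape[0]
        n = counts.sum(axis=1)[0]                      # raters per subject
        p_j = counts.sum(axis=0) / (N * n)             # overall category proportions
        P_i = (np.sum(counts ** 2, axis=1) - n) / (n * (n - 1))
        P_bar, Pe_bar = P_i.mean(), np.sum(p_j ** 2)
        return (P_bar - Pe_bar) / (1.0 - Pe_bar)

    # Example: 4 subjects, 5 raters, 3 categories
    print(fleiss_kappa([[3, 0, 2], [1, 4, 0], [0, 0, 5], [2, 2, 1]]))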

7.
In longitudinal studies of disease, patients may experience several events over a follow-up period. In these studies, the sequentially ordered events are often of interest and lead to problems that have received much attention recently. Issues of interest include the estimation of bivariate survival, marginal distributions, and the conditional distribution of gap times. In this work, we consider the estimation of the survival function conditional on a previous event. Different nonparametric approaches will be considered for estimating these quantities, all based on the Kaplan-Meier estimator of the survival function. We explore the finite sample behavior of the estimators through simulations. The different methods proposed in this article are applied to a dataset from a German Breast Cancer Study. The methods are used to obtain predictors for the conditional survival probabilities as well as to study the influence of recurrence on overall survival.
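
The Kaplan-Meier building block behind such conditional estimates can be sketched as follows (Python/NumPy; names are illustrative). Here the conditioning is on being event-free beyond a landmark time s, via S(t)/S(s), whereas the article conditions on the occurrence of a previous (recurrence) event.

    import numpy as np

    def km_curve(time, event):
        """Kaplan-Meier estimator; returns the event times and S(t) just after each."""
        time, event = np.asarray(time, float), np.asarray(event, int)
        times = np.sort(np.unique(time[event == 1]))
        surv, s = [], 1.0
        for u in times:
            s *= 1.0 - np.sum((time == u) & (event == 1)) / np.sum(time >= u)
            surv.append(s)
        return times, np.array(surv)

    def conditional_surv(time, event, s, t):
        """Estimate P(T > t | T > s) = S(t) / S(s) for t >= s."""
        times, surv = km_curve(time, event)
        def S(x):
            idx = np.searchsorted(times, x, side="right") - 1
            return 1.0 if idx < 0 else surv[idx]
        return S(t) / S(s)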

8.
In this article, we provide a method of estimation for the treatment effect in the adaptive design for censored survival data with or without adjusting for risk factors other than the treatment indicator. Within the semiparametric Cox proportional hazards model, we propose a bias-adjusted parameter estimator for the treatment coefficient and its asymptotic confidence interval at the end of the trial. The method for obtaining an asymptotic confidence interval and point estimator is based on a general distribution property of the final test statistic from the weighted linear rank statistics at the interims with or without considering the nuisance covariates. The computation of the estimates is straightforward. Extensive simulation studies show that the asymptotic confidence intervals attain coverage probabilities close to the nominal level, and the proposed point estimators are nearly unbiased with practical sample sizes.

9.
We present a method to fit a mixed effects Cox model with interval-censored data. Our proposal is based on a multiple imputation approach that uses the truncated Weibull distribution to replace the interval-censored data by imputed survival times and then uses established mixed effects Cox methods for right-censored data. Interval-censored data were encountered in a database corresponding to a recompilation of retrospective data from eight analytical treatment interruption (ATI) studies in 158 human immunodeficiency virus (HIV)-positive, combination antiretroviral treatment (cART)-suppressed individuals. The main variable of interest is the time to viral rebound, which is defined as the increase of serum viral load (VL) to detectable levels in a patient with previously undetectable VL, as a consequence of the interruption of cART. Another aspect of interest of the analysis is the fact that the data come from different studies based on different grounds and that we have several assessments on the same patient. In order to handle this extra variability, we frame the problem as a mixed effects Cox model that considers a random intercept per subject as well as correlated random intercept and slope for pre-cART VL per study. Our procedure has been implemented in R using two packages, truncdist and coxme, and can be applied to any data set that presents both interval-censored survival times and a grouped data structure that could be treated as a random effect in a regression model. The properties of the parameter estimators obtained with our proposed method are addressed through a simulation study.
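
The article implements the imputation step in R with the truncdist and coxme packages; as a language-neutral illustration, the hedged NumPy sketch below draws a single event time from a Weibull distribution truncated to an observed censoring interval by inverse-CDF sampling (shape and scale are assumed to have been estimated elsewhere; all names and example numbers are illustrative).

    import numpy as np

    def impute_truncated_weibull(left, right, shape, scale, rng=None):
        """Draw one imputed event time from a Weibull(shape, scale) distribution
        truncated to the censoring interval [left, right]; right may be np.inf
        for right-censored observations."""
        rng = np.random.default_rng() if rng is None else rng
        cdf = lambda t: 1.0 - np.exp(-(t / scale) ** shape)
        lo, hi = cdf(left), 1.0 if np.isinf(right) else cdf(right)
        u = rng.uniform(lo, hi)
        return scale * (-np.log(1.0 - u)) ** (1.0 / shape)   # inverse Weibull CDF

    # Example: viral rebound known to have occurred between day 14 and day 28
    print(impute_truncated_weibull(14.0, 28.0, shape=1.5, scale=30.0))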

10.
There has been a rising interest in better exploiting auxiliary summary information from large databases in the analysis of smaller-scale studies that collect more comprehensive patient-level information. The purpose of this paper is twofold: first, we propose a novel approach to synthesize information from both the aggregate summary statistics and the individual-level data in censored linear regression. We show that the auxiliary information amounts to a system of nonsmooth estimating equations and thus can be combined with the conventional weighted log-rank estimating equations by using the generalized method of moments (GMM) approach. The proposed methodology can be further extended to account for the potential inconsistency in information from different sources. Second, in the absence of auxiliary information, we propose to improve estimation efficiency by combining the overidentified weighted log-rank estimating equations with different weight functions via the GMM framework. To deal with the nonsmooth GMM-type objective functions, we develop an asymptotics-guided algorithm for parameter and variance estimation. We establish the asymptotic normality of the proposed GMM-type estimators. Simulation studies show that the proposed estimators can yield substantial efficiency gain over the conventional weighted log-rank estimators. The proposed methods are applied to a pancreatic cancer study for illustration.

11.
Weighted least-squares approach for comparing correlated kappa
Barnhart HX, Williamson JM. Biometrics 2002, 58(4): 1012-1019
In the medical sciences, studies are often designed to assess the agreement between different raters or different instruments. The kappa coefficient is a popular index of agreement for binary and categorical ratings. Here we focus on testing for the equality of two dependent kappa coefficients. We use the weighted least-squares (WLS) approach of Koch et al. (1977, Biometrics 33, 133-158) to take into account the correlation between the estimated kappa statistics. We demonstrate how SAS PROC CATMOD can be used to test for the equality of dependent Cohen's kappa coefficients and dependent intraclass kappa coefficients with nominal categorical ratings. We also test for the equality of dependent Cohen's kappa and dependent weighted kappa with ordinal ratings. The major advantage of the WLS approach is that it gives the data analyst a way of testing dependent kappa coefficients with widely available SAS software. The WLS approach can handle any number of categories. Analyses of three biomedical studies are used for illustration.

12.
Weighted kappa is defined as a measure of pairwise interobserver agreement. A weighted intraclass kappa coefficient is proposed to measure agreement on a particular response category. An interclass kappa coefficient is proposed for each pair of response categories. Simple estimation procedures are presented for the case where the observers judging one subject are not necessarily the same as those judging another subject. Large-sample standard errors are derived and a numerical example is given.
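
A minimal sketch of pairwise weighted kappa in its disagreement-weight form is given below (Python/NumPy; the example table is illustrative). The intraclass and interclass coefficients proposed in the article extend this quantity and are not sketched.

    import numpy as np

    def weighted_kappa(table, weights="quadratic"):
        """Cohen's weighted kappa from a k x k cross-classification of two
        observers; disagreement weights are squared ('quadratic') or absolute
        ('linear') category distances."""
        p = np.asarray(table, float)
        p = p / p.sum()
        k = p.shape[0]
        i, j = np.indices((k, k))
        w = (i - j) ** 2 if weights == "quadratic" else np.abs(i - j)
        e = np.outer(p.sum(axis=1), p.sum(axis=0))   # chance-expected proportions
        return 1.0 - (w * p).sum() / (w * e).sum()

    # Example: three ordinal categories rated by two observers
    print(weighted_kappa([[20, 5, 1], [4, 15, 6], [0, 3, 12]]))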

13.
Several simple, intuitive, explicitly computable estimation procedures for relative risk with interval-censored data are proposed. These procedures are modified versions of the relative risk estimators for right-censored data developed by Peto, Pike, and Andersen. A simulation study is conducted to demonstrate the performance of the proposed estimators.

14.
In observational studies of survival time featuring a binary time-dependent treatment, the hazard ratio (an instantaneous measure) is often used to represent the treatment effect. However, investigators are often more interested in the difference in survival functions. We propose semiparametric methods to estimate the causal effect of treatment among the treated with respect to survival probability. The objective is to compare post-treatment survival with the survival function that would have been observed in the absence of treatment. For each patient, we compute a prognostic score (based on the pre-treatment death hazard) and a propensity score (based on the treatment hazard). Each treated patient is then matched with an alive, uncensored and not-yet-treated patient with similar prognostic and/or propensity scores. The experience of each treated and matched patient is weighted using a variant of Inverse Probability of Censoring Weighting to account for the impact of censoring. We propose estimators of the treatment-specific survival functions (and their difference), computed through weighted Nelson-Aalen estimators. Closed-form variance estimators are proposed which take into consideration the potential replication of subjects across matched sets. The proposed methods are evaluated through simulation, then applied to estimate the effect of kidney transplantation on survival among end-stage renal disease patients using data from a national organ failure registry.

15.
Cong XJ, Yin G, Shen Y. Biometrics 2007, 63(3): 663-672
We consider modeling correlated survival data when cluster sizes may be informative to the outcome of interest, based on a within-cluster resampling (WCR) approach and a weighted score function (WSF) method. We derive the large sample properties of the WCR estimators under the Cox proportional hazards model. We establish consistency and asymptotic normality of the regression coefficient estimators, and the weak convergence property of the estimated baseline cumulative hazard function. The WSF method incorporates the inverse of the cluster sizes as weights in the score function. We conduct simulation studies to assess and compare the finite-sample behaviors of the estimators and apply the proposed methods to a dental study as an illustration.

16.
The concordance correlation coefficient (CCC) and the probability of agreement (PA) are two frequently used measures for evaluating the degree of agreement between measurements generated by two different methods. In this paper, we consider the CCC and the PA using the bivariate normal distribution for modeling the observations obtained by two measurement methods. The main aim of this paper is to develop diagnostic tools for the detection of those observations that are influential on the maximum likelihood estimators of the CCC and the PA, using the local influence methodology rather than the likelihood displacement. Thus, we derive first- and second-order measures considering the case-weight perturbation scheme. The proposed methodology is illustrated through a Monte Carlo simulation study and using a dataset from a clinical study on transient sleep disorder. Empirical results suggest that under certain circumstances first-order local influence measures may be more powerful than second-order measures for the detection of influential observations.
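
Under the bivariate normal model described above, the population CCC and PA have closed forms; the hedged sketch below evaluates them from assumed parameter values (Python, assuming SciPy is available; delta is the analyst-chosen agreement tolerance and all numbers are illustrative). The local influence diagnostics that constitute the paper's contribution are not reproduced.

    import numpy as np
    from scipy.stats import norm

    def ccc_and_pa(mu1, mu2, s1, s2, rho, delta):
        """Population CCC and probability of agreement P(|Y1 - Y2| <= delta)
        under a bivariate normal model with means mu1, mu2, standard deviations
        s1, s2 and correlation rho."""
        ccc = 2 * rho * s1 * s2 / (s1 ** 2 + s2 ** 2 + (mu1 - mu2) ** 2)
        mu_d = mu1 - mu2                                    # mean of D = Y1 - Y2
        sd_d = np.sqrt(s1 ** 2 + s2 ** 2 - 2 * rho * s1 * s2)
        pa = norm.cdf((delta - mu_d) / sd_d) - norm.cdf((-delta - mu_d) / sd_d)
        return ccc, pa

    print(ccc_and_pa(mu1=5.0, mu2=5.3, s1=1.0, s2=1.1, rho=0.9, delta=1.0))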

17.
In survival analysis with censored data the mean squared error of prediction can be estimated by weighted averages of time-dependent residuals. Graf et al. (1999) suggested a robust weighting scheme based on the assumption that the censoring mechanism is independent of the covariates. We show consistency of the estimator. Furthermore, we show that a modified version of this estimator is consistent even when censoring and event times are only conditionally independent given the covariates. The modified estimators are derived on the basis of regression models for the censoring distribution. A simulation study and a real data example illustrate the results.
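
The weighting scheme of Graf et al. (1999) can be sketched as an inverse-probability-of-censoring-weighted Brier score at a fixed horizon t (Python/NumPy below; function names are illustrative, and ties between event and censoring times are ignored). The covariate-dependent censoring models underlying the modified estimator are not included.

    import numpy as np

    def censoring_km(time, event, t):
        """Kaplan-Meier estimate of the censoring survival function G(t)
        (censoring, event == 0, is treated as the 'event')."""
        time, cens = np.asarray(time, float), 1 - np.asarray(event, int)
        g = 1.0
        for u in np.sort(np.unique(time[(cens == 1) & (time <= t)])):
            g *= 1.0 - np.sum((time == u) & (cens == 1)) / np.sum(time >= u)
        return g

    def ipcw_brier(time, event, surv_pred, t):
        """Graf-style Brier score at horizon t; surv_pred[i] is the model's
        predicted P(T_i > t). Assumes censoring independent of covariates."""
        time, event = np.asarray(time, float), np.asarray(event, int)
        bs = 0.0
        for ti, di, pi in zip(time, event, surv_pred):
            if ti <= t and di == 1:          # observed event by t: true status is 0
                bs += (0.0 - pi) ** 2 / censoring_km(time, event, ti)
            elif ti > t:                     # still event-free at t: true status is 1
                bs += (1.0 - pi) ** 2 / censoring_km(time, event, t)
            # subjects censored before t get weight zero
        return bs / len(time)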

18.
Agreement coefficients quantify how well a set of instruments agree in measuring some response on a population of interest. Many standard agreement coefficients (e.g. kappa for nominal, weighted kappa for ordinal, and the concordance correlation coefficient (CCC) for continuous responses) may indicate increasing agreement as the marginal distributions of the two instruments become more different even as the true cost of disagreement stays the same or increases. This problem has been described for the kappa coefficients; here we describe it for the CCC. We propose a solution for all types of responses in the form of random marginal agreement coefficients (RMACs), which use a different adjustment for chance than the standard agreement coefficients. Standard agreement coefficients model chance agreement using expected agreement between two independent random variables each distributed according to the marginal distribution of one of the instruments. RMACs adjust for chance by modeling two independent readings both from the mixture distribution that averages the two marginal distributions. In other words, both independent readings represent first a random choice of instrument, then a random draw from the marginal distribution of the chosen instrument. The advantage of the resulting RMAC is that differences between the two marginal distributions will not induce greater apparent agreement. As with the standard agreement coefficients, the RMACs do not require any assumptions about the bivariate distribution of the random variables associated with the two instruments. We describe the RMAC for nominal, ordinal and continuous data, and show through the delta method how to approximate the variances of some important special cases.
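
The chance correction described above can be sketched for nominal ratings as follows (Python/NumPy; the example table is illustrative): chance agreement is the probability that two independent draws from the averaged (mixture) marginal distribution coincide, in contrast to Cohen's kappa, whose chance term multiplies the two instrument-specific marginals.

    import numpy as np

    def rmac_nominal(table):
        """Random marginal agreement coefficient for nominal ratings, following
        the chance correction described in the abstract: two independent readings
        are drawn from the mixture of the two marginal distributions."""
        p = np.asarray(table, float)
        p = p / p.sum()
        observed = np.trace(p)                          # observed agreement
        mix = 0.5 * (p.sum(axis=1) + p.sum(axis=0))     # averaged (mixture) marginal
        chance = np.sum(mix ** 2)
        return (observed - chance) / (1.0 - chance)

    # Example: two instruments classifying 100 subjects into two categories
    print(rmac_nominal([[40, 5], [10, 45]]))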

19.
The median failure time is often utilized to summarize survival data because it has a more straightforward interpretation for investigators in practice than the popular hazard function. However, existing methods for comparing median failure times for censored survival data either require estimation of the probability density function or involve complicated formulas to calculate the variance of the estimates. In this article, we modify a K-sample median test for censored survival data (Brookmeyer and Crowley, 1982, Journal of the American Statistical Association 77, 433-440) through a simple contingency table approach where each cell counts the number of observations in each sample that are greater than the pooled median or vice versa. Under censoring, this approach would generate noninteger entries for the cells in the contingency table. We propose to construct a weighted asymptotic test statistic that aggregates dependent χ²-statistics formed at the nearest integer points to the original noninteger entries. We show that this statistic follows approximately a χ²-distribution with k − 1 degrees of freedom. For a small sample case, we propose a test statistic based on combined p-values from Fisher's exact tests, which follows a χ²-distribution with 2 degrees of freedom. Simulation studies are performed to show that the proposed method provides reasonable type I error probabilities and powers. The proposed method is illustrated with two real datasets from phase III breast cancer clinical trials.

20.
Lu W, Li L. Biometrics 2011, 67(2): 513-523
Methodology of sufficient dimension reduction (SDR) has offered an effective means to facilitate regression analysis of high-dimensional data. When the response is censored, however, most existing SDR estimators cannot be applied, or require some restrictive conditions. In this article, we propose a new class of inverse censoring probability weighted SDR estimators for censored regressions. Moreover, regularization is introduced to achieve simultaneous variable selection and dimension reduction. Asymptotic properties and empirical performance of the proposed methods are examined.
