Similar Literature
20 similar documents found (search time: 15 ms)
1.
We propose a Bayesian chi-squared model diagnostic for analysis of data subject to censoring. The test statistic has the form of Pearson's chi-squared test statistic and is easy to calculate from standard output of Markov chain Monte Carlo algorithms. The key innovation of this diagnostic is that it is based only on observed failure times. Because it does not rely on the imputation of failure times for observations that have been censored, we show that under heavy censoring it can have higher power for detecting model departures than a comparable test based on the complete data. In a simulation study, we show that tests based on this diagnostic exhibit comparable power and better nominal Type I error rates than a commonly used alternative test proposed by Akritas (1988, Journal of the American Statistical Association 83, 222–230). An important advantage of the proposed diagnostic is that it can be applied to a broad class of censored data models, including generalized linear models and other models with nonidentically distributed and nonadditive error structures. We illustrate the proposed model diagnostic for testing the adequacy of two parametric survival models for Space Shuttle main engine failures.

2.
3.
Tests are introduced which are designed to test for a nondecreasing ordered alternative among the survival functions of k populations consisting of multiple observations on each subject. Some of the observations could be right censored. A simulation study is conducted comparing the proposed tests on the basis of estimated power when the underlying distributions are multivariate normal. Equal sample sizes of 20 with 25% censoring, and 40 with both 25% and 50% censoring, are considered for 3 and 4 populations. All of the tests hold their α-values well. A recommendation is made as to the best overall test for the situations considered.

4.
Survival times of patients can be compared using rank tests in various experimental setups, including the two-sample case and the case of paired data. Attention is focussed on two frequently occurring complications in medical applications: censoring and tail alternatives. A review is given of the author's recent work on a new and simple class of censored rank tests. Various models for tail alternatives are discussed and the relation to censoring is demonstrated.

5.
The aim of this paper is to study the properties of the asymptotic variances of the maximum likelihood estimators of the parameters of the exponential mixture model with long-term survivors for randomly censored data. In addition, we study the asymptotic relative efficiency of these estimators versus those which would be obtained with complete follow-up. It is shown that fixed censoring at time T produces higher precision as well as higher asymptotic relative efficiency than those obtainable under uniform and uniform-exponential censoring distributions over (0, T). The results are useful in planning the size and duration of survival experiments with long-term survivors under random censoring schemes.
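For reference, the exponential mixture (cure) model referred to here has a standard form; the sketch below gives the model and its right-censored likelihood in my own notation rather than the paper's, so the symbols are assumptions.

```latex
% Exponential mixture (cure) model: a fraction \pi of long-term survivors
% never experiences the event; \delta_i = 1 marks an observed failure at t_i.
S(t) = \pi + (1 - \pi)\, e^{-\lambda t}, \qquad 0 \le \pi < 1,\ \lambda > 0,
\qquad
L(\pi, \lambda) = \prod_{i=1}^{n}
  \bigl[(1 - \pi)\, \lambda e^{-\lambda t_i}\bigr]^{\delta_i}
  \bigl[\pi + (1 - \pi)\, e^{-\lambda t_i}\bigr]^{1 - \delta_i}
```

The asymptotic variances studied in the paper would come from the expected information of a likelihood of this kind under the assumed censoring scheme.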

6.
Neurobehavioral tests are used to assess early neonatal behavioral functioning and detect effects of prenatal and perinatal events. However, common measurement and data collection methods create specific data features requiring thoughtful statistical analysis. Assessment response measurements are often ordinal scaled, not interval scaled; the magnitude of the physical response may not directly correlate with the underlying state of developmental maturity; and a subject's assessment record may be censored. Censoring occurs when the milestone is exhibited at the first test (left censoring), when the milestone is not exhibited before the end of the study (right censoring), or when the exact age of attaining the milestone is uncertain due to irregularly spaced test sessions or missing data (interval censoring). Such milestone data is best analyzed using survival analysis methods. Two methods are contrasted: the non-parametric Kaplan-Meier estimator and the fully parametric interval censored regression. The methods represent the spectrum of survival analyses in terms of parametric assumptions, ability to handle simultaneous testing of multiple predictors, and accommodation of different types of censoring. Both methods were used to assess birth weight status and sex effects on 14 separate test items from assessments on 255 healthy pigtailed macaques. The methods gave almost identical results. Compared to the normal birth weight group, the low birth weight group had significantly delayed development on all but one test item. Within the low birth weight group, males had significantly delayed development for some responses relative to females.
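For the nonparametric end of the spectrum described above, a minimal Kaplan-Meier (product-limit) sketch is shown below; the function name and the toy milestone data are illustrative and not taken from the macaque study.

```python
import numpy as np

def kaplan_meier(times, observed):
    """Product-limit estimate of S(t) at each distinct observed event time.

    times    : age at which the milestone was attained or follow-up ended
    observed : 1 if the milestone was attained (event), 0 if right-censored
    """
    times = np.asarray(times, dtype=float)
    observed = np.asarray(observed, dtype=int)
    surv, s = [], 1.0
    for t in np.unique(times[observed == 1]):
        at_risk = np.sum(times >= t)               # subjects still under observation at t
        events = np.sum((times == t) & (observed == 1))
        s *= 1.0 - events / at_risk                # multiply in the conditional survival factor
        surv.append((t, s))
    return surv

# Toy example: ages (days) at which a reflex first appeared; 0 = never observed.
print(kaplan_meier([3, 5, 5, 7, 10, 12], [1, 1, 0, 1, 1, 0]))
```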

7.
The method of generalized pairwise comparisons (GPC) is an extension of the well-known nonparametric Wilcoxon–Mann–Whitney test for comparing two groups of observations. Multiple generalizations of the Wilcoxon–Mann–Whitney test and other GPC methods have been proposed over the years to handle censored data. These methods apply different approaches to handling loss of information due to censoring: ignoring noninformative pairwise comparisons due to censoring (Gehan, Harrell, and Buyse); imputation using estimates of the survival distribution (Efron, Péron, and Latta); or inverse probability of censoring weighting (IPCW, Datta and Dong). Based on the GPC statistic, a measure of treatment effect, the "net benefit," can be defined. It quantifies the difference between the probabilities that a randomly selected individual from one group is doing better than an individual from the other group. This paper aims at evaluating GPC methods for censored data, both in the context of hypothesis testing and estimation, and providing recommendations related to their choice in various situations. The methods that ignore uninformative pairs have comparable power to more complex and computationally demanding methods in situations of low censoring, and are slightly superior for high proportions (>40%) of censoring. If one is interested in estimation of the net benefit, Harrell's c index is an unbiased estimator if the proportional hazards assumption holds. Otherwise, the imputation (Efron or Péron) or IPCW (Datta, Dong) methods provide unbiased estimators in case of proportions of drop-out censoring up to 60%.
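To make the simplest GPC rule concrete (scoring only pairs whose ordering survives censoring, in the spirit of the Gehan/Harrell/Buyse approach mentioned above), here is a hedged Python sketch of the net benefit; the function name and toy data are illustrative, not from the paper.

```python
def net_benefit_gehan(t_trt, d_trt, t_ctl, d_ctl):
    """Crude net benefit: P(treated does better) - P(control does better),
    scoring only pairs whose ordering is unambiguous despite right censoring.
    t_* are follow-up times; d_* are event indicators (1 = event, 0 = censored)."""
    wins = losses = 0
    for ti, di in zip(t_trt, d_trt):
        for tj, dj in zip(t_ctl, d_ctl):
            if dj == 1 and ti > tj:        # control failed first: a win for treatment
                wins += 1
            elif di == 1 and tj > ti:      # treated failed first: a loss for treatment
                losses += 1
            # any other pattern is uninformative and scored as a tie
    n_pairs = len(t_trt) * len(t_ctl)
    return (wins - losses) / n_pairs

# Toy data: (times, event indicators) for the two arms.
print(net_benefit_gehan([8, 12, 15], [1, 0, 1], [5, 9, 11], [1, 1, 0]))
```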

8.
A common testing problem for a life table or survival data is to test the equality of two survival distributions when the data are both grouped and censored. Several tests have been proposed in the literature which require various assumptions about the censoring distributions. It is shown that if these conditions are relaxed then the tests may no longer have the stated properties. The maximum likelihood test of equality when no assumptions are made about the censoring marginal distributions is derived. The properties of the test are found and it is compared to the existing tests. The fact that no assumptions are required about the censoring distributions makes the test a useful initial testing procedure.

9.
The asymptotic equivalence of nonparametric tests and parametric tests based on rank-transformed data (Conover and Iman, 1981) can be extended to the case of censoring. This paper presents generalized rank transformations for analyses of censored data, of interval-censored data and of survival data with uncertain causes of death. A Monte Carlo study and an analysis of leukemia remission times demonstrate excellent agreement of the suggested procedures with Gehan's (1965) and Prentice's (1978) tests.
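The Conover-Iman idea in the uncensored case is simply to replace the data by pooled ranks and apply the usual parametric test; below is a minimal sketch of that starting point only (the paper's generalized scores for censored data are not reproduced here), with illustrative toy data.

```python
import numpy as np
from scipy.stats import rankdata, f_oneway

# Toy two-group data; the rank transformation replaces observations by their
# pooled ranks before running a standard parametric test.
x = np.array([3.1, 4.7, 2.2, 5.0, 3.9])
y = np.array([5.8, 4.4, 6.1, 5.2, 7.0])

ranks = rankdata(np.concatenate([x, y]))       # pooled ranks of all observations
rx, ry = ranks[:len(x)], ranks[len(x):]
print(f_oneway(rx, ry))                        # parametric test on rank-transformed data
```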

10.
Interference occurs between individuals when the treatment (or exposure) of one individual affects the outcome of another individual. Previous work on causal inference methods in the presence of interference has focused on the setting where it is a priori assumed that there is "partial interference," in the sense that individuals can be partitioned into groups wherein there is no interference between individuals in different groups. Bowers et al. (2012, Political Anal, 21, 97–124) and Bowers et al. (2016, Political Anal, 24, 395–403) consider randomization-based inferential methods that allow for more general interference structures in the context of randomized experiments. In this paper, extensions of Bowers et al. that allow for failure time outcomes subject to right censoring are proposed. Permitting right-censored outcomes is challenging because standard randomization-based tests of the null hypothesis of no treatment effect assume that whether an individual is censored does not depend on treatment. The proposed extension of Bowers et al. to allow for censoring entails adapting the method of Wang et al. (2010, Biostatistics, 11, 676–692) for two-sample survival comparisons in the presence of unequal censoring. The methods are examined via simulation studies and utilized to assess the effects of cholera vaccination in an individually randomized trial of 73,000 children and women in Matlab, Bangladesh.

11.
A robust two-sample Student t-type procedure based on symmetrically censored samples proposed by Tiku (1980, 1982a, b) is studied from the Bayesian point of view. The effect of asymmetric censoring on this procedure is investigated and a good approximation to its posterior distribution in this case is worked out. An illustrative example is also presented.

12.
Mandel M, Betensky RA. Biometrics 2007, 63(2), 405–412.
Several goodness-of-fit tests of a lifetime distribution have been suggested in the literature; many take into account censoring and/or truncation of event times. In some contexts, a goodness-of-fit test for the truncation distribution is of interest. In particular, better estimates of the lifetime distribution can be obtained when knowledge of the truncation law is exploited. In cross-sectional sampling, for example, there are theoretical justifications for the assumption of a uniform truncation distribution, and several studies have used it to improve the efficiency of their survival estimates. The duality of lifetime and truncation in the absence of censoring enables methods for testing goodness of fit of the lifetime distribution to be used for testing goodness of fit of the truncation distribution. However, under random censoring, this duality does not hold and different tests are required. In this article, we introduce several goodness-of-fit tests for the truncation distribution and investigate their performance in the presence of censored event times using simulation. We demonstrate the use of our tests on two data sets.

13.
Quantitative trait loci (QTL) are usually searched for using classical interval mapping methods which assume that the trait of interest follows a normal distribution. However, these methods cannot take into account features of most survival data such as a non-normal distribution and the presence of censored data. We propose two new QTL detection approaches which allow the consideration of censored data. One interval mapping method uses a Weibull model (W), which is popular in parametrical modelling of survival traits, and the other uses a Cox model (C), which avoids making any assumption on the trait distribution. Data were simulated following the structure of a published experiment. Using simulated data, we compare W, C and a classical interval mapping method using a Gaussian model on uncensored data (G) or on all data (G' = censored data analysed as though records were uncensored). An adequate mathematical transformation was used for all parametric methods (G, G' and W). When data were not censored, the four methods gave similar results. However, when some data were censored, the power of QTL detection and accuracy of QTL location and of estimation of QTL effects for G decreased considerably with censoring, particularly when censoring was at a fixed date. This decrease with censoring was observed also with G', but it was less severe. Censoring had a negligible effect on results obtained with the W and C methods.

14.
Accurately assessing a patient's risk of a given event is essential in making informed treatment decisions. One approach is to stratify patients into two or more distinct risk groups with respect to a specific outcome using both clinical and demographic variables. Outcomes may be categorical or continuous in nature; important examples in cancer studies might include level of toxicity or time to recurrence. Recursive partitioning methods are ideal for building such risk groups. Two such methods are Classification and Regression Trees (CART) and a more recent competitor known as the partitioning Deletion/Substitution/Addition (partDSA) algorithm, both of which also utilize loss functions (e.g., squared error for a continuous outcome) as the basis for building, selecting, and assessing predictors but differ in the manner by which regression trees are constructed. Recently, we have shown that partDSA often outperforms CART in so-called "full data" settings (e.g., uncensored outcomes). However, when confronted with censored outcome data, the loss functions used by both procedures must be modified. There have been several attempts to adapt CART for right-censored data. This article describes two such extensions for partDSA that make use of observed data loss functions constructed using inverse probability of censoring weights. Such loss functions are consistent estimates of their uncensored counterparts provided that the corresponding censoring model is correctly specified. The relative performance of these new methods is evaluated via simulation studies and illustrated through an analysis of clinical trial data on brain cancer patients. The implementation of partDSA for uncensored and right-censored outcomes is publicly available in the R package partDSA.
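The observed-data loss functions mentioned above rest on inverse probability of censoring weighting; the sketch below shows an IPCW squared-error loss under an assumed independent-censoring model, using a simple Kaplan-Meier estimate of the censoring survival function. It illustrates the general idea only, not the partDSA implementation, and all names and toy data are assumptions.

```python
import numpy as np

def censoring_survival(times, events, t):
    """Kaplan-Meier estimate of the censoring survival G(t) = P(C > t),
    obtained by treating the censorings (events == 0) as the 'events'."""
    times, events = np.asarray(times, float), np.asarray(events, int)
    g = 1.0
    for u in np.unique(times[events == 0]):
        if u > t:
            break
        at_risk = np.sum(times >= u)
        g *= 1.0 - np.sum((times == u) & (events == 0)) / at_risk
    return g

def ipcw_squared_error(times, events, predictions):
    """Weighted squared-error loss using only uncensored subjects, each
    reweighted by 1 / G(T_i-) to stand in for those lost to censoring."""
    times, events = np.asarray(times, float), np.asarray(events, int)
    loss, n = 0.0, len(times)
    for t, d, p in zip(times, events, predictions):
        if d == 1:
            w = 1.0 / max(censoring_survival(times, events, t - 1e-8), 1e-8)
            loss += w * (t - p) ** 2
    return loss / n

# Toy data: follow-up times, event indicators, and model predictions.
print(ipcw_squared_error([4, 6, 7, 10], [1, 0, 1, 1], [5, 5, 8, 9]))
```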

15.
Harrell's c-index or concordance C has been widely used as a measure of separation of two survival distributions. In the absence of censored data, the c-index estimates the Mann–Whitney parameter Pr(X>Y), which has been repeatedly utilized in various statistical contexts. In the presence of randomly censored data, the c-index no longer estimates Pr(X>Y); rather, it estimates a parameter that involves the underlying censoring distributions. This is in contrast to Efron's maximum likelihood estimator of the Mann–Whitney parameter, which is recommended in the setting of random censorship.
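For concreteness, a compact sketch of Harrell's c computed from right-censored data follows; the risk scores and toy values are illustrative (in a two-sample comparison the risk score could simply be a group indicator).

```python
def harrell_c(times, events, risk):
    """Harrell's concordance: among usable pairs, the proportion in which the
    subject with the higher risk score fails earlier. A pair is usable when
    the smaller follow-up time corresponds to an observed event."""
    conc = ties = usable = 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if times[i] < times[j] and events[i] == 1:
                usable += 1
                if risk[i] > risk[j]:
                    conc += 1
                elif risk[i] == risk[j]:
                    ties += 1
    return (conc + 0.5 * ties) / usable

# Toy data: higher risk scores should accompany earlier observed failures.
print(harrell_c([2, 4, 5, 9], [1, 1, 0, 1], [0.9, 0.4, 0.6, 0.1]))
```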

16.
When comparing censored survival times for matched treated and control subjects, a late effect on survival is one that does not begin to appear until some time has passed. In a study of provider specialty in the treatment of ovarian cancer, a late divergence in the Kaplan–Meier survival curves hinted at superior survival among patients of gynecological oncologists, who employ chemotherapy less intensively, when compared to patients of medical oncologists, who employ chemotherapy more intensively; we ask whether this late divergence should be taken seriously. Specifically, we develop exact permutation tests, and exact confidence intervals formed by inverting the tests, for late effects in matched pairs subject to random but heterogeneous censoring. Unlike other exact confidence intervals with censored data, the proposed intervals do not require knowledge of censoring times for patients who die. Exact distributions are consequences of two results about signs, signed ranks, and their conditional independence properties. One test, the late effects sign test, has the binomial distribution; the other, the late effects signed rank test, uses nonstandard ranks but nonetheless has the same exact distribution as Wilcoxon's signed rank test. A simulation shows that the late effects signed rank test has substantially more power to detect late effects than do conventional tests. The confidence statement provides information about both the timing and magnitude of late effects.
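A simplified landmark-style sign test in the same spirit is sketched below; it is not the authors' exact construction (which uses nonstandard ranks and exact conditional distributions), and the at-risk rule, function name, and toy pairs are assumptions.

```python
from scipy.stats import binomtest

def landmark_sign_test(pairs, t0):
    """pairs: list of ((t_trt, d_trt), (t_ctl, d_ctl)) matched observations.
    Keep pairs in which both members survive past the landmark t0 and the
    order of failures after t0 is unambiguous despite censoring; then apply
    an exact binomial sign test to the 'control fails first' count."""
    ctl_first = trt_first = 0
    for (tt, dt), (tc, dc) in pairs:
        if tt <= t0 or tc <= t0:          # not both beyond the landmark
            continue
        if dc == 1 and tt > tc:           # control failure clearly precedes treated time
            ctl_first += 1
        elif dt == 1 and tc > tt:         # treated failure clearly precedes control time
            trt_first += 1
        # otherwise censoring hides the order; drop the pair
    n = ctl_first + trt_first
    return binomtest(ctl_first, n, p=0.5) if n > 0 else None

# Toy matched pairs: ((treated time, event), (control time, event)).
pairs = [((30, 1), (22, 1)), ((40, 0), (28, 1)), ((35, 1), (45, 0)), ((18, 1), (50, 1))]
print(landmark_sign_test(pairs, t0=20))
```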

17.
This paper deals with a Cox proportional hazards regression model, where some covariates of interest are randomly right-censored. While methods for censored outcomes have become ubiquitous in the literature, methods for censored covariates have thus far received little attention and, for the most part, dealt with the issue of limit-of-detection. For randomly censored covariates, an often-used method is the inefficient complete-case analysis (CCA), which consists of deleting censored observations from the data analysis. When censoring is not completely independent, the CCA leads to biased and spurious results. Methods for missing covariate data, including type I and type II covariate censoring as well as limit-of-detection, do not readily apply due to the fundamentally different nature of randomly censored covariates. We develop a novel method for censored covariates using a conditional mean imputation based on either Kaplan–Meier estimates or a Cox proportional hazards model to estimate the effects of these covariates on a time-to-event outcome. We evaluate the performance of the proposed method through simulation studies and show that it provides good bias reduction and statistical efficiency. Finally, we illustrate the method using data from the Framingham Heart Study to assess the relationship between offspring and parental age of onset of cardiovascular events.
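Conditional mean imputation of a right-censored covariate replaces a value censored at c by an estimate of E[X | X > c]; below is a hedged sketch using a Kaplan-Meier estimate of the covariate's survival curve. Helper names and toy data are illustrative, and the largest observed value is treated as an event so that the integral terminates; this is a sketch of the idea, not the paper's implementation.

```python
import numpy as np

def km_survival(x, delta):
    """Kaplan-Meier survival curve of a right-censored covariate.
    Returns (value, S(value)) pairs at each distinct observed value."""
    x, delta = np.asarray(x, float), np.asarray(delta, int)
    s, curve = 1.0, []
    for v in np.unique(x[delta == 1]):
        s *= 1.0 - np.sum((x == v) & (delta == 1)) / np.sum(x >= v)
        curve.append((v, s))
    return curve

def conditional_mean_beyond(curve, c):
    """E[X | X > c] approximated as c + (integral of S(x) dx from c onward) / S(c),
    integrating the Kaplan-Meier step function."""
    s_c = 1.0
    for v, s in curve:
        if v <= c:
            s_c = s
    if s_c == 0:
        return c
    area, prev_x, prev_s = 0.0, c, s_c
    for v, s in curve:
        if v > c:
            area += prev_s * (v - prev_x)
            prev_x, prev_s = v, s
    return c + area / s_c

curve = km_survival([2.0, 3.5, 4.0, 6.0, 7.5], [1, 0, 1, 1, 1])
print(conditional_mean_beyond(curve, 3.5))   # imputed value for a covariate censored at 3.5
```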

18.
Wei Pan. Biometrics 2001, 57(4), 1245–1250.
Sun, Liao, and Pagano (1999) proposed an interesting estimating equation approach to Cox regression with doubly censored data. Here we point out that a modification of their proposal leads to a multiple imputation approach, where the double censoring is reduced to single censoring by imputing for the censored initiating times. For each imputed data set one can take advantage of many existing techniques and software for singly censored data. Under the general framework of multiple imputation, the proposed method is simple to implement and can accommodate modeling issues such as model checking, which has not been adequately discussed previously in the literature for doubly censored data. Here we illustrate our method with an application to a formal goodness-of-fit test and a graphical check for the proportional hazards model for doubly censored data. We reanalyze a well-known AIDS data set.
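Once imputation has reduced the double censoring to single censoring, the per-imputation fits are pooled in the usual multiple-imputation way; below is a minimal Rubin's-rules combiner, assuming the m singly censored data sets have already been analyzed with standard software. The toy numbers are illustrative.

```python
import numpy as np

def rubins_rules(estimates, variances):
    """Combine m point estimates and their within-imputation variances.
    Returns the pooled estimate and total variance W + (1 + 1/m) * B."""
    est = np.asarray(estimates, float)
    var = np.asarray(variances, float)
    m = len(est)
    qbar = est.mean()                    # pooled estimate
    w = var.mean()                       # average within-imputation variance
    b = est.var(ddof=1)                  # between-imputation variance
    return qbar, w + (1.0 + 1.0 / m) * b

# Toy example: log hazard ratios and variances from 5 imputed, singly censored fits.
print(rubins_rules([0.42, 0.39, 0.45, 0.40, 0.44], [0.010, 0.011, 0.009, 0.010, 0.012]))
```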

19.
A simple, closed-form jackknife estimate of the actual variance of the Mann-Whitney-Wilcoxon statistic, as opposed to the standard permutational variance under the test's null hypothesis, has been derived; it permits avoiding anticonservative performance in the presence of heteroscedasticity. The formulation given allows modifications of the exponential scores test, of the censored data tests by Gehan (1965), Peto & Peto (1977) and Prentice (1978), of tests for monotonic τ association by Kendall (1962), and of tests of ordered k-sample hypotheses. A Monte Carlo study supports recommendations for the jackknife procedures, but also shows their limited advantages in the exponential scores and censored data versions. Thus, the paper extends results by Fligner & Policello (1981).
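For orientation, a generic two-sample delete-one jackknife of the Mann-Whitney proportion is sketched below; it is not the closed-form expression derived in the paper, and the function names and toy samples are illustrative.

```python
import numpy as np

def mww_proportion(x, y):
    """Estimate of P(X > Y) + 0.5 * P(X = Y) from two samples."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    comp = (x[:, None] > y[None, :]) + 0.5 * (x[:, None] == y[None, :])
    return comp.mean()

def jackknife_variance(x, y):
    """Two-sample delete-one jackknife: delete each observation from its own
    sample in turn and combine the group-wise jackknife variance terms."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    m, n = len(x), len(y)
    rx = np.array([mww_proportion(np.delete(x, i), y) for i in range(m)])
    ry = np.array([mww_proportion(x, np.delete(y, j)) for j in range(n)])
    return ((m - 1) / m) * np.sum((rx - rx.mean()) ** 2) \
         + ((n - 1) / n) * np.sum((ry - ry.mean()) ** 2)

x = [7.1, 8.3, 6.9, 9.0, 7.7]
y = [6.0, 6.8, 7.2, 5.9]
print(mww_proportion(x, y), jackknife_variance(x, y))
```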

20.
K K Lan, J T Wittes. Biometrics 1985, 41(4), 1063–1069.
Several commonly used two-sample linear rank tests for censored data have been shown to be closely linked in mathematical structure. This pedagogical paper elucidates the relationships among the tests by introducing a videogame played between a boys' team and a girls' team, in which different rules of payment by the consecutive losers are considered. Depending on the rule of payment, the total amount of money the girls win as a team becomes the Wilcoxon statistic, the Savage statistic, or their generalizations under censoring.
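The statistics named here are members of the weighted log-rank family, a weighted sum of observed minus expected events at each failure time; the sketch below shows how the Gehan-Wilcoxon version (weight = number at risk) and the Savage/log-rank version (weight = 1) arise from the same loop. Function names and toy data are illustrative.

```python
import numpy as np

def weighted_logrank(times, events, group, weight="logrank"):
    """Numerator of a two-sample weighted log-rank statistic.

    times, events : follow-up times and event indicators (1 = event)
    group         : 1 for the 'girls' team, 0 for the 'boys' team
    weight        : 'logrank' (Savage scores) or 'gehan' (Wilcoxon scores)
    """
    times, events, group = map(np.asarray, (times, events, group))
    stat = 0.0
    for t in np.unique(times[events == 1]):
        at_risk = times >= t
        n_risk = at_risk.sum()
        n_risk1 = (at_risk & (group == 1)).sum()
        d = ((times == t) & (events == 1)).sum()
        d1 = ((times == t) & (events == 1) & (group == 1)).sum()
        w = n_risk if weight == "gehan" else 1.0
        stat += w * (d1 - d * n_risk1 / n_risk)     # weighted observed minus expected
    return stat

t = [3, 5, 6, 6, 8, 10, 12, 14]
e = [1, 1, 1, 0, 1, 1, 0, 1]
g = [1, 0, 1, 1, 0, 1, 0, 0]
print(weighted_logrank(t, e, g, "logrank"), weighted_logrank(t, e, g, "gehan"))
```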
