Similar documents
20 similar documents found (search time: 15 ms)
1.
In this paper, censored-data rank estimators of location are obtained from censored one-sample rank test statistics for the location parameter and censored two-sample rank test statistics for the shift in location. Methods for constructing censored small-sample confidence intervals and asymptotic confidence intervals for the location are also considered. The solutions generalize those obtained from uncensored one-sample and two-sample rank tests.
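The idea of deriving a location estimator from a rank test is easiest to see in the uncensored case: the Hodges-Lehmann estimators invert the one-sample Wilcoxon signed-rank test and the two-sample rank-sum test. A minimal uncensored sketch (the function names are ours, not from the paper):

```python
import itertools
import statistics

def hodges_lehmann(x):
    """Location estimate that inverts the one-sample Wilcoxon signed-rank
    test: the median of all Walsh averages (x_i + x_j) / 2, i <= j."""
    walsh = [(a + b) / 2 for a, b in itertools.combinations_with_replacement(x, 2)]
    return statistics.median(walsh)

def shift_estimate(x, y):
    """Shift estimate that inverts the two-sample Wilcoxon rank-sum test:
    the median of all pairwise differences y_j - x_i."""
    return statistics.median(b - a for a in x for b in y)

x = [1.1, 2.3, 1.9, 3.0, 2.2]
y = [2.6, 3.8, 3.1, 4.4, 3.5]
print(hodges_lehmann(x))    # one-sample location estimate
print(shift_estimate(x, y)) # two-sample shift estimate
```

The censored versions studied in the paper replace these rank statistics with their censored-data counterparts before inverting.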

2.
This paper discusses two-sample nonparametric comparison of survival functions when only interval-censored failure time data are available. The problem considered often occurs in, for example, biological and medical studies such as medical follow-up studies and clinical trials. For this problem, we present and study several nonparametric test procedures based on absolute, squared, and simple survival differences. The presented tests provide alternatives to existing methods, most of which are rank-based tests and not sensitive to nonproportional or nonmonotone alternatives. Simulation studies performed to evaluate and compare the proposed methods with existing ones suggest that the proposed tests work well for nonmonotone as well as monotone alternatives. An illustrative example is presented.

3.
This paper discusses two-sample comparison in the case of interval-censored failure time data. One common approach is to employ a nonparametric test procedure, which usually yields a p-value but not a direct or exact quantitative measure of the survival or treatment difference of interest. In particular, such procedures cannot provide a hazard ratio estimate, which is commonly used to measure the difference between two treatments or samples. For interval-censored data, a few nonparametric test procedures have been developed, but no procedure for hazard ratio estimation seems to exist. Correspondingly, we present two procedures for nonparametric estimation of the hazard ratio of two samples with interval-censored data. They are generalizations of the corresponding procedures for right-censored failure time data. An extensive simulation study conducted to evaluate the performance of the two procedures indicates that they work reasonably well in practice. For illustration, they are applied to a set of interval-censored data arising from a breast cancer study.
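For intuition, the right-censored analogue being generalized can be sketched as a crude observed-over-expected hazard ratio built from the logrank computation. This is an illustrative sketch only (function name and data layout are ours), not the authors' interval-censored procedure:

```python
def oe_hazard_ratio(t1, e1, t2, e2):
    """Crude hazard ratio (O1/E1)/(O2/E2) from logrank observed and
    expected event counts, for right-censored data given as parallel
    lists of times and event indicators (1 = event, 0 = censored)."""
    times = sorted({t for t, e in zip(t1 + t2, e1 + e2) if e})
    o1 = o2 = x1 = x2 = 0.0
    for t in times:
        n1 = sum(u >= t for u in t1)                    # at risk, sample 1
        n2 = sum(u >= t for u in t2)                    # at risk, sample 2
        d1 = sum(u == t and e for u, e in zip(t1, e1))  # events at t
        d2 = sum(u == t and e for u, e in zip(t2, e2))
        d, n = d1 + d2, n1 + n2
        o1 += d1
        o2 += d2
        x1 += d * n1 / n                                # expected under H0
        x2 += d * n2 / n
    return (o1 / x1) / (o2 / x2)
```

With identical samples the estimate is 1; longer survival in the first sample pushes it below 1.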

4.
Cui J. Biometrics 1999, 45(2) [sic: 55(2)]: 345-349
This paper proposes a nonparametric method for estimating a delay distribution based on left-censored and right-truncated data. A variance-covariance estimator is provided. The method is applied to the Australian AIDS data in which some data are left censored and some data are not left censored. This situation arises with AIDS case-reporting data in Australia because reporting delays were recorded only from November 1990 rather than from the beginning of the epidemic there. It is shown that inclusion of the left-censored data, as opposed to analyzing only the uncensored data, improves the precision of the estimate.

5.
A simple, closed-form jackknife estimate of the actual variance of the Mann-Whitney-Wilcoxon statistic, as opposed to the standard permutational variance under the test's null hypothesis, has been derived; it permits avoiding anticonservative performance in the presence of heteroscedasticity. The formulation given allows modifications of the exponential scores test, of censored data tests by Gehan (1965), Peto & Peto (1977) and Prentice (1978), of tests for monotonic τ association by Kendall (1962), and of tests of ordered k-sample hypotheses. A Monte Carlo study supports recommendations for the jackknife procedures, but also shows their limited advantages in the exponential scores and censored data versions. The paper thus extends results by Fligner & Policello (1981).
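A sketch of the delete-one jackknife idea for the variance of the Mann-Whitney estimate of P(X < Y); this pools leave-one-out replicates from each sample and is illustrative only, not the paper's closed-form formula:

```python
def mw_theta(x, y):
    """Mann-Whitney probability estimate of P(X < Y) + 0.5 * P(X == Y)."""
    m, n = len(x), len(y)
    u = sum((a < b) + 0.5 * (a == b) for a in x for b in y)
    return u / (m * n)

def jackknife_var(x, y):
    """Delete-one jackknife variance of the Mann-Whitney estimate,
    combining leave-one-out replicates from each sample separately."""
    m, n = len(x), len(y)
    tx = [mw_theta(x[:i] + x[i + 1:], y) for i in range(m)]
    ty = [mw_theta(x, y[:j] + y[j + 1:]) for j in range(n)]
    mx, my = sum(tx) / m, sum(ty) / n
    vx = (m - 1) / m * sum((t - mx) ** 2 for t in tx)
    vy = (n - 1) / n * sum((t - my) ** 2 for t in ty)
    return vx + vy

print(mw_theta([1, 3], [2, 4]), jackknife_var([1, 3], [2, 4]))
```

Unlike the permutational null variance, this estimate tracks the statistic's actual variance when the two samples have unequal spreads.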

6.
Accurately assessing a patient's risk of a given event is essential in making informed treatment decisions. One approach is to stratify patients into two or more distinct risk groups with respect to a specific outcome using both clinical and demographic variables. Outcomes may be categorical or continuous in nature; important examples in cancer studies include level of toxicity and time to recurrence. Recursive partitioning methods are ideal for building such risk groups. Two such methods are Classification and Regression Trees (CART) and a more recent competitor known as the partitioning Deletion/Substitution/Addition (partDSA) algorithm, both of which utilize loss functions (e.g., squared error for a continuous outcome) as the basis for building, selecting, and assessing predictors, but which differ in the manner in which regression trees are constructed. Recently, we have shown that partDSA often outperforms CART in so-called "full data" settings (e.g., uncensored outcomes). However, when confronted with censored outcome data, the loss functions used by both procedures must be modified. There have been several attempts to adapt CART for right-censored data. This article describes two such extensions for partDSA that make use of observed-data loss functions constructed using inverse probability of censoring weights. Such loss functions are consistent estimates of their uncensored counterparts provided that the corresponding censoring model is correctly specified. The relative performance of these new methods is evaluated via simulation studies and illustrated through an analysis of clinical trial data on brain cancer patients. The implementation of partDSA for uncensored and right-censored outcomes is publicly available in the R package partDSA.

7.
This paper deals with a Cox proportional hazards regression model in which some covariates of interest are randomly right-censored. While methods for censored outcomes have become ubiquitous in the literature, methods for censored covariates have thus far received little attention and, for the most part, have dealt with the issue of limit of detection. For randomly censored covariates, an often-used method is the inefficient complete-case analysis (CCA), which consists of deleting censored observations from the data analysis. When censoring is not completely independent, the CCA leads to biased and spurious results. Methods for missing covariate data, including those for type I and type II covariate censoring as well as limit of detection, do not readily apply due to the fundamentally different nature of randomly censored covariates. We develop a novel method for censored covariates using conditional mean imputation based on either Kaplan-Meier estimates or a Cox proportional hazards model to estimate the effects of these covariates on a time-to-event outcome. We evaluate the performance of the proposed method through simulation studies and show that it provides good bias reduction and statistical efficiency. Finally, we illustrate the method using data from the Framingham Heart Study to assess the relationship between offspring and parental age of onset of cardiovascular events.
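The Kaplan-Meier variant of conditional mean imputation can be sketched as follows: estimate the covariate's distribution by Kaplan-Meier and replace a right-censored value c with the conditional mean of the KM mass lying beyond c. A minimal sketch under that reading (helper names are ours):

```python
def km_masses(values, observed):
    """Kaplan-Meier point masses at observed (uncensored) values.
    `observed[i]` is 1 if values[i] is fully observed, 0 if censored.
    Any residual mass left by a censored tail is placed on the largest
    observation (redistribute-to-the-right convention)."""
    data = sorted(zip(values, observed))
    n = len(data)
    s_prev, at_risk, masses = 1.0, n, []
    i = 0
    while i < n:
        t = data[i][0]
        d = c = 0
        while i < n and data[i][0] == t:   # group ties at value t
            if data[i][1]:
                d += 1
            else:
                c += 1
            i += 1
        if d:
            s = s_prev * (1 - d / at_risk)
            masses.append((t, s_prev - s))
            s_prev = s
        at_risk -= d + c
    if s_prev > 1e-12:
        masses.append((data[-1][0], s_prev))
    return masses

def impute(c, masses):
    """Conditional mean E[X | X > c] under the KM distribution;
    falls back to c itself when no mass lies beyond it."""
    tail = [(v, p) for v, p in masses if v > c]
    total = sum(p for _, p in tail)
    if total <= 0:
        return c
    return sum(v * p for v, p in tail) / total

masses = km_masses([1, 2, 3, 4], [1, 1, 0, 1])
print(masses)
print(impute(3, masses))
```

Each censored covariate value is then replaced by its imputed conditional mean before fitting the outcome model.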

8.
Tests are introduced which are designed to test for a nondecreasing ordered alternative among the survival functions of k populations consisting of multiple observations on each subject. Some of the observations could be right censored. A simulation study is conducted comparing the proposed tests on the basis of estimated power when the underlying distributions are multivariate normal. Equal sample sizes of 20 with 25% censoring, and 40 with both 25% and 50% censoring are considered for 3 and 4 populations. All of the tests hold their α‐values well. A recommendation is made as to the best overall test for the situations considered.

9.
The sensitivity and specificity of markers for event times
The statistical literature on assessing the accuracy of risk factors or disease markers as diagnostic tests deals almost exclusively with settings where the test, Y, is measured concurrently with disease status D. In practice, however, disease status may vary over time and there is often a time lag between when the marker is measured and the occurrence of disease. One example concerns the Framingham risk score (FR-score) as a marker for the future risk of cardiovascular events, events that occur after the score is ascertained. To evaluate such a marker, one needs to take the time lag into account since the predictive accuracy may be higher when the marker is measured closer to the time of disease occurrence. We therefore consider inference for sensitivity and specificity functions that are defined as functions of time. Semiparametric regression models are proposed. Data from a cohort study are used to estimate model parameters. In this research, we extend in several respects the work by Leisenring et al. (1997) that dealt only with parametric models for binary tests and uncensored data. We propose semiparametric models that accommodate continuous tests and censoring. Asymptotic distribution theory for parameter estimates is developed and procedures for making statistical inference are evaluated with simulation studies. We illustrate our methods with data from the Cardiovascular Health Study, relating the FR-score measured at enrollment to subsequent risk of cardiovascular events.
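The time-dependent quantities being extended are, in the uncensored case, sensitivity(c, t) = P(Y > c | T <= t) and specificity(c, t) = P(Y <= c | T > t). A minimal empirical sketch of those uncensored definitions (illustrative only; it ignores censoring, which is the very complication the paper addresses):

```python
def sens_spec_at(markers, times, c, t):
    """Empirical time-dependent sensitivity P(Y > c | T <= t) and
    specificity P(Y <= c | T > t) from uncensored (marker, event-time)
    pairs. Returns (sensitivity, specificity); NaN if a group is empty."""
    cases = [y for y, u in zip(markers, times) if u <= t]   # events by t
    ctrls = [y for y, u in zip(markers, times) if u > t]    # event-free at t
    sens = sum(y > c for y in cases) / len(cases) if cases else float("nan")
    spec = sum(y <= c for y in ctrls) / len(ctrls) if ctrls else float("nan")
    return sens, spec
```

Sweeping the cutoff c at a fixed horizon t traces out a time-dependent ROC curve for the marker.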

10.
Weighted logrank testing procedures for comparing r treatments with a control when some of the data are randomly censored are discussed. Four kinds of test statistics for simple tree alternatives are considered. Weighted logrank statistics based on a pairwise ranking scheme are proposed, and the covariances of the test statistics are obtained explicitly. This class of test statistics can be viewed as general statistics from which test procedures for various order-restricted alternatives can be constructed by modifying the weights. The four kinds of weighted logrank tests are illustrated with an example. Simulation studies are performed to compare the sizes and powers of the considered tests with one another.

11.
In risk assessment and environmental monitoring studies, concentration measurements frequently fall below detection limits (DL) of measuring instruments, resulting in left-censored data. The principal approaches for handling censored data include the substitution-based method, maximum likelihood estimation, robust regression on order statistics, and Kaplan-Meier. In practice, censored data are substituted with an arbitrary value prior to use of traditional statistical methods. Although some studies have evaluated the substitution performance in estimating population characteristics, they have focused mainly on normally and lognormally distributed data that contain a single DL. We employ Monte Carlo simulations to assess the impact of substitution when estimating population parameters based on censored data containing multiple DLs. We also consider different distributional assumptions including lognormal, Weibull, and gamma. We show that the reliability of the estimates after substitution is highly sensitive to distributional characteristics such as mean, standard deviation, skewness, and also data characteristics such as censoring percentage. The results highlight that although the performance of the substitution-based method improves as the censoring percentage decreases, its performance still depends on the population's distributional characteristics. Practical implications that follow from our findings indicate that caution must be taken in using the substitution method when analyzing censored environmental data.
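The effect of substitution is easy to probe by simulation. A minimal sketch assuming lognormal data, two detection limits assigned at random, and a DL/2 substitution rule (the function, its defaults, and the DL assignment are our illustrative choices, not the paper's design):

```python
import math
import random

def substitution_bias(mu, sigma, dls, n=2000, reps=200, frac=0.5, seed=1):
    """Monte Carlo estimate of the bias of the sample mean after
    substituting frac * DL for values below detection. Each observation
    is paired with a DL drawn at random from `dls`, mimicking data
    containing multiple detection limits."""
    rng = random.Random(seed)
    true_mean = math.exp(mu + sigma ** 2 / 2)   # lognormal mean
    biases = []
    for _ in range(reps):
        est = []
        for _ in range(n):
            x = rng.lognormvariate(mu, sigma)
            dl = rng.choice(dls)
            est.append(x if x >= dl else frac * dl)
        biases.append(sum(est) / n - true_mean)
    return sum(biases) / reps

print(substitution_bias(0.0, 1.0, dls=[0.5, 1.0]))
```

Varying `frac`, `sigma`, and the DLs reproduces the qualitative point of the abstract: the bias of substitution depends on both the censoring percentage and the population's distributional characteristics.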

12.
Quantitative trait loci (QTL) are usually searched for using classical interval mapping methods which assume that the trait of interest follows a normal distribution. However, these methods cannot take into account features of most survival data such as a non-normal distribution and the presence of censored data. We propose two new QTL detection approaches which allow the consideration of censored data. One interval mapping method uses a Weibull model (W), which is popular in parametrical modelling of survival traits, and the other uses a Cox model (C), which avoids making any assumption on the trait distribution. Data were simulated following the structure of a published experiment. Using simulated data, we compare W, C and a classical interval mapping method using a Gaussian model on uncensored data (G) or on all data (G'=censored data analysed as though records were uncensored). An adequate mathematical transformation was used for all parametric methods (G, G' and W). When data were not censored, the four methods gave similar results. However, when some data were censored, the power of QTL detection and accuracy of QTL location and of estimation of QTL effects for G decreased considerably with censoring, particularly when censoring was at a fixed date. This decrease with censoring was observed also with G', but it was less severe. Censoring had a negligible effect on results obtained with the W and C methods.

13.
Hierarchical stimuli have proven effective for investigating principles of visual organization in humans. A large body of evidence suggests that the analysis of global forms precedes the analysis of local forms in our species. Studies on lateralization also indicate that analytic and holistic encoding strategies are separated between the two hemispheres of the brain. This raises the question of whether precedence effects may reflect the activation of lateralized functions within the brain. Non-human animals have perceptual organization and functional lateralization comparable to those of humans. Here we trained domestic chicks in a concurrent discrimination task involving hierarchical stimuli. We then evaluated the animals for analytic and holistic encoding strategies in a series of transformational tests relying on a monocular occlusion technique. A local precedence emerged in both the left and the right hemisphere, adding further evidence in favour of analytic processing in non-human animals.

14.
This paper discusses the application of randomization tests to censored survival distributions. The three types of censoring considered are those designated by MILLER (1981) as Type 1 (fixed time termination), Type 2 (termination of experiment at r-th failure), and random censoring. Examples utilize the Gehan scoring procedure. Randomization tests for which computer programs already exist can be applied to a variety of experimental designs, regardless of the presence of censored observations.
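A minimal sketch of Gehan scoring combined with a randomization test for two groups; the helper names and the Monte Carlo relabelling (rather than full enumeration) are our choices:

```python
import random

def gehan_scores(times, events):
    """Gehan scores under right censoring: for each subject, the number
    of subjects it definitely outlives minus the number that definitely
    outlive it (events[i] = 1 for an observed failure, 0 for censored)."""
    def gt(i, j):  # does subject i definitely outlive subject j?
        return events[j] == 1 and (times[i] > times[j]
                                   or (times[i] == times[j] and events[i] == 0))
    n = len(times)
    return [sum(gt(i, j) - gt(j, i) for j in range(n)) for i in range(n)]

def randomization_test(times, events, group, n_perm=2000, seed=7):
    """Two-sided randomization p-value for the Gehan statistic: the sum
    of scores in group 1, referenced to random relabellings of group
    membership."""
    scores = gehan_scores(times, events)
    idx = list(range(len(scores)))
    obs = sum(scores[i] for i in idx if group[i] == 1)
    n1 = sum(group)
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_perm):
        perm = rng.sample(idx, n1)
        if abs(sum(scores[i] for i in perm)) >= abs(obs):
            hits += 1
    return hits / n_perm
```

Because the scores are fixed once computed, the same relabelling machinery applies whether or not any observations are censored, which is the point the abstract makes.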

15.
Variance-component (VC) methods are flexible and powerful procedures for the mapping of genes that influence quantitative traits. However, traditional VC methods make the critical assumption that the quantitative-trait data within a family either follow or can be transformed to follow a multivariate normal distribution. Violation of the multivariate normality assumption can occur if trait data are censored at some threshold value. Trait censoring can arise in a variety of ways, including assay limitation or confounding due to medication. Valid linkage analyses of censored data require the development of a modified VC method that directly models the censoring event. Here, we present such a model, which we call the "tobit VC method." Using simulation studies, we compare and contrast the performance of the traditional and tobit VC methods for linkage analysis of censored trait data. For the simulation settings that we considered, our results suggest that (1) analyses of censored data by using the traditional VC method lead to severe bias in parameter estimates and a modest increase in false-positive linkage findings, (2) analyses with the tobit VC method lead to unbiased parameter estimates and type I error rates that reflect nominal levels, and (3) the tobit VC method has a modest increase in linkage power as compared with the traditional VC method. We also apply the tobit VC method to censored data from the Finland-United States Investigation of Non-Insulin-Dependent Diabetes Mellitus Genetics study and provide two examples in which the tobit VC method yields noticeably different results as compared with the traditional method.

16.
M S Pepe, T R Fleming. Biometrics 1989, 45(2): 497-507
A class of statistics based on the integrated weighted difference in Kaplan-Meier estimators is introduced for the two-sample censored data problem. With positive weight functions these statistics are intuitive for and sensitive against the alternative of stochastic ordering. The standard weighted log-rank statistics are not always sensitive against this alternative, particularly if the hazard functions cross. Qualitative comparisons are made between the weighted log-rank statistics and these weighted Kaplan-Meier (WKM) statistics. A statement of null asymptotic distribution theory is given and the choice of weight function is discussed in some detail. Results from small-sample simulation studies indicate that these statistics compare favorably with the log-rank procedure even under the proportional hazards alternative, and may perform better than it under the crossing hazards alternative.
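The WKM idea with unit weight can be sketched directly: estimate each sample's Kaplan-Meier curve and integrate their difference over [0, tau]. A minimal sketch (function names are ours; the paper's statistics use data-dependent weight functions and a standardization we omit):

```python
def km_curve(times, events):
    """Kaplan-Meier survival estimate as a right-continuous step
    function, returned as a list of (event_time, S(event_time))."""
    data = sorted(zip(times, events))
    s, at_risk, curve = 1.0, len(data), []
    i = 0
    while i < len(data):
        t = data[i][0]
        d = n_t = 0
        while i < len(data) and data[i][0] == t:
            d += data[i][1]
            n_t += 1
            i += 1
        if d:
            s *= 1 - d / at_risk
            curve.append((t, s))
        at_risk -= n_t
    return curve

def km_at(curve, t):
    """Evaluate the step function at time t."""
    s = 1.0
    for u, v in curve:
        if u <= t:
            s = v
        else:
            break
    return s

def wkm_statistic(t1, e1, t2, e2, tau):
    """Integrated difference of the two KM curves on [0, tau] with unit
    weight: the unstandardized core of a WKM statistic."""
    c1, c2 = km_curve(t1, e1), km_curve(t2, e2)
    grid = sorted({0.0, tau} | {t for t, _ in c1 + c2 if t < tau})
    total = 0.0
    for a, b in zip(grid, grid[1:]):
        total += (km_at(c1, a) - km_at(c2, a)) * (b - a)
    return total
```

Because the statistic accumulates the signed area between the curves, it stays sensitive to stochastic ordering even when the hazards cross.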

17.
Continuous time Markov chain (CTMC) models offer ethologists a powerful tool. The methods are based on well-established procedures for estimating the rates at which one state (e.g. resting) changes to some other set of states (e.g. feeding, fighting, etc.). Unfortunately, ethological data typically differ in a very critical manner from the type of data to which these methods are usually applied: ethological data are usually heavily censored in the sense that each behavioral state shows frequent transitions to several other possible states. This occurs when several competing processes can each end a bout.
We used computer simulation of various behavioral models with known transition rates to investigate the unknown performance of four of the most popular statistical tests for screening data prior to application of CTMC models; this included a modification of one of these tests derived under the assumption of random censoring. Two of the four tests failed completely and would result in rejection of nearly all data even if the model did fit the assumptions of the CTMC methods. Only Barlow's total-time-on test performed with an acceptable α error rate under all conditions. None of the tests were particularly effective at detecting certain types of departures from the CTMC assumptions.
Guidelines are given as to how much confidence should be attached to apparent changes in transition rates.
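The rate estimates underlying CTMC models can be sketched simply: the maximum-likelihood estimate of the rate from state i to state j is the number of observed i-to-j transitions divided by the total time spent in state i, with censored bouts contributing exposure time but no transition. A minimal sketch (the bout-record format is our assumption):

```python
from collections import defaultdict

def transition_rates(bouts):
    """MLE of CTMC transition rates from bout records
    (state, duration, next_state). next_state=None marks a censored
    bout: observation ended before the bout did, so it adds exposure
    time but no transition count. Returns {(i, j): rate}."""
    time_in = defaultdict(float)
    counts = defaultdict(int)
    for state, dur, nxt in bouts:
        time_in[state] += dur
        if nxt is not None:
            counts[(state, nxt)] += 1
    return {(i, j): c / time_in[i] for (i, j), c in counts.items()}

bouts = [
    ("rest", 2.0, "feed"), ("feed", 1.0, "rest"),
    ("rest", 3.0, "feed"), ("rest", 5.0, None),   # censored bout
]
print(transition_rates(bouts))
```

The screening tests discussed above ask whether data like these satisfy the CTMC assumptions (exponential bout durations, history-independent transitions) before such rates are interpreted.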

18.
MOTIVATION: A common task in microarray data analysis consists of identifying genes associated with a phenotype. When the outcomes of interest are censored time-to-event data, standard approaches assess the effect of genes by fitting univariate survival models. In this paper, we propose a Bayesian variable selection approach, which allows the identification of relevant markers by jointly assessing sets of genes. We consider accelerated failure time (AFT) models with log-normal and log-t distributional assumptions. A data augmentation approach is used to impute the failure times of censored observations and mixture priors are used for the regression coefficients to identify promising subsets of variables. The proposed method provides a unified procedure for the selection of relevant genes and the prediction of survivor functions. RESULTS: We demonstrate the performance of the method on simulated examples and on several microarray datasets. For the simulation study, we consider scenarios with large number of noisy variables and different degrees of correlation between the relevant and non-relevant (noisy) variables. We are able to identify the correct covariates and obtain good prediction of the survivor functions. For the microarray applications, some of our selected genes are known to be related to the diseases under study and a few are in agreement with findings from other researchers. AVAILABILITY: The Matlab code for implementing the Bayesian variable selection method may be obtained from the corresponding author. CONTACT: mvannucci@stat.tamu.edu SUPPLEMENTARY INFORMATION: Supplementary data are available at Bioinformatics online.

19.
In survival studies with families or geographical units it may be of interest to test whether such groups are homogeneous for given explanatory variables. In this paper we consider score-type tests for group homogeneity based on a mixing model in which the group effect is modelled as a random variable. As opposed to hazard-based frailty models, this model yields survival times that, conditioned on the random effect, have an accelerated failure time representation. The test statistic requires only estimation of the conventional regression model without the random effect and does not require specifying the distribution of the random effect. The tests are derived for a Weibull regression model, and in the uncensored situation a closed form is obtained for the test statistic. A simulation study is used to compare the power of the tests. The proposed tests are applied to real data sets with censored data.

20.
When comparing censored survival times for matched treated and control subjects, a late effect on survival is one that does not begin to appear until some time has passed. In a study of provider specialty in the treatment of ovarian cancer, a late divergence in the Kaplan-Meier survival curves hinted at superior survival among patients of gynecological oncologists, who employ chemotherapy less intensively, compared to patients of medical oncologists, who employ chemotherapy more intensively; we ask whether this late divergence should be taken seriously. Specifically, we develop exact permutation tests, and exact confidence intervals formed by inverting the tests, for late effects in matched pairs subject to random but heterogeneous censoring. Unlike other exact confidence intervals with censored data, the proposed intervals do not require knowledge of censoring times for patients who die. The exact distributions are consequences of two results about signs, signed ranks, and their conditional independence properties. One test, the late effects sign test, has the binomial distribution; the other, the late effects signed rank test, uses nonstandard ranks but nonetheless has the same exact distribution as Wilcoxon's signed rank test. A simulation shows that the late effects signed rank test has substantially more power to detect late effects than do conventional tests. The confidence statement provides information about both the timing and magnitude of late effects.
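A simplified reading of the late effects sign test can be sketched as follows: among matched pairs in which both members survive past a cutoff t0 and censoring leaves the ordering of survival unambiguous, count wins for the treated member and refer the count to a Binomial(n, 1/2) distribution. This is our illustrative simplification, not the authors' exact construction:

```python
from math import comb

def late_effects_sign_test(pairs, t0):
    """Sign test for a late effect after time t0 on matched pairs
    (t_treated, d_treated, t_control, d_control), with d = 1 for an
    observed death and d = 0 for censoring. Only pairs in which both
    members survive past t0 and the ordering beyond t0 is unambiguous
    contribute. Returns (treated_wins, informative_pairs, two-sided p)."""
    wins = n = 0
    for tt, dt, tc, dc in pairs:
        if tt <= t0 or tc <= t0:
            continue                  # pair not at risk past t0
        if tt > tc and dc == 1:       # treated definitely outlives control
            wins += 1
            n += 1
        elif tc > tt and dt == 1:     # control definitely outlives treated
            n += 1
        # otherwise censoring makes the comparison ambiguous: drop pair
    if n == 0:
        return 0, 0, 1.0
    tail = sum(comb(n, k) for k in range(min(wins, n - wins) + 1)) / 2 ** n
    return wins, n, min(1.0, 2 * tail)
```

Restricting attention to informative pairs past t0 is what lets the count keep an exact binomial null distribution despite heterogeneous censoring.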


Copyright © Beijing Qinyun Technology Development Co., Ltd. (京ICP备09084417号)