Retrieved 20 similar documents (search time: 11 ms)
1.
We introduce two test procedures for comparing two survival distributions on the basis of randomly right-censored data consisting of both paired and unpaired observations. Our procedures are based on generalizations of a pooled rank test statistic previously proposed for uncensored data. One generalization adapts the Prentice-Wilcoxon score, while the other adapts the Akritas score. The use of these particular scoring systems in pooled rank tests with randomly right-censored paired data has been advocated by several researchers. Our test procedures utilize the permutation distributions of the test statistics based on a novel manner of permuting the scores. Permutation versions of tests for right-censored paired data and for two independent right-censored samples that use the proposed scoring systems are obtained as special cases of our test procedures. Simulation results show that our test procedures have high power for detecting scale and location shifts in exponential and log-logistic distributions for the survival times. We also demonstrate the advantages of our test procedures in terms of utilizing randomly occurring unpaired observations that are discarded in test procedures for paired data. The tests are applied to skin graft data previously reported elsewhere.
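The core idea of a permutation rank test for censored data can be sketched in a few lines. The sketch below uses the simpler Gehan pairwise scores as a stand-in for the Prentice-Wilcoxon scores, and permutes group labels rather than using the paper's score-permutation scheme for mixed paired/unpaired data; the function names are illustrative, not from the article.

```python
import random

def gehan_score(ti, di, tj, dj):
    """Pairwise Gehan score: +1 if subject i is known to outlive j,
    -1 if known to die sooner, 0 if censoring makes the order ambiguous.
    (ti, di) is (observed time, event indicator); di == 0 means censored."""
    if tj < ti and dj == 1:   # j's event was observed before time ti
        return 1
    if ti < tj and di == 1:   # i's event was observed before time tj
        return -1
    return 0

def gehan_stat(times, events, group):
    """Sum of pairwise scores of group-1 subjects against group-0 subjects."""
    idx1 = [k for k, g in enumerate(group) if g == 1]
    idx0 = [k for k, g in enumerate(group) if g == 0]
    return sum(gehan_score(times[i], events[i], times[j], events[j])
               for i in idx1 for j in idx0)

def permutation_pvalue(times, events, group, n_perm=2000, seed=0):
    """Two-sided p-value from random relabelings of group membership."""
    rng = random.Random(seed)
    observed = abs(gehan_stat(times, events, group))
    labels = list(group)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(labels)
        if abs(gehan_stat(times, events, labels)) >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)
```

With all of group 1 surviving longer than all of group 0 and no censoring, every one of the 3 × 3 pairs scores +1, so the statistic is 9 and the permutation p-value is small.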
2.
Spatial scan statistics with Bernoulli and Poisson models are commonly used for geographical disease surveillance and cluster detection. These models, suitable for count data, were not designed for data with continuous outcomes. We propose a spatial scan statistic based on an exponential model to handle either uncensored or censored continuous survival data. The power and sensitivity of the developed model are investigated through intensive simulations. The method performs well for different survival distribution functions including the exponential, gamma, and log-normal distributions. We also present a method to adjust the analysis for covariates. The cluster detection method is illustrated using survival data for men diagnosed with prostate cancer in Connecticut from 1984 to 1995.
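The building block of such a scan statistic is a likelihood ratio for one candidate cluster. A minimal sketch for uncensored exponential survival times follows; the full method additionally scans over many candidate windows and calibrates the maximum by Monte Carlo, which is omitted here, and the function names are illustrative.

```python
import math

def exp_loglik(n, total_time):
    """Maximized exponential log-likelihood for n events over total_time
    person-time: the rate MLE is n / total_time."""
    return n * math.log(n / total_time) - n

def scan_llr(times_in, times_out):
    """Log-likelihood ratio comparing separate exponential rates inside and
    outside a candidate cluster against a single common rate."""
    n_in, s_in = len(times_in), sum(times_in)
    n_out, s_out = len(times_out), sum(times_out)
    return (exp_loglik(n_in, s_in) + exp_loglik(n_out, s_out)
            - exp_loglik(n_in + n_out, s_in + s_out))
```

Because the two-rate model nests the common-rate model, the ratio is always nonnegative, and it is exactly zero when the inside and outside rate estimates coincide.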
3.
4.
Sample size calculations for survival trials typically include an adjustment to account for the expected rate of noncompliance, or discontinuation from study medication. Existing sample size methods assume that when patients discontinue, they do so independently of their risk of an endpoint; that is, that noncompliance is noninformative. However, this assumption is not always true, as we illustrate using results from a published clinical trial database. In this article, we introduce a modified version of the method proposed by Lakatos (1988, Biometrics 44, 229-241) that can be used to calculate sample size under informative noncompliance. This method is based on the concept of two subpopulations: one with high rates of endpoint and discontinuation and another with low rates. Using this new method, we show that failure to consider the impact of informative noncompliance can lead to a considerably underpowered study. 相似文献
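The underpowering mechanism is easy to quantify with the standard Schoenfeld events approximation (general background, not the Lakatos-type method of the article): noncompliance attenuates the observed hazard ratio toward 1, and the required event count grows rapidly as a result. The dilution figures below are illustrative, not taken from the article.

```python
import math

Z_ALPHA = 1.959964  # two-sided 5% significance level
Z_BETA = 0.841621   # 80% power

def required_events(hazard_ratio, alloc=0.5):
    """Schoenfeld's approximation to the number of events needed to detect
    a given hazard ratio with allocation fraction alloc to one arm."""
    return (Z_ALPHA + Z_BETA) ** 2 / (
        alloc * (1 - alloc) * math.log(hazard_ratio) ** 2)
```

A trial powered for a true hazard ratio of 0.7 needs about 247 events; if noncompliance dilutes the observable effect to 0.8, the requirement more than doubles, which is the kind of shortfall the abstract warns about.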
5.
6.
7.
8.
The selection of a single method of analysis is problematic when the data could have been generated by one of several possible models. We examine the properties of two tests designed to have high power over a range of models. The first one, the maximum efficiency robust test (MERT), uses the linear combination of the optimal statistics for each model that maximizes the minimum efficiency. The second procedure, called the MX, uses the maximum of the optimal statistics. Both approaches yield efficiency robust procedures for survival analysis and ordinal categorical data. Guidelines for choosing between them are provided.
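For the special case of just two standardized optimal statistics with known correlation, the MERT has a simple closed form (the equally weighted sum, restandardized); the general case solves a maximin optimization that is not shown here. A sketch under that two-statistic assumption:

```python
import math

def mert(z1, z2, rho):
    """Two-statistic MERT: the equally weighted combination of two
    standardized statistics with correlation rho, scaled so that it has
    unit variance under the null (Var(z1 + z2) = 2 + 2*rho)."""
    return (z1 + z2) / math.sqrt(2.0 * (1.0 + rho))

def mx(z1, z2):
    """MX procedure: take the larger optimal statistic. Its null
    distribution must account for the maximization step."""
    return max(z1, z2)
```

Note the restandardization: when the two statistics coincide (rho = 1), the MERT reduces to the common statistic itself.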
9.
10.
11.
Tan WY. Biometrical Journal (Biometrische Zeitschrift) 1981, 23(5): 467-475
"Consider a two organs system for which one of the two organs must function for the individual to survive. This paper derives the survival probability distributions under more general situations by adopting [the] Markov process approach, thus providing extensions of results given in Gross, Clark and Liu (1971) and Kodlin (1967). The results obtained are then extended to a k-organs system."
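The simplest version of this setup, assuming the organs fail independently with exponential lifetimes (the paper's Markov approach covers more general, dependent situations), is a parallel system: the individual dies only when every organ has failed. A minimal sketch under that independence assumption:

```python
import math

def system_survival(t, rates):
    """Survival of a parallel system of independent exponential components:
    the system fails only when every component has failed, so
    S(t) = 1 - prod_i (1 - exp(-rate_i * t)).
    Passing k rates gives the k-organ extension directly."""
    prob_all_failed = 1.0
    for lam in rates:
        prob_all_failed *= 1.0 - math.exp(-lam * t)
    return 1.0 - prob_all_failed
```

For two unit-rate organs at t = ln 2, each organ survives with probability 1/2, so the system survives with probability 1 - (1/2)² = 3/4; adding organs can only raise survival.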
12.
13.
Confidence bands for a survival curve from censored data
14.
15.
Kaplan-Meier curves provide an effective means of presenting the distributional pattern in a sample of survival data. However, in order to assess the effect of a covariate, a standard scatterplot is often difficult to interpret because of the presence of censored observations. Several authors have proposed a running median as an effective way of indicating the effect of a covariate. This article proposes a form of kernel estimation, employing double smoothing, that can be applied in a simple and efficient manner to construct an estimator of a percentile of the survival distribution as a function of one or two covariates. Permutations and bootstrap samples can be used to construct reference bands that help identify whether particular features of the estimates indicate real features of the underlying curve or whether this may be due simply to random variation. The techniques are illustrated on data from a study of kidney transplant patients.
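The running-percentile idea can be illustrated with a single (Gaussian) kernel smooth that ignores censoring; the article's double-smoothing estimator, which also handles censored times, is more involved. Function and parameter names here are illustrative.

```python
import math

def running_percentile(x, y, x0, p=0.5, bandwidth=1.0):
    """Kernel-weighted percentile of the response y at covariate value x0:
    each observation gets a Gaussian weight by its distance from x0, and
    the weighted p-th quantile is read off the cumulative weights.
    (Censoring is ignored in this sketch.)"""
    weights = [math.exp(-0.5 * ((xi - x0) / bandwidth) ** 2) for xi in x]
    order = sorted(range(len(y)), key=lambda k: y[k])
    total = sum(weights)
    cum = 0.0
    for k in order:
        cum += weights[k]
        if cum >= p * total:
            return y[k]
    return y[order[-1]]
```

Sliding x0 across the covariate range traces out the running median (p = 0.5) or any other percentile curve.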
16.
Hougaard P. Biometrics 1999, 55(1): 13-22
Survival data stand out as a special statistical field. This paper describes what survival data are and what makes them so special. Survival data concern the times to certain events. A key point is the successive observation of time, which on the one hand leads to some times not being observed, so that all that is known is that they exceed some given times (censoring), and on the other hand implies that predictions regarding the future course should be conditional on the present status (truncation). In the simplest case, this condition is that the individual is alive. The successive conditioning makes the hazard function, which describes the probability of an event happening during a short interval given that the individual is alive today (or, more generally, able to experience the event), the most relevant concept. The standard distributions (normal, log-normal, gamma, inverse Gaussian, and so forth) can be made to account for censoring and truncation, but this is cumbersome. Besides, they often fit badly because they are either symmetric or right-skewed, whereas survival time distributions can easily be left-skewed positive variables. A few distributions satisfying these requirements are available, but often nonparametric methods are preferable, as they account better conceptually for truncation and censoring and give a better fit. Finally, we compare proportional hazards regression models with accelerated failure time models.
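The hazard function described above is the negative derivative of the log survival function, h(t) = -d/dt log S(t), which is easy to check numerically for a known S. A small sketch (the function names are illustrative):

```python
import math

def hazard(survival, t, eps=1e-6):
    """Numerical hazard h(t) = -d/dt log S(t): the instantaneous event
    rate at t conditional on being event-free just before t, computed by
    a central finite difference of log S."""
    return -(math.log(survival(t + eps))
             - math.log(survival(t - eps))) / (2 * eps)
```

For the exponential distribution S(t) = exp(-λt) the hazard is the constant λ; for the Weibull-type S(t) = exp(-t²) it is the increasing function 2t, which is the kind of shape a constant-hazard model cannot capture.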
17.
18.
19.
20.
A kernel method for incorporating information on disease progression in the analysis of survival
This paper considers incorporating information on disease progression in the analysis of survival. A three-state model is assumed, with the distribution of each transition estimated separately. The distribution of survival following progression can depend on the time of progression. Kernel methods are used to give consistent estimators under general forms of dependence. The estimators for the individual transitions are then combined into an overall estimator of the survival distribution. A test statistic for equality of survival between treatment groups is proposed based on the tests of Pepe & Fleming (1989, 1991). In simulations the kernel method successfully incorporated dependence on the time of progression in some reasonable settings, but under extreme forms of dependence the tests had substantial bias. If survival beyond progression can be predicted fairly accurately, then gains in power over standard methods that ignore progression can be substantial, but the gains are smaller when survival beyond progression is more variable. The methodology is illustrated with an application to a breast cancer clinical trial.
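The combination step has a closed form in the simplest three-state (illness-death) model with constant transition rates, where post-progression survival does not depend on the progression time; the article's kernel estimators are designed precisely to relax that Markov assumption. A sketch under constant rates:

```python
import math

def overall_survival(t, rate_prog, rate_death0, rate_death1):
    """Overall survival in a three-state illness-death model with constant
    transition rates: healthy -> progressed (rate_prog), healthy -> dead
    (rate_death0), progressed -> dead (rate_death1). Integrating over the
    progression time u in [0, t] gives a closed form."""
    a = rate_prog + rate_death0          # total rate of leaving "healthy"
    still_healthy = math.exp(-a * t)
    if abs(a - rate_death1) < 1e-12:     # degenerate case of equal rates
        progressed_alive = rate_prog * t * math.exp(-a * t)
    else:
        progressed_alive = (rate_prog
                            * (math.exp(-rate_death1 * t) - math.exp(-a * t))
                            / (a - rate_death1))
    return still_healthy + progressed_alive
```

The two terms correspond to the two ways of being alive at t: never having progressed, or having progressed at some time u and survived the remaining t - u.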