20 similar references retrieved.
1.
Summary. A simple approach to the estimation of the median residual lifetime is proposed for a single group by inverting a function of the Kaplan–Meier estimators. A test statistic is proposed to compare two median residual lifetimes at any fixed time point. The test statistic does not involve estimation of the underlying probability density function of failure times under censoring. Extensive simulation studies are performed to validate the proposed test statistic in terms of type I error probabilities and powers at various time points. One of the oldest data sets from the National Surgical Adjuvant Breast and Bowel Project (NSABP), which has more than a quarter century of follow-up, is used to illustrate the method. The analysis results indicate that, without systematic post-operative therapy, a significant difference in median residual lifetimes between node-negative and node-positive breast cancer patients persists for about 10 years after surgery. The new estimates of the median residual lifetime could serve as a baseline for physicians to explain any incremental effects of post-operative treatments in terms of delaying breast cancer recurrence or prolonging remaining lifetimes of breast cancer patients.
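To make the construction concrete: the estimated median residual lifetime at a landmark time t0 is the smallest s with S(t0 + s)/S(t0) <= 1/2, where S is the Kaplan–Meier estimate. The sketch below is a minimal numpy illustration on simulated data; the function names and toy data are assumptions of this note, not the authors' code, and it does not reproduce the proposed test statistic.

```python
import numpy as np

def kaplan_meier(time, event):
    """Distinct event times and the Kaplan-Meier survival estimate at those times."""
    event_times = np.unique(time[event == 1])
    surv, s = [], 1.0
    for t in event_times:
        at_risk = np.sum(time >= t)                 # still under observation just before t
        deaths = np.sum((time == t) & (event == 1))
        s *= 1.0 - deaths / at_risk
        surv.append(s)
    return event_times, np.array(surv)

def median_residual_life(time, event, t0):
    """Smallest s with S(t0 + s) / S(t0) <= 0.5, based on the Kaplan-Meier estimate."""
    times, surv = kaplan_meier(time, event)

    def S(u):                                       # right-continuous step function
        idx = np.searchsorted(times, u, side="right") - 1
        return 1.0 if idx < 0 else surv[idx]

    s0 = S(t0)
    for u in times[times > t0]:
        if S(u) / s0 <= 0.5:
            return u - t0
    return np.inf                                   # median residual life not reached

# toy example: exponential failure times, administrative censoring at t = 15
rng = np.random.default_rng(0)
t = rng.exponential(10.0, size=200)
obs, ev = np.minimum(t, 15.0), (t <= 15.0).astype(int)
print(median_residual_life(obs, ev, t0=2.0))
```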
2.
3.
We consider a regression model where the error term is assumed to follow a type of asymmetric Laplace distribution. We explore its use in the estimation of conditional quantiles of a continuous outcome variable given a set of covariates in the presence of random censoring, where censoring may depend on covariates. Estimation of the regression coefficients is carried out by maximizing a non-differentiable likelihood function. In the scenarios examined in a simulation study, the Laplace estimator showed correct coverage and shorter computation times than the alternative methods considered, some of which occasionally failed to converge. We illustrate the use of Laplace regression with an application to survival time in patients with small cell lung cancer.
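The link between the asymmetric Laplace likelihood and censored quantile regression can be sketched as follows: the p-th conditional quantile is modelled as a linear predictor, uncensored observations contribute the asymmetric Laplace density, and right-censored observations contribute its survival function. The code below is a minimal sketch under those assumptions (Yu–Moyeed parameterization, derivative-free optimization); it is not the authors' estimator and ignores their handling of covariate-dependent censoring.

```python
import numpy as np
from scipy.optimize import minimize

def censored_al_negloglik(params, y, delta, X, p):
    """Negative log-likelihood of an asymmetric Laplace quantile-regression model
    with right censoring. delta = 1 for observed events, 0 for censored times."""
    beta, log_sigma = params[:-1], params[-1]
    sigma = np.exp(log_sigma)
    u = (y - X @ beta) / sigma               # X @ beta is the p-th conditional quantile
    rho = u * (p - (u < 0))                  # check loss
    log_f = np.log(p * (1 - p) / sigma) - rho
    # survival function of the asymmetric Laplace
    S = np.where(u < 0, 1.0 - p * np.exp((1 - p) * u), (1 - p) * np.exp(-p * u))
    log_S = np.log(np.clip(S, 1e-300, None))
    return -np.sum(delta * log_f + (1 - delta) * log_S)

# toy data: one covariate, exponential survival, random censoring
rng = np.random.default_rng(1)
n = 300
x = rng.normal(size=n)
t = rng.exponential(np.exp(0.5 * x))
c = rng.exponential(2.0, size=n)
y, delta = np.minimum(t, c), (t <= c).astype(float)
X = np.column_stack([np.ones(n), x])

start = np.zeros(X.shape[1] + 1)
fit = minimize(censored_al_negloglik, start, args=(y, delta, X, 0.5),
               method="Nelder-Mead")         # derivative-free: objective is non-smooth
print(fit.x[:-1])                            # estimated median-regression coefficients
```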
4.
5.
6.
Hougaard P. Biometrics 1999;55(1):13-22.
Survival data stand out as a special statistical field. This paper describes what survival data are and what makes them so special. Survival data concern times to some events. A key point is the successive observation of time, which on the one hand leads to some times not being observed, so that all that is known is that they exceed some given times (censoring), and on the other hand implies that predictions regarding the future course should be conditional on the present status (truncation). In the simplest case, this condition is that the individual is alive. The successive conditioning makes the hazard function, which describes the probability of an event happening during a short interval given that the individual is alive today (or, more generally, able to experience the event), the most relevant concept. The standard distributions (normal, log-normal, gamma, inverse Gaussian, and so forth) can account for censoring and truncation, but this is cumbersome. Besides, they fit badly because they are either symmetric or right-skewed, whereas survival time distributions can easily be left-skewed positive variables. A few distributions satisfying these requirements are available, but nonparametric methods are often preferable as they account better conceptually for truncation and censoring and give a better fit. Finally, we compare proportional hazards regression models with accelerated failure time models.
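For reference, the hazard function discussed above has the standard definition (the notation here is generic, not taken from the paper):

```latex
h(t) = \lim_{\Delta t \downarrow 0} \frac{P(t \le T < t + \Delta t \mid T \ge t)}{\Delta t}
     = \frac{f(t)}{S(t)},
\qquad
S(t) = P(T > t) = \exp\!\left(-\int_0^t h(u)\,du\right),
```

so specifying the hazard is equivalent to specifying the survival function, and the conditioning on being event-free at the present time is built into the definition.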
7.
In multivariate failure time data analysis, a marginal regression modeling approach is often preferred to avoid assumptions on the dependence structure among correlated failure times. In this paper, a marginal mixed baseline hazards model is introduced. Estimating equations are proposed for the estimation of the marginal hazard ratio parameters. The proposed estimators are shown to be consistent and asymptotically Gaussian with a robust covariance matrix that can be consistently estimated. Simulation studies indicate the adequacy of the proposed methodology for practical sample sizes. The methodology is illustrated with a data set from the Framingham Heart Study.
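As a rough sketch of the kind of marginal model involved (notation assumed here, not taken from the paper), the hazard for failure type k in cluster i might take a Cox form in which some failure types share a baseline hazard while others retain their own:

```latex
\lambda_{ik}\big(t \mid Z_{ik}\big) = \lambda_{0,g(k)}(t)\,\exp\!\big(\beta^{\top} Z_{ik}(t)\big),
```

where g(k) assigns each failure type to a baseline-hazard group; the within-cluster dependence is left unspecified and is accounted for through the robust (sandwich) covariance estimate.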
8.
The assessment of overall homogeneity of time-to-event curves is a key element in survival analysis. Commonly used methods, e.g., the log-rank and Wilcoxon tests, can suffer a substantial loss of statistical power under certain circumstances. In this paper a new statistical testing approach is developed to compare the overall homogeneity of survival curves. The proposed method has greater power than the commonly used tests to detect overall differences between crossing survival curves. The small-sample performance of the new test is investigated under a variety of situations by means of Monte Carlo simulations. Furthermore, the applicability of the proposed testing approach is illustrated by a real data example from a kidney dialysis trial.
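For context, the log-rank statistic that serves as the standard comparator can be computed directly from the risk sets. The sketch below (own implementation on simulated two-group data, not the new test proposed in the paper) illustrates the quantity whose power degrades when survival curves cross.

```python
import numpy as np
from scipy.stats import chi2

def logrank_test(time, event, group):
    """Two-sample log-rank test; group is a 0/1 array."""
    o_minus_e, var = 0.0, 0.0
    for t in np.unique(time[event == 1]):
        at_risk = time >= t
        n = at_risk.sum()
        n1 = (at_risk & (group == 1)).sum()
        d = ((time == t) & (event == 1)).sum()
        d1 = ((time == t) & (event == 1) & (group == 1)).sum()
        o_minus_e += d1 - d * n1 / n                 # observed minus expected in group 1
        if n > 1:
            var += d * (n1 / n) * (1 - n1 / n) * (n - d) / (n - 1)
    stat = o_minus_e ** 2 / var
    return stat, chi2.sf(stat, df=1)

# simulated example: group 1 has longer survival
rng = np.random.default_rng(2)
g = np.repeat([0, 1], 100)
t = rng.exponential(np.where(g == 1, 15.0, 10.0))
c = rng.exponential(20.0, size=200)
obs, ev = np.minimum(t, c), (t <= c).astype(int)
print(logrank_test(obs, ev, g))
```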
9.
Rosenbaum PR. Biometrics 1999;55(2):560-564.
When a treatment has a dilated effect, with larger effects when responses are higher, there can be much less sensitivity to bias at upper quantiles than at lower quantiles; i.e., small, plausible hidden biases might explain the ostensible effect of the treatment for many subjects, and yet only quite large hidden biases could explain the effect on a few subjects having dramatically elevated responses. An example concerning kidney function of cadmium workers is discussed in detail. In that example, the treatment effect is far from additive: it is plausibly zero at the lower quartile of responses to control, and it is large and fairly insensitive to bias at the upper quartile.
10.
11.
This paper discusses the application of randomization tests to censored survival distributions. The three types of censoring considered are those designated by Miller (1981) as Type 1 (fixed-time termination), Type 2 (termination of the experiment at the r-th failure), and random censoring. Examples utilize the Gehan scoring procedure. Randomization tests for which computer programs already exist can be applied to a variety of experimental designs, regardless of the presence of censored observations.
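As an illustration of the kind of procedure described, a randomization test based on Gehan scores can be sketched as follows. This is an own implementation with simplified tie handling and Type 1 censoring in the toy data, not code taken from the paper or from an existing program.

```python
import numpy as np

def gehan_scores(time, event):
    """Gehan score for each subject: (# subjects it definitely outlives) minus
    (# subjects that definitely outlive it). Ties are handled in the simplest way."""
    t_col, d_col = time[:, None], event[:, None]
    outlives = (t_col > time) & (event == 1)      # subject i definitely outlives j
    outlived = (t_col < time) & (d_col == 1)      # subject j definitely outlives i
    return outlives.sum(axis=1) - outlived.sum(axis=1)

def gehan_randomization_test(time, event, group, n_perm=5000, seed=0):
    """Randomization test: sum of Gehan scores in group 1, with group labels permuted."""
    rng = np.random.default_rng(seed)
    scores = gehan_scores(time, event)
    observed = scores[group == 1].sum()
    n1 = int((group == 1).sum())
    perm = np.array([scores[rng.permutation(scores.size)[:n1]].sum()
                     for _ in range(n_perm)])
    return observed, np.mean(np.abs(perm) >= abs(observed))   # two-sided p-value

# toy example with Type 1 (fixed-time) censoring at t = 20
rng = np.random.default_rng(3)
g = np.repeat([0, 1], 50)
t = rng.exponential(np.where(g == 1, 12.0, 8.0))
obs, ev = np.minimum(t, 20.0), (t <= 20.0).astype(int)
print(gehan_randomization_test(obs, ev, g))
```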
12.
Partial residuals for the proportional hazards regression model
13.
14.
Summary. We present a novel semiparametric survival model with a log-linear median regression function. As a useful alternative to existing semiparametric models, our large model class has many important practical advantages, including interpretation of the regression parameters via the median and the ability to address heteroscedasticity. We demonstrate that our modeling technique facilitates prior elicitation and computation for both parametric and semiparametric Bayesian analysis of survival data. We illustrate the advantages of our modeling, as well as model diagnostics, via a reanalysis of a small-cell lung cancer study. Results of our simulation study provide further support for our model in practice.
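A hedged sketch of the model class being described (generic notation, not the paper's): the log survival time has a linear median,

```latex
\log T_i = x_i^{\top}\beta + \varepsilon_i,
\qquad \operatorname{median}(\varepsilon_i \mid x_i) = 0
\;\Longrightarrow\;
\operatorname{median}(T_i \mid x_i) = \exp\!\big(x_i^{\top}\beta\big),
```

with the error distribution otherwise left unspecified (the semiparametric part), so each coefficient acts multiplicatively on the median survival time and the error scale may vary with covariates.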
15.
16.
17.
We describe existing tests and introduce two new tests concerning the value of a survival function. These tests may be used to construct a confidence interval for the survival probability at a given time or for a quantile of the survival distribution. Simulation studies show that error rates can differ substantially from their nominal values, particularly at survival probabilities close to zero or one. We recommend our new constrained bootstrap test for its good overall performance.
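For context, one common baseline procedure combines the Kaplan–Meier estimate of S(t0) with Greenwood's variance and a log(-log) transformation. The sketch below is a generic implementation of that standard interval (not the constrained bootstrap test proposed here); its behaviour for survival probabilities near zero or one is the kind of issue the abstract refers to.

```python
import numpy as np
from scipy.stats import norm

def km_interval_at(time, event, t0, alpha=0.05):
    """Kaplan-Meier estimate of S(t0) with a Greenwood-based log(-log) interval."""
    s, var_sum = 1.0, 0.0
    for t in np.unique(time[(event == 1) & (time <= t0)]):
        n = np.sum(time >= t)
        d = np.sum((time == t) & (event == 1))
        s *= 1.0 - d / n
        var_sum += d / (n * (n - d))
    if s <= 0.0 or s >= 1.0 or var_sum == 0.0:
        return s, s, s                              # degenerate case: no usable interval
    z = norm.ppf(1 - alpha / 2)
    se = np.sqrt(var_sum) / abs(np.log(s))          # s.e. of log(-log S(t0))
    return s, s ** np.exp(z * se), s ** np.exp(-z * se)

# toy example: exponential failures, uniform censoring
rng = np.random.default_rng(4)
t = rng.exponential(10.0, size=150)
c = rng.uniform(5.0, 25.0, size=150)
obs, ev = np.minimum(t, c), (t <= c).astype(int)
print(km_interval_at(obs, ev, t0=8.0))              # estimate, lower, upper
```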
18.
19.
Additive hazards regression for case-cohort studies
20.
The field of survival analysis emerged in the 20th century and experienced tremendous growth during the latter half of the century. The developments in this field that have had the most profound impact on clinical trials are the Kaplan-Meier (1958, Journal of the American Statistical Association 53, 457-481) method for estimating the survival function, the log-rank statistic (Mantel, 1966, Cancer Chemotherapy Reports 50, 163-170) for comparing two survival distributions, and the Cox (1972, Journal of the Royal Statistical Society, Series B 34, 187-220) proportional hazards model for quantifying the effects of covariates on the survival time. The counting-process martingale theory pioneered by Aalen (1975, Statistical inference for a family of counting processes, Ph.D. dissertation, University of California, Berkeley) provides a unified framework for studying the small- and large-sample properties of survival analysis statistics. Significant progress has been achieved and further developments are expected in many other areas, including the accelerated failure time model, multivariate failure time data, interval-censored data, dependent censoring, dynamic treatment regimes and causal inference, joint modeling of failure time and longitudinal data, and Bayesian methods.