Similar Articles
20 similar articles found.
1.
This paper discusses the application of randomization tests to censored survival distributions. The three types of censoring considered are those designated by Miller (1981) as Type 1 (fixed time termination), Type 2 (termination of experiment at the r-th failure), and random censoring. Examples utilize the Gehan scoring procedure. Randomization tests for which computer programs already exist can be applied to a variety of experimental designs, regardless of the presence of censored observations.
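A minimal sketch, under assumptions, of the kind of test the paper describes: Gehan scores computed from right-censored two-sample data, with a randomization (permutation) P-value. The pairwise scoring convention below is the standard generalized-Wilcoxon one; the data, names, and permutation count are illustrative, not taken from the paper.

```python
import numpy as np

def gehan_scores(time, event):
    """U_i = #{j : i definitely outlived j} - #{j : j definitely outlived i}."""
    n = len(time)
    u = np.zeros(n)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            # i definitely outlived j: j's death was observed no later than
            # time[i] (ties count only when i is censored at the tied time).
            longer = event[j] == 1 and (
                time[i] > time[j] or (time[i] == time[j] and event[i] == 0)
            )
            shorter = event[i] == 1 and (
                time[j] > time[i] or (time[j] == time[i] and event[j] == 0)
            )
            u[i] += int(longer) - int(shorter)
    return u

def gehan_randomization_test(time, event, group, n_perm=5000, seed=0):
    """Two-sided randomization P-value for the sum of Gehan scores in group 1."""
    rng = np.random.default_rng(seed)
    u = gehan_scores(time, event)          # scores depend only on (time, event)
    obs = u[group == 1].sum()
    n1 = int((group == 1).sum())
    perms = np.array(
        [u[rng.permutation(len(u))[:n1]].sum() for _ in range(n_perm)]
    )
    return float(np.mean(np.abs(perms) >= abs(obs)))

# Toy example with Type 1 (fixed-time) censoring at t = 10 in the second arm.
time  = np.array([2, 4, 5, 7, 9, 3, 6, 8, 10, 10], dtype=float)
event = np.array([1, 1, 0, 1, 1, 1, 1, 1, 0, 0])
group = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])
print(gehan_randomization_test(time, event, group))
```

Because the Gehan scores are fixed functions of the pooled data, the same permutation machinery applies unchanged under Type 1, Type 2, or random censoring, which is the point the abstract makes about reusing existing randomization-test programs.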

2.
3.
We present a novel semiparametric survival model with a log-linear median regression function. As a useful alternative to existing semiparametric models, our large model class has many important practical advantages, including interpretation of the regression parameters via the median and the ability to address heteroscedasticity. We demonstrate that our modeling technique facilitates the ease of prior elicitation and computation for both parametric and semiparametric Bayesian analysis of survival data. We illustrate the advantages of our modeling, as well as model diagnostics, via a reanalysis of a small-cell lung cancer study. Results of our simulation study provide further support for our model in practice.
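For readers unfamiliar with median regression on the log scale, the defining relation (in generic notation assumed here, not quoted from the paper) makes the interpretation explicit: a one-unit increase in a covariate x_j multiplies the median survival time by exp(beta_j).

```latex
\[
  \log \operatorname{med}(T \mid \mathbf{x}) = \mathbf{x}^{\top}\boldsymbol{\beta}
  \quad\Longleftrightarrow\quad
  \operatorname{med}(T \mid \mathbf{x}) = \exp\!\bigl(\mathbf{x}^{\top}\boldsymbol{\beta}\bigr).
\]
```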

4.
A common testing problem for life table or survival data is to test the equality of two survival distributions when the data are both grouped and censored. Several tests proposed in the literature require various assumptions about the censoring distributions. It is shown that if these conditions are relaxed, the tests may no longer have their stated properties. The maximum likelihood test of equality when no assumptions are made about the marginal censoring distributions is derived. Its properties are found, and it is compared to the existing tests. Because no assumptions are required about the censoring distributions, the test is a useful initial testing procedure.

5.
In the analysis of survival data with parametric models, it is well known that the Weibull model is not suitable for cases where the hazard rate is non-monotonic. In such cases the log-logistic model is frequently used. However, because of its symmetry, the log-logistic model may fit poorly when the hazard rate is skewed or heavy-tailed. In this paper, we suggest a generalization of the log-logistic model obtained by introducing a shape parameter. The generalized model is then applied to the lung cancer data of Prentice (1973), and the results appear to improve on those obtained with the log-logistic model.
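The abstract does not spell out the generalization, so the form below is only a representative construction: a standard way to add a skewness-controlling shape parameter gamma to the log-logistic survival function, reducing to the ordinary log-logistic at gamma = 1 (a Burr XII-type family).

```latex
\[
  S(t) = \bigl[\,1 + (t/\alpha)^{\beta}\,\bigr]^{-\gamma}, \qquad
  h(t) = \frac{\gamma\,(\beta/\alpha)\,(t/\alpha)^{\beta-1}}{1 + (t/\alpha)^{\beta}},
  \qquad \alpha, \beta, \gamma > 0 .
\]
```

The extra parameter skews the hazard away from the symmetric log-logistic shape while keeping it non-monotonic for beta > 1.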

6.
The assessment of overall homogeneity of time-to-event curves is a key element in survival analysis. The commonly used methods, e.g., the log-rank and Wilcoxon tests, may suffer a significant loss of statistical power under certain circumstances. In this paper a new statistical testing approach is developed to compare the overall homogeneity of survival curves. The proposed method has greater power than the commonly used tests to detect overall differences between crossing survival curves. The small-sample performance of the new test is investigated under a variety of situations by means of Monte Carlo simulations. Furthermore, the applicability of the proposed testing approach is illustrated by a real data example from a kidney dialysis trial.
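A minimal sketch of the baseline comparison the abstract starts from: the standard log-rank test applied to two arms whose survival curves cross, which is exactly the situation where it loses power. Data are simulated for illustration; the `lifelines` package is assumed to be available.

```python
import numpy as np
from lifelines.statistics import logrank_test

rng = np.random.default_rng(1)
# Arm A mixes short and long survivors; arm B is intermediate throughout,
# so the two survival curves cross.
t_a = np.concatenate([rng.exponential(2.0, 50), rng.exponential(12.0, 50)])
t_b = rng.exponential(6.0, 100)
e_a = np.ones_like(t_a, dtype=int)   # all events observed, for simplicity
e_b = np.ones_like(t_b, dtype=int)

res = logrank_test(t_a, t_b, event_observed_A=e_a, event_observed_B=e_b)
print(res.p_value)  # often unimpressive despite a real distributional difference
```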

7.
Statistical procedures for assessing interventions or treatments from medical data often must contend with incomplete data, arising from dropout or from the inability to follow subjects up to the endpoint of interest. In this article we propose a nonparametric regression model based on censored data for investigating the simultaneous effects of two or more factors. Specifically, we assess the effect of a treatment (dose) and a covariate (e.g., age category) on the mean survival time of subjects assigned to combinations of the levels of these factors. The proposed method allows for varying levels of censoring in the outcome among different groups of subjects at different levels of the independent variables (factors). We derive the asymptotic distribution of the estimators of the model parameters, which then allows for statistical inference. Finally, through a simulation study we assess the effect of the censoring rates on the standard errors of these estimators.

8.
This paper discusses two-sample nonparametric comparison of survival functions when only interval-censored failure time data are available. The problem arises frequently in biological and medical studies such as medical follow-up studies and clinical trials. For this problem, we present and study several nonparametric test procedures, including methods based on absolute, squared, and simple survival differences. The presented tests provide alternatives to existing methods, most of which are rank-based and not sensitive to nonproportional or nonmonotone alternatives. Simulation studies performed to evaluate and compare the proposed methods with existing ones suggest that the proposed tests work well for nonmonotone as well as monotone alternatives. An illustrative example is presented.
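In generic notation (assumed here, not quoted from the paper), the three classes of statistics contrast the estimated survival functions directly over a weight measure G. The simple difference can cancel when the curves cross, whereas the absolute and squared versions accumulate the discrepancy, which is why the latter remain sensitive to nonmonotone alternatives.

```latex
\[
  U_{\mathrm{s}} = \int \bigl[\hat{S}_1(t) - \hat{S}_2(t)\bigr]\, dG(t), \qquad
  U_{\mathrm{a}} = \int \bigl|\hat{S}_1(t) - \hat{S}_2(t)\bigr|\, dG(t), \qquad
  U_{\mathrm{q}} = \int \bigl[\hat{S}_1(t) - \hat{S}_2(t)\bigr]^{2}\, dG(t).
\]
```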

9.
10.
A model is discussed for incorporating information from a time-dependent covariable (an intervening event), together with covariables independent of time, into the analysis of survival data. In the model, individuals are assumed to be potentially subject to two paths to failure, one including the intervening event and the other not. It is further assumed that the failure times associated with the two paths are independent and that the time to failure subsequent to the intervening event depends on the intervening event time. Allowing the underlying hazard rates to follow a Weibull form, use of the model and methods for fitting and hypothesis testing are illustrated with a follow-up study of industrial workers in which disability retirement was the intervening event. Extensions of the model to accommodate grouped survival data are presented.
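Schematically, and suppressing the stated dependence of the post-event failure time on the intervening event time, two independent latent failure times T_1 (failure without the intervening event) and T_2 (failure through it) with Weibull hazards give a factorized overall survivor function; the notation is assumed here, not the authors'.

```latex
\[
  S(t) = \Pr\bigl(\min(T_1, T_2) > t\bigr)
       = \exp\!\bigl[-(t/\lambda_1)^{k_1} - (t/\lambda_2)^{k_2}\bigr].
\]
```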

11.
The stratified Cox proportional hazards model is introduced to incorporate covariates and to accommodate a nonproportional treatment effect between two groups. Confidence interval estimators for the difference in median survival times of the two treatments under the stratified Cox model are then proposed: one is based on the baseline survival functions of the two groups, the other on their average survival functions. I illustrate the proposed methods with an example from a study of cancer of the mouth and throat conducted by the Radiation Therapy Oncology Group. Simulations investigate the small-sample properties of the proposed methods in terms of coverage rates.
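The authors' intervals are built from the stratified Cox model itself; as a hedged, model-free stand-in, the sketch below computes a percentile-bootstrap confidence interval for the difference in Kaplan-Meier median survival times of two arms. Data, names, and the bootstrap size are illustrative, and the medians are assumed to be reached before the censoring horizon.

```python
import numpy as np
from lifelines import KaplanMeierFitter

def km_median(t, e):
    """Kaplan-Meier estimate of the median survival time."""
    return KaplanMeierFitter().fit(t, e).median_survival_time_

def median_diff_ci(t1, e1, t2, e2, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap CI for median(arm 1) - median(arm 2)."""
    rng = np.random.default_rng(seed)
    diffs = []
    for _ in range(n_boot):
        i = rng.integers(0, len(t1), len(t1))   # resample arm 1 with replacement
        j = rng.integers(0, len(t2), len(t2))   # resample arm 2 with replacement
        diffs.append(km_median(t1[i], e1[i]) - km_median(t2[j], e2[j]))
    return tuple(np.percentile(diffs, [100 * alpha / 2, 100 * (1 - alpha / 2)]))

rng = np.random.default_rng(2)
t1, t2 = rng.exponential(10.0, 80), rng.exponential(6.0, 80)
e1 = (t1 < 15).astype(int); t1 = np.minimum(t1, 15.0)  # administrative censoring
e2 = (t2 < 15).astype(int); t2 = np.minimum(t2, 15.0)
print(median_diff_ci(t1, e1, t2, e2))
```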

12.
This article deals with the problem of comparing two populations with respect to the distribution of the gap time between two successive events when each subject can experience a series of events and when the event times are potentially right censored. Several families of nonparametric tests are developed, all of which allow arbitrary distributions and dependence structures for the serial events. The asymptotic and small-sample properties of the proposed tests are investigated. An illustration with data taken from a colon cancer study is provided. The related problem of testing the independence of two successive gap times is also studied.

13.
Cook, Gold, and Li (2007, Biometrics 63, 540–549) extended the Kulldorff (1997, Communications in Statistics 26, 1481–1496) scan statistic for spatial cluster detection to survival-type observations. Their approach was based on the score statistic, and they proposed a permutation distribution for the maximum of score tests. The score statistic makes it possible to apply the scan-statistic idea to models including explanatory variables. However, we show that the permutation distribution requires strong assumptions of independence between the potential cluster and both the censoring and the explanatory variables. In contrast, we present an approach based on the asymptotic distribution of the maximum of score statistics that does not require these assumptions.

14.
Genetic association studies often investigate the effect of haplotypes on an outcome of interest. Haplotypes are not observed directly, and this complicates the inclusion of such effects in survival models. We describe a new estimating-equations approach for Cox's regression model to assess haplotype effects for survival data. These estimating equations are simple to implement and avoid the use of the EM algorithm, which may be slow in the context of the semiparametric Cox model with incomplete covariate information. They also lead to easily computed, direct estimators of standard errors and thus overcome some of the difficulty in obtaining variance estimators based on the EM algorithm in this setting. We also develop an easily implemented goodness-of-fit procedure for Cox's regression model including haplotype effects. Finally, we apply the procedures presented in this article to investigate possible haplotype effects of the PAF-receptor on cardiovascular events in patients with coronary artery disease, and compare our results to those based on the EM algorithm.

15.
Regression trees allow one to search for meaningful explanatory variables that have a nonlinear impact on the dependent variable. They are often used when there are many covariates and one does not want to restrict attention to only a few of them. To grow a tree, at each stage one has to select a cutpoint for splitting a group into two subgroups. The basis for this choice is the maximum of the test statistics over the possible splits for each covariate; these maxima, or the resulting P-values, are compared as measures of importance. If covariates have different numbers of missing values, ties, or even different measurement scales, they lead to different numbers of tests, and covariates with more tests have a greater chance of achieving a smaller P-value unless an adjustment is made. This can lead to erroneous splits even when the P-values are inspected only informally. Theoretical work by Miller and Siegmund (1982) and Lausen and Schumacher (1992) gives an adjustment rule, but the asymptotics are based on a continuum of split points and may not yield a fair splitting rule when applied to smaller data sets or to covariates with only a few distinct values. Here we develop an approach that allows determination of P-values for any number of splits; the only approximation used is the normal approximation of the test statistics. The starting point for this investigation was a prospective study on the development of AIDS, which is presented here as the main application.
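The core issue lends itself to a short illustration: the best split of a covariate is a maximally selected statistic, so its P-value must account for how many cutpoints were tried. The sketch below gets the adjusted P-value from a permutation null rather than the paper's normal approximation; all names and data are illustrative.

```python
import numpy as np

def max_split_stat(x, y):
    """Max absolute two-sample t-like statistic over all cutpoints of x."""
    stats = []
    for c in np.unique(x)[:-1]:              # every possible cutpoint
        left, right = y[x <= c], y[x > c]
        if len(left) < 2 or len(right) < 2:  # skip degenerate splits
            continue
        se = np.sqrt(left.var(ddof=1) / len(left) + right.var(ddof=1) / len(right))
        stats.append(abs(left.mean() - right.mean()) / se)
    return max(stats)

def adjusted_split_pvalue(x, y, n_perm=2000, seed=0):
    """P-value of the best split, adjusted for the number of cutpoints tried."""
    rng = np.random.default_rng(seed)
    obs = max_split_stat(x, y)
    null = np.array([max_split_stat(x, rng.permutation(y)) for _ in range(n_perm)])
    return float(np.mean(null >= obs))

rng = np.random.default_rng(3)
x_many = rng.normal(size=100)                    # ~99 candidate cutpoints
x_few  = rng.integers(0, 3, 100).astype(float)   # only 2 candidate cutpoints
y = rng.normal(size=100)                         # no real effect in either case
# Unadjusted maxima would favor x_many by chance alone; the permutation
# null puts covariates with different numbers of splits on an equal footing.
print(adjusted_split_pvalue(x_many, y), adjusted_split_pvalue(x_few, y))
```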

16.
Case–cohort sampling is a commonly used and efficient method for studying large cohorts. Most existing methods of analysis for case–cohort data have concerned the analysis of univariate failure time data. However, clustered failure time data are commonly encountered in public health studies. For example, patients treated at the same center are unlikely to be independent. In this article, we consider methods based on estimating equations for case–cohort designs for clustered failure time data. We assume a marginal hazards model, with a common baseline hazard and common regression coefficient across clusters. The proposed estimators of the regression parameter and cumulative baseline hazard are shown to be consistent and asymptotically normal, and consistent estimators of the asymptotic covariance matrices are derived. The regression parameter estimator is easily computed using any standard Cox regression software that allows for offset terms. The proposed estimators are investigated in simulation studies, and demonstrated empirically to have increased efficiency relative to some existing methods. The proposed methods are applied to a study of mortality among Canadian dialysis patients.
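The abstract notes the estimator can be computed with any Cox software that allows offset terms; `lifelines` exposes weights rather than offsets, so the sketch below uses the closely related inverse-sampling-weight (Barlow-type) formulation as a stand-in for a single-cluster case. Cohort size, sampling fraction, and the simple weighting scheme are illustrative assumptions.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(4)
n, p_sub = 1000, 0.15
x = rng.normal(size=n)
t = rng.exponential(np.exp(-0.5 * x))   # true log hazard ratio for x is 0.5
c = rng.exponential(0.25, n)            # heavy independent censoring (~80%)
time = np.minimum(t, c)
event = (t <= c).astype(int)

# Case-cohort sample: a random subcohort plus all cases.
in_sub = rng.random(n) < p_sub
keep = in_sub | (event == 1)
w = np.where(event == 1, 1.0, 1.0 / p_sub)  # simple Barlow-type weights

df = pd.DataFrame({"time": time, "event": event, "x": x, "w": w})[keep]
cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event", weights_col="w", robust=True)
print(cph.params_["x"])   # should land near the true value 0.5
```

The robust (sandwich) variance is used because the sampling weights invalidate the naive model-based standard errors.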

17.
Fleming TR, Lin DY. Biometrics 2000, 56(4): 971-983.
The field of survival analysis emerged in the 20th century and experienced tremendous growth during the latter half of the century. The developments in this field that have had the most profound impact on clinical trials are the Kaplan-Meier (1958, Journal of the American Statistical Association 53, 457-481) method for estimating the survival function, the log-rank statistic (Mantel, 1966, Cancer Chemotherapy Report 50, 163-170) for comparing two survival distributions, and the Cox (1972, Journal of the Royal Statistical Society, Series B 34, 187-220) proportional hazards model for quantifying the effects of covariates on the survival time. The counting-process martingale theory pioneered by Aalen (1975, Statistical inference for a family of counting processes, Ph.D. dissertation, University of California, Berkeley) provides a unified framework for studying the small- and large-sample properties of survival analysis statistics. Significant progress has been achieved and further developments are expected in many other areas, including the accelerated failure time model, multivariate failure time data, interval-censored data, dependent censoring, dynamic treatment regimes and causal inference, joint modeling of failure time and longitudinal data, and Bayesian methods.

18.
Recent interest in cancer research focuses on predicting patients' survival by investigating gene expression profiles based on microarray analysis. We propose a doubly penalized Buckley–James method for the semiparametric accelerated failure time model to relate high-dimensional genomic data to censored survival outcomes, using the elastic-net penalty, a mixture of L1- and L2-norm penalties. As with the elastic-net method for a linear regression model with uncensored data, the proposed method performs automatic gene selection and parameter estimation, where highly correlated genes can be selected (or removed) together. The two-dimensional tuning parameter is determined by generalized cross-validation. The proposed method is evaluated by simulations and applied to the Michigan squamous cell lung carcinoma study.
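In generic notation (assumed here: the tilde-y_i stand for the Buckley–James-adjusted log survival times, and lambda_1, lambda_2 form the two-dimensional tuning parameter chosen by generalized cross-validation), the doubly penalized criterion combines the two norms as in the elastic net:

```latex
\[
  \hat{\boldsymbol{\beta}}
  = \arg\min_{\boldsymbol{\beta}}
    \sum_{i=1}^{n} \bigl(\tilde{y}_i - \mathbf{x}_i^{\top}\boldsymbol{\beta}\bigr)^{2}
    + \lambda_1 \lVert \boldsymbol{\beta} \rVert_1
    + \lambda_2 \lVert \boldsymbol{\beta} \rVert_2^{2}.
\]
```

The L1 term drives sparsity (gene selection), while the L2 term lets groups of highly correlated genes enter or leave together rather than arbitrarily picking one representative.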

19.
Fei Liu, David Dunson, Fei Zou. Biometrics 2011, 67(2): 504-512.
This article considers the problem of selecting predictors of time to an event from a high-dimensional set of candidate predictors using data from multiple studies. As an alternative to the current multistage testing approaches, we propose to model the study-to-study heterogeneity explicitly using a hierarchical model to borrow strength. Our method incorporates censored data through an accelerated failure time model. Using a carefully formulated prior specification, we develop a fast approach to predictor selection and shrinkage estimation for high-dimensional predictors. For model fitting, we develop a Monte Carlo expectation maximization (MC-EM) algorithm to accommodate censored data. The proposed approach, which is related to the relevance vector machine (RVM), relies on maximum a posteriori estimation to rapidly obtain a sparse estimate. As for the typical RVM, there is an intrinsic thresholding property in which unimportant predictors tend to have their coefficients shrunk to zero. We compare our method with some commonly used procedures through simulation studies. We also illustrate the method using the gene expression barcode data from three breast cancer studies.

20.