Similar articles (20 results found)
1.
Clustered interval-censored failure time data occur when the failure times of interest are clustered into small groups and known only to lie in certain intervals. A number of methods have been proposed for regression analysis of clustered failure time data, but most of them apply only to clustered right-censored data. In this paper, a sieve estimation procedure is proposed for fitting a Cox frailty model to clustered interval-censored failure time data. In particular, a two-step algorithm for parameter estimation is developed and the asymptotic properties of the resulting sieve maximum likelihood estimators are established. The finite sample properties of the proposed estimators are investigated through a simulation study, and the method is illustrated with data from a lymphatic filariasis study.
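
To make the ingredients concrete, here is a minimal Python sketch (not the authors' two-step sieve algorithm) of the marginal log-likelihood of one cluster under a Cox model with a shared gamma frailty, with the frailty integrated out by generalized Gauss-Laguerre quadrature; the callable `Lam`, the frailty variance `theta`, and the Weibull baseline in the usage lines are illustrative assumptions standing in for the sieve-estimated cumulative baseline hazard.

```python
import numpy as np
from scipy.special import roots_genlaguerre, gammaln

def cluster_loglik(beta, Lam, X, L, R, theta=1.0, n_nodes=40):
    """Log marginal likelihood of one cluster whose members' failure times
    are known only to lie in (L_j, R_j] (R_j = np.inf for right censoring).
    A shared gamma frailty b (mean 1, variance theta) multiplies the Cox
    hazard and is integrated out by generalized Gauss-Laguerre quadrature."""
    k = 1.0 / theta
    u, w = roots_genlaguerre(n_nodes, k - 1.0)   # weight u^(k-1) e^(-u)
    b = u / k                                    # frailty values at the nodes
    eta = np.exp(X @ beta)                       # member-level relative risks
    # P(T_j in (L_j, R_j] | b) = exp(-b Lam(L_j) eta_j) - exp(-b Lam(R_j) eta_j)
    p = np.exp(-np.outer(Lam(L) * eta, b)) - np.exp(-np.outer(Lam(R) * eta, b))
    return np.log(w @ np.prod(p, axis=0)) - gammaln(k)

# toy usage with a Weibull stand-in for the sieve-estimated baseline:
Lam = lambda t: (np.asarray(t, dtype=float) / 2.0) ** 1.5
X = np.array([[0.2], [1.0]]); L = np.array([0.5, 1.0]); R = np.array([2.0, np.inf])
print(cluster_loglik(np.array([0.3]), Lam, X, L, R))
```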

2.
Statistical analysis of longitudinal data often involves modeling treatment effects on clinically relevant longitudinal biomarkers measured since an initial event (the time origin). In some studies, including preventive HIV vaccine efficacy trials, some participants have biomarkers measured starting at the time origin, whereas others have biomarkers measured starting later, with the time origin unknown. The semiparametric additive time-varying coefficient model is investigated, in which the effects of some covariates vary nonparametrically with time while the effects of others remain constant. Weighted profile least squares estimators coupled with kernel smoothing are developed. The method uses the expectation-maximization approach to deal with the censored time origin. The Kaplan–Meier estimator and failure time regression models such as the Cox model can be used to estimate the distribution, and the conditional distribution, of the left-censored event time defining the censored time origin. Asymptotic properties of the parametric and nonparametric estimators and consistent asymptotic variance estimators are derived. A two-stage estimation procedure for choosing the weights is proposed to improve estimation efficiency. Numerical simulations are conducted to examine finite sample properties of the proposed estimators and show that the theory and methods work well. The efficiency gain of the two-stage procedure depends on the distribution of the longitudinal error processes. The method is applied to data from the Merck 023/HVTN 502 Step HIV vaccine study.
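
The varying-coefficient part can be illustrated with a small sketch: one kernel-weighted least squares step that estimates alpha(t0) with the constant coefficients held fixed, as inside a profile procedure. This is a simplified fragment only; it assumes fully observed time origins and omits the weighting, the EM step for censored origins, and the two-stage refinement. The names `Z1`, `Z2`, and `h` are illustrative.

```python
import numpy as np

def local_coef(t0, t, y, Z1, Z2, beta, h):
    """Kernel-weighted least squares estimate of the time-varying coefficient
    alpha(t0) in y = alpha(t)'Z1 + beta'Z2 + error, with the constant
    coefficients beta held fixed (one inner step of a profile procedure).
    Epanechnikov kernel with bandwidth h; t, y are pooled over subjects."""
    u = (t - t0) / h
    w = np.where(np.abs(u) <= 1.0, 0.75 * (1.0 - u ** 2), 0.0)
    resid = y - Z2 @ beta                      # remove the parametric part
    A = Z1.T @ (w[:, None] * Z1)               # kernel-weighted normal equations
    return np.linalg.solve(A, Z1.T @ (w * resid))
```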

3.
Recurrent event data analyses are usually conducted under the assumption that the censoring time is independent of the recurrent event process. In many applications, however, the censoring time can be informative about the underlying recurrent event process, especially when a correlated failure event could terminate the observation of recurrent events. In this article, we consider a semiparametric model for recurrent event data that allows correlation between censoring times and the recurrent event process via a frailty. This flexible framework incorporates both time-dependent and time-independent covariates while leaving the distributions of the frailty and the censoring times unspecified. We propose a novel semiparametric inference procedure that depends on neither the frailty nor the censoring time distribution. Large sample properties of the regression parameter estimates and the estimated baseline cumulative intensity functions are studied. Numerical studies demonstrate that the proposed methodology performs well for realistic sample sizes. An analysis of hospitalization data for patients in an AIDS cohort study illustrates the proposed method.
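
A small simulation sketch makes the dependence structure concrete: a shared frailty both inflates the recurrent-event rate and shortens follow-up, which is exactly the informative censoring the model accommodates. The distributions and parameter values below are arbitrary illustrations, and the inference procedure itself is not reproduced.

```python
import numpy as np
rng = np.random.default_rng(7)

def simulate_subject(beta, x):
    """One subject under a shared-frailty mechanism: the same frailty z
    inflates the recurrent-event rate and shortens the censoring time, so
    censoring is informative about the event process.  Given z, events form
    a homogeneous Poisson process with rate z * exp(x'beta) on [0, c]."""
    z = rng.gamma(2.0, 0.5)                    # frailty, mean 1
    c = rng.exponential(5.0 / z)               # censoring time, shorter when z is large
    n = rng.poisson(z * np.exp(x @ beta) * c)  # number of events on [0, c]
    return np.sort(rng.uniform(0.0, c, n)), c  # event times given the count
```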

4.
As biological studies become more expensive to conduct, statistical methods that take advantage of existing auxiliary information about an expensive exposure variable are desirable in practice. Such methods should improve study efficiency and increase statistical power for a given number of assays. In this article, we consider an inference procedure for multivariate failure time data with auxiliary covariate information. We propose an estimated pseudo-partial likelihood estimator under the marginal hazard model framework and develop the asymptotic properties of the proposed estimator. We conduct simulation studies to evaluate the performance of the proposed method in practical situations and demonstrate it with a dataset from the Studies of Left Ventricular Dysfunction (SOLVD Investigators, 1991, New England Journal of Medicine 325, 293-302).
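
The "estimated" ingredient of such a pseudo-partial likelihood can be sketched as follows: for subjects whose expensive covariate X is unobserved, its induced relative risk is replaced by a kernel estimate of E[exp(beta'X) | W] computed from the validation subsample. This assumes a scalar auxiliary covariate `W` and a Gaussian kernel with bandwidth `h`; it is a sketch of the general idea, not the authors' exact estimator.

```python
import numpy as np

def induced_risk(beta, w0, X_val, W_val, h):
    """Nadaraya-Watson estimate of E[exp(beta'X) | W = w0] from the validation
    subsample (where X is observed), used in place of exp(beta'X) for subjects
    with only the auxiliary covariate W."""
    k = np.exp(-0.5 * ((W_val - w0) / h) ** 2)   # Gaussian kernel weights
    return k @ np.exp(X_val @ beta) / k.sum()
```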

5.
How to select the active variables that have significant impact on the event of interest is an important problem in the statistical analysis of ultrahigh-dimensional data. In many applications, researchers know from previous investigations and experience that a certain set of covariates is active. Making use of this prior knowledge, we propose a model-free conditional screening procedure for ultrahigh-dimensional survival data based on conditional distance correlation. The proposed procedure can effectively detect hidden active variables that are jointly important but weakly correlated with the response, and it performs well when covariates are strongly correlated with each other. We establish the sure screening property and the ranking consistency of the proposed method and conduct extensive simulation studies, which suggest that the procedure works well in practical situations. We then illustrate the new approach with a real dataset from a diffuse large-B-cell lymphoma study.
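
For concreteness, the unconditional building block, the sample distance correlation, can be computed as below; the screening statistic in the abstract is a conditional version given the known active covariates, which this sketch does not reproduce.

```python
import numpy as np

def dcor(x, y):
    """Sample distance correlation of two scalar samples (Szekely et al.):
    a correlation-like measure that is zero only under independence, which
    is what lets the screening be model-free."""
    def dc(a):                                  # double-centered distance matrix
        d = np.abs(a[:, None] - a[None, :])
        return d - d.mean(axis=0) - d.mean(axis=1)[:, None] + d.mean()
    A, B = dc(x), dc(y)
    v = np.sqrt((A * A).mean() * (B * B).mean())
    return np.sqrt(max((A * B).mean(), 0.0) / v) if v > 0 else 0.0
```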

6.
An iterative procedure for correcting stage-frequency data is described to allow for situations where the period during which a population is sampled begins after some individuals have entered stage 2 or ends before all individuals are dead. The reason for correcting data in this way is to enable Kiritani and Nakasuji's method for estimating stage-specific survival rates, with the extensions proposed by Manly (1976, 1977), to be used to analyse the data. The proposed procedure is illustrated on data obtained by sampling a population of the grasshopper Chorthippus brunneus passing through four instar stages to reach the adult stage.
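
A hedged sketch of the downstream computation: assuming the standard area-ratio form of the Kiritani-Nakasuji estimator, stage-specific survival is the ratio of areas under successive "stage j and older" frequency curves, which is valid only when sampling covers the whole period; that requirement is what the correction procedure restores.

```python
import numpy as np

def stage_survival(times, counts):
    """Kiritani-Nakasuji-type stage-specific survival estimates:
    counts[i, j] = number of individuals in stage j at sampling time times[i].
    Survival through stage j is estimated as the ratio of the areas under the
    'stage >= j+1' and 'stage >= j' frequency curves."""
    older = np.cumsum(counts[:, ::-1], axis=1)[:, ::-1]   # in stage j or later
    areas = np.trapz(older, times, axis=0)                # trapezoid areas over time
    return areas[1:] / areas[:-1]
```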

7.
For the analysis of ultrahigh-dimensional data, the first step is often to perform screening and feature selection to effectively reduce the dimensionality while retaining all the active or relevant variables with high probability. Many methods have been developed for this under various frameworks, but most of them apply only to complete data. In this paper, we consider an incomplete data situation, case II interval-censored failure time data, for which no screening procedure seems to exist. Based on the idea of cumulative residuals, a model-free or nonparametric method is developed and shown to have the sure independent screening property. In particular, the approach tends to rank the active variables above the inactive ones in terms of their association with the failure time of interest. A simulation study demonstrates the usefulness of the proposed method and, in particular, indicates that it works well with general survival models and is capable of capturing nonlinear covariate effects with interactions. The approach is also applied to the childhood cancer survivor study that motivated this investigation.
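
Whatever the exact utility, the screening step has a simple generic shape, sketched below; `utility` is a placeholder for the paper's cumulative-residual statistic, which is not reproduced here.

```python
import numpy as np

def screen(utility, X, L, R, d):
    """Generic sure-independence-screening wrapper for case II interval-
    censored responses: rank covariates by a marginal, model-free utility
    computed from (X_k, L, R), where (L, R] is each subject's observation
    interval, and keep the top d covariates."""
    omega = np.array([utility(X[:, k], L, R) for k in range(X.shape[1])])
    return np.argsort(omega)[::-1][:d]          # indices of the top-d covariates
```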

8.
Sun L, Kim YJ, Sun J (2004). Biometrics 60(3), 637-643.
Doubly censored failure time data arise when the survival time of interest is the elapsed time between two related events and observations on the occurrences of both events may be censored. Regression analysis of doubly censored data has recently attracted considerable attention, and a few methods have been proposed for it (Kim et al., 1993, Biometrics 49, 13-22; Sun et al., 1999, Biometrics 55, 909-914; Pan, 2001, Biometrics 57, 1245-1250). However, all of these methods are based on the proportional hazards model, and it is well known that the proportional hazards model may not always fit failure time data well. This article investigates regression analysis of such data using the additive hazards model, and an estimating equation approach is proposed for inference about the regression parameters of interest. The proposed method can be easily implemented, and the properties of the proposed estimators are established. The method is applied to a set of doubly censored data from an AIDS cohort study.
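
For reference, the right-censored additive hazards estimator of Lin and Ying, the building block that the proposed estimating equations extend to doubly censored data, has a closed form; the sketch below is that classical estimator, not the paper's doubly censored version.

```python
import numpy as np

def lin_ying(time, event, Z):
    """Closed-form estimating-equation estimator for the additive hazards
    model lambda(t|Z) = lambda0(t) + beta'Z with right-censored data:
    beta_hat = A^{-1} b, where A integrates the at-risk covariate
    dispersion over time and b sums centered covariates at event times."""
    order = np.argsort(time)
    time, event, Z = time[order], event[order].astype(bool), Z[order]
    n, p = Z.shape
    A, b, prev = np.zeros((p, p)), np.zeros(p), 0.0
    for i in range(n):                          # risk set on (prev, time[i]] is i..n-1
        D = Z[i:] - Z[i:].mean(axis=0)
        A += (D.T @ D) * (time[i] - prev)
        if event[i]:
            b += Z[i] - Z[i:].mean(axis=0)
        prev = time[i]
    return np.linalg.solve(A, b)
```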

9.
This paper discusses two-sample comparison in the case of interval-censored failure time data. A common approach to the problem is to employ nonparametric test procedures, which usually give p-values but not a direct or exact quantitative measure of the survival or treatment difference of interest. In particular, these procedures cannot provide a hazard ratio estimate, which is commonly used to measure the difference between two treatments or samples. A few nonparametric test procedures have been developed for interval-censored data, but no procedure seems to exist for hazard ratio estimation. Correspondingly, we present two procedures for nonparametric estimation of the hazard ratio of two samples with interval-censored data. They are generalizations of the corresponding procedures for right-censored failure time data. An extensive simulation study evaluates the performance of the two procedures and indicates that they work reasonably well in practice. For illustration, they are applied to a set of interval-censored data arising from a breast cancer study.
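
As a point of reference, the right-censored analogue that such procedures generalize can be sketched directly: under proportional hazards, the ratio of the two samples' Nelson-Aalen cumulative hazards at a time `tau` estimates the hazard ratio. The interval-censored generalization is not reproduced here.

```python
import numpy as np

def nelson_aalen_at(time, event, tau):
    """Nelson-Aalen cumulative hazard at tau; `event` is a boolean array."""
    ts = np.unique(time[event & (time <= tau)])
    return sum(((time == s) & event).sum() / (time >= s).sum() for s in ts)

def hazard_ratio(time1, event1, time2, event2, tau):
    """Crude nonparametric hazard-ratio estimate: the ratio of the two
    samples' cumulative hazards at tau, consistent for the hazard ratio
    under proportional hazards (tau must lie inside both supports)."""
    return nelson_aalen_at(time1, event1, tau) / nelson_aalen_at(time2, event2, tau)
```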

10.
Case-cohort sampling is a commonly used and efficient method for studying large cohorts. Most existing methods of analysis for case-cohort data have concerned univariate failure time data. However, clustered failure time data are commonly encountered in public health studies; for example, patients treated at the same center are unlikely to be independent. In this article, we consider methods based on estimating equations for case-cohort designs with clustered failure time data. We assume a marginal hazards model with a common baseline hazard and common regression coefficients across clusters. The proposed estimators of the regression parameters and the cumulative baseline hazard are shown to be consistent and asymptotically normal, and consistent estimators of the asymptotic covariance matrices are derived. The regression parameter estimator is easily computed using any standard Cox regression software that allows for offset terms. The proposed estimators are investigated in simulation studies and demonstrated empirically to have increased efficiency relative to some existing methods. The proposed methods are applied to a study of mortality among Canadian dialysis patients.
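
The offset trick can be sketched explicitly: with `log_w` holding log inverse-sampling weights (an assumed weighting scheme: 0 for cases, log(1/p) for subcohort controls sampled with probability p), the pseudo partial likelihood below is what "Cox software with offset terms" maximizes. The clustered marginal-model aspects and robust variance estimation are omitted.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import logsumexp

def neg_pseudo_loglik(beta, time, event, Z, log_w):
    """Case-cohort pseudo partial log-likelihood with an offset: log_w
    enters the linear predictor unpenalized, exactly as an offset term does
    in standard Cox regression software."""
    eta = Z @ beta + log_w
    ll = sum(eta[i] - logsumexp(eta[time >= time[i]]) for i in np.where(event)[0])
    return -ll

# beta_hat = minimize(neg_pseudo_loglik, np.zeros(Z.shape[1]),
#                     args=(time, event, Z, log_w), method="BFGS").x
```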

11.
Conflict analysis has been used as an important tool in economic, business, governmental, and political disputes, games, management negotiations, and military operations. Many formal mathematical models have been proposed to handle conflict situations, and one of the most popular is rough set theory. Owing to its ability to handle vagueness in conflict data, rough set theory has been used successfully, but computational time remains an issue when determining the certainty, coverage, and strength of conflict situations. In this paper, we present an alternative approach to handling conflict situations based on soft set theory. The novelty of the proposed approach is that, unlike rough set theory, which uses decision rules, it is based on the concept of co-occurrence of parameters in soft set theory. We illustrate the proposed approach with a tutorial example of voting analysis in conflict situations and further apply it to a real-world dataset of political conflict in the Indonesian Parliament. We show that the proposed approach reduces computational time by up to 3.9% compared with rough set theory.
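
One plausible reading of the co-occurrence idea, offered as an assumption rather than the paper's exact measures: in the tabular (binary-valued) representation of a soft set, co-occurrence of two parameters is a single matrix product, avoiding the rule induction of rough-set analysis.

```python
import numpy as np

def cooccurrence(F):
    """F[i, e] = 1 if agent i supports parameter (issue) e in the tabular
    representation of a soft set.  C[e, f] counts agents supporting both
    issues; dividing by each issue's support gives certainty-style ratios.
    Assumes every issue has at least one supporter."""
    C = F.T @ F                                  # pairwise co-occurrence counts
    support = np.diag(C).astype(float)           # agents supporting each issue
    return C, C / support[:, None]               # counts and certainty-style ratios
```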

12.
Recently, there has been increased interest in isotopic labeling of peptides. Although several techniques allow complete labeling of all carboxyl groups in peptides, regioselective labeling would be beneficial in many situations. Such labeling requires the use of 18O-labeled Fmoc amino acids. We have designed a method for such labeling that improves on a technique proposed earlier. The new procedure is suitable for microscale synthesis and could be used in peptide and proteomics laboratories. Although our method gives good labeling efficiency for the majority of tested amino acids, it is time consuming. We therefore adopted a microwave-assisted procedure, which reduced the reaction time to 15 min and increased the reaction efficiency.

13.
The nested case-control (NCC) design is a popular sampling method in large epidemiological studies because of its cost-effectiveness in investigating the temporal relationship of diseases with environmental exposures or biological precursors. Thomas' maximum partial likelihood estimator is commonly used to estimate the regression parameters in Cox's model for NCC data. In this article, we consider a situation in which failure/censoring information and some crude covariates are available for the entire cohort in addition to the NCC data, and we propose an improved estimator that is asymptotically more efficient than Thomas' estimator. We adopt a projection approach that, heretofore, has only been employed under random validation sampling, and we show that it can be adapted to NCC designs, where the sampling scheme is a dynamic process and is not independent across controls. Under certain conditions, consistency and asymptotic normality of the proposed estimator are established, and a consistent variance estimator is developed. Furthermore, a simplified approximate estimator is proposed for when the disease is rare. Extensive simulations are conducted to evaluate the finite sample performance of the proposed estimators and to compare their efficiency with Thomas' estimator and other competing estimators. Sensitivity analyses demonstrate the behavior of the proposed estimator when model assumptions are violated and show that the biases are reasonably small in realistic situations. We further demonstrate the proposed method with data from studies on Wilms' tumor.
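
The baseline being improved upon, Thomas' partial likelihood, has a simple conditional-logistic form, sketched below; the paper's projection-based efficiency gain is not reproduced. The names `sets` and `p` are illustrative.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import logsumexp

def thomas_neg_loglik(beta, sets):
    """Thomas' partial likelihood for nested case-control data: each element
    of `sets` is (x_case, X_controls), the covariates of a case and of the
    controls sampled from its risk set; the case competes only against its
    own sampled set."""
    ll = 0.0
    for x_case, X_ctrl in sets:
        eta = np.vstack([x_case, X_ctrl]) @ beta   # case first, then controls
        ll += eta[0] - logsumexp(eta)
    return -ll

# beta_hat = minimize(thomas_neg_loglik, np.zeros(p), args=(sets,), method="BFGS").x
```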

14.
Censored quantile regression models, which offer great flexibility in assessing covariate effects on event times, have attracted considerable research interest. In this study, we consider flexible estimation and inference procedures for competing risks quantile regression, which not only provides meaningful interpretations through cumulative incidence quantiles but also extends the conventional accelerated failure time model by relaxing some of its stringent assumptions, such as global linearity and unconditional independence. Current methods for censored quantile regression often involve minimizing an L1-type convex function or solving nonsmooth estimating equations, which can lead to multiple roots in practical settings, particularly with multiple covariates. Moreover, variance estimation involves an unknown error distribution, and most methods rely on computationally intensive resampling techniques such as bootstrapping. We extend the induced smoothing procedure for censored quantile regression to the competing risks setting. The proposed procedure permits fast and accurate computation of the quantile regression parameter estimates and their variances using conventional numerical methods such as the Newton-Raphson algorithm. Numerical studies show that the proposed estimators perform well and that the resulting inference is reliable in practical settings. The method is applied to data from a soft tissue sarcoma study.
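
The computational core can be sketched on the uncensored building block: induced smoothing replaces the indicator I(y <= x'b) by a normal cdf, making the estimating equation differentiable so Newton-Raphson applies. The sketch fixes the smoothing matrix `Gamma` for brevity, whereas the full procedure updates it with the current covariance estimate and handles censoring and competing risks.

```python
import numpy as np
from scipy.stats import norm

def smoothed_qr(y, X, tau, iters=30):
    """Induced-smoothed estimating equation for uncensored quantile
    regression: I(y <= x'b) is replaced by Phi((x'b - y)/r_i), where r_i
    depends on a smoothing matrix Gamma, so the equation is smooth in b."""
    n, p = X.shape
    beta = np.linalg.lstsq(X, y, rcond=None)[0]   # least-squares start
    Gamma = np.eye(p) / n                         # fixed smoothing matrix (simplified)
    for _ in range(iters):
        r = np.sqrt(np.einsum('ij,jk,ik->i', X, Gamma, X))
        u = (X @ beta - y) / r
        S = X.T @ (norm.cdf(u) - tau) / n         # smoothed estimating function
        H = (X * (norm.pdf(u) / r)[:, None]).T @ X / n   # its derivative
        beta = beta - np.linalg.solve(H, S)       # Newton-Raphson step
    return beta
```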

15.
An accelerated failure time (AFT) model assuming a log-linear relationship between failure time and a set of covariates can be either parametric or semiparametric, depending on the distributional assumption for the error term. Both classes of AFT models have been popular in the analysis of censored failure time data. The semiparametric AFT model is more flexible and robust to departures from the distributional assumption than its parametric counterpart, but it is subject to producing biased results when estimating any quantity involving an intercept. Estimating the intercept requires a separate procedure, and consistent estimation of the intercept requires stringent conditions. Thus, essential quantities such as mean failure times might not be reliably estimated using semiparametric AFT models, whereas this can be done naturally in the framework of parametric AFT models. Meanwhile, parametric AFT models can be severely impaired by misspecification. To overcome this, we propose a new type of AFT model using a nonparametric Gaussian-scale mixture distribution and provide feasible algorithms to estimate the parameters and the mixing distribution. The finite sample properties of the proposed estimators are investigated via an extensive simulation study, and the estimators are illustrated using a real dataset.
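
A simplified sketch of the mixture idea: EM updates for the mixing weights of a Gaussian scale mixture fitted to (uncensored) AFT residuals over a fixed grid of scales `sigmas`. The paper's method estimates the mixing distribution nonparametrically and handles censoring, which would add survival terms to the E-step; both are omitted here.

```python
import numpy as np
from scipy.stats import norm

def fit_mixing_weights(resid, sigmas, iters=300):
    """EM for the weights pi_k of the scale mixture sum_k pi_k N(0, sigma_k^2)
    fitted to AFT residuals resid = log(T) - x'beta, with the component
    scales held fixed on a grid."""
    dens = norm.pdf(resid[:, None], scale=sigmas[None, :])  # (n, K) densities
    pi = np.full(len(sigmas), 1.0 / len(sigmas))
    for _ in range(iters):
        post = pi * dens                         # E-step: unnormalized posteriors
        post /= post.sum(axis=1, keepdims=True)
        pi = post.mean(axis=0)                   # M-step: updated weights
    return pi
```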

16.
The analysis of failure times in the presence of competing risks.
Distinct problems in the analysis of failure times with competing causes of failure include the estimation of treatment or exposure effects on specific failure types, the study of interrelations among failure types, and the estimation of failure rates for some causes given the removal of certain other failure types. The usual formulation of these problems is in terms of conceptual or latent failure times for each failure type. This approach is criticized on the basis of unwarranted assumptions, lack of physical interpretation, and identifiability problems. An alternative approach utilizing cause-specific hazard functions for observable quantities, including time-dependent covariates, is proposed. Cause-specific hazard functions are shown to be the basic estimable quantities in the competing risks framework. A method involving the estimation of parameters that relate time-dependent risk indicators for some causes to cause-specific hazard functions for other causes is proposed for the study of interrelations among failure types. Further, it is argued that the problem of estimating failure rates under the removal of certain causes is not well posed until a mechanism for cause removal is specified. Following such a specification, one will sometimes be in a position to make sensible extrapolations from available data to situations involving cause removal. A clinical program in bone marrow transplantation for leukemia provides a setting for discussion and illustration of each of these ideas. Treating censoring in a survivorship study as a failure cause leads to further discussion.
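
Since cause-specific hazards are the basic estimable quantities, the cumulative incidence of each cause can be built from their increments; the sketch below is the standard nonparametric estimator for untied data, offered as an illustration of the observable-quantities viewpoint rather than the paper's regression methods.

```python
import numpy as np

def cumulative_incidence(time, cause):
    """Cumulative incidence functions from cause-specific hazard increments
    (cause = 0 codes censoring; assumes untied event times):
    CIF_k(t) = sum over event times s <= t of S(s-) * dLambda_k(s)."""
    order = np.argsort(time)
    time, cause = time[order], cause[order]
    n, causes = len(time), np.unique(cause[cause > 0])
    surv, cif = 1.0, {k: np.zeros(n) for k in causes}
    for i in range(n):
        at_risk = n - i
        for k in causes:                         # carry each curve forward
            cif[k][i] = cif[k][i - 1] if i > 0 else 0.0
        if cause[i] > 0:
            cif[cause[i]][i] += surv / at_risk   # S(t-) * dLambda_k(t)
            surv *= 1.0 - 1.0 / at_risk          # overall survival update
    return time, cif
```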

17.
Bondell HD, Reich BJ (2008). Biometrics 64(1), 115-123.
Variable selection can be challenging, particularly in situations with a large number of predictors with possibly high correlations, such as gene expression data. In this article, a new method called the OSCAR (octagonal shrinkage and clustering algorithm for regression) is proposed to simultaneously select variables and group them into predictive clusters. In addition to improving prediction accuracy and interpretation, the resulting groups can be investigated further to discover what contributes to their similar behavior. The technique is based on penalized least squares with a geometrically intuitive penalty function that shrinks some coefficients to exactly zero. Additionally, this penalty yields exact equality of some coefficients, encouraging correlated predictors that have a similar effect on the response to form predictive clusters represented by a single coefficient. The proposed procedure is shown to compare favorably to existing shrinkage and variable selection techniques in terms of both prediction error and model complexity, while yielding the additional grouping information.
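
The penalty itself is easy to state in code. The sketch below writes the OSCAR objective directly and notes a derivative-free minimizer for small problems; this is an illustration of the criterion, not the authors' optimization algorithm.

```python
import numpy as np
from scipy.optimize import minimize

def oscar_objective(beta, X, y, lam, c):
    """Least squares plus the OSCAR penalty
    lam * ( sum_j |b_j| + c * sum_{j<k} max(|b_j|, |b_k|) ):
    the L1 part shrinks coefficients to exactly zero, while the pairwise
    max terms are kinked wherever |b_j| = |b_k|, which is what pulls
    correlated predictors into groups with equal coefficients."""
    a = np.abs(beta)
    pen = a.sum() + c * np.triu(np.maximum.outer(a, a), k=1).sum()
    return 0.5 * np.sum((y - X @ beta) ** 2) + lam * pen

# The objective is convex but nonsmooth, so for small problems a
# derivative-free search can stand in for a dedicated algorithm:
# beta_hat = minimize(oscar_objective, np.zeros(X.shape[1]),
#                     args=(X, y, 1.0, 0.5), method="Nelder-Mead").x
```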

18.
Kim YJ (2006). Biometrics 62(2), 458-464.
In doubly censored failure time data, the survival time of interest is defined as the elapsed time between an initial event and a subsequent event, and the occurrences of both events cannot be observed exactly. Instead, only right- or interval-censored observations on the occurrence times are available. For the analysis of such data, a number of methods have been proposed under the assumption that the survival time of interest is independent of the occurrence time of the initial event. This article investigates a different situation, where the independence may not hold, with the focus on regression analysis of doubly censored data. Cox frailty models are applied to describe the effects of covariates, and an EM algorithm is developed for estimation. Simulation studies investigate the finite sample properties of the proposed method, and an illustrative example from an acquired immune deficiency syndrome (AIDS) cohort study is provided.
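
One workhorse inside such an EM algorithm can be shown in isolation: with a gamma frailty of mean 1 and variance `theta`, the posterior mean of the frailty given an observation's event count and accumulated covariate-weighted cumulative hazard is available in closed form. This is the generic E-step ingredient only; the handling of the doubly censored times is not reproduced.

```python
def frailty_posterior_mean(theta, n_events, cum_hazard):
    """E-step of a classical gamma-frailty EM: the posterior of the frailty
    given the data is again gamma, with shape 1/theta + n_events and rate
    1/theta + cum_hazard, hence this posterior mean."""
    k = 1.0 / theta
    return (k + n_events) / (k + cum_hazard)
```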

19.
A method is proposed for reconstructing the time and age dependence of incidence rates from successive age-prevalence cross sections taken from sentinel surveys of irreversible diseases when there is an important difference in mortality between the infected and susceptible subpopulations. The prevalence information at different time-age points is used to generate a surface; the time-age variations along the life-line profiles of this surface, together with the difference in mortality rates, are used to reconstruct the time and age dependence of the incidence rate. Past attempts were based on specified parametric forms for the incidence or on the hypothesis of time-invariant forms for the age-prevalence cross sections. The proposed method makes no such assumptions and is thus capable of coping with rapidly evolving prevalence situations. In the simulations carried out, it is found to be resilient to substantial random noise added to a prescribed incidence rate input. The method is also tested on a real dataset of successive HIV age-prevalence cross sections from Burundi coupled with differential mortality data on HIV(+) and HIV(-) individuals. The often-made assumption that the incidence rate can be written as the product of a calendar time component and an age component is also examined; in this case, a pooling procedure is proposed to estimate the time and age profiles of the incidence rate using the reconstructed incidence rates at all time-age points.
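
The core reconstruction step can be sketched under a standard irreversible-disease model: along a cohort life line, prevalence p satisfies p' = lambda(1 - p) - dmu * p(1 - p), where dmu is the excess mortality of the infected, so the incidence lambda can be solved for pointwise. This is our reading of the life-line idea, not the authors' exact smoothing scheme.

```python
import numpy as np

def incidence_along_lifeline(ages, prev, dmu):
    """Incidence rate along one cohort life line of the time-age prevalence
    surface.  For an irreversible condition with excess mortality
    dmu = mu_infected - mu_susceptible, prevalence p along the life line
    satisfies p' = lambda*(1 - p) - dmu*p*(1 - p), hence
    lambda = p'/(1 - p) + dmu*p."""
    dp = np.gradient(prev, ages)                 # numerical derivative p'
    return dp / (1.0 - prev) + dmu * prev
```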

20.
Land-cover characteristics have been considered in many ecological studies, and methods to identify them from remotely sensed time series data have previously been proposed. However, these methods often have a purely mathematical basis, and more effort is required to clarify the ecological meaning of the identified land-cover characteristics. In this study, a method for identifying these characteristics is proposed from the ecological perspective of sustained vegetation growth trends. Parameter extraction is also improved, inspired by a method used for determining the hyperspectral red edge position. Five land-cover types were chosen to represent various ecosystem growth patterns, and MODIS time series data were adopted for the analysis. The results show that the extracted parameters can reflect ecosystem growth patterns and portray ecosystem traits such as vegetation growth strategy and growth conditions.
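
One plausible reading of the red-edge-inspired parameter extraction, offered as an assumption rather than the paper's exact procedure: locate the date of steepest vegetation-index increase as the maximum of the first derivative of the smoothed time series, just as the red edge position is the wavelength of maximum spectral slope.

```python
import numpy as np

def steepest_growth_date(doy, ndvi):
    """Phenological parameter in the spirit of the red-edge-position idea:
    the day of year with the steepest increase of a (smoothed) vegetation-
    index time series, found as the maximum of its first derivative."""
    return doy[np.argmax(np.gradient(ndvi, doy))]
```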
