Similar Articles
20 similar articles found (search time: 0 ms)
1.
Cure models are used in time-to-event analysis when not all individuals are expected to experience the event of interest, or when the survival of the considered individuals reaches the same level as the general population. These scenarios correspond to a plateau in the survival function and in the relative survival function, respectively. The main parameters of interest in cure models are the proportion of individuals who are cured, termed the cure proportion, and the survival function of the uncured individuals. Although numerous cure models have been proposed in the statistical literature, there is no consensus on how to formulate these. We introduce a general parametric formulation of mixture cure models and a new class of cure models, termed latent cure (LC) models, together with a general estimation framework and software, which enable fitting of a wide range of different models. Through simulations, we assess the statistical properties of the models with respect to the cure proportion and the survival of the uncured individuals. Finally, we illustrate the models using survival data on colon cancer, which typically display a plateau in the relative survival. As demonstrated in the simulations, mixture cure models whose survival function is not guaranteed to be constant after a finite time point tend to produce accurate estimates of the cure proportion and the survival of the uncured. However, these models are very unstable in certain cases due to identifiability issues, whereas LC models generally provide stable results at the price of more biased estimates.
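As a concrete illustration of the mixture cure formulation discussed above, the following minimal sketch (not the authors' software; the Weibull choice and all names are assumptions) computes the population survival S(t) = pi + (1 - pi) S_u(t), which plateaus at the cure proportion pi:

```python
import math

def mixture_cure_survival(t, pi, shape, scale):
    """Population survival under a mixture cure model:
    S(t) = pi + (1 - pi) * S_u(t), with an (assumed) Weibull
    survival function S_u for the uncured individuals."""
    s_uncured = math.exp(-((t / scale) ** shape))
    return pi + (1 - pi) * s_uncured

# At t = 0 everyone is alive; for large t the curve plateaus at pi.
s_early = mixture_cure_survival(0.0, pi=0.3, shape=1.2, scale=2.0)
s_late = mixture_cure_survival(50.0, pi=0.3, shape=1.2, scale=2.0)
```

The long-run level of this curve is precisely what the cure proportion captures; the instability noted in the abstract arises because, for some parametric choices of S_u, the plateau is approached so slowly that pi is poorly identified from finite follow-up.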

2.
    
Recurrent event data are commonly encountered in biomedical studies. In many situations, they are subject to an informative terminal event, for example, death. Joint modeling of recurrent and terminal events has attracted substantial recent research interest. Such data may also involve a large number of covariates, and how to conduct variable selection for joint frailty proportional hazards models has become a challenge in practical data analysis. We tackle this issue on the basis of the “minimum approximated information criterion” method. The proposed method can be conveniently implemented in SAS Proc NLMIXED for commonly used frailty distributions. Its finite-sample behavior is evaluated through simulation studies. We apply the proposed method to model recurrent opportunistic diseases in the presence of death in an AIDS study.

3.
Most statistical methods for censored survival data assume there is no dependence between the lifetime and censoring mechanisms, an assumption which is often doubtful in practice. In this paper we study a parametric model which allows for dependence in terms of a parameter δ and a bias function B(t, θ). We propose a sensitivity analysis on the estimate of the parameter of interest for small values of δ, which measures the dependence between the lifetime and censoring mechanisms; its size can be interpreted in terms of a correlation coefficient between the two mechanisms. A medical example suggests that even a small degree of dependence between the failure and censoring processes can have a noticeable effect on the analysis.

4.

5.
Siannis F. Biometrics 2004, 60(3): 704-714
In this article, we explore the use of a parametric model for analyzing survival data which is defined to allow sensitivity analysis for the presence of informative censoring. The dependence between the failure and the censoring processes is expressed through a parameter δ and a general bias function B(t, θ). We calculate the expectation of the potential bias due to informative censoring, which is an overall measure of how misleading our results might be if censoring is actually nonignorable. Bounds are also calculated for quantities of interest, e.g., a parameter of the distribution of the failure process, which do not depend on the choice of the bias function for fixed δ. An application to systemic lupus erythematosus data illustrates how additional information can reduce the uncertainty in estimates of the location parameter. Sensitivity analysis on a relative risk parameter is also explored.

6.
7.
This study explores how linear enamel hypoplasia (LEH) affects mortality in the village of Tirup (A.D. 1150-1350), Denmark. The data consist of information on 583 skeletons aged 1 year or more. Three partly overlapping subsamples were defined: (1) 104 skeletons of young children aged 1-6 years and 120 skeletons of adults giving information on LEH; (2) 458 skeletons aged 6 years or more; (3) 109 adult skeletons (aged 20 years or more) that provided transition analysis age estimates, sex assessments, and LEH information. Of the 109 skeletons in Subsample 3, 60 had no LEH and 49 had at least one. In Subsample 1, the case fatality rate for episodes potentially leading to LEH dropped from over 0.5 in 1-year-olds to around 0.1 in 3-5-year-olds. Only models with heterogeneity of frailty could describe late childhood and adolescent mortality, and only a model with continuously varying frailty preserved heterogeneity to adulthood. Among young adult females, and among males at all adult ages, individuals with LEH experienced higher mortality than those without. Among males, the mortality rate ratio (MRR) was 2.28. The analyses indicate that the MRR gives an unbiased estimate of the extra risk of dying for adult males with LEH. The case fatality rates for young children might be slightly biased upward because of a higher than average number of older children and adolescents dying with LEH.

8.
We provide a definitive guide to parameter redundancy in mark-recovery models, indicating, for a wide range of models, which have all parameters estimable and which do not. For the parameter-redundant models, we identify the parameter combinations that can be estimated. Simple, general results are obtained, which hold irrespective of the duration of the studies. We also examine the effect real data have on whether or not models are parameter redundant, and show that results can be robust even with very sparse data. Covariates, as well as time- or age-varying trends, can be added to models to overcome redundancy problems. We show how to determine, without further calculation, whether or not parameter-redundant models remain parameter redundant after the addition of covariates or trends.
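The rank-based idea behind such redundancy results can be sketched numerically (a toy illustration of the general principle, not the paper's symbolic derivation): a model is parameter redundant when the Jacobian of its outputs with respect to the parameters has rank below the number of parameters. The one-occasion mark-recovery example below, in which only the recovery cell probability p = (1 - phi) * lam is observable, is an assumed minimal case.

```python
def outputs(params):
    # Assumed toy model: one recovery-cell probability, two parameters
    # (survival phi and recovery rate lam).
    phi, lam = params
    return [(1.0 - phi) * lam]

def jacobian(f, params, h=1e-6):
    """Forward-difference Jacobian of f at params."""
    base = f(params)
    return [[(f(params[:j] + [params[j] + h] + params[j + 1:])[i] - base[i]) / h
             for j in range(len(params))]
            for i in range(len(base))]

def numeric_rank(mat, tol=1e-9):
    """Matrix rank via Gaussian elimination with partial pivoting."""
    m = [row[:] for row in mat]
    rows, cols, rank = len(m), len(m[0]), 0
    for c in range(cols):
        if rank == rows:
            break
        pivot = max(range(rank, rows), key=lambda r: abs(m[r][c]))
        if abs(m[pivot][c]) < tol:
            continue
        m[rank], m[pivot] = m[pivot], m[rank]
        for r in range(rank + 1, rows):
            f = m[r][c] / m[rank][c]
            for cc in range(c, cols):
                m[r][cc] -= f * m[rank][cc]
        rank += 1
    return rank

J = jacobian(outputs, [0.6, 0.3])
# Rank 1 with 2 parameters: phi and lam are confounded, and only the
# combination (1 - phi) * lam is estimable.
```

The same rank computation applied symbolically to an exhaustive summary of the model is the standard route to redundancy results of the kind the paper derives.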

9.
The hazard ratio (HR) is often reported as the main causal effect when studying survival data. Despite its popularity, the HR suffers from an unclear causal interpretation: as already pointed out in the literature, there is a built-in selection bias in the HR because, similar to the truncation-by-death problem, the HR conditions on post-treatment survival. A recently proposed alternative, inspired by the Survivor Average Causal Effect, is the causal HR, defined as the ratio between hazards across treatment groups among the study participants who would have survived regardless of their treatment assignment. We discuss the challenge in identifying the causal HR and present a sensitivity analysis identification approach in randomized controlled trials utilizing a working frailty model. We further extend our framework to adjust for potential confounders using inverse probability of treatment weighting. We present a Cox-based and a flexible nonparametric kernel-based estimation under right censoring. We study the finite-sample properties of the proposed estimation methods through simulations and illustrate the utility of our framework using two real-data examples.

10.
This paper considers the implications of a structural identifiability analysis on a series of fundamental three-compartment epidemic model structures, derived around the general SIR (susceptible–infective–recovered) framework. The models represent various forms of incomplete immunity acquired through natural infection, or from administration of a birth targeted vaccination programme. It is shown that the addition of a vaccination campaign has a negative effect on the structural identifiability of all considered models. In particular, the actual proportion of vaccination coverage achieved, an essential parameter, cannot be uniquely estimated from even ideal prevalence data.
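A generic sketch of the birth-targeted-vaccination SIR structure described above (the parameterisation and all names are assumptions; the paper analyses several model variants):

```python
# Euler-integration sketch of an SIR model with vaccination of a
# proportion v of newborns; mu is the birth/death rate, beta the
# transmission rate, gamma the recovery rate.
def sir_vacc_step(S, I, R, beta, gamma, mu, v, dt):
    N = S + I + R
    dS = mu * N * (1.0 - v) - beta * S * I / N - mu * S
    dI = beta * S * I / N - gamma * I - mu * I
    dR = mu * N * v + gamma * I - mu * R
    return S + dS * dt, I + dI * dt, R + dR * dt

S, I, R = 990.0, 10.0, 0.0
for _ in range(2000):
    S, I, R = sir_vacc_step(S, I, R, beta=0.5, gamma=0.2, mu=0.01,
                            v=0.8, dt=0.05)
# Births exactly balance deaths, so the total population is conserved.
```

The identifiability finding in the abstract concerns this kind of structure: the coverage v cannot be uniquely estimated even from ideal prevalence data.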

11.
In population-based cancer studies, cure is said to occur when the mortality (hazard) rate in the diseased group of individuals returns to the same level as that expected in the general population. The cure fraction (the proportion of patients cured of disease) is of interest to patients and is a useful measure for monitoring trends in survival of curable disease. There are two main types of cure fraction model, the mixture cure fraction model and the non-mixture cure fraction model, with most previous work concentrating on the mixture model. In this paper, we extend the parametric non-mixture cure fraction model to incorporate background mortality, thus providing estimates of the cure fraction in population-based cancer studies. We compare the estimates of relative survival and the cure fraction between the two types of model and also investigate the importance of modeling the ancillary parameters in the selected parametric distribution for both types of model.
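The non-mixture formulation can be sketched as follows (a minimal illustration, not the authors' software; the Weibull choice for F and all names are assumptions): the relative survival is R(t) = pi ** F(t), which moves from 1 at t = 0 toward the cure fraction pi as F(t) approaches 1.

```python
import math

def nonmixture_relative_survival(t, pi, shape, scale):
    """Non-mixture cure model: R(t) = pi ** F(t), with an assumed
    Weibull CDF F; R decays from 1 to the cure fraction pi."""
    F = 1.0 - math.exp(-((t / scale) ** shape))
    return pi ** F

r_start = nonmixture_relative_survival(0.0, pi=0.4, shape=1.3, scale=2.5)
r_late = nonmixture_relative_survival(60.0, pi=0.4, shape=1.3, scale=2.5)
```

In the population-based setting of the abstract, this relative survival is multiplied by the expected (background) survival of the general population to give all-cause survival, which is how background mortality enters the extended model.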

12.
Time-dependent covariates are frequently encountered in regression analysis for event history data and competing risks. They are often essential predictors, which cannot be substituted by time-fixed covariates. This study briefly recalls the different types of time-dependent covariates, as classified by Kalbfleisch and Prentice [The Statistical Analysis of Failure Time Data, Wiley, New York, 2002], with the intent of clarifying their role and emphasizing the limitations in standard survival models and in the competing risks setting. If random (internal) time-dependent covariates are to be included in the modeling process, then it is still possible to estimate cause-specific hazards, but prediction of the cumulative incidences and survival probabilities based on these is no longer feasible. This article aims at providing some possible strategies for dealing with these prediction problems. In a multi-state framework, a first approach uses internal covariates to define additional (intermediate) transient states in the competing risks model. Another approach is to apply the landmark analysis described by van Houwelingen [Scandinavian Journal of Statistics 2007, 34, 70-85] in order to study cumulative incidences at different subintervals of the entire study period. The final strategy is to extend the competing risks model by considering all possible combinations of internal covariate levels and cause-specific events as final states. In all of these proposals, it is possible to estimate the changes or differences in the cumulative risks associated with simple internal covariates. An illustrative example based on bone marrow transplant data is presented in order to compare the different methods.

13.
A mathematical multi-cell model for the in vitro kinetics of the anti-cancer agent topotecan (TPT) following administration into a culture medium containing a population of human breast cancer cells (MCF-7 cell line) is described. This non-linear compartmental model is an extension of an earlier single-cell type model and has been validated using experimental data obtained using two-photon laser scanning microscopy (TPLSM). A structural identifiability analysis is performed prior to parameter estimation to test whether the unknown parameters within the model are uniquely determined by the model outputs. The full model has 43 compartments, with 107 unknown parameters, and it was found that the structural identifiability result could not be established even when using the latest version of the symbolic computation software Mathematica. However, by assuming that a priori knowledge is available for certain parameters, it was possible to reduce the number of parameters to 81, and it was found that this (Stage Two) model was globally (uniquely) structurally identifiable. The identifiability analysis demonstrated how valuable symbolic computation is in this context, as the analysis is far too lengthy and difficult to be performed by hand.

14.
This paper deals with a Cox proportional hazards regression model in which some covariates of interest are randomly right-censored. While methods for censored outcomes have become ubiquitous in the literature, methods for censored covariates have thus far received little attention and, for the most part, have dealt with the issue of limit of detection. For randomly censored covariates, an often-used method is the inefficient complete-case analysis (CCA), which consists of deleting censored observations from the data analysis. When censoring is not completely independent, the CCA leads to biased and spurious results. Methods for missing covariate data, including type I and type II covariate censoring as well as limit of detection, do not readily apply, due to the fundamentally different nature of randomly censored covariates. We develop a novel method for censored covariates using a conditional mean imputation based on either Kaplan-Meier estimates or a Cox proportional hazards model to estimate the effects of these covariates on a time-to-event outcome. We evaluate the performance of the proposed method through simulation studies and show that it provides good bias reduction and statistical efficiency. Finally, we illustrate the method using data from the Framingham Heart Study to assess the relationship between offspring and parental age of onset of cardiovascular events.
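The Kaplan-Meier-based conditional mean imputation can be sketched in a few lines (a simplified pure-Python illustration with hypothetical function names; the published method also offers a Cox-model-based version and treats ties and tail mass more carefully):

```python
def kaplan_meier(values, observed):
    """Kaplan-Meier curve for a right-censored variable: returns
    (event value, survival just after it) at each distinct event value."""
    data = sorted(zip(values, observed))
    n, s, out, i = len(data), 1.0, [], 0
    while i < n:
        t, at_risk, d = data[i][0], n - i, 0
        j = i
        while j < n and data[j][0] == t:
            d += data[j][1]          # count events (observed == 1) at t
            j += 1
        if d:
            s *= 1.0 - d / at_risk
            out.append((t, s))
        i = j
    return out

def impute_censored(values, observed):
    """Replace each censored value c with E[X | X > c] under the KM
    distribution; observed values are kept as they are."""
    km = kaplan_meier(values, observed)
    mass, prev = [], 1.0
    for t, s in km:                  # probability mass at each event value
        mass.append((t, prev - s))
        prev = s
    imputed = []
    for v, obs in zip(values, observed):
        if obs:
            imputed.append(v)
            continue
        tail = [(t, p) for t, p in mass if t > v]
        tot = sum(p for _, p in tail)
        imputed.append(sum(t * p for t, p in tail) / tot if tot > 0 else v)
    return imputed

# Example: the covariate value 3 is censored; all KM mass above 3 sits at 4.
filled = impute_censored([1, 2, 3, 4], [1, 1, 0, 1])
```

After imputation, the covariate can be used in a standard Cox fit; this single-imputation sketch is the simplest version of the idea in the abstract.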

15.

16.
17.
Dissecting the genetic basis of phenotypic variation in natural populations is a long-standing goal in evolutionary biology. One open question is whether quantitative traits are determined only by large numbers of genes with small effects, or whether variation also exists in large-effect loci. We conducted genome-wide association analyses of forehead patch size (a sexually selected trait) on 81 whole-genome-resequenced male collared flycatchers with extreme phenotypes, and on 415 males sampled independently of patch size and genotyped with a 50K SNP chip. No SNPs were significantly associated with patch size at the genome-wide level. Simulation-based power analyses suggest that the power to detect large-effect loci responsible for 10% of phenotypic variance was <0.5 in the genome resequencing analysis, and <0.1 in the SNP chip analysis. Reducing the recombination rate by two-thirds relative to that in collared flycatchers modestly increased power. Tripling the sample size increased power to >0.8 for resequencing of extreme phenotypes (N = 243), but power remained <0.2 for the 50K SNP chip analysis (N = 1245). At least 1 million SNPs were necessary to achieve power >0.8 when analysing 415 randomly sampled phenotypes. However, the power of the 50K SNP chip to detect large-effect loci was nearly 0.8 in simulations with a small effective population size of 1500. These results suggest that reliably detecting large-effect trait loci in large natural populations will often require thousands of individuals and near-complete sampling of the genome. Encouragingly, far fewer individuals and loci will often be sufficient to reliably detect large-effect loci in small populations with widespread strong linkage disequilibrium.

18.
We present a method to fit a mixed effects Cox model with interval-censored data. Our proposal is based on a multiple imputation approach that uses the truncated Weibull distribution to replace the interval-censored data with imputed survival times and then uses established mixed effects Cox methods for right-censored data. Interval-censored data were encountered in a database compiling retrospective data from eight analytical treatment interruption (ATI) studies in 158 human immunodeficiency virus (HIV)-positive individuals suppressed on combination antiretroviral treatment (cART). The main variable of interest is the time to viral rebound, defined as the increase of serum viral load (VL) to detectable levels in a patient with previously undetectable VL as a consequence of the interruption of cART. Another aspect of interest is the fact that the data come from different studies based on different grounds and that we have several assessments on the same patient. To handle this extra variability, we frame the problem as a mixed effects Cox model that considers a random intercept per subject as well as correlated random intercept and slope for pre-cART VL per study. Our procedure has been implemented in R using two packages, truncdist and coxme, and can be applied to any data set that presents both interval-censored survival times and a grouped data structure that could be treated as a random effect in a regression model. The properties of the parameter estimators obtained with our proposed method are addressed through a simulation study.
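The truncated Weibull imputation step can be sketched via inverse-CDF sampling (parameter names here are assumptions; the authors' actual implementation relies on the R package truncdist):

```python
import math
import random

def truncated_weibull_draw(lo, hi, shape, scale, rng):
    """Draw T ~ Weibull(shape, scale) conditioned on lo < T <= hi:
    draw u uniformly between F(lo) and F(hi), then invert F."""
    F = lambda t: 1.0 - math.exp(-((t / scale) ** shape))
    u = rng.uniform(F(lo), F(hi))
    return scale * (-math.log(1.0 - u)) ** (1.0 / shape)

# Impute an interval-censored rebound time known only to lie in (2, 5].
rng = random.Random(1)
draws = [truncated_weibull_draw(2.0, 5.0, shape=1.5, scale=3.0, rng=rng)
         for _ in range(1000)]
```

Each multiple-imputation replicate redraws these times and refits the mixed effects Cox model for right-censored data, combining estimates across replicates in the usual multiple-imputation way.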

19.
In this paper we identify biologically relevant families of models whose structural identifiability analysis could not previously be performed directly with available techniques. The models considered come from both the immunological and epidemiological literature.

20.
In this paper, it is shown that the SIR epidemic model, with the force of infection subject to seasonal variation, and a proportion of either the prevalence or the incidence measured, is unidentifiable unless certain key system parameters are known, or measurable. This means that an uncountable number of different parameter vectors can, theoretically, give rise to the same idealised output data. Any subsequent parameter estimation from real data must be viewed with little confidence as a result. The approach adopted for the structural identifiability analysis utilises the existence of an infinitely differentiable transformation that connects the state trajectories corresponding to parameter vectors that give rise to identical output data. When this approach proves computationally intractable, it is possible to use the converse idea that the existence of a coordinate transformation between states for particular parameter vectors implies indistinguishability between these vectors from the corresponding model outputs.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (京ICP备09084417号)