Similar Articles
Found 20 similar articles (search time: 26 ms)
1.

Summary We consider a functional linear Cox regression model for characterizing the association between time‐to‐event data and a set of functional and scalar predictors. The functional linear Cox regression model incorporates functional principal component analysis for modeling the functional predictors and a high‐dimensional Cox regression model to characterize the joint effects of both functional and scalar predictors on the time‐to‐event data. We develop an algorithm to calculate the maximum approximate partial likelihood estimates of the unknown finite‐ and infinite‐dimensional parameters. We also systematically investigate the rate of convergence of these estimates and a score test statistic for testing the nullity of the slope function associated with the functional predictors. We demonstrate our estimation and testing procedures using simulations and an analysis of the Alzheimer's Disease Neuroimaging Initiative (ADNI) data. Our real data analyses show that high‐dimensional hippocampus surface data may be an important marker for predicting time to conversion to Alzheimer's disease. Data used in the preparation of this article were obtained from the ADNI database (adni.loni.usc.edu).
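A minimal sketch of the FPCA‐plus‐Cox idea in R — not the authors' algorithm: an ordinary PCA of the discretized curves stands in for a proper functional PCA, and all data objects are simulated placeholders:

```r
# FPCA-then-Cox sketch: reduce discretized functional predictors to a
# few principal component scores, then fit a Cox model on the scores
# together with the scalar covariates.
library(survival)

set.seed(1)
n <- 200; ngrid <- 50
curves <- matrix(rnorm(n * ngrid), n, ngrid)   # functional predictors on a grid
age    <- rnorm(n, 70, 5)                      # scalar predictor
time   <- rexp(n, rate = 0.1)
event  <- rbinom(n, 1, 0.7)

# Step 1: "functional" PCA via ordinary PCA of the discretized curves
pca    <- prcomp(curves, center = TRUE)
scores <- pca$x[, 1:3]                         # keep the first 3 FPC scores

# Step 2: Cox regression on FPC scores plus scalar covariates
fit <- coxph(Surv(time, event) ~ scores + age)
summary(fit)

# Under this truncation, testing the slope function reduces to a joint
# test that all FPC-score coefficients are zero:
anova(fit)
```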

2.
Jing Qin & Yu Shen, Biometrics 2010, 66(2):382–392
Summary Length‐biased time‐to‐event data are commonly encountered in applications ranging from epidemiological cohort studies and cancer prevention trials to studies of labor economics. A longstanding statistical problem is how to assess the association of risk factors with survival in the target population given the observed length‐biased data. In this article, we demonstrate how to estimate these effects under the semiparametric Cox proportional hazards model. The structure of the Cox model is, in general, changed under length‐biased sampling. Although the existing partial likelihood approach for left‐truncated data can be used to estimate covariate effects, it may not be efficient for analyzing length‐biased data. We propose two estimating equation approaches for estimating the covariate coefficients under the Cox model. We use modern stochastic process and martingale theory to develop the asymptotic properties of the estimators. We evaluate the empirical performance and efficiency of the two methods through extensive simulation studies. We use data from a dementia study to illustrate the proposed methodology, and demonstrate the computational algorithms for point estimates, which can be directly linked to existing functions in S‐PLUS or R.
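The left‐truncation partial likelihood mentioned above as the existing baseline can indeed be fit with standard R functions via the counting‐process formulation; a hedged sketch on simulated placeholder data (variable names are illustrative, and the paper's proposed estimating equations are not shown):

```r
# Left-truncated Cox fit: under length-biased sampling, the time from
# onset to study entry acts as a truncation time, and subjects enter
# the risk set only at that time.
library(survival)

set.seed(2)
n     <- 300
x     <- rbinom(n, 1, 0.5)                      # risk factor
onset <- runif(n, 0, 10)                        # onset-to-entry (truncation) time
resid <- rexp(n, rate = 0.2 * exp(0.5 * x))     # residual lifetime after entry
cens  <- rexp(n, rate = 0.05)                   # residual censoring time
obs   <- onset + pmin(resid, cens)              # observed exit time, measured from onset
event <- as.numeric(resid <= cens)

# Counting-process Surv(entry, exit, event) handles the left truncation
fit <- coxph(Surv(onset, obs, event) ~ x)
summary(fit)
```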

3.
Tractable space‐time point process models are needed in various fields: in weed science, for gaining biological knowledge and for predicting weed development in order to optimize local herbicide treatments; in epidemiology, for predicting disease risk. Motivated by spatio‐temporal point patterns for two weed species, we propose a spatio‐temporal Cox model whose intensity is based on gamma random fields. The model extends Neyman–Scott and shot‐noise Cox processes to the space‐time domain and allows for spatial and temporal inhomogeneity. We use the weed example to give a first intuitive interpretation of the model, then show how the model is constructed more rigorously and how its parameters are estimated. The weed data are analysed using the proposed model, and classical goodness‐of‐fit tests show that the model fits the data well, both spatially and temporally.

4.
Summary In this article, we propose a positive stable shared frailty Cox model for clustered failure time data in which the frailty distribution varies with cluster‐level covariates. The proposed model accounts for covariate‐dependent intracluster correlation and permits both conditional and marginal inferences. We obtain marginal inference directly from a marginal model, and then use a stratified Cox‐type pseudo‐partial likelihood approach to estimate the regression coefficient for the frailty parameter. The proposed estimators are consistent and asymptotically normal, and a consistent estimator of the covariance matrix is provided. Simulation studies show that the proposed estimation procedure is appropriate for practical use with a realistic number of clusters. Finally, we present an application of the proposed method to kidney transplantation data from the Scientific Registry of Transplant Recipients.

5.
Zhiguo Li, Peter Gilbert & Bin Nan, Biometrics 2008, 64(4):1247–1255
Summary Grouped failure time data arise often in HIV studies. In a recent preventive HIV vaccine efficacy trial, immune responses generated by the vaccine were measured from a case–cohort sample of vaccine recipients, who were subsequently evaluated for the study endpoint of HIV infection at prespecified follow‐up visits. Gilbert et al. (2005, Journal of Infectious Diseases 191, 666–677) and Forthal et al. (2007, Journal of Immunology 178, 6596–6603) analyzed the association between the immune responses and HIV incidence with a Cox proportional hazards model, treating the HIV infection diagnosis time as a right‐censored random variable. The data, however, take the form of grouped failure time data with case–cohort covariate sampling, and we propose an inverse selection probability‐weighted likelihood method for fitting the Cox model to these data. The method allows covariates to be time dependent, and uses multiple imputation to accommodate covariate data that are missing at random. We establish asymptotic properties of the proposed estimators, and present simulation results showing their good finite sample performance. We apply the method to the HIV vaccine trial data, showing that higher antibody levels are associated with a lower hazard of HIV infection.
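A generic Horvitz–Thompson‐style sketch of inverse selection‐probability weighting for a case–cohort design in R — not the paper's exact estimator, and without its grouped‐time and multiple‐imputation refinements:

```r
# Case-cohort weighting: covariates are only measured for the random
# subcohort plus all cases; cases get weight 1, non-case subcohort
# members get weight 1 / (sampling fraction).
library(survival)

set.seed(5)
n <- 2000
z     <- rnorm(n)                               # immune response covariate
time  <- rexp(n, rate = 0.02 * exp(-0.4 * z))
cens  <- runif(n, 0, 30)
event <- as.numeric(time <= cens)
obs   <- pmin(time, cens)

p_sub   <- 0.15                                 # subcohort sampling fraction
in_sub  <- rbinom(n, 1, p_sub) == 1
sampled <- in_sub | event == 1                  # covariates available here only

w <- ifelse(event == 1, 1, 1 / p_sub)           # inverse selection probabilities
d <- data.frame(obs, event, z, w)[sampled, ]

# Weighted Cox fit with a robust (sandwich) variance
fit <- coxph(Surv(obs, event) ~ z, data = d, weights = w, robust = TRUE)
summary(fit)
```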

6.
Fully Bayesian methods for Cox models specify a model for the baseline hazard function. Parametric approaches generally provide monotone estimates. Semi‐parametric choices allow for more flexible patterns but can suffer from overfitting and instability. Regularization through prior distributions with correlated structures usually gives reasonable answers in these situations. We discuss Bayesian regularization for Cox survival models defined via flexible baseline hazards specified by a mixture of piecewise constant functions and by a cubic B‐spline function. For these “semi‐parametric” proposals, different prior scenarios, ranging from prior independence to particular correlated structures, are discussed in a real study with microvirulence data and in an extensive simulation study that includes different sample sizes and time axis partition sizes in order to capture risk variations. The posterior distribution of the parameters is approximated using Markov chain Monte Carlo methods. Model selection is performed using the deviance information criterion and the log pseudo‐marginal likelihood. The results reveal that, in general, Cox models are highly robust in covariate effects and survival estimates regardless of the baseline hazard specification. Regarding the “semi‐parametric” baseline hazard specification, the B‐spline hazard function is less dependent on the regularization process than the piecewise specification because it requires a smaller time axis partition to estimate similar behavior of the risk.
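As a frequentist point of reference for the piecewise constant specification, the familiar data‐expansion trick shows what is being regularized: survSplit() partitions the time axis and a Poisson GLM with an offset recovers the piecewise‐exponential fit. The paper's Bayesian versions would instead place (possibly correlated) priors on the interval‐specific log‐hazards. A sketch on simulated data:

```r
# Piecewise-exponential model via data expansion over a time-axis
# partition: one log-hazard parameter per interval plus covariate effects.
library(survival)

set.seed(6)
n <- 400
x <- rnorm(n)
time  <- rexp(n, rate = 0.1 * exp(0.3 * x))
event <- rbinom(n, 1, 0.8)
d <- data.frame(id = 1:n, time, event, x)

cuts <- quantile(d$time, probs = seq(0.2, 0.8, by = 0.2))  # partition points
long <- survSplit(Surv(time, event) ~ ., data = d, cut = cuts,
                  episode = "interval")
long$exposure <- long$time - long$tstart        # time at risk in each interval

# Poisson GLM with log-exposure offset = piecewise constant hazard fit
fit <- glm(event ~ factor(interval) + x + offset(log(exposure)),
           family = poisson, data = long)
summary(fit)$coefficients["x", ]
```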

7.
This paper deals with a Cox proportional hazards regression model in which some covariates of interest are randomly right‐censored. While methods for censored outcomes have become ubiquitous in the literature, methods for censored covariates have thus far received little attention and, for the most part, have dealt with the issue of limit of detection. For randomly censored covariates, an often‐used method is the inefficient complete‐case analysis (CCA), which consists of deleting censored observations from the data analysis. When censoring is not completely independent, the CCA leads to biased and spurious results. Methods for missing covariate data, including type I and type II covariate censoring as well as limit of detection, do not readily apply due to the fundamentally different nature of randomly censored covariates. We develop a novel method for censored covariates using conditional mean imputation based on either Kaplan–Meier estimates or a Cox proportional hazards model to estimate the effects of these covariates on a time‐to‐event outcome. We evaluate the performance of the proposed method through simulation studies and show that it provides good bias reduction and statistical efficiency. Finally, we illustrate the method using data from the Framingham Heart Study to assess the relationship between offspring and parental age of onset of cardiovascular events.
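A minimal sketch of the Kaplan–Meier flavor of conditional mean imputation, on simulated data: each censored covariate value c is replaced by a KM‐based estimate of E[X | X > c] (restricted to the last observed time), and a standard Cox model is then fit for the outcome. The helper function below is illustrative, not from the paper:

```r
# Conditional mean imputation for a right-censored covariate using the
# covariate's own Kaplan-Meier curve.
library(survival)

set.seed(7)
n <- 500
x_true <- rweibull(n, shape = 2, scale = 50)    # e.g., parental age of onset
x_cens <- rweibull(n, shape = 2, scale = 70)
x_obs  <- pmin(x_true, x_cens)
x_del  <- as.numeric(x_true <= x_cens)          # 1 = covariate observed exactly

km <- survfit(Surv(x_obs, x_del) ~ 1)           # KM curve of the covariate

# E[X | X > cval] = cval + (area under S beyond cval) / S(cval),
# with the integral truncated at the last observed time (restricted mean).
cond_mean <- function(cval) {
  s  <- summary(km, times = cval, extend = TRUE)$surv
  tk <- km$time; sk <- km$surv
  keep <- tk > cval
  if (!any(keep) || s == 0) return(cval)
  tt <- c(cval, tk[keep]); ss <- c(s, sk[keep])
  cval + sum(diff(tt) * head(ss, -1)) / s
}

x_imp <- ifelse(x_del == 1, x_obs, vapply(x_obs, cond_mean, numeric(1)))

# Outcome model with the imputed covariate
time  <- rexp(n, rate = 0.05 * exp(0.02 * x_true))
event <- rbinom(n, 1, 0.8)
coxph(Surv(time, event) ~ x_imp)
```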

8.
We present a method to fit a mixed effects Cox model with interval‐censored data. Our proposal is based on a multiple imputation approach that uses the truncated Weibull distribution to replace the interval‐censored data with imputed survival times, and then uses established mixed effects Cox methods for right‐censored data. Interval‐censored data were encountered in a database compiled retrospectively from eight analytical treatment interruption (ATI) studies in 158 HIV‐positive individuals suppressed on combination antiretroviral treatment (cART). The main variable of interest is the time to viral rebound, defined as the increase of serum viral load (VL) to detectable levels in a patient with previously undetectable VL as a consequence of the interruption of cART. Another aspect of interest is that the data come from different studies conducted on different grounds and include several assessments of the same patient. In order to handle this extra variability, we frame the problem in a mixed effects Cox model with a random intercept per subject as well as correlated random intercept and slope for pre‐cART VL per study. Our procedure has been implemented in R using two packages, truncdist and coxme, and can be applied to any data set that presents both interval‐censored survival times and a grouped data structure that can be treated as a random effect in a regression model. The properties of the parameter estimators obtained with our proposed method are assessed through a simulation study.
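A hedged sketch of one imputation cycle using the two packages named above (truncdist and coxme); a full analysis would repeat the imputation M times and pool the results, and the variable names and weekly visit schedule here are illustrative:

```r
# One MI cycle: Weibull fit to interval-censored times, draws from the
# interval-truncated Weibull, then a mixed effects Cox model.
library(survival)
library(truncdist)
library(coxme)

set.seed(8)
n     <- 158
study <- sample(1:8, n, replace = TRUE)
x     <- rnorm(n)                               # e.g., pre-cART viral load
tt    <- rweibull(n, shape = 1.5, scale = 30 * exp(-0.2 * x))
L     <- floor(tt / 7) * 7                      # weekly visits: rebound is only
U     <- L + 7                                  # known to lie in (L, U]

# Step 1: parametric Weibull fit to the interval-censored times
wfit  <- survreg(Surv(L + 0.01, U, type = "interval2") ~ 1, dist = "weibull")
shape <- 1 / wfit$scale
scale <- exp(coef(wfit))

# Step 2: impute each time from the Weibull truncated to its interval
timp <- mapply(function(a, b)
  rtrunc(1, spec = "weibull", a = a, b = b, shape = shape, scale = scale),
  L + 0.01, U)

# Step 3: mixed effects Cox model on the completed data
event <- rep(1, n)
coxme(Surv(timp, event) ~ x + (1 | study))
```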

9.
In some large clinical studies, it may be impractical to perform a physical examination on every subject at his or her last monitoring time in order to diagnose the occurrence of the event of interest. This gives rise to survival data with missing censoring indicators, where the probability of missingness may depend on the time of last monitoring and on covariates. We present a fully Bayesian semi‐parametric method for such survival data to estimate the regression parameters of the Cox proportional hazards model. Theoretical investigation and simulation studies show that our method performs better than competing methods. We apply the proposed method to analyze survival data with missing censoring indicators from the Orofacial Pain: Prospective Evaluation and Risk Assessment study.

10.
In many clinical trials, multiple time‐to‐event endpoints, including the primary endpoint (e.g., time to death) and secondary endpoints (e.g., progression‐related endpoints), are commonly used to determine treatment efficacy. These endpoints are often biologically related. This work is motivated by a study of bone marrow transplant (BMT) for leukemia patients, who may experience acute graft‐versus‐host disease (GVHD), relapse of leukemia, and death after an allogeneic BMT. Acute GVHD is associated with relapse‐free survival, and both acute GVHD and relapse of leukemia are intermediate nonterminal events subject to dependent censoring by the informative terminal event death, but not vice versa, giving rise to survival data subject to two sets of semi‐competing risks. It is important to assess the impacts of prognostic factors on these three time‐to‐event endpoints. We propose a novel statistical approach that jointly models such data via a pair of copulas to account for the multiple dependence structures, while the marginal distribution of each endpoint is formulated by a Cox proportional hazards model. We develop an estimation procedure based on pseudo‐likelihood and carry out simulation studies to examine the performance of the proposed method in finite samples. The practical utility of the proposed method is further illustrated with data from the motivating example.

11.
Realistic power calculations for large cohort studies and nested case‐control studies are essential for successfully answering important and complex research questions in epidemiology and clinical medicine. We provide a methodological framework for realistic power calculations via simulation, which we put into practice by means of an R‐based template. We consider staggered recruitment and individual hazard rates, competing risks, interaction effects, and the misclassification of covariates. The study cohort is assembled with respect to given age, gender, and community distributions. Nested case‐control analyses with varying numbers of controls enable comparisons of power with a full cohort analysis. Time‐to‐event generation under competing risks, including delayed study‐entry times, is realized on the basis of a six‐state Markov model. Incidence rates, prevalence of risk factors, and prefixed hazard ratios allow for the assignment of age‐dependent transition rates given in the form of Cox models. These provide the basis for a central simulation algorithm used to generate sample paths of the underlying time‐inhomogeneous Markov processes. With the inclusion of frailty terms in the Cox models, the Markov property is deliberately relaxed: an “individual Markov process given frailty” creates unobserved heterogeneity between individuals. Different left‐truncation and right‐censoring patterns call for the use of Cox models in the data analysis. p‐values are recorded over repeated simulation runs to allow for the desired power calculations. For illustration, we consider scenarios with a “testing” character as well as realistic scenarios. This enables validation of a correct implementation of the theoretical concepts, as well as concrete sample size recommendations against an actual epidemiological background, here given by possible substudy designs within the German National Cohort.
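A deliberately miniature version of this simulation machinery in R, omitting competing risks, staggered entry, frailty, and misclassification: data are generated under a prefixed hazard ratio, a Cox model is fit per replicate, and power is the proportion of replicates rejecting at the 5% level:

```r
# Simulation-based power for a Cox analysis of a binary exposure.
library(survival)

set.seed(11)
power_sim <- function(n, hr, base_rate = 0.02, fu = 10, nrep = 500) {
  pvals <- replicate(nrep, {
    exposed <- rbinom(n, 1, 0.5)
    time    <- rexp(n, rate = base_rate * hr^exposed)
    event   <- as.numeric(time <= fu)           # administrative censoring at fu
    obs     <- pmin(time, fu)
    fit <- coxph(Surv(obs, event) ~ exposed)
    summary(fit)$coefficients["exposed", "Pr(>|z|)"]
  })
  mean(pvals < 0.05)                            # empirical power
}

power_sim(n = 2000, hr = 1.5)
```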

12.
When analyzing time‐to‐event cohort data, two different choices of time scale have been discussed in the literature: time on study, or age at onset of disease. One advantage of choosing the latter is the interpretability of the hazard ratio as a function of age. To handle the analysis of age at onset in a principled manner, we present an analysis of the Cox proportional hazards model with time‐varying coefficients for left‐truncated and right‐censored data. In an analysis of the Northern Manhattan Study (NOMAS) with age at onset of stroke as the outcome, we demonstrate that well‐established risk factors may be important only around a certain age span, and that less established risk factors can have a strong effect in a certain age span.
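A minimal sketch of the two ingredients combined here, on simulated placeholder data (not NOMAS): age as the time scale with left truncation, and a coefficient that varies with age via coxph's tt() mechanism:

```r
# Age time scale with left truncation plus a time-varying coefficient.
library(survival)

set.seed(12)
n <- 1000
entry_age <- runif(n, 40, 75)                   # age at enrollment (left truncation)
x <- rbinom(n, 1, 0.4)                          # risk factor
event_age <- entry_age + rexp(n, rate = 0.03 * exp(0.6 * x))
event     <- rbinom(n, 1, 0.6)

# Subjects enter the risk set at entry_age; beta(age) = b0 + b1 * age
fit <- coxph(Surv(entry_age, event_age, event) ~ x + tt(x),
             tt = function(x, t, ...) x * t)
summary(fit)

# The age-specific log hazard ratio is coef(fit)[1] + coef(fit)[2] * age.
```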

13.
Summary The standard estimator for the cause‐specific cumulative incidence function in a competing risks setting with left‐truncated and/or right‐censored data can be written in two alternative forms: one is a weighted empirical cumulative distribution function, the other a product‐limit estimator. This equivalence suggests an alternative view of the analysis of time‐to‐event data with left truncation and right censoring: individuals who are still at risk or who experienced an earlier competing event receive weights from the censoring and truncation mechanisms. As a consequence, inference on the cumulative scale can be performed using weighted versions of standard procedures. This holds for estimation of the cause‐specific cumulative incidence function as well as for estimation of the regression parameters in the Fine and Gray proportional subdistribution hazards model. We show that, with the appropriate filtration, a martingale property holds that allows asymptotic results for the proportional subdistribution hazards model to be derived in the same way as for the standard Cox proportional hazards model. Estimation of the cause‐specific cumulative incidence function and regression on the subdistribution hazard can be performed using standard survival analysis software, provided the software allows for the inclusion of time‐dependent weights. We show the implementation in the R statistical package. The proportional subdistribution hazards model is used to investigate the effect of calendar period, as a deterministic external time‐varying covariate that can be seen as a special case of left truncation, on AIDS‐related and non‐AIDS‐related cumulative mortality.
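The weighted‐data view described above is what the finegray() helper in the R survival package implements: it expands the data with time‐dependent weights, after which an ordinary weighted coxph() fit gives the proportional subdistribution hazards model. A sketch on simulated competing‐risks data:

```r
# Fine-Gray model via weighted data expansion.
library(survival)

set.seed(13)
n <- 500
x <- rnorm(n)
cause <- sample(0:2, n, replace = TRUE, prob = c(0.3, 0.4, 0.3))  # 0 = censored
time  <- rexp(n, rate = 0.1)
d <- data.frame(time, x,
                cause = factor(cause, 0:2, c("censor", "ev1", "ev2")))

# Expand to weighted counting-process data for the cause of interest
fg <- finegray(Surv(time, cause) ~ ., data = d, etype = "ev1")

# Weighted Cox fit on the expanded data = subdistribution hazards model
fit <- coxph(Surv(fgstart, fgstop, fgstatus) ~ x, data = fg, weights = fgwt)
summary(fit)
```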

14.
Colorectal cancer (CRC) is one of the most commonly diagnosed cancers, with an estimated 1.8 million new cases worldwide and 881,000 CRC‐related deaths in 2018. Screening programs and new therapies have only marginally improved the survival of CRC patients. Immune‐related genes (IRGs) have attracted attention in recent years as therapeutic targets. The aim of this study was to identify an immune‐related prognostic signature for CRC. To this end, we combined gene expression and clinical data from the CRC data sets of The Cancer Genome Atlas (TCGA) into an integrated immune landscape profile. We identified a total of 476 IRGs that were differentially expressed in CRC versus normal tissues, of which 18 were survival related according to univariate Cox analysis. Stepwise multivariate Cox proportional hazards analysis established an immune‐related prognostic signature consisting of SLC10A2, FGF2, CCL28, NDRG1, ESM1, UCN, UTS2 and TRDC. The predictive ability of this signature for 3‐ and 5‐year overall survival was determined using receiver operating characteristic (ROC) curves, with respective areas under the curve (AUC) of 79.2% and 76.6%. The signature showed moderate predictive accuracy in the validation and GSE38832 data sets as well. Furthermore, the 8‐IRG signature correlated significantly with tumour stage, invasion, lymph node metastasis and distant metastasis by univariate Cox analysis, and was established as an independent prognostic factor for CRC by multivariate Cox regression analysis. Gene set enrichment analysis (GSEA) revealed a relationship between the IRG prognostic signature and various biological pathways: focal adhesion and ECM‐receptor interaction pathways were positively correlated with the risk scores, while cytosolic DNA sensing and metabolism‐related pathways were negatively correlated. Finally, the bioinformatics results were validated by real‐time RT‐qPCR. In conclusion, we identified and validated a novel immune‐related prognostic signature for patients with CRC; this signature reflects the dysregulated tumour immune microenvironment and has potential for better CRC patient management.
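A minimal sketch of the generic signature‐building pipeline described above (univariate Cox screening, stepwise multivariate Cox, then a linear‐predictor risk score), on simulated expression data rather than TCGA:

```r
# Prognostic signature pipeline: screen genes one at a time, refine by
# stepwise selection, score patients by the Cox linear predictor.
library(survival)

set.seed(14)
n <- 300; g <- 30
expr <- matrix(rnorm(n * g), n, g,
               dimnames = list(NULL, paste0("gene", 1:g)))
time  <- rexp(n, rate = 0.05 * exp(0.4 * expr[, 1] - 0.3 * expr[, 2]))
event <- rbinom(n, 1, 0.7)
d <- data.frame(time, event, expr)

# Step 1: univariate Cox screening, keep genes with p < 0.05
pvals <- sapply(colnames(expr), function(gn) {
  f <- as.formula(paste("Surv(time, event) ~", gn))
  summary(coxph(f, data = d))$coefficients[1, "Pr(>|z|)"]
})
keep <- names(pvals)[pvals < 0.05]

# Step 2: stepwise multivariate Cox on the screened genes
full <- coxph(as.formula(paste("Surv(time, event) ~",
                               paste(keep, collapse = "+"))), data = d)
sig <- step(full, direction = "both", trace = 0)

# Step 3: risk score = linear predictor; dichotomize at the median
d$risk  <- predict(sig, type = "lp")
d$group <- ifelse(d$risk > median(d$risk), "high", "low")
survdiff(Surv(time, event) ~ group, data = d)
```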

15.
Summary Identification of novel biomarkers for risk assessment is important for both effective disease prevention and optimal treatment recommendation. Discovery relies on the precious yet limited resource of stored biological samples from large prospective cohort studies. The case‐cohort sampling design provides a cost‐effective tool in the context of biomarker evaluation, especially when the clinical condition of interest is rare. Existing statistical methods focus on making efficient inference on relative hazard parameters from the Cox regression model. Drawing on recent theoretical developments on the weighted likelihood for semiparametric models under two‐phase studies (Breslow and Wellner, 2007), we propose statistical methods to evaluate the accuracy and predictiveness of a risk prediction biomarker, with a censored time‐to‐event outcome under stratified case‐cohort sampling. We consider both nonparametric methods and a semiparametric method. We derive large sample properties of the proposed estimators and evaluate their finite sample performance using numerical studies. We illustrate the new procedures using data from the Framingham Offspring Study to evaluate the accuracy of a recently developed risk score incorporating biomarker information for predicting cardiovascular disease.

16.
Copper (Cu) is an essential micronutrient that functions as a cofactor in several important enzymes, such as respiratory heme‐copper oxygen reductases. Yet, Cu is also toxic and therefore cells engage a highly coordinated Cu uptake and delivery system to prevent the accumulation of toxic Cu concentrations. In this study, we analyzed Cu delivery to the cbb3‐type cytochrome c oxidase (cbb3‐Cox) of Rhodobacter capsulatus. We identified the PCuAC‐like periplasmic chaperone PccA and analyzed its contribution to cbb3‐Cox assembly. Our data demonstrate that PccA is a Cu‐binding protein with a preference for Cu(I), which is required for efficient cbb3‐Cox assembly, in particular, at low Cu concentrations. By using in vivo and in vitro cross‐linking, we show that PccA forms a complex with the Sco1‐homologue SenC. This complex is stabilized in the absence of the cbb3‐Cox‐specific assembly factors CcoGHIS. In cells lacking SenC, the cytoplasmic Cu content is significantly increased, but the simultaneous absence of PccA prevents this Cu accumulation. These data demonstrate that the interplay between PccA and SenC not only is required for Cu delivery during cbb3‐Cox assembly but also regulates Cu homeostasis in R. capsulatus.

17.
Summary Ye, Lin, and Taylor (2008, Biometrics 64, 1238–1246) proposed a joint model for longitudinal measurements and time‐to‐event data in which the longitudinal measurements are modeled with a semiparametric mixed model to allow for the complex patterns in longitudinal biomarker data. They proposed a two‐stage regression calibration approach that is simpler to implement than a joint modeling approach. In the first stage of their approach, the mixed model is fit without regard to the time‐to‐event data. In the second stage, the posterior expectations of an individual's random effects from the mixed model are included as covariates in a Cox model. Although Ye et al. (2008) acknowledged that their regression calibration approach may cause bias due to informative dropout and measurement error, they argued that the bias is small relative to alternative methods. In this article, we show that this bias may be substantial. We show how to alleviate much of this bias with an alternative regression calibration approach that can be applied to both discrete and continuous time‐to‐event data. Through simulations, the proposed approach is shown to have substantially less bias than the regression calibration approach of Ye et al. (2008). In agreement with the methodology proposed by Ye et al. (2008), an advantage of our proposed approach over joint modeling is that it can be implemented with standard statistical software and does not require complex estimation techniques.
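A hedged sketch of the two‐stage recipe under discussion (the uncorrected Ye et al.‐style version, with an ordinary linear mixed model standing in for their semiparametric one); the proposed bias‐reducing alternative is not shown:

```r
# Two-stage regression calibration: stage 1 fits the longitudinal model
# ignoring the event data; stage 2 carries each subject's predicted
# random effects into a Cox model.
library(lme4)
library(survival)

set.seed(17)
n <- 200; nobs <- 5
id  <- rep(1:n, each = nobs)
tij <- rep(seq(0, 2, length.out = nobs), n)
b0  <- rnorm(n, 0, 1); b1 <- rnorm(n, 0, 0.5)   # true random intercept/slope
y   <- 1 + b0[id] + (0.5 + b1[id]) * tij + rnorm(n * nobs, 0, 0.3)
long <- data.frame(id, tij, y)

# Stage 1: mixed model for the biomarker, fit without the event data
m1 <- lmer(y ~ tij + (1 + tij | id), data = long)
re <- ranef(m1)$id[as.character(1:n), ]         # reorder rows to subject order

# Stage 2: Cox model with predicted random effects as covariates
time  <- rexp(n, rate = 0.1 * exp(0.8 * b1))    # hazard driven by the true slope
event <- rbinom(n, 1, 0.7)
surv  <- data.frame(time, event, b0hat = re[, 1], b1hat = re[, 2])
coxph(Surv(time, event) ~ b0hat + b1hat, data = surv)
```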

18.
This paper presents an extension of the joint modeling strategy to the case of multiple longitudinal outcomes and repeated infections of different types over time, motivated by post‐kidney‐transplantation data. Our model comprises two parts linked by shared latent terms. On the one hand is a multivariate linear mixed model with random effects, in which a low‐rank thin‐plate spline function is incorporated to capture the nonlinear behavior of the different profiles over time. On the other hand is an infection‐specific Cox model, where the dependence between the different types of infections and the related infection times is modeled through a random effect associated with each infection type, to capture the within‐type dependence, and a shared frailty parameter, to capture the dependence between infection types. We implemented the parameterization used in joint models, in which the fitted longitudinal measurements enter as time‐dependent covariates in a relative risk model. Our proposed model was implemented in OpenBUGS using the MCMC approach.

19.
Marginal structural models (MSMs) have been proposed for estimating a treatment's effect in the presence of time‐dependent confounding. We aimed to evaluate the performance of the Cox MSM in the presence of missing data and to explore methods to adjust for missingness. We simulated data with a continuous time‐dependent confounder and a binary treatment. We explored two classes of missing data: (i) missed visits, which resemble clinical cohort studies; and (ii) missing confounder values, which correspond to interval cohort studies. Missing data were generated under various mechanisms. In the first class, the source of the bias was extreme treatment weights, and truncation or normalization improved estimation; particular attention must therefore be paid to the distribution of the weights, and truncation or normalization should be applied if extreme weights are noticed. In the second class, the bias was due to misspecification of the treatment model. Last observation carried forward (LOCF), multiple imputation (MI), and inverse probability of missingness weighting (IPMW) were used to correct for the missingness. We found that the alternatives, especially the IPMW method, perform better than the classic LOCF method. Nevertheless, in situations with high marker variance and rarely recorded measurements, none of the examined methods adequately corrected the bias.
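A single‐time‐point miniature of the recommended weight handling, on simulated data — real MSM analyses would use time‐varying treatment and cumulative‐product weights: stabilized weights from logistic models, inspection and truncation of extremes, then a weighted Cox fit with robust standard errors:

```r
# Stabilized inverse probability of treatment weights with truncation.
library(survival)

set.seed(19)
n <- 1000
conf  <- rnorm(n)                               # confounder
treat <- rbinom(n, 1, plogis(-0.5 + 1.2 * conf))
time  <- rexp(n, rate = 0.05 * exp(-0.5 * treat + 0.8 * conf))
event <- rbinom(n, 1, 0.8)

# Stabilized weight: marginal treatment probability over conditional one
p_num <- predict(glm(treat ~ 1,    family = binomial), type = "response")
p_den <- predict(glm(treat ~ conf, family = binomial), type = "response")
w <- ifelse(treat == 1, p_num / p_den, (1 - p_num) / (1 - p_den))

# Inspect the weight distribution, then truncate extremes (1st/99th pct.)
summary(w)
w_tr <- pmin(pmax(w, quantile(w, 0.01)), quantile(w, 0.99))

fit <- coxph(Surv(time, event) ~ treat, weights = w_tr, robust = TRUE)
summary(fit)
```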

20.
The development of clinical prediction models requires the selection of suitable predictor variables. Techniques to perform objective Bayesian variable selection in the linear model are well developed and have been extended to the generalized linear model setting as well as to the Cox proportional hazards model. Here, we consider discrete time‐to‐event data with competing risks and propose methodology to develop a clinical prediction model for the daily risk of acquiring ventilator‐associated pneumonia (VAP) attributed to P. aeruginosa (PA) in intensive care units. The competing events for a PA VAP are extubation, death, and VAP due to other bacteria. Baseline variables are potentially important for predicting the outcome at the start of ventilation, but may lose some of their predictive power after a certain time. Therefore, we use a landmark approach for dynamic Bayesian variable selection, in which the set of relevant predictors depends on the time already spent at risk. We finally determine the direct impact of a variable on each competing event through cause‐specific variable selection.
